Matott, L Shawn; Jiang, Zhengzheng; Rabideau, Alan J; Allen-King, Richelle M
2015-01-01
Numerous isotherm expressions have been developed for describing sorption of hydrophobic organic compounds (HOCs), including "dual-mode" approaches that combine nonlinear behavior with a linear partitioning component. Choosing among these alternative expressions for describing a given dataset is an important task that can significantly influence subsequent transport modeling and/or mechanistic interpretation. In this study, a series of numerical experiments were undertaken to identify "best-in-class" isotherms by refitting 10 alternative models to a suite of 13 previously published literature datasets. The corrected Akaike Information Criterion (AICc) was used for ranking these alternative fits and distinguishing between plausible and implausible isotherms for each dataset. The occurrence of multiple plausible isotherms was inversely correlated with dataset "richness", such that datasets with fewer observations and/or a narrow range of aqueous concentrations resulted in a greater number of plausible isotherms. Overall, only the Polanyi-partition dual-mode isotherm was classified as "plausible" across all 13 of the considered datasets, indicating substantial statistical support consistent with current advances in sorption theory. However, these findings are predicated on the use of the AICc measure as an unbiased ranking metric and the adoption of a subjective, but defensible, threshold for separating plausible and implausible isotherms. Copyright © 2015 Elsevier B.V. All rights reserved.
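A minimal sketch (not from the paper) of how AICc can rank alternative isotherm fits and flag "plausible" ones; the two isotherm forms, the synthetic data, and the delta-AICc cutoff are illustrative assumptions rather than the study's ten models or its threshold.

```python
import numpy as np
from scipy.optimize import curve_fit

# Two illustrative isotherm forms (assumptions, not the paper's full set of 10)
def linear(Cw, Kd):
    return Kd * Cw

def freundlich(Cw, Kf, n):
    return Kf * Cw**n

def aicc(rss, n_obs, n_params):
    k = n_params + 1  # count the error variance as a fitted parameter
    aic = n_obs * np.log(rss / n_obs) + 2 * k
    return aic + 2 * k * (k + 1) / (n_obs - k - 1)

# Hypothetical sorption data: aqueous (Cw) vs. sorbed (Cs) concentrations
rng = np.random.default_rng(0)
Cw = np.array([0.05, 0.1, 0.5, 1.0, 5.0, 10.0, 50.0])
Cs = 2.0 * Cw**0.7 + rng.normal(0, 0.2, Cw.size)

scores = {}
for name, f, p0 in [("linear", linear, [1.0]), ("freundlich", freundlich, [1.0, 0.8])]:
    popt, _ = curve_fit(f, Cw, Cs, p0=p0, maxfev=10000)
    rss = np.sum((Cs - f(Cw, *popt)) ** 2)
    scores[name] = aicc(rss, Cw.size, len(popt))

best = min(scores.values())
for name, a in sorted(scores.items(), key=lambda kv: kv[1]):
    # Delta-AICc relative to the best model; a subjective cutoff separates plausible fits
    print(f"{name}: AICc={a:.2f}, delta={a - best:.2f}")
```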
ERIC Educational Resources Information Center
Lombardi, Doug; Bickel, Elliot S.; Bailey, Janelle M.; Burrell, Shondricka
2018-01-01
Evaluation is an important aspect of science and is receiving increasing attention in science education. The present study investigated (1) changes to plausibility judgments and knowledge as a result of a series of instructional scaffolds, called model-evidence link activities, that facilitated evaluation of scientific and alternative models in…
NASA Astrophysics Data System (ADS)
Lombardi, D.; Sinatra, G. M.
2013-12-01
Critical evaluation and plausibility reappraisal of scientific explanations have been underemphasized in many science classrooms (NRC, 2012). Deep science learning demands that students increase their ability to critically evaluate the quality of scientific knowledge, weigh alternative explanations, and explicitly reappraise their plausibility judgments. This lack of instruction about critical evaluation and plausibility reappraisal has, in part, contributed to diminished understanding about complex and controversial topics, such as global climate change. The Model-Evidence Link (MEL) diagram (originally developed by researchers at Rutgers University under an NSF-supported project; Chinn & Buckland, 2012) is an instructional scaffold that prompts students to critically evaluate alternative explanations. We recently developed a climate change MEL and found that the students who used the MEL experienced a significant shift in their plausibility judgments toward the scientifically accepted model of human-induced climate change. Using the MEL for instruction also resulted in conceptual change about the causes of global warming that reflected greater understanding of fundamental scientific principles. Furthermore, students sustained this conceptual change six months after MEL instruction (Lombardi, Sinatra, & Nussbaum, 2013). This presentation will discuss recent educational research that supports use of the MEL to promote critical evaluation, plausibility reappraisal, and conceptual change, as well as how the MEL may be particularly effective for learning about global climate change and other socio-scientific topics. Such instruction to develop these fundamental thinking skills (e.g., critical evaluation and plausibility reappraisal) is demanded by both the Next Generation Science Standards (Achieve, 2013) and the Common Core State Standards for English Language Arts and Mathematics (CCSS Initiative-ELA, 2010; CCSS Initiative-Math, 2010), as well as a society that is equipped to deal with challenges in a way that is beneficial to our national and global community.
What happened (and what didn’t): Discourse constraints on encoding of plausible alternatives
Fraundorf, Scott H.; Benjamin, Aaron S.; Watson, Duane G.
2013-01-01
Three experiments investigated how font emphasis influences reading and remembering discourse. Although past work suggests that contrastive pitch contours benefit memory by promoting encoding of salient alternatives, it is unclear both whether this effect generalizes to other forms of linguistic prominence and how the set of alternatives is constrained. Participants read discourses in which some true propositions had salient alternatives (e.g., British scientists found the endangered monkey when the discourse also mentioned French scientists) and completed a recognition memory test. In Experiments 1 and 2, font emphasis in the initial presentation increased participants’ ability to later reject false statements about salient alternatives but not about unmentioned items (e.g., Portuguese scientists). In Experiment 3, font emphasis helped reject false statements about plausible alternatives, but not about less plausible alternatives that were nevertheless established in the discourse. These results suggest readers encode a narrow set of only those alternatives plausible in the particular discourse. They also indicate that multiple manipulations of linguistic prominence, not just prosody, can lead to consideration of alternatives. PMID:24014934
Families of Plausible Solutions to the Puzzle of Boyajian’s Star
NASA Astrophysics Data System (ADS)
Wright, Jason T.; Sigurdsson, Steinn
2016-09-01
Good explanations for the unusual light curve of Boyajian's Star have been hard to find. Recent results by Montet & Simon lend strength and plausibility to the conclusion of Schaefer that in addition to short-term dimmings, the star also experiences large, secular decreases in brightness on decadal timescales. This, combined with a lack of long-wavelength excess in the star's spectral energy distribution, strongly constrains scenarios involving circumstellar material, including hypotheses invoking a spherical cloud of artifacts. We show that the timings of the deepest dimmings appear consistent with being randomly distributed, and that the star's reddening and narrow sodium absorption are consistent with the total, long-term dimming observed. Following Montet & Simon's encouragement to generate alternative hypotheses, we attempt to circumscribe the space of possible explanations with a range of plausibilities, including: a cloud in the outer solar system, structure in the interstellar medium (ISM), natural and artificial material orbiting Boyajian's Star, an intervening object with a large disk, and variations in Boyajian's Star itself. We find the ISM and intervening disk models more plausible than the other natural models.
Exemplar-Based Clustering via Simulated Annealing
ERIC Educational Resources Information Center
Brusco, Michael J.; Kohn, Hans-Friedrich
2009-01-01
Several authors have touted the p-median model as a plausible alternative to within-cluster sums of squares (i.e., K-means) partitioning. Purported advantages of the p-median model include the provision of "exemplars" as cluster centers, robustness with respect to outliers, and the accommodation of a diverse range of similarity data. We developed…
Alternative supply specifications and estimates of regional supply and demand for stumpage.
Kent P. Connaughton; David H. Jackson; Gerard A. Majerus
1988-01-01
Four plausible sets of stumpage supply and demand equations were developed and estimated; the demand equation was the same for each set, although the supply equation differed. The supply specifications varied from the model of regional excess demand in which National Forest harvest levels were assumed fixed to a more realistic model in which the harvest on the National...
Comparison of Damping Mechanisms for Transverse Waves in Solar Coronal Loops
NASA Astrophysics Data System (ADS)
Montes-Solís, María; Arregui, Iñigo
2017-09-01
We present a method to assess the plausibility of alternative mechanisms to explain the damping of magnetohydrodynamic transverse waves in solar coronal loops. The considered mechanisms are resonant absorption of kink waves in the Alfvén continuum, phase mixing of Alfvén waves, and wave leakage. Our methods make use of Bayesian inference and model comparison techniques. We first infer the values for the physical parameters that control the wave damping, under the assumption of a particular mechanism, for typically observed damping timescales. Then, the computation of marginal likelihoods and Bayes factors enable us to quantify the relative plausibility between the alternative mechanisms. We find that, in general, the evidence is not large enough to support a single particular damping mechanism as the most plausible one. Resonant absorption and wave leakage offer the most probable explanations in strong damping regimes, while phase mixing is the best candidate for weak/moderate damping. When applied to a selection of 89 observed transverse loop oscillations, with their corresponding measurements of damping timescales and taking into account data uncertainties, we find that positive evidence for a given damping mechanism is only available in a few cases.
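A minimal sketch of how marginal likelihoods and Bayes factors quantify the relative plausibility of two models of an observed damping time, using simple grid integration; the damping-time expressions, the Gaussian likelihood, and the uniform prior are placeholders, not the MHD damping models of the paper.

```python
import numpy as np

# Observed damping time (arbitrary units) and assumed measurement uncertainty
tau_obs, sigma = 3.0, 0.5

def likelihood(pred):
    """Gaussian likelihood of the observation given a predicted damping time."""
    return np.exp(-0.5 * ((pred - tau_obs) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Placeholder "physics": model A predicts tau = 1.5*a, model B predicts tau = a**2
a_grid = np.linspace(0.1, 5.0, 2000)
prior = np.ones_like(a_grid) / (a_grid[-1] - a_grid[0])  # uniform prior on the parameter

# Marginal likelihood (evidence) = integral of likelihood x prior over the parameter
evidence_A = np.trapz(likelihood(1.5 * a_grid) * prior, a_grid)
evidence_B = np.trapz(likelihood(a_grid**2) * prior, a_grid)

bayes_factor = evidence_A / evidence_B
print(f"evidences: A={evidence_A:.4f}, B={evidence_B:.4f}")
print(f"Bayes factor A vs. B = {bayes_factor:.2f}")  # values near 1 mean neither model is favoured
```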
Cure models for the analysis of time-to-event data in cancer studies.
Jia, Xiaoyu; Sima, Camelia S; Brennan, Murray F; Panageas, Katherine S
2013-11-01
In settings when it is biologically plausible that some patients are cured after definitive treatment, cure models present an alternative to conventional survival analysis. Cure models can inform on the group of patients cured, by estimating the probability of cure, and identifying factors that influence it; while simultaneously focusing on time to recurrence and associated factors for the remaining patients. © 2013 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Lombardi, D.
2011-12-01
Plausibility judgments-although well represented in conceptual change theories (see, for example, Chi, 2005; diSessa, 1993; Dole & Sinatra, 1998; Posner et al., 1982)-have received little empirical attention until our recent work investigating teachers' and students' understanding of and perceptions about human-induced climate change (Lombardi & Sinatra, 2010, 2011). In our first study with undergraduate students, we found that greater plausibility perceptions of human-induced climate change accounted for significantly greater understanding of weather and climate distinctions after instruction, even after accounting for students' prior knowledge (Lombardi & Sinatra, 2010). In a follow-up study with inservice science and preservice elementary teachers, we showed that anger about the topic of climate change and teaching about climate change was significantly related to implausible perceptions about human-induced climate change (Lombardi & Sinatra, 2011). Results from our recent studies helped to inform our development of a model of the role of plausibility judgments in conceptual change situations. The model applies to situations involving cognitive dissonance, where background knowledge conflicts with an incoming message. In such situations, we define plausibility as a judgment on the relative potential truthfulness of incoming information compared to one's existing mental representations (Rescher, 1976). Students may make plausibility judgments without conscious thought, expending only minimal mental effort in what is referred to as an automatic cognitive process (Stanovich, 2009). However, well-designed instruction could facilitate students' reappraisal of plausibility judgments in more effortful and conscious cognitive processing. Critical evaluation specifically may be one effective method to promote plausibility reappraisal in a classroom setting (Lombardi & Sinatra, in progress). In science education, critical evaluation involves the analysis of how evidentiary data support a hypothesis and its alternatives. The presentation will focus on how instruction promoting critical evaluation can encourage individuals to reappraise their plausibility judgments and initiate knowledge reconstruction. In a recent pilot study, teachers experienced an instructional scaffold promoting critical evaluation of two competing climate change theories (i.e., human-induced and increasing solar irradiance) and significantly changed both their plausibility judgments and perceptions of correctness toward the scientifically-accepted model of human-induced climate change. A comparison group of teachers who did not experience the critical evaluation activity showed no significant change. The implications of these studies for future research and instruction will be discussed in the presentation, including effective ways to increase students' and teachers' ability to be critically evaluative and reappraise their plausibility judgments. With controversial science issues, such as climate change, such abilities may be necessary to facilitate conceptual change.
Macroecological analyses support an overkill scenario for late Pleistocene extinctions.
Diniz-Filho, J A F
2004-08-01
The extinction of megafauna at the end of Pleistocene has been traditionally explained by environmental changes or overexploitation by human hunting (overkill). Despite difficulties in choosing between these alternative (and not mutually exclusive) scenarios, the plausibility of the overkill hypothesis can be established by ecological models of predator-prey interactions. In this paper, I have developed a macroecological model for the overkill hypothesis, in which prey population dynamic parameters, including abundance, geographic extent, and food supply for hunters, were derived from empirical allometric relationships with body mass. The last output correctly predicts the final destiny (survival or extinction) for 73% of the species considered, a value only slightly smaller than those obtained by more complex models based on detailed archaeological and ecological data for each species. This illustrates the high selectivity of Pleistocene extinction in relation to body mass and confers more plausibility on the overkill scenario.
To provide useful alternatives to in vivo animal studies, in vitro assays for dose-response assessments of xenobiotic chemicals must use concentrations in media and target tissues that are within biologically-plausible limits. Determining these concentrations is a complex matter,...
Rohrmeier, Martin A; Cross, Ian
2014-07-01
Humans rapidly learn complex structures in various domains. Findings of above-chance performance of some untrained control groups in artificial grammar learning studies raise questions about the extent to which learning can occur in an untrained, unsupervised testing situation with both correct and incorrect structures. The plausibility of unsupervised online-learning effects was modelled with n-gram, chunking and simple recurrent network models. A novel evaluation framework was applied, which alternates forced binary grammaticality judgments and subsequent learning of the same stimulus. Our results indicate a strong online learning effect for n-gram and chunking models and a weaker effect for simple recurrent network models. Such findings suggest that online learning is a plausible effect of statistical chunk learning that is possible when ungrammatical sequences contain a large proportion of grammatical chunks. Such common effects of continuous statistical learning may underlie statistical and implicit learning paradigms and raise implications for study design and testing methodologies. Copyright © 2014 Elsevier Inc. All rights reserved.
MAISTAS: a tool for automatic structural evaluation of alternative splicing products.
Floris, Matteo; Raimondo, Domenico; Leoni, Guido; Orsini, Massimiliano; Marcatili, Paolo; Tramontano, Anna
2011-06-15
Analysis of the human genome revealed that the amount of transcribed sequence is an order of magnitude greater than the number of predicted and well-characterized genes. A sizeable fraction of these transcripts is related to alternatively spliced forms of known protein coding genes. Inspection of the alternatively spliced transcripts identified in the pilot phase of the ENCODE project has clearly shown that often their structure might substantially differ from that of other isoforms of the same gene, and therefore that they might perform unrelated functions, or that they might even not correspond to a functional protein. Identifying these cases is obviously relevant for the functional assignment of gene products and for the interpretation of the effect of variations in the corresponding proteins. Here we describe a publicly available tool that, given a gene or a protein, retrieves and analyses all its annotated isoforms, provides users with three-dimensional models of the isoform(s) of his/her interest whenever possible and automatically assesses whether homology derived structural models correspond to plausible structures. This information is clearly relevant. When the homology model of some isoforms of a gene does not seem structurally plausible, the implications are that either they assume a structure unrelated to that of the other isoforms of the same gene with presumably significant functional differences, or do not correspond to functional products. We provide indications that the second hypothesis is likely to be true for a substantial fraction of the cases. http://maistas.bioinformatica.crs4.it/.
ERIC Educational Resources Information Center
Kennelly, Brendan; Flannery, Darragh; Considine, John; Doherty, Edel; Hynes, Stephen
2014-01-01
This paper outlines how a discrete choice experiment (DCE) can be used to learn more about how students are willing to trade off various features of assignments such as the nature and timing of feedback and the method used to submit assignments. A DCE identifies plausible levels of the key attributes of a good or service and then presents the…
NASA Astrophysics Data System (ADS)
Golovin, Y.; Golovin, D.; Klyachko, N.; Majouga, A.; Kabanov, A.
2017-02-01
Various plausible acceleration mechanisms of drug release from nanocarriers composed of a single-domain magnetic nanoparticle core with attached long macromolecule chains, activated by a low-frequency non-heating alternating magnetic field (AMF), are discussed. The most important system characteristics affecting the AMF exposure impact are determined. The impact of several reasonable mechanisms is estimated analytically or obtained using numerical modeling. Some conditions providing manifold release acceleration as a result of exposure to the AMF are found.
NASA Technical Reports Server (NTRS)
Dekorvin, Andre
1992-01-01
The Dempster-Shafer theory of evidence is applied to a multiattribute decision making problem whereby the decision maker (DM) must compromise with available alternatives, none of which exactly satisfies his ideal. The decision mechanism is constrained by the uncertainty inherent in the determination of the relative importance of each attribute element and the classification of existing alternatives. The classification of alternatives is addressed through expert evaluation of the degree to which each element is contained in each available alternative. The relative importance of each attribute element is determined through pairwise comparisons of the elements by the decision maker and implementation of a ratio scale quantification method. Then the 'belief' and 'plausibility' that an alternative will satisfy the decision maker's ideal are calculated and combined to rank order the available alternatives. Application to the problem of selecting computer software is given.
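A minimal sketch of how belief and plausibility are computed from a basic mass assignment in Dempster-Shafer theory, as referenced above; the frame of alternatives and the mass values are invented for illustration, not the software-selection data of the report.

```python
# Frame of discernment: candidate alternatives (hypothetical labels)
frame = {"A", "B", "C"}

# Basic probability assignment over subsets of the frame (illustrative masses summing to 1)
mass = {
    frozenset({"A"}): 0.4,
    frozenset({"A", "B"}): 0.3,
    frozenset({"B", "C"}): 0.2,
    frozenset(frame): 0.1,  # mass assigned to total ignorance
}

def belief(hypothesis):
    # Sum of masses of all focal elements fully contained in the hypothesis
    return sum(m for s, m in mass.items() if s <= hypothesis)

def plausibility(hypothesis):
    # Sum of masses of all focal elements that intersect the hypothesis
    return sum(m for s, m in mass.items() if s & hypothesis)

# Rank the alternatives by combining belief and plausibility (here: their midpoint)
for alt in sorted(frame):
    h = frozenset({alt})
    bel, pl = belief(h), plausibility(h)
    print(f"alternative {alt}: belief={bel:.2f}, plausibility={pl:.2f}, midpoint={(bel + pl) / 2:.2f}")
```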
Using models to manage systems subject to sustainability indicators
Hill, M.C.
2006-01-01
Mathematical and numerical models can provide insight into sustainability indicators using relevant simulated quantities, which are referred to here as predictions. For such models to be useful, many concerns need to be considered. Four are discussed here: (a) the mathematical and numerical accuracy of the model; (b) the accuracy of the data used in model development; (c) the information observations provide about aspects of the model important to predictions of interest, as measured using sensitivity analysis; and (d) the existence of plausible alternative models for a given system. The four issues are illustrated using examples from conservative and transport modelling, and using conceptual arguments. Results suggest that ignoring these issues can produce misleading conclusions.
NASA Astrophysics Data System (ADS)
Maldonado, Solvey; Findeisen, Rolf
2010-06-01
The modeling, analysis, and design of treatment therapies for bone disorders based on the paradigm of force-induced bone growth and adaptation is a challenging task. Mathematical models provide, in comparison to clinical, medical, and biological approaches, a structured alternative framework for understanding the concurrent effects of the multiple factors involved in bone remodeling. To date, few mathematical models describe these complex interactions. The resulting models are complex and difficult to analyze, due to the strong nonlinearities appearing in the equations, the wide range of variability of the states, and the uncertainties in parameters. In this work, we focus on analyzing the effects of changes in model structure and of parameter/input variations on the overall steady-state behavior using systems theoretical methods. Based on a briefly reviewed existing model that describes force-induced bone adaptation, the main objective of this work is to analyze the stationary behavior and to identify plausible treatment targets for remodeling-related bone disorders. Identifying plausible targets can help in the development of optimal treatments combining both physical activity and drug medication. Such treatments help to improve/maintain/restore bone strength, which deteriorates under bone disorder conditions, such as estrogen deficiency.
Strategy for modeling putative multilevel ecosystems on Europa.
Irwin, Louis N; Schulze-Makuch, Dirk
2003-01-01
A general strategy for modeling ecosystems on other worlds is described. Two alternative biospheres beneath the ice surface of Europa are modeled, based on analogous ecosystems on Earth in potentially comparable habitats, with reallocation of biomass quantities consistent with different sources of energy and chemical constituents. The first ecosystem models a benthic biosphere supported by chemoautotrophic producers. The second models two concentrations of biota at the top and bottom of the subsurface water column supported by energy harvested from transmembrane ionic gradients. Calculations indicate the plausibility of both ecosystems, including small macroorganisms at the highest trophic levels, with ionotrophy supporting a larger biomass than chemoautotrophy.
Predictive Inference Using Latent Variables with Covariates*
Schofield, Lynne Steuerle; Junker, Brian; Taylor, Lowell J.; Black, Dan A.
2014-01-01
Plausible Values (PVs) are a standard multiple imputation tool for analysis of large education survey data that measures latent proficiency variables. When latent proficiency is the dependent variable, we reconsider the standard institutionally-generated PV methodology and find it applies with greater generality than shown previously. When latent proficiency is an independent variable, we show that the standard institutional PV methodology produces biased inference because the institutional conditioning model places restrictions on the form of the secondary analysts’ model. We offer an alternative approach that avoids these biases based on the mixed effects structural equations (MESE) model of Schofield (2008). PMID:25231627
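A minimal sketch of the standard way analyses over multiple plausible values are combined (Rubin's multiple-imputation rules); the regression setup and simulated data are assumptions for illustration, not the MESE model of Schofield (2008).

```python
import numpy as np

rng = np.random.default_rng(0)
n, M = 500, 5                      # respondents, plausible values per respondent

x = rng.normal(size=n)             # observed covariate
true_theta = 0.8 * x + rng.normal(size=n)
# Hypothetical plausible values: posterior draws of latent proficiency per respondent
pvs = [true_theta + rng.normal(scale=0.4, size=n) for _ in range(M)]

estimates, variances = [], []
for theta_m in pvs:
    # Analyst's model: regress the plausible value on the covariate (illustrative)
    X = np.column_stack([np.ones(n), x])
    beta, res, *_ = np.linalg.lstsq(X, theta_m, rcond=None)
    sigma2 = res[0] / (n - 2)
    cov = sigma2 * np.linalg.inv(X.T @ X)
    estimates.append(beta[1])
    variances.append(cov[1, 1])

qbar = np.mean(estimates)                    # combined point estimate
ubar = np.mean(variances)                    # within-imputation variance
b = np.var(estimates, ddof=1)                # between-imputation variance
total_var = ubar + (1 + 1 / M) * b           # Rubin's total variance
print(f"slope = {qbar:.3f} +/- {np.sqrt(total_var):.3f}")
```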
Lathrop, R H; Casale, M; Tobias, D J; Marsh, J L; Thompson, L M
1998-01-01
We describe a prototype system (Poly-X) for assisting an expert user in modeling protein repeats. Poly-X reduces the large number of degrees of freedom required to specify a protein motif in complete atomic detail. The result is a small number of parameters that are easily understood by, and under the direct control of, a domain expert. The system was applied to the polyglutamine (poly-Q) repeat in the first exon of huntingtin, the gene implicated in Huntington's disease. We present four poly-Q structural motifs: two poly-Q beta-sheet motifs (parallel and antiparallel) that constitute plausible alternatives to a similar previously published poly-Q beta-sheet motif, and two novel poly-Q helix motifs (alpha-helix and pi-helix). To our knowledge, helical forms of polyglutamine have not been proposed before. The motifs suggest that there may be several plausible aggregation structures for the intranuclear inclusion bodies which have been found in diseased neurons, and may help in the effort to understand the structural basis for Huntington's disease.
On the applicability of STDP-based learning mechanisms to spiking neuron network models
NASA Astrophysics Data System (ADS)
Sboev, A.; Vlasov, D.; Serenko, A.; Rybka, R.; Moloshnikov, I.
2016-11-01
Ways of creating a practically effective learning method for spiking neuron networks, one appropriate for implementation in neuromorphic hardware while remaining based on biologically plausible plasticity rules, namely STDP, are discussed. The influence of the amount of correlation between input and output spike trains on the learnability by different STDP rules is evaluated. The usability of alternative combined learning schemes, involving artificial and spiking neuron models, is demonstrated on the iris benchmark task and on the practical task of gender recognition.
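A minimal sketch of the pair-based STDP rule referenced above: potentiation when a presynaptic spike precedes a postsynaptic spike, depression otherwise. The amplitudes, time constants, and spike trains are generic illustrative values, not those of the study.

```python
import numpy as np

A_plus, A_minus = 0.01, 0.012      # learning amplitudes (illustrative)
tau_plus, tau_minus = 20.0, 20.0   # time constants in ms (illustrative)

def stdp_dw(delta_t):
    """Weight change for one pre/post spike pair; delta_t = t_post - t_pre (ms)."""
    if delta_t >= 0:
        return A_plus * np.exp(-delta_t / tau_plus)    # pre before post: potentiate
    return -A_minus * np.exp(delta_t / tau_minus)       # post before pre: depress

# Accumulate updates over all pre/post spike pairs of one synapse (hypothetical trains)
pre_spikes = np.array([10.0, 50.0, 90.0])
post_spikes = np.array([12.0, 48.0, 100.0])
w = 0.5
for t_pre in pre_spikes:
    for t_post in post_spikes:
        w += stdp_dw(t_post - t_pre)
w = float(np.clip(w, 0.0, 1.0))    # keep the weight within plausible bounds
print(f"updated weight: {w:.3f}")
```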
Phillips, Lawrence; Pearl, Lisa
2015-11-01
The informativity of a computational model of language acquisition is directly related to how closely it approximates the actual acquisition task, sometimes referred to as the model's cognitive plausibility. We suggest that though every computational model necessarily idealizes the modeled task, an informative language acquisition model can aim to be cognitively plausible in multiple ways. We discuss these cognitive plausibility checkpoints generally and then apply them to a case study in word segmentation, investigating a promising Bayesian segmentation strategy. We incorporate cognitive plausibility by using an age-appropriate unit of perceptual representation, evaluating the model output in terms of its utility, and incorporating cognitive constraints into the inference process. Our more cognitively plausible model shows a beneficial effect of cognitive constraints on segmentation performance. One interpretation of this effect is as a synergy between the naive theories of language structure that infants may have and the cognitive constraints that limit the fidelity of their inference processes, where less accurate inference approximations are better when the underlying assumptions about how words are generated are less accurate. More generally, these results highlight the utility of incorporating cognitive plausibility more fully into computational models of language acquisition. Copyright © 2015 Cognitive Science Society, Inc.
Counterfactual Plausibility and Comparative Similarity.
Stanley, Matthew L; Stewart, Gregory W; Brigard, Felipe De
2017-05-01
Counterfactual thinking involves imagining hypothetical alternatives to reality. Philosopher David Lewis (1973, 1979) argued that people estimate the subjective plausibility that a counterfactual event might have occurred by comparing an imagined possible world in which the counterfactual statement is true against the current, actual world in which the counterfactual statement is false. Accordingly, counterfactuals considered to be true in possible worlds comparatively more similar to ours are judged as more plausible than counterfactuals deemed true in possible worlds comparatively less similar. Although Lewis did not originally develop his notion of comparative similarity to be investigated as a psychological construct, this study builds upon his idea to empirically investigate comparative similarity as a possible psychological strategy for evaluating the perceived plausibility of counterfactual events. More specifically, we evaluate judgments of comparative similarity between episodic memories and episodic counterfactual events as a factor influencing people's judgments of plausibility in counterfactual simulations, and we also compare it against other factors thought to influence judgments of counterfactual plausibility, such as ease of simulation and prior simulation. Our results suggest that the greater the perceived similarity between the original memory and the episodic counterfactual event, the greater the perceived plausibility that the counterfactual event might have occurred. While similarity between actual and counterfactual events, ease of imagining, and prior simulation of the counterfactual event were all significantly related to counterfactual plausibility, comparative similarity best captured the variance in ratings of counterfactual plausibility. Implications for existing theories on the determinants of counterfactual plausibility are discussed. Copyright © 2016 Cognitive Science Society, Inc.
Pathways Between Marriage and Parenting for Wives and Husbands: The Role of Coparenting
Morrill, Melinda
2016-01-01
As family systems research has expanded, so have investigations into how marital partners coparent together. Although coparenting research has increasingly found support for the influential role of coparenting on both marital relationships and parenting practices, coparenting has traditionally been investigated as part of an indirect system which begins with marital health, is mediated by coparenting processes, and then culminates in each partner's parenting. The field has not tested how this traditional model compares to the equally plausible alternative model in which coparenting simultaneously predicts both marital relationships and parenting practices. Furthermore, statistical and practical limitations have typically resulted in only one parent being analyzed in these models. This study used model-fitting analyses to include both wives and husbands in a test of these two alternative models of the role of coparenting in the family system. Our data suggested that both the traditional indirect model (marital health to coparenting to parenting practices), and the alternative predictor model where coparenting alliance directly and simultaneously predicts marital health and parenting practices, fit for both spouses. This suggests that dynamic and multiple roles may be played by coparenting in the overall family system, and raises important practical implications for family clinicians. PMID:20377635
Bowers, Jeffrey S
2009-01-01
A fundamental claim associated with parallel distributed processing (PDP) theories of cognition is that knowledge is coded in a distributed manner in mind and brain. This approach rejects the claim that knowledge is coded in a localist fashion, with words, objects, and simple concepts (e.g., "dog") each coded by their own dedicated representations. One of the putative advantages of this approach is that the theories are biologically plausible. Indeed, advocates of the PDP approach often highlight the close parallels between distributed representations learned in connectionist models and neural coding in the brain, and often dismiss localist (grandmother cell) theories as biologically implausible. The author reviews a range of data that strongly challenge this claim and shows that localist models provide a better account of single-cell recording studies. The author also contrasts local and alternative distributed coding schemes (sparse and coarse coding) and argues that the common rejection of grandmother cell theories in neuroscience is due to a misunderstanding about how localist models behave. The author concludes that the localist representations embedded in theories of perception and cognition are consistent with neuroscience; biology only calls into question the distributed representations often learned in PDP models.
Stringent and efficient assessment of boson-sampling devices.
Tichy, Malte C; Mayer, Klaus; Buchleitner, Andreas; Mølmer, Klaus
2014-07-11
Boson sampling holds the potential to experimentally falsify the extended Church-Turing thesis. The computational hardness of boson sampling, however, complicates the certification that an experimental device yields correct results in the regime in which it outmatches classical computers. To certify a boson sampler, one needs to verify quantum predictions and rule out models that yield these predictions without true many-boson interference. We show that a semiclassical model for many-boson propagation reproduces coarse-grained observables that are proposed as witnesses of boson sampling. A test based on Fourier matrices is demonstrated to falsify physically plausible alternatives to coherent many-boson propagation.
Fault offsets and lateral crustal movement on Europa - Evidence for a mobile ice shell
NASA Technical Reports Server (NTRS)
Schenk, Paul M.; Mckinnon, William B.
1989-01-01
An examination is conducted of Europa's cross-cutting structural relationships between various lineament types, in order to constrain the type of structure involved in each such case and, where possible, to also constrain the degree of extension across the lineaments. Evidence is adduced for significant lateral crustal movement, allowing alternative models and mechanisms for lineament formation to be discussed, as well as plausible lithospheric and crustal models. The question as to whether any of the water-ice layer has been, or currently is, liquid, is also treated in light of the evidence obtained.
Biologically plausible particulate air pollution mortality concentration-response functions.
Roberts, Steven
2004-01-01
In this article I introduce an alternative method for estimating particulate air pollution mortality concentration-response functions. This method constrains the particulate air pollution mortality concentration-response function to be biologically plausible--that is, a non-decreasing function of the particulate air pollution concentration. Using time-series data from Cook County, Illinois, the proposed method yields more meaningful particulate air pollution mortality concentration-response function estimates with an increase in statistical accuracy. PMID:14998745
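A minimal sketch of estimating a non-decreasing (biologically plausible) concentration-response curve, using scikit-learn's isotonic regression as a stand-in for the article's method; the simulated PM10 and mortality data are assumptions for illustration.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(1)
pm10 = np.sort(rng.uniform(5, 80, 200))                       # daily PM10 concentrations (ug/m3, simulated)
excess_mortality = 0.002 * pm10 + rng.normal(0, 0.03, 200)    # noisy mortality increments (simulated)

# Constrain the fitted concentration-response function to be non-decreasing
iso = IsotonicRegression(increasing=True, out_of_bounds="clip")
fitted = iso.fit_transform(pm10, excess_mortality)

# Unlike an unconstrained smoother, the fitted curve cannot dip with rising concentration
print("monotone non-decreasing:", bool(np.all(np.diff(fitted) >= 0)))
print("fitted response at lowest/highest concentrations:", fitted[0], fitted[-1])
```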
Hao, M; He, X; Lan, N
2012-01-01
It has been shown that normal cyclic movement of the human arm and resting limb tremor in Parkinson's disease (PD) are associated with oscillatory neuronal activities in different cerebral networks, which are transmitted to the antagonistic muscles via the same spinal pathway. There are mono-synaptic and multi-synaptic corticospinal pathways for conveying motor commands. This study investigates the plausible role of the propriospinal neuronal (PN) network at the C3-C4 levels in multi-synaptic transmission of cortical commands for oscillatory movements. A PN network model is constructed based on known neurophysiological connections, and is hypothesized to achieve the conversion of cortical oscillations into alternating antagonistic muscle bursts. Simulations performed with a virtual arm (VA) model indicate that without the PN network, the alternating bursts of antagonistic muscle EMG could not be reliably generated, whereas with the PN network, the alternating pattern of bursts was naturally displayed in the three pairs of antagonist muscles. Thus, it is suggested that oscillations in the primary motor cortex (M1) of single and double tremor frequencies are processed at the PN network to compute the alternating burst pattern in the flexor and extensor muscles.
Bayesian learning and the psychology of rule induction
Endress, Ansgar D.
2014-01-01
In recent years, Bayesian learning models have been applied to an increasing variety of domains. While such models have been criticized on theoretical grounds, the underlying assumptions and predictions are rarely made concrete and tested experimentally. Here, I use Frank and Tenenbaum's (2011) Bayesian model of rule-learning as a case study to spell out the underlying assumptions, and to confront them with the empirical results Frank and Tenenbaum (2011) propose to simulate, as well as with novel experiments. While rule-learning is arguably well suited to rational Bayesian approaches, I show that their models are neither psychologically plausible nor ideal observer models. Further, I show that their central assumption is unfounded: humans do not always preferentially learn more specific rules, but, at least in some situations, those rules that happen to be more salient. Even when granting the unsupported assumptions, I show that all of the experiments modeled by Frank and Tenenbaum (2011) either contradict their models, or have a large number of more plausible interpretations. I provide an alternative account of the experimental data based on simple psychological mechanisms, and show that this account both describes the data better, and is easier to falsify. I conclude that, despite the recent surge in Bayesian models of cognitive phenomena, psychological phenomena are best understood by developing and testing psychological theories rather than models that can be fit to virtually any data. PMID:23454791
Negotiating plausibility: intervening in the future of nanotechnology.
Selin, Cynthia
2011-12-01
The national-level scenarios project NanoFutures focuses on the social, political, economic, and ethical implications of nanotechnology, and is initiated by the Center for Nanotechnology in Society at Arizona State University (CNS-ASU). The project involves novel methods for the development of plausible visions of nanotechnology-enabled futures, elucidates public preferences for various alternatives, and, using such preferences, helps refine future visions for research and outreach. In doing so, the NanoFutures project aims to address a central question: how to deliberate the social implications of an emergent technology whose outcomes are not known. The solution pursued by the NanoFutures project is twofold. First, NanoFutures limits speculation about the technology to plausible visions. This ambition introduces a host of concerns about the limits of prediction, the nature of plausibility, and how to establish plausibility. Second, it subjects these visions to democratic assessment by a range of stakeholders, thus raising methodological questions as to who are relevant stakeholders and how to activate different communities so as to engage the far future. This article makes the dilemmas posed by decisions about such methodological issues transparent and therefore articulates the role of plausibility in anticipatory governance.
Jing, Helen G; Madore, Kevin P; Schacter, Daniel L
2017-12-01
A critical adaptive feature of future thinking involves the ability to generate alternative versions of possible future events. However, little is known about the nature of the processes that support this ability. Here we examined whether an episodic specificity induction - brief training in recollecting details of a recent experience that selectively impacts tasks that draw on episodic retrieval - (1) boosts alternative event generation and (2) changes one's initial perceptions of negative future events. In Experiment 1, an episodic specificity induction significantly increased the number of alternative positive outcomes that participants generated to a series of standardized negative events, compared with a control induction not focused on episodic specificity. We also observed larger decreases in the perceived plausibility and negativity of the original events in the specificity condition, where participants generated more alternative outcomes, relative to the control condition. In Experiment 2, we replicated and extended these findings using a series of personalized negative events. Our findings support the idea that episodic memory processes are involved in generating alternative outcomes to anticipated future events, and that boosting the number of alternative outcomes is related to subsequent changes in the perceived plausibility and valence of the original events, which may have implications for psychological well-being. Published by Elsevier B.V.
Norman, Laura M.; Feller, Mark; Villarreal, Miguel L.
2012-01-01
The SLEUTH urban growth model is applied to a binational dryland watershed to envision and evaluate plausible future scenarios of land use change into the year 2050. Our objective was to create a suite of geospatial footprints portraying potential land use change that can be used to aid binational decision-makers in assessing the impacts relative to sustainability of natural resources and potential socio-ecological consequences of proposed land-use management. Three alternatives are designed to simulate different conditions: (i) a Current Trends Scenario of unmanaged exponential growth, (ii) a Conservation Scenario with managed growth to protect the environment, and (iii) a Megalopolis Scenario in which growth is accentuated around a defined international trade corridor. The model was calibrated with historical data extracted from a time series of satellite images. Model materials, methodology, and results are presented. Our Current Trends Scenario predicts the footprint of urban growth to approximately triple from 2009 to 2050, which is corroborated by local population estimates. The Conservation Scenario results in protecting 46% more of the Evergreen class (more than 150,000 acres) than the Current Trends Scenario and approximately 95,000 acres of Barren Land, Crops, Deciduous Forest (Mesquite Bosque), Grassland/Herbaceous, Urban/Recreational Grasses, and Wetlands classes combined. The Megalopolis Scenario results also depict the preservation of some of these land-use classes compared to the Current Trends Scenario, most notably in the environmentally important headwaters region. Connectivity and areal extent of land cover types that provide wildlife habitat were preserved under the alternative scenarios when compared to Current Trends.
ELM control with RMP: plasma response models and the role of edge peeling response
NASA Astrophysics Data System (ADS)
Liu, Yueqiang; Ham, C. J.; Kirk, A.; Li, Li; Loarte, A.; Ryan, D. A.; Sun, Youwen; Suttrop, W.; Yang, Xu; Zhou, Lina
2016-11-01
Resonant magnetic perturbations (RMP) have extensively been demonstrated as a plausible technique for mitigating or suppressing large edge localized modes (ELMs). Associated with this is a substantial amount of theory and modelling efforts during recent years. Various models describing the plasma response to the RMP fields have been proposed in the literature, and are briefly reviewed in this work. Despite their simplicity, linear response models can provide alternative criteria, than the vacuum field based criteria, for guiding the choice of the coil configurations to achieve the best control of ELMs. The role of the edge peeling response to the RMP fields is illustrated as a key indicator for the ELM mitigation in low collisionality plasmas, in various tokamak devices.
Smith, Kenneth J
2010-04-01
Conley and You assessed the plausibility of three alternative model specifications of the relations between role stressors (i.e., role conflict, role ambiguity, and role overload) and organizational commitment, satisfaction, and turnover intentions among a sample of 178 teachers employed in four Southern California high schools. Using structural equations modeling procedures to evaluate their data, the authors reported the best fit for their "fully mediated effects" model wherein there was a "strong causal path from role ambiguity and role conflict --> satisfaction --> commitment --> intentions to leave" (p. 781). This note addresses methodological issues with the present study and provides suggestions for follow-up efforts designed to replicate and/or extend this line of research.
A biologically plausible computational model for auditory object recognition.
Larson, Eric; Billimoria, Cyrus P; Sen, Kamal
2009-01-01
Object recognition is a task of fundamental importance for sensory systems. Although this problem has been intensively investigated in the visual system, relatively little is known about the recognition of complex auditory objects. Recent work has shown that spike trains from individual sensory neurons can be used to discriminate between and recognize stimuli. Multiple groups have developed spike similarity or dissimilarity metrics to quantify the differences between spike trains. Using a nearest-neighbor approach, the spike similarity metrics can be used to assign spike trains to the groups of stimuli used to evoke them. The nearest prototype spike train to the tested spike train can then be used to identify the stimulus. However, how biological circuits might perform such computations remains unclear. Elucidating this question would facilitate the experimental search for such circuits in biological systems, as well as the design of artificial circuits that can perform such computations. Here we present a biologically plausible model for discrimination inspired by a spike distance metric using a network of integrate-and-fire model neurons coupled to a decision network. We then apply this model to the birdsong system in the context of song discrimination and recognition. We show that the model circuit is effective at recognizing individual songs, based on experimental input data from field L, the avian primary auditory cortex analog. We also compare the performance and robustness of this model to two alternative models of song discrimination: a model based on coincidence detection and a model based on firing rate.
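A minimal sketch of the nearest-prototype scheme described above, using a van Rossum-style spike-train distance on synthetic spike times; the actual model uses integrate-and-fire neurons coupled to a decision network, so this is only an illustration of the underlying classification idea.

```python
import numpy as np

def filtered_trace(spike_times, t, tau=5.0):
    """Convolve a spike train with an exponential kernel (van Rossum-style)."""
    trace = np.zeros_like(t)
    for s in spike_times:
        trace += np.where(t >= s, np.exp(-(t - s) / tau), 0.0)
    return trace

def spike_distance(train_a, train_b, t):
    diff = filtered_trace(train_a, t) - filtered_trace(train_b, t)
    return np.sqrt(np.trapz(diff**2, t))

t = np.linspace(0, 200, 2001)             # time axis in ms
prototypes = {                             # one prototype response per song (synthetic)
    "song1": np.array([20.0, 60.0, 110.0, 160.0]),
    "song2": np.array([35.0, 80.0, 140.0]),
}
test_train = np.array([22.0, 58.0, 112.0, 158.0])   # noisy response to song1

# Nearest-prototype classification: assign the test train to the closest prototype
label = min(prototypes, key=lambda k: spike_distance(test_train, prototypes[k], t))
print("recognized:", label)
```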
Quantitative structure - mesothelioma potency model ...
Cancer potencies of mineral and synthetic elongated particle (EP) mixtures, including asbestos fibers, are influenced by changes in fiber dose composition, bioavailability, and biodurability in combination with relevant cytotoxic dose-response relationships. A unique and comprehensive rat intra-pleural (IP) dose characterization data set with a wide variety of EP size, shape, crystallographic, chemical, and bio-durability properties facilitated extensive statistical analyses of 50 rat IP exposure test results for evaluation of alternative dose pleural mesothelioma response models. Utilizing logistic regression and maximum likelihood evaluations of thousands of alternative dose metrics based on hundreds of individual EP dimensional variations within each test sample, four major findings emerged: (1) data for simulations of short-term EP dose changes in vivo (mild acid leaching) provide superior predictions of tumor incidence compared to non-acid leached data; (2) sum of the EP surface areas (ΣSA) from these mildly acid-leached samples provides the optimum holistic dose response model; (3) progressive removal of dose associated with very short and/or thin EPs significantly degrades resultant ΣEP or ΣSA dose-based predictive model fits, as judged by Akaike’s Information Criterion (AIC); and (4) alternative, biologically plausible model adjustments provide evidence for reduced potency of EPs with length/width (aspect) ratios 80 µm. Regar
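A minimal sketch, in the spirit of the analysis described, of comparing alternative dose metrics by fitting logistic tumor-incidence models and ranking them with AIC; the simulated dose values, outcomes, and the use of statsmodels are assumptions for illustration, not the study's data or code.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 50                                                         # IP exposure tests (illustrative)
surface_area = rng.lognormal(mean=2.0, sigma=0.8, size=n)      # candidate dose metric 1 (simulated)
fiber_count = rng.lognormal(mean=5.0, sigma=0.8, size=n)       # candidate dose metric 2 (simulated)
p = 1 / (1 + np.exp(-(0.6 * np.log(surface_area) - 1.0)))      # tumours driven by surface area here
tumor = rng.binomial(1, p)

def fit_aic(dose):
    """Fit a logistic dose-response model and return its AIC."""
    X = sm.add_constant(np.log(dose))
    return sm.Logit(tumor, X).fit(disp=0).aic

print("AIC, surface-area metric:", round(fit_aic(surface_area), 1))
print("AIC, fiber-count metric: ", round(fit_aic(fiber_count), 1))
# The dose metric with the lower AIC gives the better-supported dose-response model
```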
GPS constraints on M 7-8 earthquake recurrence times for the New Madrid seismic zone
Stuart, W.D.
2001-01-01
Newman et al. (1999) estimate the time interval between the 1811-1812 earthquake sequence near New Madrid, Missouri and a future similar sequence to be at least 2,500 years, an interval significantly longer than other recently published estimates. To calculate the recurrence time, they assume that slip on a vertical half-plane at depth contributes to the current interseismic motion of GPS benchmarks. Compared to other plausible fault models, the half-plane model gives nearly the maximum rate of ground motion for the same interseismic slip rate. Alternative models with smaller interseismic fault slip area can satisfy the present GPS data by having higher slip rate and thus can have earthquake recurrence times much less than 2,500 years.
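A minimal sketch of the classic buried screw-dislocation (Savage-Burford) model commonly used to relate interseismic slip at depth to surface GPS velocities; the slip rates and locking depths are illustrative numbers, and this simple strike-slip geometry is only a stand-in for the alternative fault models discussed.

```python
import numpy as np

def interseismic_velocity(x_km, slip_rate_mm_yr, locking_depth_km):
    """Fault-parallel surface velocity at distance x from a fault slipping at
    rate s below locking depth D: v(x) = (s / pi) * arctan(x / D)."""
    return (slip_rate_mm_yr / np.pi) * np.arctan(x_km / locking_depth_km)

x = np.linspace(-100, 100, 9)   # benchmark distances from the fault (km)

# Two alternative parameter combinations; different slip rates and geometries can
# predict broadly similar benchmark velocities, which is the trade-off noted above
model_a = interseismic_velocity(x, slip_rate_mm_yr=2.0, locking_depth_km=10.0)
model_b = interseismic_velocity(x, slip_rate_mm_yr=1.0, locking_depth_km=30.0)
for xi, va, vb in zip(x, model_a, model_b):
    print(f"x={xi:6.1f} km  model A: {va:6.3f} mm/yr  model B: {vb:6.3f} mm/yr")
```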
ERIC Educational Resources Information Center
Gracia, Lidamar M.
2015-01-01
Hiring alternative certified teachers has been a plausible solution to meeting the growing needs for highly qualified teachers in the classroom. Not only is there a need for teachers, there is also a need for strong leadership in education. Newly hired teachers must not only be prepared to tackle the everyday issues and responsibilities of the…
NASA Astrophysics Data System (ADS)
Acharya, S.; Kaplan, D. A.; Casey, S.; Cohen, M. J.; Jawitz, J. W.
2015-05-01
Self-organized landscape patterning can arise in response to multiple processes. Discriminating among alternative patterning mechanisms, particularly where experimental manipulations are untenable, requires process-based models. Previous modeling studies have attributed patterning in the Everglades (Florida, USA) to sediment redistribution and anisotropic soil hydraulic properties. In this work, we tested an alternate theory, the self-organizing-canal (SOC) hypothesis, by developing a cellular automata model that simulates pattern evolution via local positive feedbacks (i.e., facilitation) coupled with a global negative feedback based on hydrology. The model is forced by global hydroperiod that drives stochastic transitions between two patch types: ridge (higher elevation) and slough (lower elevation). We evaluated model performance using multiple criteria based on six statistical and geostatistical properties observed in reference portions of the Everglades landscape: patch density, patch anisotropy, semivariogram ranges, power-law scaling of ridge areas, perimeter area fractal dimension, and characteristic pattern wavelength. Model results showed strong statistical agreement with reference landscapes, but only when anisotropically acting local facilitation was coupled with hydrologic global feedback, for which several plausible mechanisms exist. Critically, the model correctly generated fractal landscapes that had no characteristic pattern wavelength, supporting the invocation of global rather than scale-specific negative feedbacks.
NASA Astrophysics Data System (ADS)
Acharya, S.; Kaplan, D. A.; Casey, S.; Cohen, M. J.; Jawitz, J. W.
2015-01-01
Self-organized landscape patterning can arise in response to multiple processes. Discriminating among alternative patterning mechanisms, particularly where experimental manipulations are untenable, requires process-based models. Previous modeling studies have attributed patterning in the Everglades (Florida, USA) to sediment redistribution and anisotropic soil hydraulic properties. In this work, we tested an alternate theory, the self-organizing canal (SOC) hypothesis, by developing a cellular automata model that simulates pattern evolution via local positive feedbacks (i.e., facilitation) coupled with a global negative feedback based on hydrology. The model is forced by global hydroperiod that drives stochastic transitions between two patch types: ridge (higher elevation) and slough (lower elevation). We evaluated model performance using multiple criteria based on six statistical and geostatistical properties observed in reference portions of the Everglades landscape: patch density, patch anisotropy, semivariogram ranges, power-law scaling of ridge areas, perimeter area fractal dimension, and characteristic pattern wavelength. Model results showed strong statistical agreement with reference landscapes, but only when anisotropically acting local facilitation was coupled with hydrologic global feedback, for which several plausible mechanisms exist. Critically, the model correctly generated fractal landscapes that had no characteristic pattern wavelength, supporting the invocation of global rather than scale-specific negative feedbacks.
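A minimal sketch of the kind of cellular automaton described above: stochastic ridge/slough transitions driven by anisotropic local facilitation plus a global hydrologic negative feedback. The neighborhood weights, transition rates, and target ridge fraction are invented for illustration and are not the calibrated SOC model.

```python
import numpy as np

rng = np.random.default_rng(3)
nrow, ncol = 100, 100
grid = (rng.random((nrow, ncol)) < 0.5).astype(int)   # 1 = ridge, 0 = slough
target_ridge_fraction = 0.5                            # stand-in for the hydroperiod constraint

def neighbor_ridge_fraction(g):
    """Anisotropic local facilitation: neighbors along the flow axis count twice."""
    total = 2.0 * (np.roll(g, 1, axis=0) + np.roll(g, -1, axis=0))   # along-flow
    total += np.roll(g, 1, axis=1) + np.roll(g, -1, axis=1)          # across-flow
    return total / 6.0

for step in range(200):
    local = neighbor_ridge_fraction(grid)
    # Global negative feedback: deviation from the target ridge fraction shifts both rates
    global_term = target_ridge_fraction - grid.mean()
    p_to_ridge = np.clip(0.05 + 0.3 * local + 0.5 * global_term, 0, 1)
    p_to_slough = np.clip(0.05 + 0.3 * (1 - local) - 0.5 * global_term, 0, 1)
    r = rng.random(grid.shape)
    # Slough cells become ridge with p_to_ridge; ridge cells revert with p_to_slough
    grid = np.where(grid == 0, (r < p_to_ridge).astype(int),
                    (r >= p_to_slough).astype(int))

print("final ridge fraction:", grid.mean())
```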
The Universal Plausibility Metric (UPM) & Principle (UPP).
Abel, David L
2009-12-03
Mere possibility is not an adequate basis for asserting scientific plausibility. A precisely defined universal bound is needed beyond which the assertion of plausibility, particularly in life-origin models, can be considered operationally falsified. But can something so seemingly relative and subjective as plausibility ever be quantified? Amazingly, the answer is, "Yes." A method of objectively measuring the plausibility of any chance hypothesis (The Universal Plausibility Metric [UPM]) is presented. A numerical inequality is also provided whereby any chance hypothesis can be definitively falsified when its UPM metric of ξ is < 1 (The Universal Plausibility Principle [UPP]). Both UPM and UPP pre-exist and are independent of any experimental design and data set. No low-probability hypothetical plausibility assertion should survive peer-review without subjection to the UPP inequality standard of formal falsification (ξ < 1).
ERIC Educational Resources Information Center
Phillips, Lawrence; Pearl, Lisa
2015-01-01
The informativity of a computational model of language acquisition is directly related to how closely it approximates the actual acquisition task, sometimes referred to as the model's "cognitive plausibility." We suggest that though every computational model necessarily idealizes the modeled task, an informative language acquisition…
What if? Neural activity underlying semantic and episodic counterfactual thinking.
Parikh, Natasha; Ruzic, Luka; Stewart, Gregory W; Spreng, R Nathan; De Brigard, Felipe
2018-05-25
Counterfactual thinking (CFT) is the process of mentally simulating alternative versions of known facts. In the past decade, cognitive neuroscientists have begun to uncover the neural underpinnings of CFT, particularly episodic CFT (eCFT), which activates regions in the default network (DN) also activated by episodic memory (eM) recall. However, the engagement of DN regions is different for distinct kinds of eCFT. More plausible counterfactuals and counterfactuals about oneself show stronger activity in DN regions compared to implausible and other- or object-focused counterfactuals. The current study sought to identify a source for this difference in DN activity. Specifically, self-focused counterfactuals may also be more plausible, suggesting that DN core regions are sensitive to the plausibility of a simulation. On the other hand, plausible and self-focused counterfactuals may involve more episodic information than implausible and other-focused counterfactuals, which would imply DN sensitivity to episodic information. In the current study, we compared episodic and semantic counterfactuals generated to be plausible or implausible against episodic and semantic memory reactivation using fMRI. Taking multivariate and univariate approaches, we found that the DN is engaged more during episodic simulations, including eM and all eCFT, than during semantic simulations. Semantic simulations engaged more inferior temporal and lateral occipital regions. The only region that showed strong plausibility effects was the hippocampus, which was significantly engaged for implausible CFT but not for plausible CFT, suggestive of binding more disparate information. Consequences of these findings for the cognitive neuroscience of mental simulation are discussed. Published by Elsevier Inc.
Robustness of Reconstructed Ancestral Protein Functions to Statistical Uncertainty.
Eick, Geeta N; Bridgham, Jamie T; Anderson, Douglas P; Harms, Michael J; Thornton, Joseph W
2017-02-01
Hypotheses about the functions of ancient proteins and the effects of historical mutations on them are often tested using ancestral protein reconstruction (APR)-phylogenetic inference of ancestral sequences followed by synthesis and experimental characterization. Usually, some sequence sites are ambiguously reconstructed, with two or more statistically plausible states. The extent to which the inferred functions and mutational effects are robust to uncertainty about the ancestral sequence has not been studied systematically. To address this issue, we reconstructed ancestral proteins in three domain families that have different functions, architectures, and degrees of uncertainty; we then experimentally characterized the functional robustness of these proteins when uncertainty was incorporated using several approaches, including sampling amino acid states from the posterior distribution at each site and incorporating the alternative amino acid state at every ambiguous site in the sequence into a single "worst plausible case" protein. In every case, qualitative conclusions about the ancestral proteins' functions and the effects of key historical mutations were robust to sequence uncertainty, with similar functions observed even when scores of alternate amino acids were incorporated. There was some variation in quantitative descriptors of function among plausible sequences, suggesting that experimentally characterizing robustness is particularly important when quantitative estimates of ancient biochemical parameters are desired. The worst plausible case method appears to provide an efficient strategy for characterizing the functional robustness of ancestral proteins to large amounts of sequence uncertainty. Sampling from the posterior distribution sometimes produced artifactually nonfunctional proteins for sequences reconstructed with substantial ambiguity. © The Author 2016. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.
Radically questioning the principle of the least restrictive alternative: a reply to Nir Eyal
Saghai, Yashar
2014-01-01
In his insightful editorial, Nir Eyal explores the connections between nudging and shaming. One upshot of his argument is that we should question the principle of the least restrictive alternative in public health and health policy. In this commentary, I maintain that Eyal’s argument undermines only a rather implausible version of the principle of the least restrictive alternative and I sketch two reasons for rejecting the mainstream and more plausible version of this principle. PMID:25396212
Between cheap and costly signals: the evolution of partially honest communication.
Zollman, Kevin J S; Bergstrom, Carl T; Huttegger, Simon M
2013-01-07
Costly signalling theory has become a common explanation for honest communication when interests conflict. In this paper, we provide an alternative explanation for partially honest communication that does not require significant signal costs. We show that this alternative is at least as plausible as traditional costly signalling, and we suggest a number of experiments that might be used to distinguish the two theories.
The Universal Plausibility Metric (UPM) & Principle (UPP)
2009-01-01
Background Mere possibility is not an adequate basis for asserting scientific plausibility. A precisely defined universal bound is needed beyond which the assertion of plausibility, particularly in life-origin models, can be considered operationally falsified. But can something so seemingly relative and subjective as plausibility ever be quantified? Amazingly, the answer is, "Yes." A method of objectively measuring the plausibility of any chance hypothesis (The Universal Plausibility Metric [UPM]) is presented. A numerical inequality is also provided whereby any chance hypothesis can be definitively falsified when its UPM metric of ξ is < 1 (The Universal Plausibility Principle [UPP]). Both UPM and UPP pre-exist and are independent of any experimental design and data set. Conclusion No low-probability hypothetical plausibility assertion should survive peer-review without subjection to the UPP inequality standard of formal falsification (ξ < 1). PMID:19958539
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hiser, C.; McIntosh, L.
The rise in alternative respiratory capacity upon aging of potato (Solanum tuberosum) tuber slices is correlated with changes in mitochondrial membrane protein composition and a requirement for cytoplasmic protein synthesis. However, the lack of an antibody specific to the alternative oxidase has, until recently, prevented examination of the alternative oxidase protein(s) itself. We have employed a monoclonal antibody raised against the Sauromatum guttatum alternative oxidase to investigate developmental changes in the alternative pathway of aging potato slice mitochondria and to characterize the potato alternative oxidase by one- and two-dimensional gel electrophoresis. The relative levels of a 36 kilodalton protein parallel the rise in alternative path capacity. A plausible interpretation is that this alternative oxidase protein is synthesized de novo during aging of potato slices.
Flood risk assessment and robust management under deep uncertainty: Application to Dhaka City
NASA Astrophysics Data System (ADS)
Mojtahed, Vahid; Gain, Animesh Kumar; Giupponi, Carlo
2014-05-01
Socio-economic and climatic changes have been the main drivers of uncertainty in environmental risk assessment, and in flood risk assessment in particular. The level of future uncertainty that researchers face when dealing with problems in a future perspective with a focus on climate change is known as deep uncertainty (also known as Knightian uncertainty): nobody has experienced those changes before, our knowledge is limited to the extent that we have no notion of probabilities, and consolidated risk management approaches therefore have limited potential. Deep uncertainty refers to circumstances in which analysts and experts do not know, or parties to decision making cannot agree on, (i) the appropriate models describing the interaction among system variables, (ii) the probability distributions to represent uncertainty about key parameters in the model, and (iii) how to value the desirability of alternative outcomes. The need thus emerges to assist policy-makers by providing them not with a single, optimal solution to the problem at hand, such as crisp estimates for the costs of damages of the natural hazards considered, but instead with ranges of possible future costs, based on the outcomes of ensembles of assessment models and sets of plausible scenarios. Accordingly, we need to substitute optimality as a decision criterion with robustness. Under conditions of deep uncertainty, decision-makers do not have statistical and mathematical bases to identify optimal solutions; instead they should prefer to implement "robust" decisions that perform relatively well over all conceivable outcomes across unknown future scenarios. Under deep uncertainty, analysts cannot employ probability theory or other statistics that are usually derived from observed historical data, and we therefore turn to non-statistical measures such as scenario analysis. We construct several plausible scenarios, with each scenario being a full description of what may happen in the future, based on a meaningful synthesis of parameter values with control of their correlations to maintain internal consistency. This paper aims at incorporating a set of data mining and sampling tools to assess uncertainty of model outputs under future climatic and socio-economic changes for Dhaka city and providing a decision support system for robust flood management and mitigation policies. After constructing an uncertainty matrix to identify the main sources of uncertainty for Dhaka City, we identify several hazard and vulnerability maps based on future climatic and socio-economic scenarios. The vulnerability of each flood management alternative under different sets of scenarios is determined, and finally the robustness of each plausible solution considered is defined based on the above assessment.
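As a hedged sketch of what "robustness rather than optimality" can mean computationally, the snippet below applies a standard minimax-regret screen across plausible scenarios; the alternatives, scenarios, and cost numbers are invented and are not the Dhaka case-study values.

```python
import numpy as np

# rows: flood-management alternatives, columns: plausible future scenarios
# entries: total cost (damage plus intervention) under that scenario (illustrative numbers)
costs = np.array([
    [10.0, 14.0, 30.0],   # do nothing
    [12.0, 13.0, 18.0],   # embankment upgrade
    [15.0, 15.0, 16.0],   # embankment upgrade plus drainage improvement
])

regret = costs - costs.min(axis=0)         # regret relative to the best choice per scenario
worst_regret = regret.max(axis=1)          # each alternative's worst-case regret
robust_choice = int(worst_regret.argmin()) # minimax-regret ("robust") alternative
print("worst-case regrets:", worst_regret, "-> choose alternative", robust_choice)
```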
FORMAL SCENARIO DEVELOPMENT FOR ENVIRONMENTAL IMPACT ASSESSMENT STUDIES
Scenario analysis is a process of evaluating possible future events through the consideration of alternative plausible (though not equally likely) outcomes (scenarios). The analysis is designed to enable improved decision-making and assessment through a more rigorous evaluation o...
Characterizing Decision-Analysis Performances of Risk Prediction Models Using ADAPT Curves.
Lee, Wen-Chung; Wu, Yun-Chun
2016-01-01
The area under the receiver operating characteristic curve is a widely used index to characterize the performance of diagnostic tests and prediction models. However, the index does not explicitly acknowledge the utilities of risk predictions. Moreover, for most clinical settings, what counts is whether a prediction model can guide therapeutic decisions in a way that improves patient outcomes, rather than to simply update probabilities. Based on decision theory, the authors propose an alternative index, the "average deviation about the probability threshold" (ADAPT). An ADAPT curve (a plot of ADAPT value against the probability threshold) neatly characterizes the decision-analysis performances of a risk prediction model. Several prediction models can be compared for their ADAPT values at a chosen probability threshold, for a range of plausible threshold values, or for the whole ADAPT curves. This should greatly facilitate the selection of diagnostic tests and prediction models.
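The abstract names the index but does not define it, so the sketch below only shows the curve-tracing scaffold: a placeholder `adapt_value` (here the mean absolute deviation of predicted risks from the threshold, a guess from the name rather than the authors' formula) evaluated over a grid of probability thresholds.

```python
import numpy as np

def adapt_value(pred_risk: np.ndarray, threshold: float) -> float:
    """Placeholder deviation measure; the published definition should be substituted here."""
    return float(np.mean(np.abs(pred_risk - threshold)))

pred_risk = np.array([0.05, 0.12, 0.30, 0.42, 0.58, 0.71, 0.86])  # illustrative model output
thresholds = np.linspace(0.05, 0.95, 19)
curve = [(t, adapt_value(pred_risk, t)) for t in thresholds]       # one point per threshold
for t, v in curve[:3]:
    print(f"threshold {t:.2f}: ADAPT-like value {v:.3f}")
```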
Leibo, Joel Z.; Liao, Qianli; Freiwald, Winrich A.; Anselmi, Fabio; Poggio, Tomaso
2017-01-01
SUMMARY The primate brain contains a hierarchy of visual areas, dubbed the ventral stream, which rapidly computes object representations that are both specific for object identity and robust against identity-preserving transformations like depth-rotations [1, 2]. Current computational models of object recognition, including recent deep learning networks, generate these properties through a hierarchy of alternating selectivity-increasing filtering and tolerance-increasing pooling operations, similar to simple-complex cells operations [3, 4, 5, 6]. Here we prove that a class of hierarchical architectures and a broad set of biologically plausible learning rules generate approximate invariance to identity-preserving transformations at the top level of the processing hierarchy. However, all past models tested failed to reproduce the most salient property of an intermediate representation of a three-level face-processing hierarchy in the brain: mirror-symmetric tuning to head orientation [7]. Here we demonstrate that one specific biologically-plausible Hebb-type learning rule generates mirror-symmetric tuning to bilaterally symmetric stimuli like faces at intermediate levels of the architecture and show why it does so. Thus the tuning properties of individual cells inside the visual stream appear to result from group properties of the stimuli they encode and to reflect the learning rules that sculpted the information-processing system within which they reside. PMID:27916522
Saghai, Yashar
2014-11-01
In his insightful editorial, Nir Eyal explores the connections between nudging and shaming. One upshot of his argument is that we should question the principle of the least restrictive alternative in public health and health policy. In this commentary, I maintain that Eyal's argument undermines only a rather implausible version of the principle of the least restrictive alternative and I sketch two reasons for rejecting the mainstream and more plausible version of this principle.
Rational approximations to rational models: alternative algorithms for category learning.
Sanborn, Adam N; Griffiths, Thomas L; Navarro, Daniel J
2010-10-01
Rational models of cognition typically consider the abstract computational problems posed by the environment, assuming that people are capable of optimally solving those problems. This differs from more traditional formal models of cognition, which focus on the psychological processes responsible for behavior. A basic challenge for rational models is thus explaining how optimal solutions can be approximated by psychological processes. We outline a general strategy for answering this question, namely to explore the psychological plausibility of approximation algorithms developed in computer science and statistics. In particular, we argue that Monte Carlo methods provide a source of rational process models that connect optimal solutions to psychological processes. We support this argument through a detailed example, applying this approach to Anderson's (1990, 1991) rational model of categorization (RMC), which involves a particularly challenging computational problem. Drawing on a connection between the RMC and ideas from nonparametric Bayesian statistics, we propose 2 alternative algorithms for approximate inference in this model. The algorithms we consider include Gibbs sampling, a procedure appropriate when all stimuli are presented simultaneously, and particle filters, which sequentially approximate the posterior distribution with a small number of samples that are updated as new data become available. Applying these algorithms to several existing datasets shows that a particle filter with a single particle provides a good description of human inferences.
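A minimal sketch of the single-particle idea under simplifying assumptions: binary features, an Anderson-style coupling prior over clusters, and Beta-Bernoulli predictive likelihoods, with each stimulus assigned by sampling the local posterior. The stimuli and parameter values are illustrative, not the datasets analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
c, a, b = 0.3, 1.0, 1.0          # coupling parameter and Beta(a, b) prior per binary feature

def predictive(x, members):
    """Beta-Bernoulli predictive probability of stimulus x under one cluster."""
    n = len(members)
    counts = np.sum(members, axis=0) if n else np.zeros(len(x))
    p = (counts + a) / (n + a + b)
    return float(np.prod(np.where(x == 1, p, 1 - p)))

stimuli = rng.integers(0, 2, size=(20, 4))   # illustrative binary stimuli
clusters = []                                # the single particle's partition
for i, x in enumerate(stimuli):
    denom = (1 - c) + c * i                  # i items seen so far
    w = [c * len(m) / denom * predictive(x, m) for m in clusters]
    w.append((1 - c) / denom * predictive(x, []))      # weight for a brand-new cluster
    w = np.array(w) / np.sum(w)
    k = rng.choice(len(w), p=w)                         # sample the local posterior
    if k == len(clusters):
        clusters.append([])
    clusters[k].append(x)

print("clusters inferred by a single particle:", len(clusters))
```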
Climate change, resource use and food security in midcentury under a range of plausible scenarios
NASA Astrophysics Data System (ADS)
Wiebe, K.
2016-12-01
Achieving and maintaining food security at local, national and global scales is challenged by changes in population, income and climate, among other socioeconomic and biophysical drivers. Assessing these challenges and possible solutions over the coming decades requires a systematic and multidisciplinary approach. The Global Futures and Strategic Foresight program, a CGIAR initiative led by the International Food Policy Research Institute in collaboration with the 14 other CGIAR research centers, is working to improve tools and conduct ex ante assessments of promising technologies, investments and policies under alternative global futures to inform decision making in the CGIAR and its partners. Alternative socioeconomic and climate scenarios are explored using an integrated system of climate, water, crop and economic models. This presentation will share findings from recent projections of food production and prices to 2050 at global and regional scales, together with their potential implications for land and water use, food security, nutrition and health.
NASA Astrophysics Data System (ADS)
van Elewyck, V.
2010-12-01
Cataclysmic cosmic events can be plausible sources of both gravitational waves (GW) and high-energy neutrinos (HEN). Both GW and HEN are alternative cosmic messengers that may escape very dense media and travel unaffected over cosmological distances. For this reason, they could also reveal new or hidden sources that were not observed by conventional photon astronomy, such as the putative failed GRBs. After a brief discussion of the plausible common sources of GW and HEN, this contribution presents the strategies for coincident searches of GW and HEN that are currently developed by the ANTARES and VIRGO/LIGO collaborations within the GWHEN working group.
Huang, Peng; Ou, Ai-hua; Piantadosi, Steven; Tan, Ming
2014-11-01
We discuss the problem of properly defining treatment superiority through the specification of hypotheses in clinical trials. The need to precisely define the notion of superiority in a one-sided hypothesis test problem has been well recognized by many authors. Ideally, the null and alternative hypotheses should be designed to correspond to a partition of all possible scenarios of underlying true probability models P = {P(ω): ω ∈ Ω}, such that the alternative hypothesis Ha = {P(ω): ω ∈ Ωa} can be inferred upon the rejection of the null hypothesis Ho = {P(ω): ω ∈ Ωo}. However, in many cases, tests are carried out and recommendations are made without a precise definition of superiority or a specification of the alternative hypothesis. Moreover, in some applications, the union of probability models specified by the chosen null and alternative hypotheses does not constitute the complete model collection P (i.e., Ho ∪ Ha is smaller than P). This not only imposes a strong non-validated assumption about the underlying true models, but also leads to different superiority claims depending on which test is used rather than on scientific plausibility. Different ways to partition P for testing treatment superiority often have different implications for sample size, power, and significance in both efficacy and comparative effectiveness trial design. Such differences are often overlooked. We provide a theoretical framework for evaluating the statistical properties of different specifications of superiority in typical hypothesis testing. This can help investigators select proper hypotheses for treatment comparison in clinical trial design. Copyright © 2014 Elsevier Inc. All rights reserved.
Alternative Splicing May Not Be the Key to Proteome Complexity.
Tress, Michael L; Abascal, Federico; Valencia, Alfonso
2017-02-01
Alternative splicing is commonly believed to be a major source of cellular protein diversity. However, although many thousands of alternatively spliced transcripts are routinely detected in RNA-seq studies, reliable large-scale mass spectrometry-based proteomics analyses identify only a small fraction of annotated alternative isoforms. The clearest finding from proteomics experiments is that most human genes have a single main protein isoform, while those alternative isoforms that are identified tend to be the most biologically plausible: those with the most cross-species conservation and those that do not compromise functional domains. Indeed, most alternative exons do not seem to be under selective pressure, suggesting that a large majority of predicted alternative transcripts may not even be translated into proteins. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
Discrete Regularization for Calibration of Geologic Facies Against Dynamic Flow Data
NASA Astrophysics Data System (ADS)
Khaninezhad, Mohammad-Reza; Golmohammadi, Azarang; Jafarpour, Behnam
2018-04-01
Subsurface flow model calibration involves many more unknowns than measurements, leading to ill-posed problems with nonunique solutions. To alleviate nonuniqueness, the problem is regularized by constraining the solution space using prior knowledge. In certain sedimentary environments, such as fluvial systems, the contrast in hydraulic properties of different facies types tends to dominate the flow and transport behavior, making the effect of within-facies heterogeneity less significant. Hence, flow model calibration in those formations reduces to delineating the spatial structure and connectivity of different lithofacies types and their boundaries. A major difficulty in calibrating such models is honoring the discrete, or piecewise constant, nature of facies distribution. The problem becomes more challenging when complex spatial connectivity patterns with higher-order statistics are involved. This paper introduces a novel formulation for calibration of complex geologic facies by imposing appropriate constraints to recover plausible solutions that honor the spatial connectivity and discreteness of facies models. To incorporate prior connectivity patterns, plausible geologic features are learned from available training models. This is achieved by learning spatial patterns from training data, e.g., via k-SVD sparse learning or traditional Principal Component Analysis. Discrete regularization is introduced as a penalty function to impose solution discreteness while minimizing the mismatch between observed and predicted data. An efficient gradient-based alternating directions algorithm is combined with variable splitting to minimize the resulting regularized nonlinear least squares objective function. Numerical results show that imposing learned facies connectivity and discreteness as regularization functions leads to geologically consistent solutions that improve facies calibration quality.
Impaired associative learning in schizophrenia: behavioral and computational studies
Diwadkar, Vaibhav A.; Flaugher, Brad; Jones, Trevor; Zalányi, László; Ujfalussy, Balázs; Keshavan, Matcheri S.
2008-01-01
Associative learning is a central building block of human cognition and in large part depends on mechanisms of synaptic plasticity, memory capacity and fronto–hippocampal interactions. A disorder like schizophrenia is thought to be characterized by altered plasticity, and impaired frontal and hippocampal function. Understanding the expression of this dysfunction through appropriate experimental studies, and understanding the processes that may give rise to impaired behavior through biologically plausible computational models will help clarify the nature of these deficits. We present a preliminary computational model designed to capture learning dynamics in healthy control and schizophrenia subjects. Experimental data was collected on a spatial-object paired-associate learning task. The task evinces classic patterns of negatively accelerated learning in both healthy control subjects and patients, with patients demonstrating lower rates of learning than controls. Our rudimentary computational model of the task was based on biologically plausible assumptions, including the separation of dorsal/spatial and ventral/object visual streams, implementation of rules of learning, the explicit parameterization of learning rates (a plausible surrogate for synaptic plasticity), and learning capacity (a plausible surrogate for memory capacity). Reductions in learning dynamics in schizophrenia were well-modeled by reductions in learning rate and learning capacity. The synergy between experimental research and a detailed computational model of performance provides a framework within which to infer plausible biological bases of impaired learning dynamics in schizophrenia. PMID:19003486
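As a toy illustration of the two parameters highlighted here (learning rate and learning capacity), the sketch below uses a saturating learning curve in which accuracy approaches a capacity asymptote at a given rate; reducing either parameter flattens or lowers the curve, mimicking the patient fits described, although the actual model in the paper is a richer, biologically motivated network model and the parameter values below are invented.

```python
import numpy as np

def learning_curve(trials, rate, capacity):
    """Negatively accelerated learning: accuracy rises toward a capacity asymptote."""
    return capacity * (1 - np.exp(-rate * trials))

trials = np.arange(1, 9)
control = learning_curve(trials, rate=0.45, capacity=0.95)   # illustrative parameter values
patient = learning_curve(trials, rate=0.25, capacity=0.75)   # reduced rate and capacity
for t, c, p in zip(trials, control, patient):
    print(f"trial {t}: control accuracy {c:.2f}, patient accuracy {p:.2f}")
```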
Framework for Uncertainty Assessment - Hanford Site-Wide Groundwater Flow and Transport Modeling
NASA Astrophysics Data System (ADS)
Bergeron, M. P.; Cole, C. R.; Murray, C. J.; Thorne, P. D.; Wurstner, S. K.
2002-05-01
Pacific Northwest National Laboratory is in the process of development and implementation of an uncertainty estimation methodology for use in future site assessments that addresses parameter uncertainty as well as uncertainties related to the groundwater conceptual model. The long-term goals of the effort are development and implementation of an uncertainty estimation methodology for use in future assessments and analyses being made with the Hanford site-wide groundwater model. The basic approach in the framework developed for uncertainty assessment consists of: 1) Alternate conceptual model (ACM) identification to identify and document the major features and assumptions of each conceptual model. The process must also include a periodic review of the existing and proposed new conceptual models as data or understanding become available. 2) ACM development of each identified conceptual model through inverse modeling with historical site data. 3) ACM evaluation to identify which of conceptual models are plausible and should be included in any subsequent uncertainty assessments. 4) ACM uncertainty assessments will only be carried out for those ACMs determined to be plausible through comparison with historical observations and model structure identification measures. The parameter uncertainty assessment process generally involves: a) Model Complexity Optimization - to identify the important or relevant parameters for the uncertainty analysis; b) Characterization of Parameter Uncertainty - to develop the pdfs for the important uncertain parameters including identification of any correlations among parameters; c) Propagation of Uncertainty - to propagate parameter uncertainties (e.g., by first order second moment methods if applicable or by a Monte Carlo approach) through the model to determine the uncertainty in the model predictions of interest. 5)Estimation of combined ACM and scenario uncertainty by a double sum with each component of the inner sum (an individual CCDF) representing parameter uncertainty associated with a particular scenario and ACM and the outer sum enumerating the various plausible ACM and scenario combinations in order to represent the combined estimate of uncertainty (a family of CCDFs). A final important part of the framework includes identification, enumeration, and documentation of all the assumptions, which include those made during conceptual model development, required by the mathematical model, required by the numerical model, made during the spatial and temporal descretization process, needed to assign the statistical model and associated parameters that describe the uncertainty in the relevant input parameters, and finally those assumptions required by the propagation method. Pacific Northwest National Laboratory is operated for the U.S. Department of Energy under Contract DE-AC06-76RL01830.
Accounting for heterogeneity in meta-analysis using a multiplicative model-an empirical study.
Mawdsley, David; Higgins, Julian P T; Sutton, Alex J; Abrams, Keith R
2017-03-01
In meta-analysis, the random-effects model is often used to account for heterogeneity. The model assumes that heterogeneity has an additive effect on the variance of effect sizes. An alternative model, which assumes multiplicative heterogeneity, has been little used in the medical statistics community, but is widely used by particle physicists. In this paper, we compare the two models using a random sample of 448 meta-analyses drawn from the Cochrane Database of Systematic Reviews. In general, differences in goodness of fit are modest. The multiplicative model tends to give results that are closer to the null, with a narrower confidence interval. Both approaches make different assumptions about the outcome of the meta-analysis. In our opinion, the selection of the more appropriate model will often be guided by whether the multiplicative model's assumption of a single effect size is plausible. Copyright © 2016 John Wiley & Sons, Ltd.
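The contrast can be made concrete with invented data: the additive (random-effects) model adds a common between-study variance tau^2 to each study's variance, while the multiplicative model keeps the fixed-effect point estimate and rescales all variances by a factor phi derived from Cochran's Q. The estimators shown (DerSimonian-Laird tau^2, phi = Q/(k-1) truncated at 1) are common textbook choices and are not necessarily those used in the paper.

```python
import numpy as np

y  = np.array([0.30, 0.10, 0.45, -0.05, 0.25])   # study effect sizes (illustrative)
se = np.array([0.12, 0.15, 0.20, 0.10, 0.18])    # their standard errors (illustrative)
v, k = se**2, len(y)

# Fixed-effect (inverse-variance) estimate and Cochran's Q
w_fe = 1 / v
mu_fe = np.sum(w_fe * y) / np.sum(w_fe)
Q = np.sum(w_fe * (y - mu_fe) ** 2)

# Additive heterogeneity: DerSimonian-Laird tau^2, then random-effects pooling
tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w_fe) - np.sum(w_fe**2) / np.sum(w_fe)))
w_re = 1 / (v + tau2)
mu_re = np.sum(w_re * y) / np.sum(w_re)
se_re = np.sqrt(1 / np.sum(w_re))

# Multiplicative heterogeneity: same point estimate as fixed-effect,
# but every study variance scaled by phi = Q / (k - 1), truncated at 1
phi = max(1.0, Q / (k - 1))
se_mult = np.sqrt(1 / np.sum(w_fe)) * np.sqrt(phi)

print(f"additive:       mu = {mu_re:.3f}, SE = {se_re:.3f}, tau^2 = {tau2:.4f}")
print(f"multiplicative: mu = {mu_fe:.3f}, SE = {se_mult:.3f}, phi = {phi:.2f}")
```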
Romero-Martínez, Martín; Téllez-Rojo Solís, Martha María; Sandoval-Zárate, América Andrea; Zurita-Luna, Juan Manuel; Gutiérrez-Reyes, Juan Pablo
2013-01-01
To determine the presence of bias in the estimation of the consumption sometime in life of alcohol, tobacco, or illegal drugs and inhalable substances, and to propose a correction in case such bias is present. Mexican National Addictions Surveys (NAS) 2002, 2008, and 2011 were analyzed to compare population estimations of consumption sometime in life of tobacco, alcohol, or illegal drugs and inhalable substances. A couple of alternative approaches for bias correction were developed. Estimated national prevalences of consumption sometime in life of alcohol and tobacco in the NAS 2008 are not plausible. There was no evidence of bias in the consumption sometime in life of illegal drugs and inhalable substances. New estimations for tobacco and alcohol consumption sometime in life were made, which resulted in plausible values when compared to other available data. Future analyses regarding tobacco and alcohol using NAS 2008 data will have to rely on these newly generated data weights, which are able to reproduce the new (plausible) estimations.
Günther, Fritz; Marelli, Marco
2016-01-01
Noun compounds, consisting of two nouns (the head and the modifier) that are combined into a single concept, differ in terms of their plausibility: school bus is a more plausible compound than saddle olive. The present study investigates which factors influence the plausibility of attested and novel noun compounds. Distributional Semantic Models (DSMs) are used to obtain formal (vector) representations of word meanings, and compositional methods in DSMs are employed to obtain such representations for noun compounds. From these representations, different plausibility measures are computed. Three of those measures contribute in predicting the plausibility of noun compounds: The relatedness between the meaning of the head noun and the compound (Head Proximity), the relatedness between the meaning of modifier noun and the compound (Modifier Proximity), and the similarity between the head noun and the modifier noun (Constituent Similarity). We find non-linear interactions between Head Proximity and Modifier Proximity, as well as between Modifier Proximity and Constituent Similarity. Furthermore, Constituent Similarity interacts non-linearly with the familiarity with the compound. These results suggest that a compound is perceived as more plausible if it can be categorized as an instance of the category denoted by the head noun, if the contribution of the modifier to the compound meaning is clear but not redundant, and if the constituents are sufficiently similar in cases where this contribution is not clear. Furthermore, compounds are perceived to be more plausible if they are more familiar, but mostly for cases where the relation between the constituents is less clear. PMID:27732599
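A toy sketch of the three predictors using made-up vectors: the compound representation is obtained here by simple vector addition (a stand-in for the compositional DSM methods the study actually uses), and each measure is a cosine similarity.

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy distributional vectors; in practice these come from a trained DSM
vec = {
    "school": np.array([0.9, 0.1, 0.3, 0.0]),
    "bus":    np.array([0.8, 0.0, 0.4, 0.1]),
    "saddle": np.array([0.1, 0.9, 0.0, 0.2]),
    "olive":  np.array([0.0, 0.2, 0.1, 0.9]),
}

def plausibility_measures(modifier, head):
    compound = vec[modifier] + vec[head]          # placeholder compositional step
    return {
        "head_proximity":         cosine(vec[head], compound),
        "modifier_proximity":     cosine(vec[modifier], compound),
        "constituent_similarity": cosine(vec[modifier], vec[head]),
    }

print("school bus  ", plausibility_measures("school", "bus"))
print("saddle olive", plausibility_measures("saddle", "olive"))
```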
Linking river management to species conservation using dynamic landscape scale models
Freeman, Mary C.; Buell, Gary R.; Hay, Lauren E.; Hughes, W. Brian; Jacobson, Robert B.; Jones, John W.; Jones, S.A.; LaFontaine, Jacob H.; Odom, Kenneth R.; Peterson, James T.; Riley, Jeffrey W.; Schindler, J. Stephen; Shea, C.; Weaver, J.D.
2013-01-01
Efforts to conserve stream and river biota could benefit from tools that allow managers to evaluate landscape-scale changes in species distributions in response to water management decisions. We present a framework and methods for integrating hydrology, geographic context and metapopulation processes to simulate effects of changes in streamflow on fish occupancy dynamics across a landscape of interconnected stream segments. We illustrate this approach using a 482 km2 catchment in the southeastern US supporting 50 or more stream fish species. A spatially distributed, deterministic and physically based hydrologic model is used to simulate daily streamflow for sub-basins composing the catchment. We use geographic data to characterize stream segments with respect to channel size, confinement, position and connectedness within the stream network. Simulated streamflow dynamics are then applied to model fish metapopulation dynamics in stream segments, using hypothesized effects of streamflow magnitude and variability on population processes, conditioned by channel characteristics. The resulting time series simulate spatially explicit, annual changes in species occurrences or assemblage metrics (e.g. species richness) across the catchment as outcomes of management scenarios. Sensitivity analyses using alternative, plausible links between streamflow components and metapopulation processes, or allowing for alternative modes of fish dispersal, demonstrate large effects of ecological uncertainty on model outcomes and highlight needed research and monitoring. Nonetheless, with uncertainties explicitly acknowledged, dynamic, landscape-scale simulations may prove useful for quantitatively comparing river management alternatives with respect to species conservation.
Language Learning Podcasts and Learners' Belief Change
ERIC Educational Resources Information Center
Basaran, Süleyman; Cabaroglu, Nese
2014-01-01
The ubiquitous use of Internet-based mobile devices in educational contexts means that mobile learning has become a plausible alternative to or a good complement for conventional classroom-based teaching. However, there is a lack of research that explores and defines the characteristics and effects of mobile language learning (LL) through language…
Cognitive Defusion versus thought Distraction in the Mitigation of Learned Helplessness
ERIC Educational Resources Information Center
Hooper, Nic; McHugh, Louise
2013-01-01
Recent research suggests that attempting to avoid unwanted psychological events is maladaptive. Contrastingly, cognitive defusion, which is an acceptance-based method for managing unwanted thoughts, may provide a plausible alternative. The current study was designed to compare defusion and experiential avoidance as strategies for coping with…
Criaud, Marion; Longcamp, Marieke; Anton, Jean-Luc; Nazarian, Bruno; Roth, Muriel; Sescousse, Guillaume; Strafella, Antonio P; Ballanger, Bénédicte; Boulinguez, Philippe
2017-08-30
The neural mechanisms underlying response inhibition and related disorders are unclear and controversial for several reasons. First, it is a major challenge to assess the psychological bases of behaviour, and ultimately brain-behaviour relationships, of a function which is precisely intended to suppress overt measurable behaviours. Second, response inhibition is difficult to disentangle from other parallel processes involved in more general aspects of cognitive control. Consequently, different psychological and anatomo-functional models coexist, which often appear in conflict with each other even though they are not necessarily mutually exclusive. The standard model of response inhibition in go/no-go tasks assumes that inhibitory processes are reactively and selectively triggered by the stimulus that participants must refrain from reacting to. Recent alternative models suggest that action restraint could instead rely on reactive but non-selective mechanisms (all automatic responses are automatically inhibited in uncertain contexts) or on proactive and non-selective mechanisms (a gating function by which reaction to any stimulus is prevented in anticipation of stimulation when the situation is unpredictable). Here, we assessed the physiological plausibility of these different models by testing their respective predictions regarding event-related BOLD modulations (forward inference using fMRI). We set up a single fMRI design which allowed us to record simultaneously the different possible forms of inhibition while limiting confounds between response inhibition and parallel cognitive processes. We found BOLD dynamics consistent with non-selective models. These results provide new theoretical and methodological lines of inquiry for the study of basic functions involved in behavioural control and related disorders. Copyright © 2017 Elsevier B.V. All rights reserved.
Crupi, Vincenzo; Nelson, Jonathan D; Meder, Björn; Cevolani, Gustavo; Tentori, Katya
2018-06-17
Searching for information is critical in many situations. In medicine, for instance, careful choice of a diagnostic test can help narrow down the range of plausible diseases that the patient might have. In a probabilistic framework, test selection is often modeled by assuming that people's goal is to reduce uncertainty about possible states of the world. In cognitive science, psychology, and medical decision making, Shannon entropy is the most prominent and most widely used model to formalize probabilistic uncertainty and the reduction thereof. However, a variety of alternative entropy metrics (Hartley, Quadratic, Tsallis, Rényi, and more) are popular in the social and the natural sciences, computer science, and philosophy of science. Particular entropy measures have been predominant in particular research areas, and it is often an open issue whether these divergences emerge from different theoretical and practical goals or are merely due to historical accident. Cutting across disciplinary boundaries, we show that several entropy and entropy reduction measures arise as special cases in a unified formalism, the Sharma-Mittal framework. Using mathematical results, computer simulations, and analyses of published behavioral data, we discuss four key questions: How do various entropy models relate to each other? What insights can be obtained by considering diverse entropy models within a unified framework? What is the psychological plausibility of different entropy models? What new questions and insights for research on human information acquisition follow? Our work provides several new pathways for theoretical and empirical research, reconciling apparently conflicting approaches and empirical findings within a comprehensive and unified information-theoretic formalism. Copyright © 2018 Cognitive Science Society, Inc.
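A compact implementation of the Sharma-Mittal family in one common two-parameter convention (an "order" and a "degree"), included to make the unification concrete; Shannon, Rényi, and Tsallis entropies are recovered as limits or special cases, which the example checks on a small distribution. This parameterization is a standard one and is only assumed to match the paper's notation.

```python
import numpy as np

def sharma_mittal(p, order, degree, eps=1e-6):
    """Sharma-Mittal entropy of a probability vector (natural-log units)."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    if abs(order - 1.0) < eps and abs(degree - 1.0) < eps:
        return float(-np.sum(p * np.log(p)))                 # Shannon limit
    if abs(order - 1.0) < eps:
        order = 1.0 + eps                                    # nudge off the singular order
    s = np.sum(p ** order)
    if abs(degree - 1.0) < eps:
        return float(np.log(s) / (1.0 - order))              # Renyi limit
    return float((s ** ((1.0 - degree) / (1.0 - order)) - 1.0) / (1.0 - degree))

p = [0.6, 0.3, 0.1]
print("Shannon    :", sharma_mittal(p, 1.0, 1.0))
print("Renyi(2)   :", sharma_mittal(p, 2.0, 1.0))
print("Tsallis(2) :", sharma_mittal(p, 2.0, 2.0))   # degree = order recovers Tsallis
```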
Leibo, Joel Z; Liao, Qianli; Anselmi, Fabio; Freiwald, Winrich A; Poggio, Tomaso
2017-01-09
The primate brain contains a hierarchy of visual areas, dubbed the ventral stream, which rapidly computes object representations that are both specific for object identity and robust against identity-preserving transformations, like depth rotations [1, 2]. Current computational models of object recognition, including recent deep-learning networks, generate these properties through a hierarchy of alternating selectivity-increasing filtering and tolerance-increasing pooling operations, similar to simple-complex cells operations [3-6]. Here, we prove that a class of hierarchical architectures and a broad set of biologically plausible learning rules generate approximate invariance to identity-preserving transformations at the top level of the processing hierarchy. However, all past models tested failed to reproduce the most salient property of an intermediate representation of a three-level face-processing hierarchy in the brain: mirror-symmetric tuning to head orientation [7]. Here, we demonstrate that one specific biologically plausible Hebb-type learning rule generates mirror-symmetric tuning to bilaterally symmetric stimuli, like faces, at intermediate levels of the architecture and show why it does so. Thus, the tuning properties of individual cells inside the visual stream appear to result from group properties of the stimuli they encode and to reflect the learning rules that sculpted the information-processing system within which they reside. Copyright © 2017 Elsevier Ltd. All rights reserved.
Reconstruction of the static magnetic field of a magnetron
NASA Astrophysics Data System (ADS)
Krüger, Dennis; Köhn, Kevin; Gallian, Sara; Brinkmann, Ralf Peter
2018-06-01
The simulation of magnetron discharges requires a quantitatively correct mathematical model of the magnetic field structure. This study presents a method to construct such a model on the basis of a spatially restricted set of experimental data and a plausible a priori assumption on the magnetic field configuration. The example in focus is that of a planar circular magnetron. The experimental data are Hall probe measurements of the magnetic flux density in an accessible region above the magnetron plane [P. D. Machura et al., Plasma Sources Sci. Technol. 23, 065043 (2014)]. The a priori assumption reflects the actual design of the device, and it takes the magnetic field as emerging from a center magnet of strength m_C and vertical position d_C and a ring magnet of strength m_R, vertical position d_R, and radius R. An analytical representation of the assumed field configuration can be formulated in terms of generalized hypergeometric functions. Fitting the ansatz to the experimental data with a least-squares method results in a fully specified analytical field model that agrees well with the data inside the accessible region and, moreover, is physically plausible in the regions outside of it. The outcome proves superior to the result of an alternative approach which starts from a multimode solution of the vacuum field problem formulated in terms of polar Bessel functions and vertical exponentials. As a first application of the obtained field model, typical electron and ion Larmor radii and the gradient and curvature drift velocities of the electron guiding center are calculated.
Lee, Juhun; Fingeret, Michelle C; Bovik, Alan C; Reece, Gregory P; Skoracki, Roman J; Hanasono, Matthew M; Markey, Mia K
2015-03-27
Patients with facial cancers can experience disfigurement as they may undergo considerable appearance changes from their illness and its treatment. Individuals with difficulties adjusting to facial cancer are concerned about how others perceive and evaluate their appearance. Therefore, it is important to understand how humans perceive disfigured faces. We describe a new strategy that allows simulation of surgically plausible facial disfigurement on a novel face for elucidating human perception of facial disfigurement. Longitudinal 3D facial images of patients (N = 17) with facial disfigurement due to cancer treatment were replicated using a facial mannequin model, by applying Thin-Plate Spline (TPS) warping and linear interpolation on the facial mannequin model in polar coordinates. Principal Component Analysis (PCA) was used to capture longitudinal structural and textural variations found within each patient with facial disfigurement arising from the treatment. We treated such variations as disfigurement. Each disfigurement was smoothly stitched onto a healthy face by seeking a Poisson solution to guided interpolation using the gradient of the learned disfigurement as the guidance field vector. The modeling technique was quantitatively evaluated. In addition, panel ratings of experienced medical professionals on the plausibility of simulation were used to evaluate the proposed disfigurement model. The algorithm reproduced the given face effectively using a facial mannequin model with less than 4.4 mm maximum error for the validation fiducial points that were not used for the processing. Panel ratings of experienced medical professionals on the plausibility of simulation showed that the disfigurement model (especially for peripheral disfigurement) yielded predictions comparable to the real disfigurements. The modeling technique of this study is able to capture facial disfigurements, and its simulations represent plausible outcomes of reconstructive surgery for facial cancers. Thus, our technique can be used to study human perception of facial disfigurement.
Using foresight methods to anticipate future threats: the case of disease management.
Ma, Sai; Seid, Michael
2006-01-01
We describe a unique foresight framework for health care managers to use in longer-term planning. This framework uses scenario-building to envision plausible alternate futures of the U.S. health care system and links those broad futures to business-model-specific "load-bearing" assumptions. Because the framework we describe simultaneously addresses very broad and very specific issues, it can be easily applied to a broad range of health care issues by using the broad framework and business-specific assumptions for the particular case at hand. We illustrate this method using the case of disease management, pointing out that although the industry continues to grow rapidly, its future also contains great uncertainties.
Insights from mathematical modeling of renal tubular function.
Weinstein, A M
1998-01-01
Mathematical models of proximal tubule have been developed which represent the important solute species within the constraints of known cytosolic concentrations, transport fluxes, and overall epithelial permeabilities. In general, model simulations have been used to assess the quantitative feasibility of what appear to be qualitatively plausible mechanisms, or alternatively, to identify incomplete rationalization of experimental observations. The examples considered include: (1) proximal water reabsorption, for which the lateral interspace is a locus for solute-solvent coupling; (2) ammonia secretion, for which the issue is prioritizing driving forces - transport on the Na+/H+ exchanger, on the Na,K-ATPase, or ammoniagenesis; (3) formate-stimulated NaCl reabsorption, for which simple addition of a luminal membrane chloride/formate exchanger fails to represent experimental observation, and (4) balancing luminal entry and peritubular exit, in which ATP-dependent peritubular K+ channels have been implicated, but appear unable to account for the bulk of proximal tubule cell volume homeostasis.
The Regime Shift Associated with the 2004–2008 US Housing Market Bubble
Cheong, Siew Ann
2016-01-01
The Subprime Bubble preceding the Subprime Crisis of 2008 was fueled by risky lending practices, manifesting in the form of a large abrupt increase in the proportion of subprime mortgages issued in the US. This event also coincided with critical slowing down signals associated with instability, which served as evidence of a regime shift or phase transition in the US housing market. Here, we show that the US housing market underwent a regime shift between alternate stable states consistent with the observed critical slowing down signals. We modeled this regime shift on a universal transition path and validated the model by estimating when the bubble burst. Additionally, this model reveals loose monetary policy to be a plausible cause of the phase transition, implying that the bubble might have been deflatable by a timely tightening of monetary policy. PMID:27583633
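As a toy illustration of the kind of critical-slowing-down indicator referenced here, the sketch below shows rising lag-1 autocorrelation as recovery from perturbations slows near a transition; the AR(1) series are synthetic and unrelated to the housing-market data.

```python
import numpy as np

rng = np.random.default_rng(6)

def ar1_series(phi, n=500):
    """Synthetic AR(1) process: larger phi means slower recovery from perturbations."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.standard_normal()
    return x

def lag1_autocorr(x):
    return float(np.corrcoef(x[:-1], x[1:])[0, 1])

far = ar1_series(0.3)    # far from the transition: fast recovery
near = ar1_series(0.95)  # near the transition: critical slowing down
print("lag-1 autocorrelation far from transition :", round(lag1_autocorr(far), 2))
print("lag-1 autocorrelation near the transition :", round(lag1_autocorr(near), 2))
```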
Evidence accumulation as a model for lexical selection.
Anders, R; Riès, S; van Maanen, L; Alario, F X
2015-11-01
We propose and demonstrate evidence accumulation as a plausible theoretical and/or empirical model for the lexical selection process of lexical retrieval. A number of current psycholinguistic theories consider lexical selection as a process of selecting a lexical target from a number of alternatives, each of which has a varying activation (or signal support) that largely results from initial stimulus recognition. We thoroughly present a case for how such a process may be theoretically explained by the evidence accumulation paradigm, and we demonstrate how this paradigm can be directly related or combined with conventional psycholinguistic theories and their simulatory instantiations (generally, neural network models). Then, with a demonstrative application to a large new real data set, we establish how the empirical evidence accumulation approach is able to provide parameter results that are informative to leading psycholinguistic theory and that motivate future theoretical development. Copyright © 2015 Elsevier Inc. All rights reserved.
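A minimal sketch of the general evidence-accumulation idea as a race between noisy linear accumulators, one per lexical candidate, with selection when the first crosses a threshold; drift rates, noise, and threshold are illustrative and this is not the specific model or dataset of the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

def race_trial(drifts, threshold=1.0, noise=0.35, dt=0.01, t0=0.2, max_t=3.0):
    """Simulate one lexical-selection trial as a race between noisy accumulators."""
    drifts = np.asarray(drifts, dtype=float)
    x = np.zeros_like(drifts)
    t = 0.0
    while t < max_t:
        x += drifts * dt + noise * np.sqrt(dt) * rng.standard_normal(len(drifts))
        t += dt
        if (x >= threshold).any():
            return int(np.argmax(x)), t0 + t      # chosen alternative and response time
    return -1, t0 + max_t                         # no response within the deadline

# candidate activations: target word versus two competitors (illustrative drift rates)
choices, rts = zip(*(race_trial([1.2, 0.6, 0.4]) for _ in range(500)))
print("P(target selected) ~", np.mean(np.array(choices) == 0),
      "| mean RT ~", round(float(np.mean(rts)), 3), "s")
```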
A one-dimensional model of solid-earth electrical resistivity beneath Florida
Blum, Cletus; Love, Jeffrey J.; Pedrie, Kolby; Bedrosian, Paul A.; Rigler, E. Joshua
2015-11-19
An estimated one-dimensional layered model of electrical resistivity beneath Florida was developed from published geological and geophysical information. The resistivity of each layer is represented by plausible upper and lower bounds as well as a geometric mean resistivity. Corresponding impedance transfer functions, Schmucker-Weidelt transfer functions, apparent resistivity, and phase responses are calculated for inducing geomagnetic frequencies ranging from 10⁻⁵ to 10⁰ hertz. The resulting one-dimensional model and response functions can be used to make general estimates of time-varying electric fields associated with geomagnetic storms such as might represent induction hazards for electric-power grid operation. The plausible upper- and lower-bound resistivity structures show the uncertainty, giving a wide range of plausible time-varying electric fields.
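A sketch of the standard one-dimensional magnetotelluric recursion for a layered half-space, the usual machinery behind such impedance transfer functions, apparent resistivities, and phases; the three-layer resistivities and thicknesses below are invented placeholders rather than the Florida model values.

```python
import numpy as np

MU0 = 4e-7 * np.pi

def surface_impedance(freq_hz, resistivities, thicknesses):
    """Wait's recursion for the surface impedance of a 1-D layered earth.
    resistivities: ohm-m, top to bottom (last layer is a half-space);
    thicknesses: m, one entry per layer except the bottom half-space."""
    w = 2 * np.pi * freq_hz
    k = np.sqrt(1j * w * MU0 / np.asarray(resistivities, dtype=float))
    Z = 1j * w * MU0 / k[-1]                      # impedance of the basal half-space
    for j in range(len(thicknesses) - 1, -1, -1):
        Zj = 1j * w * MU0 / k[j]                  # intrinsic impedance of layer j
        t = np.tanh(k[j] * thicknesses[j])
        Z = Zj * (Z + Zj * t) / (Zj + Z * t)      # propagate impedance up through layer j
    return Z

rho = [100.0, 5000.0, 20.0]      # ohm-m, illustrative three-layer column
h = [2000.0, 30000.0]            # m, thicknesses above the half-space
for f in (1e-4, 1e-2, 1.0):
    Z = surface_impedance(f, rho, h)
    rho_a = abs(Z) ** 2 / (2 * np.pi * f * MU0)
    phase = np.degrees(np.angle(Z))
    print(f"f = {f:.0e} Hz: apparent resistivity = {rho_a:8.1f} ohm-m, phase = {phase:5.1f} deg")
```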
Mallinckrodt, C H; Lin, Q; Molenberghs, M
2013-01-01
The objective of this research was to demonstrate a framework for drawing inference from sensitivity analyses of incomplete longitudinal clinical trial data via a re-analysis of data from a confirmatory clinical trial in depression. A likelihood-based approach that assumed missing at random (MAR) was the primary analysis. Robustness to departure from MAR was assessed by comparing the primary result to those from a series of analyses that employed varying missing not at random (MNAR) assumptions (selection models, pattern mixture models and shared parameter models) and to MAR methods that used inclusive models. The key sensitivity analysis used multiple imputation assuming that after dropout the trajectory of drug-treated patients was that of placebo-treated patients with a similar outcome history (placebo multiple imputation). This result was used as the worst reasonable case to define the lower limit of plausible values for the treatment contrast. The endpoint contrast from the primary analysis was −2.79 (p = .013). In placebo multiple imputation, the result was −2.17. Results from the other sensitivity analyses ranged from −2.21 to −3.87 and were symmetrically distributed around the primary result. Hence, no clear evidence of bias from missing not at random data was found. In the worst reasonable case scenario, the treatment effect was 80% of the magnitude of the primary result. Therefore, it was concluded that a treatment effect existed. The structured sensitivity framework, in which a worst reasonable case result based on a controlled imputation approach with transparent and debatable assumptions was supplemented by a series of plausible alternative models under varying assumptions, was useful in this specific situation and holds promise as a generally useful framework. Copyright © 2012 John Wiley & Sons, Ltd.
Growth rates of rainbow smelt in Lake Champlain: Effects of density and diet
Stritzel, Thomson J.L.; Parrish, D.L.; Parker-Stetter, S. L.; Rudstam, L. G.; Sullivan, P.J.
2011-01-01
Stritzel Thomson JL, Parrish DL, Parker-Stetter SL, Rudstam LG, Sullivan PJ. Growth rates of rainbow smelt in Lake Champlain: effects of density and diet. Ecology of Freshwater Fish 2010. © 2010 John Wiley & Sons A/S. Abstract: We estimated the densities of rainbow smelt (Osmerus mordax) using hydroacoustics and obtained specimens for diet analysis and groundtruthed acoustics data from mid-water trawl sampling in four areas of Lake Champlain, USA-Canada. Densities of rainbow smelt cohorts alternated during the 2-year study; age-0 rainbow smelt were very abundant in 2001 (up to 6 fish per m2) and age-1 and older were abundant (up to 1.2 fish per m2) in 2002. Growth rates and densities varied among areas and years. We used model selection on eight area-year-specific variables to investigate biologically plausible predictors of rainbow smelt growth rates. The best supported model of growth rates of age-0 smelt indicated a negative relationship with age-0 density, likely associated with intraspecific competition for zooplankton. The next best-fit model had age-1 density as a predictor of age-0 growth. The best supported models (N=4) of growth rates of age-1 fish indicated a positive relationship with availability of age-0 smelt and resulting levels of cannibalism. Other plausible models contained variants of these parameters. Cannibalistic rainbow smelt consumed younger conspecifics that were up to 53% of their length. Prediction of population dynamics for rainbow smelt requires an understanding of the relationship between density and growth, as age-0 fish outgrow their main predators (adult smelt) by autumn in years with fast growth rates, but not in years with slow growth rates. © 2011 John Wiley & Sons A/S.
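A generic sketch of the kind of AICc-based ranking of candidate least-squares models the abstract describes; the response and the three candidate predictors are simulated stand-ins, not the Lake Champlain data, and the parameter count convention (coefficients plus the error variance) is one common choice.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(3)
n = 24
X_all = rng.normal(size=(n, 3))                      # e.g., age-0 density, age-1 density, prey index
y = 0.8 - 0.5 * X_all[:, 0] + rng.normal(0, 0.2, n)  # simulated growth rates

def aicc(y, X):
    """AICc for an ordinary least-squares fit (k counts coefficients plus error variance)."""
    X1 = np.column_stack([np.ones(len(y)), X]) if X.size else np.ones((len(y), 1))
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    rss = float(np.sum((y - X1 @ beta) ** 2))
    k = X1.shape[1] + 1
    aic = len(y) * np.log(rss / len(y)) + 2 * k
    return aic + 2 * k * (k + 1) / (len(y) - k - 1)

models = [()] + [c for r in (1, 2) for c in combinations(range(3), r)]
scores = {m: aicc(y, X_all[:, list(m)] if m else np.empty((n, 0))) for m in models}
best = min(scores, key=scores.get)
for m in sorted(scores, key=scores.get):
    print(f"predictors {m}: AICc = {scores[m]:7.2f}, dAICc = {scores[m] - scores[best]:5.2f}")
```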
McNair, James N; Newbold, J Denis
2012-05-07
Most ecological studies of particle transport in streams that focus on fine particulate organic matter or benthic invertebrates use the Exponential Settling Model (ESM) to characterize the longitudinal pattern of particle settling on the bed. The ESM predicts that if particles are released into a stream, the proportion that have not yet settled will decline exponentially with transport time or distance and will be independent of the release elevation above the bed. To date, no credible basis in fluid mechanics has been established for this model, nor has it been rigorously tested against more-mechanistic alternative models. One alternative is the Local Exchange Model (LEM), which is a stochastic advection-diffusion model that includes both longitudinal and vertical spatial dimensions and is based on classical fluid mechanics. The LEM predicts that particle settling will be non-exponential in the near field but will become exponential in the far field, providing a new theoretical justification for far-field exponential settling that is based on plausible fluid mechanics. We review properties of the ESM and LEM and compare these with available empirical evidence. Most evidence supports the prediction of both models that settling will be exponential in the far field but contradicts the ESM's prediction that a single exponential distribution will hold for all transport times and distances. Copyright © 2012 Elsevier Ltd. All rights reserved.
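A rough sketch of the contrast drawn here, assuming a particle-tracking version of a local-exchange-type model: constant downstream advection, vertical diffusion with reflection at the water surface and absorption at the bed, with the fraction still suspended then inspected at several transport times. All velocities and coefficients are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
n, depth = 5000, 1.0             # particles and water depth (m)
u, w_s, D = 0.3, 0.01, 0.005     # advection (m/s), settling velocity (m/s), vertical diffusivity (m^2/s)
dt, t_max = 1.0, 2000.0

z = np.full(n, 0.5 * depth)      # release at mid-depth
settled_time = np.full(n, np.inf)
t = 0.0
while t < t_max and np.isinf(settled_time).any():
    active = np.isinf(settled_time)
    z[active] += -w_s * dt + np.sqrt(2 * D * dt) * rng.standard_normal(active.sum())
    z[active] = np.where(z[active] > depth, 2 * depth - z[active], z[active])  # reflect at surface
    hit = active & (z <= 0.0)
    settled_time[hit] = t + dt   # absorb at the bed
    t += dt

# Fraction still suspended versus downstream distance x = u * t
for t_query in (200.0, 800.0, 1600.0):
    frac = np.mean(settled_time > t_query)
    print(f"x = {u * t_query:6.0f} m: fraction still suspended = {frac:.3f}")
```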
Transient rheology of the uppermost mantle beneath the Mojave Desert, California
Pollitz, F.F.
2003-01-01
Geodetic data indicate that the M7.1 Hector Mine, California, earthquake was followed by a brief period (a few weeks) of rapid deformation preceding a prolonged phase of slower deformation. We find that the signal contained in continuous and campaign global positioning system data for 2.5 years after the earthquake may be explained with a transient rheology. Quantitative modeling of these data with allowance for transient (linear biviscous) rheology in the lower crust and upper mantle demonstrates that transient rheology in the upper mantle is dominant, its material properties being typified by two characteristic relaxation times of ~0.07 and ~2 years. The inferred mantle rheology is a Jeffreys solid in which the transient and steady-state shear moduli are equal. Consideration of a simpler viscoelastic model with a linear univiscous rheology (2 fewer parameters than a biviscous model) shows that it consistently underpredicts the amplitude of the first ~3 months of signal, and allowance for a biviscous rheology is significant at the 99.0% confidence level. Another alternative model - deep postseismic afterslip beneath the coseismic rupture - predicts a vertical velocity pattern opposite to the observed pattern at all time periods considered. Despite its plausibility, the advocated biviscous rheology model is non-unique and should be regarded as a viable alternative to the non-linear mantle rheology model for governing postseismic flow beneath the Mojave Desert. Published by Elsevier B.V.
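In the data domain, a biviscous (Jeffreys-type) transient amounts to the superposition of two exponential relaxations; the sketch below uses the two characteristic times quoted in the abstract (about 0.07 and 2 years) with invented amplitudes and observation times, and is not a substitute for the full viscoelastic Green's-function modeling used in the study.

```python
import numpy as np

def biviscous_displacement(t_years, a_fast, a_slow, tau_fast=0.07, tau_slow=2.0):
    """Postseismic displacement as the sum of two exponential relaxation terms."""
    t = np.asarray(t_years, dtype=float)
    return a_fast * (1 - np.exp(-t / tau_fast)) + a_slow * (1 - np.exp(-t / tau_slow))

t = np.array([0.05, 0.1, 0.25, 0.5, 1.0, 2.0, 2.5])          # years after the earthquake
d = biviscous_displacement(t, a_fast=12.0, a_slow=30.0)      # mm, illustrative amplitudes
for ti, di in zip(t, d):
    print(f"t = {ti:4.2f} yr: modeled displacement = {di:5.1f} mm")
```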
Moritz, S; Veckenstedt, R; Randjbar, S; Hottenrott, B; Woodward, T S; von Eckstaedt, F V; Schmidt, C; Jelinek, L; Lincoln, T M
2009-11-01
Cognitive biases, especially jumping to conclusions (JTC), are ascribed a vital role in the pathogenesis of schizophrenia. This study set out to explore motivational factors for JTC using a newly developed paradigm. Twenty-seven schizophrenia patients and 32 healthy controls were shown 15 classical paintings, divided into three blocks. Four alternative titles (one correct and three lure titles) had to be appraised according to plausibility (0-10). Optionally, participants could decide for one option and reject one or more alternatives. In random order across blocks, anxiety-evoking music, happy music or no music was played in the background. Patients with schizophrenia, particularly those with delusions, made more decisions than healthy subjects. In line with the liberal acceptance (LA) account of schizophrenia, the decision threshold was significantly lowered in patients relative to controls. Patients were also more prone than healthy controls to making a decision when the distance between the first and second best alternative was close. Furthermore, implausible alternatives were judged as significantly more plausible by patients. Anxiety-evoking music resulted in more decisions in currently deluded patients relative to non-deluded patients and healthy controls. The results confirm predictions derived from the LA account and assert that schizophrenia patients decide hastily under conditions of continued uncertainty. The fact that mood induction did not exert an overall effect could be due to the explicit nature of the manipulation, which might have evoked strategies to counteract their influence.
NASA Astrophysics Data System (ADS)
Akhtar, Taimoor; Shoemaker, Christine
2016-04-01
Watershed model calibration is inherently a multi-criteria problem. Conflicting trade-offs exist between different quantifiable calibration criteria, indicating the non-existence of a single optimal parameterization. Hence, many experts prefer a manual approach to calibration where the inherent multi-objective nature of the calibration problem is addressed through an interactive, subjective, time-intensive and complex decision making process. Multi-objective optimization can be used to efficiently identify multiple plausible calibration alternatives and assist calibration experts during the parameter estimation process. However, there are key challenges to the use of multi-objective optimization in the parameter estimation process, which include: 1) multi-objective optimization usually requires many model simulations, which is difficult for complex simulation models that are computationally expensive; and 2) selection of one from numerous calibration alternatives provided by multi-objective optimization is non-trivial. This study proposes a "Hybrid Automatic Manual Strategy" (HAMS) for watershed model calibration to specifically address the above-mentioned challenges. HAMS employs a 3-stage framework for parameter estimation. Stage 1 incorporates the use of an efficient surrogate multi-objective algorithm, GOMORS, for identification of numerous calibration alternatives within a limited simulation evaluation budget. The novelty of HAMS is embedded in Stages 2 and 3, where an interactive visual and metric-based analytics framework is available as a decision support tool to choose a single calibration from the numerous alternatives identified in Stage 1. Stage 2 of HAMS provides a goodness-of-fit measure/metric-based interactive framework for identification of a small subset (typically fewer than 10) of meaningful and diverse calibration alternatives from the numerous alternatives obtained in Stage 1. Stage 3 incorporates the use of an interactive visual analytics framework for decision support in selection of one parameter combination from the alternatives identified in Stage 2. HAMS is applied for calibration of flow parameters of a SWAT (Soil and Water Assessment Tool) model designed to simulate flow in the Cannonsville watershed in upstate New York. Results from the application of HAMS to Cannonsville indicate that efficient multi-objective optimization and interactive visual and metric-based analytics can bridge the gap between the effective use of both automatic and manual strategies for parameter estimation of computationally expensive watershed models.
NASA Astrophysics Data System (ADS)
Wieder, William R.; Cleveland, Cory C.; Lawrence, David M.; Bonan, Gordon B.
2015-04-01
Uncertainties in terrestrial carbon (C) cycle projections increase uncertainty of potential climate feedbacks. Efforts to improve model performance often include increased representation of biogeochemical processes, such as coupled carbon-nitrogen (N) cycles. In doing so, models are becoming more complex, generating structural uncertainties in model form that reflect incomplete knowledge of how to represent underlying processes. Here, we explore structural uncertainties associated with biological nitrogen fixation (BNF) and quantify their effects on C cycle projections. We find that alternative plausible structures to represent BNF result in nearly equivalent terrestrial C fluxes and pools through the twentieth century, but the strength of the terrestrial C sink varies by nearly a third (50 Pg C) by the end of the twenty-first century under a business-as-usual climate change scenario (Representative Concentration Pathway 8.5). These results indicate that actual uncertainty in future C cycle projections may be larger than previously estimated, and this uncertainty will limit C cycle projections until model structures can be evaluated and refined.
A Workflow for Global Sensitivity Analysis of PBPK Models
McNally, Kevin; Cotton, Richard; Loizou, George D.
2011-01-01
Physiologically based pharmacokinetic (PBPK) models have a potentially significant role in the development of a reliable predictive toxicity testing strategy. The structure of PBPK models makes them ideal frameworks into which disparate in vitro and in vivo data can be integrated and utilized to translate information generated using alternatives to animal measures of toxicity, together with human biological monitoring data, into plausible corresponding exposures. However, these models invariably include descriptions of well-known non-linear biological processes, such as enzyme saturation, and interactions between parameters, such as organ mass and body mass. Therefore, an appropriate sensitivity analysis (SA) technique is required which can quantify the influences associated with individual parameters, interactions between parameters and any non-linear processes. In this report we define the elements of a workflow for SA of PBPK models that is computationally feasible, accounts for interactions between parameters, and can be displayed in the form of a bar chart and cumulative sum line (Lowry plot), which we believe is intuitive and appropriate for toxicologists, risk assessors, and regulators. PMID:21772819
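A minimal sketch of the variance-based (Sobol') sensitivity analysis such a workflow builds on, applied to a toy one-compartment model with saturable (Michaelis-Menten) elimination; the model, parameter ranges, and sample size are illustrative and are not taken from the paper's workflow.

```python
import numpy as np

rng = np.random.default_rng(1)

def auc_toy_pk(theta, t_end=24.0, dt=0.05):
    """Area under the concentration curve for a one-compartment model
    with Michaelis-Menten elimination: dC/dt = -Vmax*C/(Km + C)."""
    vd, vmax, km = theta            # volume of distribution, Vmax, Km (illustrative)
    c = 100.0 / vd                  # concentration after a 100 mg bolus dose
    auc = 0.0
    for _ in range(int(t_end / dt)):
        auc += c * dt
        c = max(c - vmax * c / (km + c) * dt, 0.0)
    return auc

names  = ["Vd", "Vmax", "Km"]
lo, hi = np.array([20.0, 5.0, 1.0]), np.array([80.0, 20.0, 10.0])

N = 500
A = lo + (hi - lo) * rng.random((N, 3))    # two independent sample matrices
B = lo + (hi - lo) * rng.random((N, 3))
fA = np.array([auc_toy_pk(x) for x in A])
fB = np.array([auc_toy_pk(x) for x in B])
var = np.var(np.concatenate([fA, fB]))

for i, name in enumerate(names):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                    # vary only parameter i between A and ABi
    fABi = np.array([auc_toy_pk(x) for x in ABi])
    S1 = np.mean(fB * (fABi - fA)) / var          # first-order index (Saltelli 2010)
    ST = 0.5 * np.mean((fA - fABi) ** 2) / var    # total-effect index (Jansen 1999)
    print(f"{name}: S1 = {S1:.2f}, ST = {ST:.2f}")
```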
Alternative community structures in a kelp-urchin community: A qualitative modeling approach
Montano-Moctezuma, G.; Li, H.W.; Rossignol, P.A.
2007-01-01
Shifts in interaction patterns within a community may result from periodic disturbances and climate. The question arises as to the extent and significance of these shifting patterns. Using a novel approach to link qualitative mathematical models and field data, namely using the inverse matrix to identify the community matrix, we reconstructed community networks from kelp forests off the Oregon Coast. We simulated all ecologically plausible interactions among community members, selected the models whose outcomes matched field observations, and identified highly frequent links to characterize the community network from a particular site. We tested all possible biologically reasonable community networks through qualitative simulations, selected those that matched patterns observed in the field, and further reduced the set of possibilities by retaining those that were stable. We found that a community can be represented by a set of alternative structures, or scenarios. Of 11,943,936 simulated models, 0.23% matched the field observations; moreover, only 0.006%, or 748 models, were highly reliable in their predictions and met conditions for stability. Predator-prey interactions as well as non-predatory relationships were consistently found in most of the 748 models. These highly frequent connections were useful for characterizing the community network in the study site. We suggest that alternative networks provide the community with a buffer to disturbance, allowing it to continuously reorganize to adapt to a variable environment. This is possible due to the fluctuating capacities of foraging species to consume alternate resources. This suggestion is supported by our results, which indicate that none of the models that matched field observations were fully connected. This plasticity may contribute to the persistence of these communities. We propose that qualitative simulations represent a powerful technique for raising new hypotheses concerning community dynamics and for reconstructing guidelines that may govern community patterns. © 2007 Elsevier B.V. All rights reserved.
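A minimal sketch of the qualitative-simulation idea on a three-species module: random magnitudes are drawn for a fixed sign structure of the community matrix, models are retained only if locally stable, and retained models are screened against a qualitative "field observation" derived from the inverse (press-perturbation) matrix. The species, sign pattern, and screening condition are illustrative, not those of the Oregon kelp system.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative module: kelp (0), urchin (1), predator (2).
# signs[i, j] is the sign of the effect of species j on species i.
signs = np.array([[-1, -1,  0],    # kelp: self-limited, grazed by urchins
                  [ 1, -1, -1],    # urchin: eats kelp, self-limited, eaten by predator
                  [ 0,  1, -1]])   # predator: eats urchins, self-limited

n_models, accepted = 20000, 0
for _ in range(n_models):
    # one ecologically plausible parameterization of the sign structure
    A = signs * rng.uniform(0.1, 1.0, size=signs.shape)
    if np.max(np.linalg.eigvals(A).real) >= 0:      # require local stability
        continue
    response = -np.linalg.inv(A)                    # press-perturbation predictions
    # "field observation": a sustained increase in predators should
    # decrease urchins and increase kelp
    if response[1, 2] < 0 and response[0, 2] > 0:
        accepted += 1

print(f"{accepted} of {n_models} simulated models are stable and match "
      f"the observation ({100 * accepted / n_models:.2f}%)")
```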
Differences in the Weighting and Choice of Evidence for Plausible versus Implausible Causes
ERIC Educational Resources Information Center
Goedert, Kelly M.; Ellefson, Michelle R.; Rehder, Bob
2014-01-01
Individuals have difficulty changing their causal beliefs in light of contradictory evidence. We hypothesized that this difficulty arises because people facing implausible causes give greater consideration to causal alternatives, which, because of their use of a positive test strategy, leads to differential weighting of contingency evidence.…
ERIC Educational Resources Information Center
Spinella, Sarah
2011-01-01
As result replicability is essential to science and difficult to achieve through external replicability, the present paper notes the insufficiency of null hypothesis statistical significance testing (NHSST) and explains the bootstrap as a plausible alternative, with a heuristic example to illustrate the bootstrap method. The bootstrap relies on…
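A minimal sketch of the nonparametric bootstrap the paper advocates, here used to form a percentile confidence interval for a mean difference; the two groups and their scores are simulated purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# illustrative data: scores for two groups (e.g., treatment vs. control)
group_a = rng.normal(loc=52.0, scale=10.0, size=40)
group_b = rng.normal(loc=47.0, scale=10.0, size=40)
observed = group_a.mean() - group_b.mean()

# resample each group with replacement and recompute the statistic
n_boot = 10_000
boot = np.empty(n_boot)
for i in range(n_boot):
    boot[i] = (rng.choice(group_a, size=group_a.size, replace=True).mean()
               - rng.choice(group_b, size=group_b.size, replace=True).mean())

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"mean difference = {observed:.2f}, "
      f"95% percentile bootstrap CI = [{lo:.2f}, {hi:.2f}]")
```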
A model-averaging method for assessing groundwater conceptual model uncertainty.
Ye, Ming; Pohlmann, Karl F; Chapman, Jenny B; Pohll, Greg M; Reeves, Donald M
2010-01-01
This study evaluates alternative groundwater models with different recharge and geologic components at the northern Yucca Flat area of the Death Valley Regional Flow System (DVRFS), USA. Recharge over the DVRFS has been estimated using five methods, and five geological interpretations are available at the northern Yucca Flat area. Combining the recharge and geological components together with additional modeling components that represent other hydrogeological conditions yields a total of 25 groundwater flow models. As all the models are plausible given available data and information, evaluating model uncertainty becomes inevitable. On the other hand, hydraulic parameters (e.g., hydraulic conductivity) are uncertain in each model, giving rise to parametric uncertainty. Propagation of the uncertainty in the models and model parameters through groundwater modeling causes predictive uncertainty in model predictions (e.g., hydraulic head and flow). Parametric uncertainty within each model is assessed using Monte Carlo simulation, and model uncertainty is evaluated using the model averaging method. Two model-averaging techniques (on the basis of information criteria and GLUE) are discussed. This study shows that contribution of model uncertainty to predictive uncertainty is significantly larger than that of parametric uncertainty. For the recharge and geological components, uncertainty in the geological interpretations has more significant effect on model predictions than uncertainty in the recharge estimates. In addition, weighted residuals vary more for the different geological models than for different recharge models. Most of the calibrated observations are not important for discriminating between the alternative models, because their weighted residuals vary only slightly from one model to another.
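A minimal sketch of information-criterion-based model averaging of the kind discussed, assuming each alternative conceptual model has already been calibrated and provides a prediction plus an AIC-type score; the numbers are illustrative, not results from the DVRFS models.

```python
import numpy as np

# illustrative results for five alternative conceptual models:
# each provides a head prediction (m) and a calibrated AIC value
predictions = np.array([102.4, 98.7, 105.1, 100.2, 97.9])
aic         = np.array([241.3, 238.9, 250.6, 239.4, 244.0])

delta   = aic - aic.min()            # AIC differences from the best model
weights = np.exp(-0.5 * delta)
weights /= weights.sum()             # Akaike weights (relative model support)

avg = np.sum(weights * predictions)
# between-model (structural) variance of the averaged prediction
between_var = np.sum(weights * (predictions - avg) ** 2)

for w, a, p in zip(weights, aic, predictions):
    print(f"AIC = {a:6.1f}  weight = {w:.3f}  prediction = {p:6.1f} m")
print(f"model-averaged prediction = {avg:.1f} m, "
      f"between-model SD = {between_var ** 0.5:.1f} m")
```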
Rein, David B; Wittenborn, John S; Zhang, Xinzhi; Allaire, Benjamin A; Song, Michael S; Klein, Ronald; Saaddine, Jinan B
2011-01-01
Objective: To determine whether biennial eye evaluation or telemedicine screening are cost-effective alternatives to current recommendations for the estimated 10 million people aged 30–84 with diabetes but no or minimal diabetic retinopathy. Data Sources: United Kingdom Prospective Diabetes Study, National Health and Nutrition Examination Survey, American Academy of Ophthalmology Preferred Practice Patterns, Medicare Payment Schedule. Study Design: Cost-effectiveness Monte Carlo simulation. Data Collection/Extraction Methods: Literature review, analysis of existing surveys. Principal Findings: Biennial eye evaluation was the most cost-effective treatment option when the ability to detect other eye conditions was included in the model. Telemedicine was most cost-effective when other eye conditions were not considered or when telemedicine was assumed to detect refractive error. The current annual eye evaluation recommendation was costly compared with either treatment alternative. Self-referral was most cost-effective up to a willingness to pay (WTP) of U.S.$37,600, with either biennial or annual evaluation most cost-effective at higher WTP levels. Conclusions: Annual eye evaluations are costly and add little benefit compared with either plausible alternative. More research on the ability of telemedicine to detect other eye conditions is needed to determine whether it is more cost-effective than biennial eye evaluation. PMID:21492158
Biologically Plausible, Human-scale Knowledge Representation
ERIC Educational Resources Information Center
Crawford, Eric; Gingerich, Matthew; Eliasmith, Chris
2016-01-01
Several approaches to implementing symbol-like representations in neurally plausible models have been proposed. These approaches include binding through synchrony (Shastri & Ajjanagadde, 1993), "mesh" binding (van der Velde & de Kamps, 2006), and conjunctive binding (Smolensky, 1990). Recent theoretical work has suggested that…
Esperón-Rodríguez, Manuel; Baumgartner, John B.; Beaumont, Linda J.
2017-01-01
Background: Shrubs play a key role in biogeochemical cycles, prevent soil and water erosion, provide forage for livestock, and are a source of food, wood and non-wood products. However, despite their ecological and societal importance, the influence of different environmental variables on shrub distributions remains unclear. We evaluated the influence of climate and soil characteristics, and whether including soil variables improved the performance of a species distribution model (SDM), Maxent. Methods: This study assessed variation in predictions of environmental suitability for 29 Australian shrub species (representing dominant members of six shrubland classes) due to the use of alternative sets of predictor variables. Models were calibrated with (1) climate variables only, (2) climate and soil variables, and (3) soil variables only. Results: The predictive power of SDMs differed substantially across species, but generally models calibrated with both climate and soil data performed better than those calibrated only with climate variables. Models calibrated solely with soil variables were the least accurate. We found regional differences in potential shrub species richness across Australia due to the use of different sets of variables. Conclusions: Our study provides evidence that predicted patterns of species richness may be sensitive to the choice of predictor set when multiple, plausible alternatives exist, and demonstrates the importance of considering soil properties when modeling availability of habitat for plants. PMID:28652933
Minimal models of electric potential oscillations in non-excitable membranes.
Perdomo, Guillermo; Hernández, Julio A
2010-01-01
Sustained oscillations in the membrane potential have been observed in a variety of cellular and subcellular systems, including several types of non-excitable cells and mitochondria. For the plasma membrane, these electrical oscillations have frequently been related to oscillations in intracellular calcium. For the inner mitochondrial membrane, in several cases the electrical oscillations have been attributed to modifications in calcium dynamics. As an alternative, some authors have suggested that the sustained oscillations in the mitochondrial membrane potential induced by some metabolic intermediates depend on the direct effect of internal protons on proton conductance. Most theoretical models developed to interpret oscillations in the membrane potential integrate several transport and biochemical processes. Here we evaluate whether three simple dynamic models may constitute plausible representations of electric oscillations in non-excitable membranes. The basic mechanism considered in the derivation of the models is based upon evidence obtained by Hattori et al. for mitochondria and assumes that an ionic species (i.e., the proton) is transported via passive and active transport systems between an external and an internal compartment and that the ion affects the kinetic properties of transport by feedback regulation. The membrane potential is incorporated via its effects on kinetic properties. The dynamic properties of two of the models enable us to conclude that they may represent plausible alternatives for describing the generation of electrical oscillations in membranes that depend on the transport of a single ionic species.
Jet or Shock Breakout? The Low-Luminosity GRB 060218
NASA Astrophysics Data System (ADS)
Irwin, Christopher; Chevalier, Roger
2016-01-01
We consider a model for the long-duration, low-luminosity gamma-ray burst GRB 060218 that plausibly accounts for multiwavelength observations to day 20. The components of our model are: (1) a long-lived (tj ~ 3000 s) central engine and accompanying low-luminosity (Lj ~ 10^45 erg s^-1), mildly relativistic jet; (2) a low-mass (~10^-2 Msun) envelope surrounding the progenitor star; and (3) a modest amount of dust (AV ~ 0.1) in the circumstellar or interstellar environment. Blackbody emission from the transparency radius in a low-power jet outflow can fit the prompt thermal X-ray emission, and the prompt nonthermal X-rays and γ-rays may be produced via Compton scattering of thermal photons from hot leptons in the jet interior or the external shocks. The later mildly relativistic phase of this outflow can produce the radio emission via synchrotron radiation from the forward shock. Meanwhile, interaction of the associated SN 2006aj with a circumstellar envelope extending to ~10^13 cm can explain the early optical peak. The X-ray afterglow can be interpreted as a light echo of the prompt emission from dust at ~30 pc. Our model is a plausible alternative to that of Nakar, who recently proposed shock breakout of a jet smothered by an extended envelope as the source of prompt emission. Both our results and Nakar's suggest that ultra-long bursts such as GRB 060218 and GRB 100316D may originate from unusual progenitors with extended circumstellar envelopes, and that a jet is necessary to decouple the prompt high-energy emission from the supernova.
Jet or shock breakout? The low-luminosity GRB 060218
NASA Astrophysics Data System (ADS)
Irwin, Christopher M.; Chevalier, Roger A.
2016-08-01
We consider a model for the low-luminosity gamma-ray burst GRB 060218 that plausibly accounts for multiwavelength observations to day 20. The model components are: (1) a long-lived (tj ~ 3000 s) central engine and accompanying low-luminosity (Lj ~ 10^47 erg s^-1), mildly relativistic (γ ~ 10) jet; (2) a low-mass (~4 × 10^-3 M⊙) envelope surrounding the progenitor star; and (3) a modest amount of dust (AV ~ 0.1 mag) in the circumstellar or interstellar environment. Blackbody emission from the transparency radius in a low-power jet outflow can fit the prompt thermal X-ray emission, and the non-thermal X-rays and gamma-rays may be produced via Compton scattering of thermal photons from hot leptons in the jet interior or the external shocks. The later mildly relativistic phase of this outflow can produce the radio emission via synchrotron radiation from the forward shock. Meanwhile, interaction of the associated SN 2006aj with a circumstellar envelope extending to ~10^13 cm can explain the early optical emission. The X-ray afterglow can be interpreted as a light echo of the prompt emission from dust at ~30 pc. Our model is a plausible alternative to that of Nakar, who recently proposed shock breakout of a jet smothered by an extended envelope as the source of prompt emission. Both our results and Nakar's suggest that bursts such as GRB 060218 may originate from unusual progenitors with extended circumstellar envelopes, and that a jet is necessary to decouple the prompt emission from the supernova.
Freeman, Daniel; Garety, Philippa A; Fowler, David; Kuipers, Elizabeth; Bebbington, Paul E; Dunn, Graham
2004-08-01
Delusions can be viewed as explanations of experiences. By definition, the experiences are insufficient to merit the delusional explanations. So why have delusions been accepted rather than more realistic explanations? The authors report a study of alternative explanations in 100 individuals with delusions. Patients were assessed on the following criteria: symptom measures, the evidence for the delusions, the availability of alternative explanations, reasoning, and self-esteem. Three quarters of the patients did not report any alternative explanation for the experiences on which the delusions were based. These patients reported significantly more internal anomalous experiences and had a more hasty reasoning style than patients who did have alternative explanations available. Having doubt in a delusion, without an alternative explanation, was associated with lower self-esteem. Clinicians will need to develop plausible and compelling alternative accounts of experience in interventions rather than merely challenge patients' delusional beliefs.
Illiquidity premium and expected stock returns in the UK: A new approach
NASA Astrophysics Data System (ADS)
Chen, Jiaqi; Sherif, Mohamed
2016-09-01
This study examines the relative importance of liquidity risk for the time-series and cross-section of stock returns in the UK. We propose a simple way to capture the multidimensionality of illiquidity. Our analysis indicates that existing illiquidity measures have considerable asset-specific components, which justifies our new approach. Further, we use an alternative test of the Amihud (2002) measure, along with parametric and non-parametric methods, to investigate whether liquidity risk is priced in the UK. We find that the inclusion of the illiquidity factor in the capital asset pricing model plays a significant role in explaining the cross-sectional variation in stock returns, in particular with the Fama-French three-factor model. Further, using Hansen-Jagannathan non-parametric bounds, we find that the illiquidity-augmented capital asset pricing models yield a small distance error, whereas other non-liquidity-based models fail to yield economically plausible distance values. Our findings have important implications for managing the liquidity risk of equity portfolios.
Bayesian analysis of caustic-crossing microlensing events
NASA Astrophysics Data System (ADS)
Cassan, A.; Horne, K.; Kains, N.; Tsapras, Y.; Browne, P.
2010-06-01
Aims: Caustic-crossing binary-lens microlensing events are important anomalous events because they are capable of detecting an extrasolar planet companion orbiting the lens star. Fast and robust modelling methods are thus of prime interest in helping to decide whether a planet is detected by an event. Cassan introduced a new set of parameters to model binary-lens events, which are closely related to properties of the light curve. In this work, we explain how Bayesian priors can be added to this framework, and investigate interesting options. Methods: We develop a mathematical formulation that allows us to compute analytically the priors on the new parameters, given some previous knowledge about other physical quantities. We explicitly compute the priors for a number of interesting cases, and show how this can be implemented in a fully Bayesian, Markov chain Monte Carlo algorithm. Results: Using Bayesian priors can accelerate microlens fitting codes by reducing the time spent considering physically implausible models, and helps us to discriminate between alternative models based on the physical plausibility of their parameters.
Evaluation of risk from acts of terrorism :the adversary/defender model using belief and fuzzy sets.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Darby, John L.
Risk from an act of terrorism is a combination of the likelihood of an attack, the likelihood of success of the attack, and the consequences of the attack. The considerable epistemic uncertainty in each of these three factors can be addressed using the belief/plausibility measure of uncertainty from the Dempster/Shafer theory of evidence. The adversary determines the likelihood of the attack. The success of the attack and the consequences of the attack are determined by the security system and mitigation measures put in place by the defender. This report documents a process for evaluating risk of terrorist acts using an adversary/defender model with belief/plausibility as the measure of uncertainty. Also, the adversary model is a linguistic model that applies belief/plausibility to fuzzy sets used in an approximate reasoning rule base.
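A minimal sketch of how belief and plausibility are computed from a basic probability (mass) assignment over a frame of discernment, the Dempster/Shafer construction the report builds on; the attack-likelihood categories and mass values are illustrative, not taken from the report.

```python
from itertools import chain, combinations

# frame of discernment for attack likelihood (illustrative categories)
frame = frozenset(["low", "medium", "high"])

# basic probability assignment: mass on *subsets* of the frame, expressing
# epistemic uncertainty (mass on {medium, high} means "medium or high,
# cannot say which"); masses must sum to 1
mass = {
    frozenset(["low"]): 0.2,
    frozenset(["medium", "high"]): 0.5,
    frame: 0.3,                        # complete-ignorance component
}

def belief(A):
    """Bel(A): total mass committed to subsets of A."""
    return sum(m for B, m in mass.items() if B <= A)

def plausibility(A):
    """Pl(A): total mass not committed against A (subsets intersecting A)."""
    return sum(m for B, m in mass.items() if B & A)

for subset in chain.from_iterable(combinations(frame, r) for r in range(1, 4)):
    A = frozenset(subset)
    print(f"{set(A)!s:25} Bel = {belief(A):.2f}  Pl = {plausibility(A):.2f}")
```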
2011-01-01
Background: Monitoring the time course of mortality by cause is a key public health issue. However, several changes in mortality data production may affect cause-specific time trends, thus altering the interpretation. This paper proposes a statistical method that detects abrupt changes ("jumps") and estimates correction factors that may be used for further analysis. Methods: The method was applied to a subset of the AMIEHS (Avoidable Mortality in the European Union, toward better Indicators for the Effectiveness of Health Systems) project mortality database and considered six European countries and 13 selected causes of death. For each country and cause of death, an automated jump detection method called Polydect was applied to the log mortality rate time series. The plausibility of a data production change associated with each detected jump was evaluated through a literature search or feedback obtained from the national data producers. For each plausible jump position, the statistical significance of the between-age and between-gender jump amplitude heterogeneity was evaluated by means of a generalized additive regression model, and correction factors were deduced from the results. Results: Forty-nine jumps were detected by the Polydect method from 1970 to 2005. Most of the detected jumps were found to be plausible. The age- and gender-specific amplitudes of the jumps were estimated when they were statistically heterogeneous, and they showed greater by-age heterogeneity than by-gender heterogeneity. Conclusion: The method presented in this paper was successfully applied to a large set of causes of death and countries. The method appears to be an alternative to bridge coding methods when the latter are not systematically implemented because they are time- and resource-consuming. PMID:21929756
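Polydect itself is not reproduced here; as a minimal sketch of the underlying idea, the example below flags an abrupt level shift in a simulated log mortality-rate series and derives a multiplicative correction factor from the estimated jump amplitude. The series, the detection threshold, and the jump year are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

# simulated annual log mortality rates with a slow trend and an abrupt
# coding change in 1994 (index 24) that shifts the level downward
years = np.arange(1970, 2006)
log_rate = -5.0 - 0.01 * np.arange(years.size) + rng.normal(0, 0.02, years.size)
log_rate[24:] -= 0.25

# flag the largest level shift in the differenced series
diffs = np.diff(log_rate)
k = int(np.argmax(np.abs(diffs)))
sigma = 1.4826 * np.median(np.abs(diffs - np.median(diffs)))   # robust scale
if abs(diffs[k]) > 4 * sigma:
    amplitude = diffs[k]
    correction = float(np.exp(-amplitude))   # factor applied to rates after the jump
    print(f"jump detected in {years[k + 1]}: amplitude = {amplitude:.3f} on the log scale")
    print(f"multiplicative correction factor for later years = {correction:.3f}")
else:
    print("no abrupt change detected")
```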
ERIC Educational Resources Information Center
Bozeman, Barry; Landsbergen, David
1989-01-01
Two competing approaches to policy analysis are distinguished: a credibility approach, and a truth approach. According to the credibility approach, the policy analyst's role is to search for plausible argument rather than truth. Each approach has pragmatic tradeoffs in fulfilling the goal of providing usable knowledge to decision makers. (TJH)
NASA Astrophysics Data System (ADS)
Jeuland, Marc; Whittington, Dale
2014-03-01
This article presents a methodology for planning new water resources infrastructure investments and operating strategies in a world of climate change uncertainty. It combines a real options (e.g., options to defer, expand, contract, abandon, switch use, or otherwise alter a capital investment) approach with principles drawn from robust decision-making (RDM). RDM comprises a class of methods that are used to identify investment strategies that perform relatively well, compared to the alternatives, across a wide range of plausible future scenarios. Our proposed framework relies on a simulation model that includes linkages between climate change and system hydrology, combined with sensitivity analyses that explore how economic outcomes of investments in new dams vary with forecasts of changing runoff and other uncertainties. To demonstrate the framework, we consider the case of new multipurpose dams along the Blue Nile in Ethiopia. We model flexibility in design and operating decisions—the selection, sizing, and sequencing of new dams, and reservoir operating rules. Results show that there is no single investment plan that performs best across a range of plausible future runoff conditions. The decision-analytic framework is then used to identify dam configurations that are both robust to poor outcomes and sufficiently flexible to capture high upside benefits if favorable future climate and hydrological conditions should arise. The approach could be extended to explore design and operating features of development and adaptation projects other than dams.
McLelland, Douglas; VanRullen, Rufin
2016-10-01
Several theories have been advanced to explain how cross-frequency coupling, the interaction of neuronal oscillations at different frequencies, could enable item multiplexing in neural systems. The communication-through-coherence theory proposes that phase-matching of gamma oscillations between areas enables selective processing of a single item at a time, and a later refinement of the theory includes a theta-frequency oscillation that provides a periodic reset of the system. Alternatively, the theta-gamma neural code theory proposes that a sequence of items is processed, one per gamma cycle, and that this sequence is repeated or updated across theta cycles. In short, both theories serve to segregate representations via the temporal domain, but differ on the number of objects concurrently represented. In this study, we set out to test whether each of these theories is actually physiologically plausible, by implementing them within a single model inspired by physiological data. Using a spiking network model of visual processing, we show that each of these theories is physiologically plausible and computationally useful. Both theories were implemented within a single network architecture, with two areas connected in a feedforward manner, and gamma oscillations generated by feedback inhibition within areas. Simply increasing the amplitude of global inhibition in the lower area, equivalent to an increase in the spatial scope of the gamma oscillation, yielded a switch from one mode to the other. Thus, these different processing modes may co-exist in the brain, enabling dynamic switching between exploratory and selective modes of attention.
On the homocentric spheres of Eudoxus.
NASA Astrophysics Data System (ADS)
Yavetz, I.
1998-03-01
In 1877, Schiaparelli published a classic essay on the homocentric spheres of Eudoxus, which became the standard, definitive historical reconstruction of Eudoxian planetary theory. The purpose of the present paper is to show that the two texts on which Schiaparelli based his reconstruction do not lead in an unequivocal way to this interpretation, and that they actually accommodate alternative and equally plausible interpretations that possess a clear astronomical superiority compared to Schiaparelli's. One alternative interpretation is elaborated here in detail. Thereby, it is shown that the exclusivity traditionally awarded to Schiaparelli's reconstruction can no longer be maintained, and that the little historical evidence we do possess does not enable us to make a choice between the available alternatives.
Patents or patients? Global access to pharmaceuticals and social justice.
de Wildt, Gilles; Khoon, Chan Chee
2008-01-01
Innovation, vaccine development, and world-wide equitable access to necessary pharmaceuticals are hindered by current patenting arrangements and the orientation of pharmaceutical research. Plausible alternatives exist, including instituting the right of national or international agencies to act in the public interest and to buy patents selectively with a view to innovation and equitable access. Alternatives could partly or wholly finance themselves and lower pharmaceutical prices globally. Countries, individuals or groups of patients could help promote alternatives by calling into question the current emphasis on commercialization and profit, and by demanding globally equitable arrangements when sharing data that are important for research or when individuals or communities volunteer as research participants.
Amichetti, Nicole M; White, Alison G; Wingfield, Arthur
2016-01-01
A fundamental question in psycholinguistic theory is whether equivalent success in sentence comprehension may come about by different underlying operations. Of special interest is whether adult aging, especially when accompanied by reduced hearing acuity, may shift the balance of reliance on formal syntax vs. plausibility in determining sentence meaning. In two experiments participants were asked to identify the thematic roles in grammatical sentences that contained either plausible or implausible semantic relations. Comprehension of sentence meanings was indexed by the ability to correctly name the agent or the recipient of an action represented in the sentence. In Experiment 1 young and older adults' comprehension was tested for plausible and implausible sentences with the meaning expressed with either an active-declarative or a passive syntactic form. In Experiment 2 comprehension performance was examined for young adults with age-normal hearing, older adults with good hearing acuity, and age-matched older adults with mild-to-moderate hearing loss for plausible or implausible sentences with meaning expressed with either a subject-relative (SR) or an object-relative (OR) syntactic structure. Experiment 1 showed that the likelihood of interpreting a sentence according to its literal meaning was reduced when that meaning expressed an implausible relationship. Experiment 2 showed that this likelihood was further decreased for OR as compared to SR sentences, and especially so for older adults whose hearing impairment added to the perceptual challenge. Experiment 2 also showed that working memory capacity as measured with a letter-number sequencing task contributed to the likelihood that listeners would base their comprehension responses on the literal syntax even when this processing scheme yielded an implausible meaning. Taken together, the results of both experiments support the postulate that listeners may use more than a single uniform processing strategy for successful sentence comprehension, with the existence of these alternative solutions only revealed when literal syntax and plausibility do not coincide.
Adaptive selection and validation of models of complex systems in the presence of uncertainty
DOE Office of Scientific and Technical Information (OSTI.GOV)
Farrell-Maupin, Kathryn; Oden, J. T.
2017-08-01
This study describes versions of OPAL, the Occam-Plausibility Algorithm in which the use of Bayesian model plausibilities is replaced with information theoretic methods, such as the Akaike Information Criterion and the Bayes Information Criterion. Applications to complex systems of coarse-grained molecular models approximating atomistic models of polyethylene materials are described. All of these model selection methods take into account uncertainties in the model, the observational data, the model parameters, and the predicted quantities of interest. A comparison of the models chosen by Bayesian model selection criteria and those chosen by the information-theoretic criteria is given.
Simulating human behavior for national security human interactions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bernard, Michael Lewis; Hart, Dereck H.; Verzi, Stephen J.
2007-01-01
This 3-year research and development effort focused on what we believe is a significant technical gap in existing modeling and simulation capabilities: the representation of plausible human cognition and behaviors within a dynamic, simulated environment. Specifically, the intent of the "Simulating Human Behavior for National Security Human Interactions" project was to demonstrate an initial simulated human modeling capability that realistically represents intra- and inter-group interaction behaviors between simulated humans and human-controlled avatars as they respond to their environment. Significant progress was made towards simulating human behaviors through the development of a framework that produces realistic characteristics and movement. The simulated humans were created from models designed to be psychologically plausible by being based on robust psychological research and theory. Progress was also made towards enhancing Sandia National Laboratories' existing cognitive models to support culturally plausible behaviors that are important in representing group interactions. These models were implemented in the modular, interoperable, and commercially supported Umbra® simulation framework.
Rhodes, Katherine T; Branum-Martin, Lee; Morris, Robin D; Romski, MaryAnn; Sevcik, Rose A
2015-11-01
Although it is often assumed that mathematics ability alone predicts mathematics test performance, linguistic demands may also predict achievement. This study examined the role of language in mathematics assessment performance for children with less severe levels of intellectual disability (ID), using the KeyMath-Revised Inventory (KM-R) with a sample of 264 children in grades 2-5. Using confirmatory factor analysis, the hypothesis that the KM-R would demonstrate discriminant validity with measures of language abilities in a two-factor model was compared with two plausible alternative models. Results indicated that KM-R did not have discriminant validity with measures of children's language abilities and was a multidimensional test of both mathematics and language abilities for this population of test users. Implications are considered for test development, interpretation, and intervention.
The adverse health effects of chronic cannabis use.
Hall, Wayne; Degenhardt, Louisa
2014-01-01
This paper summarizes the most probable of the adverse health effects of regular cannabis use sustained over years, as indicated by epidemiological studies that have established an association between cannabis use and adverse outcomes; ruled out reverse causation; and controlled for plausible alternative explanations. We have also focused on adverse outcomes for which there is good evidence of biological plausibility. The focus is on those adverse health effects of greatest potential public health significance--those that are most likely to occur and to affect a substantial proportion of regular cannabis users. These most probable adverse effects of regular use include a dependence syndrome, impaired respiratory function, cardiovascular disease, adverse effects on adolescent psychosocial development and mental health, and residual cognitive impairment. Copyright © 2013 John Wiley & Sons, Ltd.
Sutherland, John D
2010-04-01
It has normally been assumed that ribonucleotides arose on the early Earth through a process in which ribose, the nucleobases, and phosphate became conjoined. However, under plausible prebiotic conditions, condensation of nucleobases with ribose to give beta-ribonucleosides is fraught with difficulties. The reaction with purine nucleobases is low-yielding and the reaction with the canonical pyrimidine nucleobases does not work at all. The reasons for these difficulties are considered and an alternative high-yielding synthesis of pyrimidine nucleotides is discussed. Fitting the new synthesis to a plausible geochemical scenario is a remaining challenge but the prospects appear good. Discovery of an improved method of purine synthesis, and an efficient means of stringing activated nucleotides together, will provide underpinning support to those theories that posit a central role for RNA in the origins of life.
Discrimination of numerical proportions: A comparison of binomial and Gaussian models.
Raidvee, Aire; Lember, Jüri; Allik, Jüri
2017-01-01
Observers discriminated the numerical proportion of two sets of elements (N = 9, 13, 33, and 65) that differed either by color or orientation. According to the standard Thurstonian approach, the accuracy of proportion discrimination is determined by irreducible noise in the nervous system that stochastically transforms the number of presented visual elements onto a continuum of psychological states representing numerosity. As an alternative to this customary approach, we propose a Thurstonian-binomial model, which assumes discrete perceptual states, each of which is associated with a certain visual element. It is shown that the probability β with which each visual element can be noticed and registered by the perceptual system can explain data of numerical proportion discrimination at least as well as the continuous Thurstonian-Gaussian model, and better, if the greater parsimony of the Thurstonian-binomial model is taken into account using AIC model selection. We conclude that Gaussian and binomial models represent two different fundamental principles (internal noise vs. the use of only a fraction of the available information), which are both plausible descriptions of visual perception.
Lexical is as lexical does: computational approaches to lexical representation
Woollams, Anna M.
2015-01-01
In much of neuroimaging and neuropsychology, regions of the brain have been associated with ‘lexical representation’, with little consideration as to what this cognitive construct actually denotes. Within current computational models of word recognition, there are a number of different approaches to the representation of lexical knowledge. Structural lexical representations, found in original theories of word recognition, have been instantiated in modern localist models. However, such a representational scheme lacks neural plausibility in terms of economy and flexibility. Connectionist models have therefore adopted distributed representations of form and meaning. Semantic representations in connectionist models necessarily encode lexical knowledge. Yet when equipped with recurrent connections, connectionist models can also develop attractors for familiar forms that function as lexical representations. Current behavioural, neuropsychological and neuroimaging evidence shows a clear role for semantic information, but also suggests some modality- and task-specific lexical representations. A variety of connectionist architectures could implement these distributed functional representations, and further experimental and simulation work is required to discriminate between these alternatives. Future conceptualisations of lexical representations will therefore emerge from a synergy between modelling and neuroscience. PMID:25893204
Byrne, Ruth M J
2016-01-01
People spontaneously create counterfactual alternatives to reality when they think "if only" or "what if" and imagine how the past could have been different. The mind computes counterfactuals for many reasons. Counterfactuals explain the past and prepare for the future, they implicate various relations including causal ones, and they affect intentions and decisions. They modulate emotions such as regret and relief, and they support moral judgments such as blame. The loss of the ability to imagine alternatives as a result of injuries to the prefrontal cortex is devastating. The basic cognitive processes that compute counterfactuals mutate aspects of the mental representation of reality to create an imagined alternative, and they compare alternative representations. The ability to create counterfactuals develops throughout childhood and contributes to reasoning about other people's beliefs, including their false beliefs. Knowledge affects the plausibility of a counterfactual through the semantic and pragmatic modulation of the mental representation of alternative possibilities.
ELF on a Mushroom: The Overnight Growth in English as a Lingua Franca
ERIC Educational Resources Information Center
Sowden, Colin
2012-01-01
In an effort to curtail native-speaker dominance of global English, and in recognition of the growing role of the language among non-native speakers from different first-language backgrounds, some academics have been urging the teaching of English as a Lingua Franca (ELF). Although at first this proposal seems to offer a plausible alternative to…
ERIC Educational Resources Information Center
Lombardi, Douglas Adler
2012-01-01
The Intergovernmental Panel on Climate Change (2007) reported a greater than 90% chance that human activities are responsible for global temperature increases over the last 50 years, as well as other climatic changes. The scientific report also states that alternative explanations (e.g., increasing energy received from the Sun) are less plausible…
Corticomuscular transmission of tremor signals by propriospinal neurons in Parkinson's disease.
Hao, Manzhao; He, Xin; Xiao, Qin; Alstermark, Bror; Lan, Ning
2013-01-01
Cortical oscillatory signals of single and double tremor frequencies act together to cause tremor in the peripheral limbs of patients with Parkinson's disease (PD). However, the corticospinal pathway that transmits the tremor signals has not been clarified, and how alternating bursts of antagonistic muscle activations are generated from the cortical oscillatory signals is not well understood. This paper investigates the plausible role of propriospinal neurons (PN) in C3-C4 in transmitting the cortical oscillatory signals to peripheral muscles. Kinematics data and surface electromyograms (EMG) of forearm tremor were collected from PD patients. A PN network model was constructed based on known neurophysiological connections of PN. The cortical efferent signal of double tremor frequency was integrated at the PN network, whose outputs drove the muscles of a virtual arm (VA) model to simulate tremor behaviors. The cortical efferent signal of single tremor frequency actuated muscle spindles. By comparing tremor data of PD patients with the results of model simulation, we examined two hypotheses regarding the corticospinal transmission of oscillatory signals in Parkinsonian tremor. Hypothesis I stated that the oscillatory cortical signals were transmitted via the mono-synaptic corticospinal pathways bypassing the PN network. The alternative hypothesis II stated that they were transmitted by way of the PN multi-synaptic corticospinal pathway. Simulations indicated that without the PN network, the alternating burst patterns of antagonistic muscle EMGs could not be reliably generated, rejecting the first hypothesis. However, with the PN network, the alternating burst patterns of antagonist EMGs were naturally reproduced under all conditions of cortical oscillations. The results suggest that cortical commands of single and double tremor frequencies are further processed at the PN to compute the alternating burst patterns in flexor and extensor muscles, and that the neuromuscular dynamics demonstrated frequency-dependent damping of tremor, which may prevent tremor above 8 Hz from occurring.
Heck, Daniel W; Hilbig, Benjamin E; Moshagen, Morten
2017-08-01
Decision strategies explain how people integrate multiple sources of information to make probabilistic inferences. In the past decade, increasingly sophisticated methods have been developed to determine which strategy explains decision behavior best. We extend these efforts to test psychologically more plausible models (i.e., strategies), including a new, probabilistic version of the take-the-best (TTB) heuristic that implements a rank order of error probabilities based on sequential processing. Within a coherent statistical framework, deterministic and probabilistic versions of TTB and other strategies can be compared directly using model selection by minimum description length or the Bayes factor. In an experiment with inferences from given information, only three of 104 participants were best described by the psychologically plausible, probabilistic version of TTB. As in previous studies, most participants were classified as users of weighted-additive, a strategy that integrates all available information and approximates rational decisions. Copyright © 2017 Elsevier Inc. All rights reserved.
Johnson, Matthew D.; Anderson, Jared R.; Walker, Ann; Wilcox, Allison; Lewis, Virginia L.; Robbins, David C.
2014-01-01
Using cross-sectional data from 117 married couples in which one member is diagnosed with type 2 diabetes, the current study sought to explore a possible indirect association between common dyadic coping and dietary and exercise adherence via the mechanism of patient and spouse reports of diabetes efficacy. Results from the structural equation model analysis indicated common dyadic coping was associated with higher levels of diabetes efficacy for both patients and spouses which, in turn, was then associated with better dietary and exercise adherence for the patient. This model proved a better fit to the data than three plausible alternative models. The bootstrap test of mediation revealed common dyadic coping was indirectly associated with dietary adherence via both patient and spouse diabetes efficacy, but spouse diabetes efficacy was the only mechanism linking common dyadic coping and exercise adherence. This study highlights the importance of exploring the indirect pathways through which general intimate relationship functioning might be associated with type 2 diabetes outcomes. PMID:24015707
Evaluating disease management program effectiveness: an introduction to survival analysis.
Linden, Ariel; Adams, John L; Roberts, Nancy
2004-01-01
Currently, the most widely used method in the disease management industry for evaluating program effectiveness is the "total population approach." This model is a pretest-posttest design, and its most basic limitation is that, without a control group, there may be sources of bias and/or competing extraneous confounding factors that offer a plausible rationale for the change from baseline. Survival analysis allows for the inclusion of data from censored cases: those subjects who either "survived" the program without experiencing the event (e.g., achievement of target clinical levels, hospitalization), left the program prematurely due to disenrollment from the health plan or program, or were lost to follow-up. Additionally, independent variables may be included in the model to help explain the variability in the outcome measure. In order to maximize the potential of this statistical method, the validity of the model and research design must be assured. This paper reviews survival analysis as an alternative, and more appropriate, approach to evaluating DM program effectiveness than the current total population approach.
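A minimal sketch of the product-limit (Kaplan-Meier) survival estimate with right-censored cases of the kind described above, i.e., participants who disenrolled or were lost to follow-up; the follow-up times and event indicators are illustrative.

```python
import numpy as np

# follow-up time in months and event indicator
# (1 = event occurred, e.g., hospitalization; 0 = censored: disenrolled or lost)
time  = np.array([2, 3, 3, 5, 6, 6, 7, 9, 10, 12, 12, 12])
event = np.array([1, 1, 0, 1, 0, 1, 1, 0,  1,  1,  0,  0])

def kaplan_meier(time, event):
    """Product-limit estimate of the survival function S(t)."""
    surv, points = 1.0, []
    for t in np.unique(time):
        d = np.sum((time == t) & (event == 1))   # events at time t
        n = np.sum(time >= t)                    # subjects still at risk at t
        if d > 0:
            surv *= 1.0 - d / n
            points.append((t, surv))
    return points

for t, s in kaplan_meier(time, event):
    print(f"t = {t:2d} months  S(t) = {s:.3f}")
```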
Robustness of multidimensional Brownian ratchets as directed transport mechanisms.
González-Candela, Ernesto; Romero-Rochín, Víctor; Del Río, Fernando
2011-08-07
Brownian ratchets have recently been considered as models to describe the ability of certain systems to locate very specific states in multidimensional configuration spaces. This directional process has particularly been proposed as an alternative explanation for the protein folding problem, in which the polypeptide is driven toward the native state by a multidimensional Brownian ratchet. Recognizing the relevance of robustness in biological systems, in this work we analyze this property of Brownian ratchets by pushing to the limits all the properties considered essential to produce directed transport. Based on the results presented here, we can state that Brownian ratchets are able to deliver current and locate funnel structures under a wide range of conditions. As a result, they represent a simple model that solves Levinthal's paradox with great robustness and flexibility and without requiring any ad hoc biased transition probability. The behavior of Brownian ratchets shown in this article considerably enhances the plausibility of the model for at least part of the structural mechanism behind the protein folding process.
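A minimal one-dimensional sketch of the flashing-ratchet mechanism that such models generalize: an overdamped particle in a periodically switched asymmetric sawtooth potential acquires a net drift even though neither free diffusion nor the static potential alone is biased. All parameter values are illustrative and unrelated to the multidimensional ratchets studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(5)

L, a = 1.0, 0.8            # sawtooth period and position of its peak (asymmetry)
V0, D = 5.0, 0.1           # barrier height and diffusion coefficient (gamma = 1)
dt, t_on, t_off = 5e-4, 0.05, 0.05

def force(x):
    """Minus the gradient of an asymmetric sawtooth potential with period L."""
    xp = np.mod(x, L)
    return np.where(xp < a, -V0 / a, V0 / (L - a))

n_particles, n_cycles = 2000, 200
x = np.zeros(n_particles)
for _ in range(n_cycles):
    for _ in range(int(t_on / dt)):        # potential switched on: slide to minima
        x += force(x) * dt + np.sqrt(2 * D * dt) * rng.standard_normal(n_particles)
    for _ in range(int(t_off / dt)):       # potential switched off: free diffusion
        x += np.sqrt(2 * D * dt) * rng.standard_normal(n_particles)

# a nonzero mean displacement (here toward the steep side of the tooth)
# demonstrates directed transport without any net external force
print(f"mean displacement after {n_cycles} cycles: {x.mean():.2f} periods")
```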
A mass weighted chemical elastic network model elucidates closed form domain motions in proteins
Kim, Min Hyeok; Seo, Sangjae; Jeong, Jay Il; Kim, Bum Joon; Liu, Wing Kam; Lim, Byeong Soo; Choi, Jae Boong; Kim, Moon Ki
2013-01-01
An elastic network model (ENM), usually a Cα coarse-grained one, has been widely used to study protein dynamics as an alternative to classical molecular dynamics simulation. This simple approach dramatically reduces the computational cost, but sometimes fails to describe a feasible conformational change due to unrealistically excessive spring connections. To overcome this limitation, we propose a mass-weighted chemical elastic network model (MWCENM) in which the total mass of each residue is assumed to be concentrated on the representative alpha carbon atom and various stiffness values are precisely assigned according to the types of chemical interactions. We test MWCENM on several well-known proteins for which both closed and open conformations are available, as well as on three α-helix-rich proteins. Their normal mode analysis reveals that MWCENM not only generates more plausible conformational changes, especially for closed forms of proteins, but also preserves protein secondary structures, thus distinguishing MWCENM from traditional ENMs. In addition, MWCENM also reduces the computational burden by using a sparser stiffness matrix. PMID:23456820
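MWCENM assigns chemistry-specific spring constants, which is not reproduced here; the sketch below shows only the generic machinery it builds on: an anisotropic elastic network Hessian over Cα coordinates, mass weighting of that Hessian, and extraction of the low-frequency normal modes. The coordinates, masses, cutoff, and uniform spring constant are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)

# illustrative Calpha coordinates (n residues x 3, ~3.8 A spacing) and masses (Da)
n = 40
coords = 3.8 * np.arange(n)[:, None] * np.array([1.0, 0.0, 0.0]) \
         + rng.normal(scale=1.0, size=(n, 3))
masses = rng.uniform(90.0, 180.0, size=n)

def enm_hessian(coords, cutoff=12.0, k=1.0):
    """3n x 3n Hessian of a Calpha anisotropic elastic network model."""
    n = len(coords)
    H = np.zeros((3 * n, 3 * n))
    for i in range(n):
        for j in range(i + 1, n):
            d = coords[j] - coords[i]
            r2 = d @ d
            if r2 > cutoff ** 2:
                continue
            block = -k * np.outer(d, d) / r2
            H[3*i:3*i+3, 3*j:3*j+3] = block
            H[3*j:3*j+3, 3*i:3*i+3] = block
            H[3*i:3*i+3, 3*i:3*i+3] -= block
            H[3*j:3*j+3, 3*j:3*j+3] -= block
    return H

H = enm_hessian(coords)
m = np.repeat(masses, 3)
Hw = H / np.sqrt(np.outer(m, m))          # mass-weighted Hessian M^-1/2 H M^-1/2

evals, evecs = np.linalg.eigh(Hw)
# the first six eigenvalues are ~0 (rigid-body translations/rotations);
# evecs[:, 6] reshaped to (n, 3) is the lowest-frequency collective motion
print("lowest non-trivial eigenvalues:", np.round(evals[6:11], 4))
```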
Hippocampal mechanisms for the context-dependent retrieval of episodes
Hasselmo, Michael E.; Eichenbaum, Howard B.
2008-01-01
Behaviors ranging from delivering newspapers to waiting tables depend on remembering previous episodes to avoid incorrect repetition. Physiologically, this requires mechanisms for long-term storage and selective retrieval of episodes based on time of occurrence, despite variable intervals and similarity of events in a familiar environment. Here, this process has been modeled based on physiological properties of the hippocampal formation, including mechanisms for sustained activity in entorhinal cortex and theta rhythm oscillations in hippocampal subregions. The model simulates the context-sensitive firing properties of hippocampal neurons including trial specific firing during spatial alternation and trial by trial changes in theta phase precession on a linear track. This activity is used to guide behavior, and lesions of the hippocampal network impair memory-guided behavior. The model links data at the cellular level to behavior at the systems level, describing a physiologically plausible mechanism for the brain to recall a given episode which occurred at a specific place and time. PMID:16263240
NASA Astrophysics Data System (ADS)
Pyt'ev, Yu. P.
2018-01-01
A mathematical formalism for subjective modeling is presented, based on modeling of uncertainty that reflects the unreliability of subjective information and the fuzziness common to its content. The model of subjective judgments on the values of an unknown parameter x ∈ X of the model M(x) of a research object is defined by the researcher-modeler as a space (X, P(X), Pl^x̃, Bel^x̃) with plausibility Pl^x̃ and believability Bel^x̃ measures, where x̃ is an uncertain element taking values in X that models the researcher-modeler's uncertain propositions about the unknown x ∈ X. The measures Pl^x̃ and Bel^x̃ model the modalities of the researcher-modeler's subjective judgments on the validity of each x ∈ X: the value of Pl^x̃(x̃ = x) determines how relatively plausible, in his opinion, the equality x̃ = x is, while the value of Bel^x̃(x̃ = x) determines how much the equality x̃ = x should be relatively believed in. Versions of the plausibility (Pl) and believability (Bel) measures and of pl- and bel-integrals that inherit some traits of probabilities and of psychophysics and take into account the interests of researcher-modeler groups are considered. It is shown that the mathematical formalism of subjective modeling, unlike "standard" mathematical modeling, enables a researcher-modeler to model both precise formalized knowledge and non-formalized unreliable knowledge, from complete ignorance to precise knowledge of the model of a research object, and to calculate the relative plausibilities and believabilities of any features of a research object specified by its subjective model M(x̃). If observation data on the research object are available, it further enables him to estimate the adequacy of the subjective model to the research objective, to correct the model by combining subjective ideas with the observation data after testing their consistency, and, finally, to empirically recover the model of the research object.
A model for the evolution of CO2 on Mars
NASA Technical Reports Server (NTRS)
Haberle, R. M.; Tyler, D.; Mckay, C. P.; Davis, W. L.
1993-01-01
There are several lines of evidence suggesting that early Mars was warmer and wetter than it is at present. Perhaps the most convincing are the valley networks and degraded craters that characterize much of the ancient terrains; in both cases, fluvial activity associated with liquid water is believed to be involved. Thus, Mars appears to have had a warmer climate early in its history than it does today. How much warmer is not clear, but a common perception has been that global mean surface temperatures must have been near freezing, almost 55 K warmer than at present. The most plausible way to increase surface temperatures is through the greenhouse effect, and the most plausible greenhouse gas is CO2. Pollack et al. estimate that in the presence of the faint young Sun, the early Martian atmosphere would have had to contain almost 5 bar of CO2 to raise the mean surface temperature to the freezing level; only 1 bar would be required under less stringent assumptions about the setting in which the fluvial features formed. These calculations now appear to be wrong, however, since Kasting showed that CO2 will condense in the atmosphere at these pressures and that this greatly reduces the greenhouse effect of a pure CO2 atmosphere. He suggested that alternative greenhouse gases, such as CH4 or NH3, are required. Here, the early Mars dilemma is approached from a slightly different point of view: a model for the evolution of CO2 on Mars was constructed that draws upon published descriptions of the processes affecting such evolution. The model accounts for the variation of solar luminosity with time, the greenhouse effect, regolith uptake, polar cap formation, escape, and weathering.
NASA Astrophysics Data System (ADS)
Wilson, T. S.; Sleeter, B. M.; Sherba, J.; Cameron, D.
2014-12-01
Human land use will increasingly contribute to habitat losses and water shortages in California, given future population projections and associated demand for agricultural land. Understanding how land-use change may impact future water use, and where existing protected areas may be threatened by land-use conversion, will be important if effective, sustainable management approaches are to be implemented. We used a state-and-transition simulation modeling (STSM) framework to simulate spatially explicit (1 km²) historical (1992-2010) and future (2011-2060) land-use change for 52 California counties within the Mediterranean California ecoregion. Historical land-use change estimates were derived from the Farmland Mapping and Monitoring Program (FMMP) dataset and attributed with county-level agricultural water-use data from the California Department of Water Resources (CDWR). Six future alternative land-use scenarios were developed and modeled using the historical land-use change estimates and land-use projections based on the Intergovernmental Panel on Climate Change's (IPCC) Special Report on Emissions Scenarios (SRES) A2 and B1 scenarios. The resulting spatial land-use scenario outputs were combined based on scenario agreement, and a land-conversion threat index was developed to evaluate the vulnerability of existing protected areas. Modeled scenario outputs of county-level agricultural water use were also summarized, enabling examination of alternative water-use futures. We present results of two separate applications of STSM of land-use change, demonstrating the utility of STSM in analyzing land-use-related impacts on water resources as well as potential threats to existing protected land. Exploring a range of alternative, yet plausible, land-use change impacts will help to better inform resource management and mitigation strategies.
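A toy sketch of the state-and-transition simulation idea, in case it helps make the mechanics concrete: each cell carries a land-use class and transitions annually according to scenario-specific probabilities. The class names and transition probabilities are hypothetical placeholders, not FMMP-derived or SRES-calibrated values.

```python
# Minimal sketch of a state-and-transition style simulation: each 1-km2 cell
# holds a land-use class and transitions annually according to a scenario's
# transition probabilities (all values below are hypothetical).
import numpy as np

CLASSES = ["natural", "agriculture", "developed"]
TRANSITIONS = {  # transitions[scenario][from_class] -> probabilities over CLASSES
    "A2": {"natural":     [0.97, 0.02, 0.01],
           "agriculture": [0.00, 0.98, 0.02],
           "developed":   [0.00, 0.00, 1.00]},
}

def simulate(grid, scenario, years, rng=np.random.default_rng(0)):
    """grid: 2-D array of class indices; returns the grid after `years` steps."""
    probs = np.array([TRANSITIONS[scenario][c] for c in CLASSES])
    for _ in range(years):
        u = rng.random(grid.shape)
        cum = probs[grid].cumsum(axis=-1)            # per-cell cumulative probabilities
        grid = (u[..., None] > cum).sum(axis=-1)     # sample the next class per cell
    return grid

start = np.zeros((100, 100), dtype=int)              # every cell starts "natural"
print(np.bincount(simulate(start, "A2", 50).ravel(), minlength=3))
```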
A Nuclear Waste Management Cost Model for Policy Analysis
NASA Astrophysics Data System (ADS)
Barron, R. W.; Hill, M. C.
2017-12-01
Although integrated assessments of climate change policy have frequently identified nuclear energy as a promising alternative to fossil fuels, these studies have often treated nuclear waste disposal very simply. Simple assumptions about nuclear waste are problematic because they may not be adequate to capture relevant costs and uncertainties, which could result in suboptimal policy choices. Modeling nuclear waste management costs is a cross-disciplinary, multi-scale problem that involves economic, geologic and environmental processes that operate at vastly different temporal scales. Similarly, the climate-related costs and benefits of nuclear energy are dependent on environmental sensitivity to CO2 emissions and radiation, nuclear energy's ability to offset carbon emissions, and the risk of nuclear accidents, factors which are all deeply uncertain. Alternative value systems further complicate the problem by suggesting different approaches to valuing intergenerational impacts. Effective policy assessment of nuclear energy requires an integrated approach to modeling nuclear waste management that (1) bridges disciplinary and temporal gaps, (2) supports an iterative, adaptive process that responds to evolving understandings of uncertainties, and (3) supports a broad range of value systems. This work develops the Nuclear Waste Management Cost Model (NWMCM). NWMCM provides a flexible framework for evaluating the cost of nuclear waste management across a range of technology pathways and value systems. We illustrate how NWMCM can support policy analysis by estimating how different nuclear waste disposal scenarios developed using the NWMCM framework affect the results of a recent integrated assessment study of alternative energy futures and their effects on the cost of achieving carbon abatement targets. Results suggest that the optimism reflected in previous works is fragile: Plausible nuclear waste management costs and discount rates appropriate for intergenerational cost-benefit analysis produce many scenarios where nuclear energy is economically unattractive.
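One of the points above, that the discount rate chosen for intergenerational cost-benefit analysis largely determines whether far-future waste-management costs matter, can be illustrated with a few lines of present-value arithmetic. The cost stream and rates below are hypothetical illustrations, not NWMCM parameters.

```python
# Minimal sketch of how the discount rate drives the present value of
# long-lived waste-management costs (hypothetical cost stream and rates).
def present_value(costs_by_year, rate):
    """costs_by_year: iterable of (years_from_now, cost); constant annual rate."""
    return sum(cost / (1.0 + rate) ** year for year, cost in costs_by_year)

# e.g. a repository outlay now plus a monitoring/closure cost far in the future
stream = [(0, 5e9), (100, 20e9)]
for r in (0.0, 0.01, 0.03, 0.07):
    print(f"rate={r:.2f}  PV = {present_value(stream, r) / 1e9:6.1f} $B")
```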
Multi-model inference for incorporating trophic and climate uncertainty into stock assessments
NASA Astrophysics Data System (ADS)
Ianelli, James; Holsman, Kirstin K.; Punt, André E.; Aydin, Kerim
2016-12-01
Ecosystem-based fisheries management (EBFM) approaches allow a broader and more extensive consideration of objectives than is typically possible with conventional single-species approaches. Ecosystem linkages may include trophic interactions and climate change effects on productivity for the relevant species within the system. Presently, models are evolving to include a comprehensive set of fishery and ecosystem information to address these broader management considerations. The increased scope of EBFM approaches is accompanied by a greater number of plausible models to describe the systems. This can lead to harvest recommendations and biological reference points that differ considerably among models. Model selection for projections (and specific catch recommendations) often occurs through a process that tends to adopt familiar, often simpler, models without considering those that incorporate more complex ecosystem information. Multi-model inference provides a framework that resolves this dilemma by providing a means of including information from alternative, often divergent, models to inform biological reference points and possible catch consequences. We apply an example of this approach to data for three species of groundfish in the Bering Sea (walleye pollock, Pacific cod, and arrowtooth flounder) using three models: 1) an age-structured "conventional" single-species model, 2) an age-structured single-species model with temperature-specific weight at age, and 3) a temperature-specific multi-species stock assessment model. The latter two approaches also include consideration of alternative future climate scenarios, adding another dimension to evaluating model projection uncertainty. We show how Bayesian model-averaging methods can be used to incorporate such trophic and climate information to broaden single-species stock assessments by using an EBFM approach that may better characterize uncertainty.
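A minimal sketch of the model-averaging step, assuming information-criterion scores are used to weight the three assessment models (posterior model probabilities or marginal likelihoods could be substituted); the scores and catch estimates are hypothetical placeholders.

```python
# Minimal sketch: convert relative model support into weights and average a
# management quantity (e.g. a recommended catch) across alternative models.
import numpy as np

def model_weights(scores):
    """scores: information-criterion values, lower is better (Akaike-style weights)."""
    delta = np.asarray(scores) - np.min(scores)
    w = np.exp(-0.5 * delta)
    return w / w.sum()

# Hypothetical scores and catch-recommendation estimates for the three models
scores = [1002.3, 1000.1, 1004.8]             # single-species, temp-specific, multi-species
catch_estimates = [1.30e6, 1.10e6, 0.95e6]    # tonnes
w = model_weights(scores)
print("weights:", np.round(w, 3), "  model-averaged catch:", np.dot(w, catch_estimates))
```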
Multipole models of four-image gravitational lenses with anomalous flux ratios
NASA Astrophysics Data System (ADS)
Congdon, Arthur B.; Keeton, Charles R.
2005-12-01
It has been known for over a decade that many four-image gravitational lenses exhibit anomalous radio flux ratios. These anomalies can be explained by adding a clumpy cold dark matter (CDM) component to the background galactic potential of the lens. Evans & Witt (2003) recently suggested that smooth multipole perturbations of the lens galaxy offer a reasonable alternative to CDM substructure in some, but not all, cases. We generalize their method in two ways so as to determine whether multipole models can explain highly anomalous systems: we carry the multipole expansion to higher order, and we include external tidal shear as a free parameter. Fitting for the shear proves crucial to finding a physical (positive-definite density) model. For B1422+231, working to order k_max = 5 (and including shear) yields a model that is physical but implausible. Going to higher order (k_max ≳ 9) reduces global departures from ellipticity, but at the cost of introducing small-scale wiggles near the bright images. These localized undulations are more pronounced in B2045+265, where k_max ≈ 17 multipoles are required to smooth out large-scale deviations from elliptical symmetry. Such modes surely cannot be taken at face value; they must indicate that the models are trying to reproduce some other sort of structure. Our formalism naturally finds models that fit the data exactly, but we use B0712+472 to show that measurement uncertainties have little effect on our results. Finally, we consider the system B1933+503, where two sources are lensed by the same foreground galaxy. The additional constraints provided by the images of the second source render the multipole model unphysical. We conclude that external shear must be taken into account to obtain plausible models, and that a purely smooth angular structure for the lens galaxy does not provide a viable alternative to the prevailing CDM clump hypothesis.
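For orientation, the kind of angular structure being fitted can be written schematically as a multipole series in the lens potential plus an external shear term; this is a generic multipole-plus-shear parameterization, not necessarily the exact convention or normalization used by the authors.

```latex
% Schematic multipole-plus-shear perturbation of the lens potential
% (generic form; k_max and the coefficients a_k, b_k are the fitted quantities)
\delta\psi(r,\theta) = r \sum_{k=3}^{k_{\max}} \left( a_k \cos k\theta + b_k \sin k\theta \right)
  + \tfrac{1}{2}\,\gamma\, r^{2} \cos 2\!\left(\theta - \theta_{\gamma}\right)
```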
On a theory of the evolution of surface cold fronts
NASA Technical Reports Server (NTRS)
Levy, Gad; Bretherton, Christopher S.
1987-01-01
The governing vorticity and divergence equations in the surface layer are derived, and the roles of the different terms and feedback mechanisms are investigated in semigeostrophic and nongeostrophic cold-frontal systems. A planetary boundary layer model is used to perform sensitivity tests, which show that in a cold front the ageostrophic feedback mechanism, as defined by Orlanski and Ross, tends to act as a positive feedback, enhancing vorticity and convergence growth. It therefore cannot explain the phase shift between convergence and vorticity simulated by Orlanski and Ross. An alternative, plausible though tentative, explanation in terms of a gravity wave is offered. It is shown that when the geostrophic deformation increases, nonlinear terms in the divergence equation may become important and further destabilize the system.
Evaluating disease management program effectiveness: an introduction to time-series analysis.
Linden, Ariel; Adams, John L; Roberts, Nancy
2003-01-01
Currently, the most widely used method in the disease management (DM) industry for evaluating program effectiveness is referred to as the "total population approach." This model is a pretest-posttest design; its most basic limitation is that, without a control group, sources of bias and/or extraneous confounding factors may offer a plausible rival explanation for the change from baseline. Furthermore, with the current inclination of DM programs to use financial indicators, rather than program-specific utilization indicators, as the principal measure of program success, additional biases are introduced that may cloud evaluation results. This paper presents a non-technical introduction to time-series analysis (using disease-specific utilization measures) as an alternative, and more appropriate, approach to evaluating DM program effectiveness than the current total population approach.
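A minimal sketch of the alternative being advocated, an interrupted time-series (segmented regression) analysis of a program-specific utilization measure, using simulated data; the variable names and effect sizes are illustrative only.

```python
# Minimal sketch of an interrupted time-series (segmented regression) analysis
# of a monthly utilization measure, as an alternative to a pretest-posttest
# comparison. Data are simulated.
import numpy as np

rng = np.random.default_rng(1)
months = np.arange(48)
post = (months >= 24).astype(float)            # program starts at month 24
time_since = np.where(post == 1, months - 24, 0)
y = 100 - 0.2 * months - 8 * post - 0.5 * time_since + rng.normal(0, 3, 48)

# Design matrix: intercept, baseline trend, level change, slope change
X = np.column_stack([np.ones_like(months), months, post, time_since])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print("baseline trend %.2f, level change %.2f, slope change %.2f"
      % (beta[1], beta[2], beta[3]))
```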
Logan, Gordon D.
2017-01-01
We survey models of response inhibition having different degrees of mathematical, computational and neurobiological specificity and generality. The independent race model accounts for performance of the stop-signal or countermanding task in terms of a race between GO and STOP processes with stochastic finishing times. This model affords insights into neurophysiological mechanisms that are reviewed by other authors in this volume. The formal link between the abstract GO and STOP processes and instantiating neural processes is articulated through interactive race models consisting of stochastic accumulator GO and STOP units. This class of model provides quantitative accounts of countermanding performance and replicates the dynamics of neural activity producing that performance. The interactive race can be instantiated in a network of biophysically plausible spiking excitatory and inhibitory units. Other models seek to account for interactions between units in frontal cortex, basal ganglia and superior colliculus. The strengths, weaknesses and relationships of the different models will be considered. We will conclude with a brief survey of alternative modelling approaches and a summary of problems to be addressed including accounting for differences across effectors, species, individuals, task conditions and clinical deficits. This article is part of the themed issue ‘Movement suppression: brain mechanisms for stopping and stillness’. PMID:28242727
Dealing With Uncertainty When Assessing Fish Passage Through Culvert Road Crossings
NASA Astrophysics Data System (ADS)
Anderson, Gregory B.; Freeman, Mary C.; Freeman, Byron J.; Straight, Carrie A.; Hagler, Megan M.; Peterson, James T.
2012-09-01
Assessing the passage of aquatic organisms through culvert road crossings has become increasingly common in efforts to restore stream habitat. Several federal and state agencies and local stakeholders have adopted assessment approaches based on literature-derived criteria for culvert impassability. However, criteria differ and are typically specific to larger-bodied fishes. In an analysis to prioritize culverts for remediation to benefit imperiled, small-bodied fishes in the Upper Coosa River system in the southeastern United States, we assessed the sensitivity of prioritization to the use of differing but plausible criteria for culvert impassability. Using measurements at 256 road crossings, we assessed culvert impassability using four alternative criteria sets represented in Bayesian belief networks. Two criteria sets scored culverts as either passable or impassable based on alternative thresholds of culvert characteristics (outlet elevation, baseflow water velocity). Two additional criteria sets incorporated uncertainty concerning ability of small-bodied fishes to pass through culverts and estimated a probability of culvert impassability. To prioritize culverts for remediation, we combined estimated culvert impassability with culvert position in the stream network relative to other barriers to compute prospective gain in connected stream habitat for the target fish species. Although four culverts ranked highly for remediation regardless of which criteria were used to assess impassability, other culverts differed widely in priority depending on criteria. Our results emphasize the value of explicitly incorporating uncertainty into criteria underlying remediation decisions. Comparing outcomes among alternative, plausible criteria may also help to identify research most needed to narrow management uncertainty.
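A minimal sketch of the prioritization logic described above: combine uncertain passage criteria into a probability that a culvert is impassable and rank culverts by the expected gain in connected habitat. The step-function probabilities, thresholds, and habitat lengths are hypothetical, not the study's belief-network values.

```python
# Minimal sketch: probability of impassability from uncertain criteria, then
# ranking culverts by expected gain in connected upstream habitat.
def p_impassable(outlet_drop_m, baseflow_velocity_ms):
    # Step-function probabilities standing in for the belief-network criteria
    p_drop = 0.9 if outlet_drop_m > 0.1 else 0.2
    p_vel = 0.8 if baseflow_velocity_ms > 0.4 else 0.1
    return 1.0 - (1.0 - p_drop) * (1.0 - p_vel)   # impassable if either blocks passage

culverts = [  # (name, outlet drop [m], baseflow velocity [m/s], upstream habitat [km])
    ("C1", 0.30, 0.2, 12.0),
    ("C2", 0.05, 0.6, 4.5),
    ("C3", 0.02, 0.1, 20.0),
]
ranked = sorted(culverts, key=lambda c: p_impassable(c[1], c[2]) * c[3], reverse=True)
for name, drop, vel, habitat in ranked:
    print(name, round(p_impassable(drop, vel) * habitat, 1), "km expected gain")
```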
Army Alternative Ground Fuels Qualification
2012-05-31
challenge for the Department, and the realities of global oil markets mean a disruption of oil supplies is plausible and increasingly likely in the ... fuels at a premium for testing purposes, the Department will acquire such fuels for military operations at prices that are competitive with the market ...
Time series regression model for infectious disease and weather.
Imai, Chisato; Armstrong, Ben; Chalabi, Zaid; Mangtani, Punam; Hashizume, Masahiro
2015-10-01
Time series regression has been developed and long used to evaluate the short-term associations of air pollution and weather with mortality or morbidity of non-infectious diseases. The application of the regression approaches from this tradition to infectious diseases, however, is less well explored and raises some new issues. We discuss and present potential solutions for five issues often arising in such analyses: changes in immune population, strong autocorrelations, a wide range of plausible lag structures and association patterns, seasonality adjustments, and large overdispersion. The potential approaches are illustrated with datasets of cholera cases and rainfall from Bangladesh and influenza and temperature in Tokyo. Though this article focuses on the application of the traditional time series regression to infectious diseases and weather factors, we also briefly introduce alternative approaches, including mathematical modeling, wavelet analysis, and autoregressive integrated moving average (ARIMA) models. Modifications proposed to standard time series regression practice include using sums of past cases as proxies for the immune population, and using the logarithm of lagged disease counts to control autocorrelation due to true contagion, both of which are motivated from "susceptible-infectious-recovered" (SIR) models. The complexity of lag structures and association patterns can often be informed by biological mechanisms and explored by using distributed lag non-linear models. For overdispersed models, alternative distribution models such as quasi-Poisson and negative binomial should be considered. Time series regression can be used to investigate dependence of infectious diseases on weather, but may need modifying to allow for features specific to this context. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
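A minimal sketch of one of the proposed modifications, assuming a Poisson log-linear model with the log of lagged cases as a contagion proxy alongside a weather covariate, fit by iteratively reweighted least squares on simulated data; overdispersion would in practice be handled with quasi-Poisson scaling or a negative binomial model.

```python
# Minimal sketch (simulated data): Poisson log-linear model for weekly case
# counts with log lagged cases as a contagion proxy and a temperature term.
import numpy as np

def poisson_irls(X, y, n_iter=25):
    # Initialize on the log scale so the first working update is stable
    beta, *_ = np.linalg.lstsq(X, np.log(y + 0.5), rcond=None)
    for _ in range(n_iter):
        mu = np.exp(X @ beta)
        W = mu                                    # Poisson working weights
        z = X @ beta + (y - mu) / mu              # working response
        beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))
    return beta

rng = np.random.default_rng(0)
weeks = 200
temp = 10 + 8 * np.sin(2 * np.pi * np.arange(weeks) / 52) + rng.normal(0, 1, weeks)
cases = np.maximum(rng.poisson(20 + temp), 1)     # toy case series, always >= 1
y, x_temp = cases[1:], temp[1:]
lag_log = np.log(cases[:-1])                      # log of last week's cases
X = np.column_stack([np.ones(weeks - 1), x_temp, lag_log])
print("coefficients (intercept, temp, log lagged cases):",
      poisson_irls(X, y).round(3))
```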
NASA Astrophysics Data System (ADS)
Liu, Y.; Gupta, H.; Wagener, T.; Stewart, S.; Mahmoud, M.; Hartmann, H.; Springer, E.
2007-12-01
Some of the most challenging issues facing contemporary water resources management are those typified by complex coupled human-environmental systems with poorly characterized uncertainties. In other words, major decisions regarding water resources have to be made in the face of substantial uncertainty and complexity. It has been suggested that integrated models can be used to coherently assemble information from a broad set of domains, and can therefore serve as an effective means for tackling the complexity of environmental systems. Further, well-conceived scenarios can effectively inform decision making, particularly when high complexity and poorly characterized uncertainties make the problem intractable via traditional uncertainty analysis methods. This presentation discusses the integrated modeling framework adopted by SAHRA, an NSF Science & Technology Center, to investigate stakeholder-driven water sustainability issues within the semi-arid southwestern US. The multi-disciplinary, multi-resolution modeling framework incorporates a formal scenario approach to analyze the impacts of plausible (albeit uncertain) alternative futures to support adaptive management of water resources systems. Some of the major challenges involved in, and lessons learned from, this effort will be discussed.
HIV classification using the coalescent theory
Bulla, Ingo; Schultz, Anne-Kathrin; Schreiber, Fabian; Zhang, Ming; Leitner, Thomas; Korber, Bette; Morgenstern, Burkhard; Stanke, Mario
2010-01-01
Motivation: Existing coalescent models and phylogenetic tools based on them are not designed for studying the genealogy of sequences like those of HIV, since HIV recombinants with multiple cross-over points between the parental strains frequently arise. Hence, ambiguous cases in the classification of HIV sequences into subtypes and circulating recombinant forms (CRFs) have been treated with ad hoc methods for lack of tools based on a comprehensive coalescent model accounting for complex recombination patterns. Results: We developed the program ARGUS that scores classifications of sequences into subtypes and recombinant forms. It reconstructs ancestral recombination graphs (ARGs) that reflect the genealogy of the input sequences given a classification hypothesis. An ARG with maximal probability is approximated using a Markov chain Monte Carlo approach. ARGUS was able to distinguish the correct classification with a low error rate from plausible alternative classifications in simulation studies with realistic parameters. We applied our algorithm to decide between two recently debated alternatives in the classification of CRF02 of HIV-1 and find that CRF02 is indeed a recombinant of Subtypes A and G. Availability: ARGUS is implemented in C++ and the source code is available at http://gobics.de/software Contact: ibulla@uni-goettingen.de Supplementary Information: Supplementary data are available at Bioinformatics online. PMID:20400454
Scenario-neutral Food Security Risk Assessment: A livestock Heat Stress Case Study
NASA Astrophysics Data System (ADS)
Broman, D.; Rajagopalan, B.; Hopson, T. M.
2015-12-01
Food security risk assessments can provide decision-makers with actionable information to identify critical system limitations and alternatives to mitigate the impacts of future conditions. The majority of current risk assessments have been scenario-led, and their results are limited by the scenarios: selected future states of the world's climate system and socioeconomic factors. A generic scenario-neutral framework for food security risk assessments is presented here that uses plausible states of the world without initially assigning likelihoods. Measures of system vulnerability are identified and system risk is assessed for these states. This framework has benefited greatly from research in the water and natural resources fields on adapting planning to provide better risk assessments. To illustrate the utility of this framework we develop a case study of livestock heat stress risk within the pastoral system of West Africa. Heat stress can have a major impact not only on livestock owners but on the greater food production system, decreasing livestock growth, milk production, and reproduction and, in severe cases, causing death. A heat stress index calculated from daily weather is used as a vulnerability measure and is computed from historical daily weather data at several locations in the study region. To generate plausible states, a stochastic weather generator is developed to produce synthetic weather sequences at each location, consistent with the seasonal climate. A spatial model of monthly and seasonal heat stress provides projections of current and future livestock heat stress across the study region and can incorporate seasonal climate and other external covariates. These models, when linked with empirical thresholds of heat stress risk for specific breeds, offer decision-makers actionable information for use in near-term warning systems as well as in future planning. Future assessments can indicate the states under which livestock are at greatest risk of heat stress and, when coupled with assessments of additional measures (e.g., water and fodder availability), can inform alternatives that provide satisfactory performance under a wide range of states (e.g., optimal cattle breed, supplemental feed, increased water access).
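A minimal sketch of the vulnerability measure, assuming one commonly used temperature-humidity index (THI) formulation and a trivial stand-in for the stochastic weather generator; the parameters and the stress threshold are placeholders, not the study's fitted generator or breed-specific thresholds.

```python
# Minimal sketch: synthetic daily weather and a count of days exceeding a
# heat-stress threshold under one widely used temperature-humidity index.
import numpy as np

def thi(temp_c, rel_humidity_pct):
    # One commonly used livestock THI: 0.8*T + (RH/100)*(T - 14.4) + 46.4
    return 0.8 * temp_c + (rel_humidity_pct / 100.0) * (temp_c - 14.4) + 46.4

def synthetic_summer(days=90, rng=np.random.default_rng(2)):
    temp = rng.normal(33, 3, days)                # daily max temperature, deg C
    rh = np.clip(rng.normal(55, 15, days), 5, 100)
    return temp, rh

temp, rh = synthetic_summer()
stress_days = int((thi(temp, rh) > 79).sum())     # days above a placeholder cutoff
print("days above THI threshold:", stress_days, "of", len(temp))
```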
Image segmentation using local shape and gray-level appearance models
NASA Astrophysics Data System (ADS)
Seghers, Dieter; Loeckx, Dirk; Maes, Frederik; Suetens, Paul
2006-03-01
A new generic model-based segmentation scheme is presented that, akin to the Active Shape Model (ASM) approach, can be trained from examples in order to acquire knowledge about the shape to be segmented and about the gray-level appearance of the object in the image. In the ASM approach the intensity and shape models are typically applied alternately during optimization: an optimal target location is first selected for each landmark separately, based on local gray-level appearance information only, and the shape model is fitted subsequently; the ASM may therefore be misled by wrongly selected landmark locations. Instead, the proposed approach optimizes shape and intensity characteristics simultaneously. Local gray-level appearance information at the landmark points, extracted from feature images, is used to automatically detect a number of plausible candidate locations for each landmark. The shape information is described by multiple landmark-specific statistical models that capture local dependencies between adjacent landmarks on the shape. The shape and intensity models are combined in a single cost function that is optimized non-iteratively using dynamic programming, which allows the optimal landmark positions to be found using combined shape and intensity information, without the need for initialization.
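A minimal sketch of the non-iterative optimization step, assuming an open chain of landmarks: each landmark has a few candidate locations with an intensity cost, adjacent landmarks share a pairwise shape cost, and dynamic programming picks the jointly optimal chain. The costs are toy numbers, not the paper's statistical models.

```python
# Minimal sketch: Viterbi-style dynamic programming over candidate landmark
# locations, combining per-candidate intensity costs and pairwise shape costs.
import numpy as np

def best_chain(intensity_cost, shape_cost):
    """intensity_cost: list of length L of arrays (n_candidates_i,);
    shape_cost: list of length L-1 of arrays (n_i, n_{i+1})."""
    L = len(intensity_cost)
    total = intensity_cost[0].astype(float)
    back = []
    for i in range(1, L):
        # cost of reaching each candidate of landmark i from the best predecessor
        trans = total[:, None] + shape_cost[i - 1] + intensity_cost[i][None, :]
        back.append(np.argmin(trans, axis=0))
        total = np.min(trans, axis=0)
    # Backtrack the optimal candidate index for every landmark
    path = [int(np.argmin(total))]
    for b in reversed(back):
        path.append(int(b[path[-1]]))
    return list(reversed(path)), float(np.min(total))

intensity = [np.array([1.0, 0.2, 0.5]), np.array([0.3, 0.9]), np.array([0.4, 0.1, 0.7])]
shape = [np.random.default_rng(0).random((3, 2)), np.random.default_rng(1).random((2, 3))]
print(best_chain(intensity, shape))
```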
An Evolutionary Comparison of the Handicap Principle and Hybrid Equilibrium Theories of Signaling.
Kane, Patrick; Zollman, Kevin J S
2015-01-01
The handicap principle has come under significant challenge both from empirical studies and from theoretical work. As a result, a number of alternative explanations for honest signaling have been proposed. This paper compares the evolutionary plausibility of one such alternative, the "hybrid equilibrium," to the handicap principle. We utilize computer simulations to compare these two theories as they are instantiated in Maynard Smith's Sir Philip Sidney game. We conclude that, when both types of communication are possible, evolution is unlikely to lead to handicap signaling and is far more likely to result in the partially honest signaling predicted by hybrid equilibrium theory.
Photosynthetic approaches to chemical biotechnology.
Desai, Shuchi H; Atsumi, Shota
2013-12-01
National interest and environmental advocates encourage alternatives to petroleum-based products. Besides biofuels, many other valuable chemicals used in everyday life are petroleum derivatives or require petroleum for their production. A plausible alternative to petroleum-based chemical production is to harvest the abundant carbon dioxide in the environment to produce valuable hydrocarbons. Currently, efforts are being made to use a natural biological system, photosynthetic microorganisms, to perform this task. Photosynthetic microorganisms are attractive for biochemical production because they rely on economical resources for survival: sunlight and carbon dioxide. This review examines the various compounds produced by photosynthetic microorganisms. Copyright © 2013 Elsevier Ltd. All rights reserved.
A recurrent network mechanism of time integration in perceptual decisions.
Wong, Kong-Fatt; Wang, Xiao-Jing
2006-01-25
Recent physiological studies using behaving monkeys revealed that, in a two-alternative forced-choice visual motion discrimination task, reaction time was correlated with ramping of spike activity of lateral intraparietal cortical neurons. The ramping activity appears to reflect temporal accumulation, on a timescale of hundreds of milliseconds, of sensory evidence before a decision is reached. To elucidate the cellular and circuit basis of such integration times, we developed and investigated a simplified two-variable version of a biophysically realistic cortical network model of decision making. In this model, slow time integration can be achieved robustly if excitatory reverberation is primarily mediated by NMDA receptors; our model with only fast AMPA receptors at recurrent synapses produces decision times that are not comparable with experimental observations. Moreover, we found two distinct modes of network behavior, in which decision computation by winner-take-all competition is instantiated with or without attractor states for working memory. The decision process is closely linked to the local dynamics, in the "decision space" of the system, in the vicinity of an unstable saddle steady state that separates the basins of attraction for the two alternative choices. This picture provides a rigorous and quantitative explanation for the dependence of performance and response time on the degree of task difficulty, and for why reaction times are longer in error trials than in correct trials, as observed in the monkey experiment. Our reduced two-variable neural model offers a simple yet biophysically plausible framework for studying perceptual decision making in general.
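A generic illustration of winner-take-all decision dynamics of the kind described above, using a simple mutual-inhibition firing-rate model; this is not the authors' reduced two-variable model, and all parameters are arbitrary.

```python
# Minimal sketch of winner-take-all decision dynamics: two populations receive
# slightly different evidence plus noise and inhibit each other; the first to
# cross a threshold "decides" (generic illustration, not the authors' model).
import numpy as np

def simulate(coherence=0.1, dt=1e-3, tau=0.1, threshold=0.5,
             rng=np.random.default_rng(0)):
    r1 = r2 = 0.1
    for step in range(5000):
        i1, i2 = 0.5 * (1 + coherence), 0.5 * (1 - coherence)
        n1, n2 = rng.normal(0, 0.02, 2)
        # Each population is driven by its input and inhibited by the other
        r1 += dt / tau * (-r1 + max(i1 - 1.2 * r2 + n1, 0.0))
        r2 += dt / tau * (-r2 + max(i2 - 1.2 * r1 + n2, 0.0))
        if max(r1, r2) > threshold:
            return (1 if r1 > r2 else 2), step * dt   # choice and decision time
    return 0, None                                     # no decision this trial

print([simulate(rng=np.random.default_rng(s))[0] for s in range(10)])
```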
A two component model for thermal emission from organic grains in Comet Halley
NASA Technical Reports Server (NTRS)
Chyba, Christopher; Sagan, Carl
1988-01-01
Observations of Comet Halley in the near infrared reveal a triple-peaked emission feature near 3.4 micrometers, characteristic of C-H stretching in hydrocarbons. A variety of plausible cometary materials exhibit these features, including the organic residue of irradiated candidate cometary ices (such as the residue of irradiated methane ice clathrate) and polycyclic aromatic hydrocarbons. Indeed, any molecule containing -CH3 and -CH2 alkane groups will emit at 3.4 micrometers under suitable conditions. Tentative identifications must therefore rest on additional evidence, including a plausible account of the origins of the organic material, a plausible model for the infrared emission of this material, and a demonstration that this conjunction of material and model not only matches the 3 to 4 micrometer spectrum but also does not yield additional emission features where none is observed. In the case of the residue of irradiated low-occupancy methane ice clathrate, it is argued that the laboratory synthesis of the organic residue closely simulates the radiation processing experienced by Comet Halley.
Comment on “Reconciliation of the Devils Hole climate record with orbital forcing”
Winograd, Isaac J.
2016-01-01
Moseley et al. (Reports, 8 January 2016, p. 165) postulate an increase in dissolved thorium isotope 230Th with depth below the water table as the explanation for the differing ages of Termination II. Flow of geothermal water through the Devils Hole caverns precludes this explanation. Deposition of younger secondary calcite into the initial porosity of the calcite comprising their cores is a plausible alternate explanation.
Time as a dimension of medical law.
Harrington, John
2012-01-01
This paper considers the importance of temporal categories in medical law argumentation. Proceeding from a view of time as plural, rhetorical, and socially produced, it argues that decision making in areas such as the access of minors to contraception, abortion law, end of life care, and emergency caesarian sections can be usefully read as struggles over appropriate time frames. Judges, legislators, and commentators seek to establish the plausibility of a given legal development with reference to the common sense understanding of time which it embodies. Such understandings may be plausible because of their resonance with the diverse temporalities of the law itself. Alternatively, they may reproduce the temporal frames proper to medical science. Not only is time represented in medical law rhetoric, but deliberation in such cases is also subject to temporal pressures which may significantly affect their outcome. The paper concludes by considering the broader political stakes of intertemporal struggles in medical law.
Sofaer, Neema
2014-01-01
A common reason for giving research participants post-trial access (PTA) to the trial intervention appeals to reciprocity, the principle, stated most generally, that if one person benefits a second, the second should reciprocate: benefit the first in return. Many authors consider it obvious that reciprocity supports PTA. Yet their reciprocity principles differ, with many authors apparently unaware of alternative versions. This article is the first to gather the range of reciprocity principles. It finds that: (1) most are false. (2) The most plausible principle, which is also problematic, applies only when participants experience significant net risks or burdens. (3) Seldom does reciprocity support PTA for participants or give researchers stronger reason to benefit participants than equally needy non-participants. (4) Reciprocity fails to explain the common view that it is bad when participants in a successful trial have benefited from the trial intervention but lack PTA to it. PMID:24602060
Biologically Plausible, Human-Scale Knowledge Representation.
Crawford, Eric; Gingerich, Matthew; Eliasmith, Chris
2016-05-01
Several approaches to implementing symbol-like representations in neurally plausible models have been proposed. These approaches include binding through synchrony (Shastri & Ajjanagadde), "mesh" binding (van der Velde & de Kamps), and conjunctive binding (Smolensky). Recent theoretical work has suggested that most of these methods will not scale well, that is, that they cannot encode structured representations using any of the tens of thousands of terms in the adult lexicon without making implausible resource assumptions. Here, we empirically demonstrate that the biologically plausible structured representations employed in the Semantic Pointer Architecture (SPA) approach to modeling cognition (Eliasmith) do scale appropriately. Specifically, we construct a spiking neural network of about 2.5 million neurons that employs semantic pointers to successfully encode and decode the main lexical relations in WordNet, which has over 100,000 terms. In addition, we show that the same representations can be employed to construct recursively structured sentences consisting of arbitrary WordNet concepts, while preserving the original lexical structure. We argue that these results suggest that semantic pointers are uniquely well-suited to providing a biologically plausible account of the structured representations that underwrite human cognition. Copyright © 2015 Cognitive Science Society, Inc.
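A minimal sketch of the kind of compressed vector binding that underlies semantic pointers, here plain circular convolution on random vectors rather than the authors' spiking implementation; the vocabulary and roles are invented for illustration.

```python
# Minimal sketch of vector binding/unbinding via circular convolution: a small
# role-filler structure is encoded and the AGENT filler is recovered by
# unbinding and cleanup against the vocabulary.
import numpy as np

D = 512
rng = np.random.default_rng(0)
def vec():
    v = rng.normal(0, 1, D)
    return v / np.linalg.norm(v)

def bind(a, b):            # circular convolution via FFT
    return np.fft.irfft(np.fft.rfft(a) * np.fft.rfft(b), n=D)

def unbind(s, a):          # convolve with the approximate inverse of a
    inv = np.concatenate(([a[0]], a[:0:-1]))
    return bind(s, inv)

vocab = {name: vec() for name in ["AGENT", "VERB", "PATIENT", "dog", "chases", "cat"]}
sentence = (bind(vocab["AGENT"], vocab["dog"]) +
            bind(vocab["VERB"], vocab["chases"]) +
            bind(vocab["PATIENT"], vocab["cat"]))
query = unbind(sentence, vocab["AGENT"])
best = max(vocab, key=lambda k: np.dot(query, vocab[k]))
print("AGENT decodes to:", best)    # expected: "dog"
```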
NASA Astrophysics Data System (ADS)
Lindstrøm, Ulf; Smout, Sophie; Howell, Daniel; Bogstad, Bjarte
2009-10-01
The Barents Sea ecosystem, one of the most productive and commercially important ecosystems in the world, has experienced major fluctuations in species abundance over the past five decades. Likely causes are natural variability, climate change, overfishing and predator-prey interactions. In this study, we use an age-length structured multi-species model (Gadget, Globally applicable Area-Disaggregated General Ecosystem Toolbox) to analyse the historic population dynamics of major fish and marine mammal species in the Barents Sea. The model was used to examine the possible effects of a number of plausible biological and fisheries scenarios. The results suggest that changes in cod mortality from fishing or cod cannibalism levels have the largest effect on the ecosystem, while changes to the capelin fishery have had only minor effects. Alternative whale migration scenarios had only a moderate impact on the modelled ecosystem. Indirect effects are seen to be important, with cod fishing pressure, cod cannibalism and whale predation on cod having an indirect impact on capelin, emphasising the importance of multi-species modelling in understanding and managing ecosystems. Models such as the one presented here provide one step towards an ecosystem-based approach to fisheries management.
A Biomass-based Model to Estimate the Plausibility of Exoplanet Biosignature Gases
NASA Astrophysics Data System (ADS)
Seager, S.; Bains, W.; Hu, R.
2013-10-01
Biosignature gas detection is one of the ultimate future goals for exoplanet atmosphere studies. We have created a framework for linking biosignature gas detectability to biomass estimates, including atmospheric photochemistry and biological thermodynamics. The new framework is intended to liberate predictive atmosphere models from requiring fixed, Earth-like biosignature gas source fluxes. New biosignature gases can be considered with a check that the biomass estimate is physically plausible. We have validated the models on terrestrial production of NO, H2S, CH4, CH3Cl, and DMS. We have applied the models to propose NH3 as a biosignature gas on a "cold Haber World," a planet with a N2-H2 atmosphere, and to demonstrate why gases such as CH3Cl would require too large a biomass to be plausible biosignature gases on planets with Earth-like or early-Earth-like atmospheres orbiting a Sun-like star. To construct the biomass models, we developed a functional classification of biosignature gases and found that gases (such as CH4, H2S, and N2O) produced by life that extracts energy from chemical potential energy gradients will always have false positives, because geochemistry has the same gases to work with as life does, whereas gases (such as DMS and CH3Cl) produced for secondary metabolic reasons are far less likely to have false positives but, because of their highly specialized origin, are more likely to be produced in small quantities. The biomass model estimates are valid to one or two orders of magnitude; the goal is an independent approach to testing whether a biosignature gas is plausible rather than a precise quantification of atmospheric biosignature gases and their corresponding biomasses.
Frerichs, Leah M; Araz, Ozgur M; Huang, Terry T-K
2013-01-01
Research evidence indicates that obesity has spread through social networks, but lever points for interventions based on overlapping networks are not well studied. The objective of our research was to construct and parameterize a system dynamics model of the social transmission of behaviors through adult and youth influence in order to explore hypotheses and identify plausible lever points for future childhood obesity intervention research. Our objectives were: (1) to assess the sensitivity of childhood overweight and obesity prevalence to peer and adult social transmission rates, and (2) to test the effect of combinations of prevention and treatment interventions on the prevalence of childhood overweight and obesity. To address the first objective, we conducted two-way sensitivity analyses of adult-to-child and child-to-child social transmission in relation to childhood overweight and obesity prevalence. For the second objective, alternative combinations of prevention and treatment interventions were tested by varying model parameters of social transmission and weight loss behavior rates. Our results indicated child overweight and obesity prevalence might be slightly more sensitive to the same relative change in the adult-to-child compared to the child-to-child social transmission rate. In our simulations, alternatives with treatment alone, compared to prevention alone, reduced the prevalence of childhood overweight and obesity more after 10 years (1.2-1.8% and 0.2-1.0% greater reduction when targeted at children and adults respectively). Also, as the impact of adult interventions on children was increased, the rank of six alternatives that included adults became better (i.e., resulting in lower 10 year childhood overweight and obesity prevalence) than alternatives that only involved children. The findings imply that social transmission dynamics should be considered when designing both prevention and treatment intervention approaches. Finally, targeting adults may be more efficient, and research should strengthen and expand adult-focused interventions that have a high residual impact on children.
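A minimal sketch of the kind of system dynamics structure described above, with child-to-child and adult-to-child transmission terms and a treatment-driven recovery flow; every rate constant is a hypothetical placeholder, not a fitted value.

```python
# Minimal sketch: children move between healthy-weight and overweight/obese
# stocks at rates influenced by contact with overweight children and adults,
# plus a treatment-driven recovery flow (all rates hypothetical).
def simulate(years=10, dt=0.1, beta_child=0.08, beta_adult=0.10, recovery=0.03,
             adult_prev=0.35):
    healthy, overweight = 0.80, 0.20              # child prevalence fractions
    for _ in range(int(years / dt)):
        influence = beta_child * overweight + beta_adult * adult_prev
        new_cases = influence * healthy * dt
        recovered = recovery * overweight * dt
        healthy += recovered - new_cases
        overweight += new_cases - recovered
    return overweight

print("10-yr child overweight/obesity prevalence: %.3f" % simulate())
print("with doubled treatment (recovery) rate:    %.3f" % simulate(recovery=0.06))
```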
Unified-theory-of-reinforcement neural networks do not simulate the blocking effect.
Calvin, Nicholas T; J McDowell, J
2015-11-01
For the last 20 years the unified theory of reinforcement (Donahoe et al., 1993) has been used to develop computer simulations to evaluate its plausibility as an account of behavior. The unified theory of reinforcement states that operant and respondent learning occur via the same neural mechanisms. This study, part of a larger project to evaluate the operant behavior predicted by the theory, was the first replication of neural network models based on the unified theory of reinforcement. In the process of replicating these neural network models it became apparent that a previously published finding, namely, that the networks simulate the blocking phenomenon (Donahoe et al., 1993), was a misinterpretation of the data. We show that the apparent blocking produced by these networks is an artifact of their inability to generate the same conditioned response to multiple stimuli. The piecemeal approach to evaluating the unified theory of reinforcement via simulation is critiqued and alternatives are discussed. Copyright © 2015 Elsevier B.V. All rights reserved.
Quinn, Meghan E; Grant, Kathryn E; Adam, Emma K
2018-03-01
When exposed to stressful life events, a significant number of adolescents will experience depressive symptoms. One model of depression suggests that individuals with a negative cognitive style are most vulnerable to depression following life stress. Alternatively, altered activation of the hypothalamic-pituitary-adrenal axis may explain vulnerability to depression following life stress. Each of these models plausibly explains the emergence of depressive symptoms during adolescence and have been investigated largely independently. The current study recruited a sample of urban adolescents (N = 179) to evaluate whether cortisol response to a laboratory stress induction and negative cognitive style are related and whether they independently interact with exposure to stressful life events to predict symptoms of depression. Negative cognitive style was not associated with cortisol response to the laboratory stressor. Rather, negative cognitive style and cortisol recovery independently interacted with stressful life events to predict current symptoms of depression. Results support a heterogeneous etiology of depression.
Formal Models of the Network Co-occurrence Underlying Mental Operations.
Bzdok, Danilo; Varoquaux, Gaël; Grisel, Olivier; Eickenberg, Michael; Poupon, Cyril; Thirion, Bertrand
2016-06-01
Systems neuroscience has identified a set of canonical large-scale networks in humans. These have predominantly been characterized by resting-state analyses of the task-unconstrained, mind-wandering brain. Their explicit relationship to defined task performance is largely unknown and remains challenging. The present work contributes a multivariate statistical learning approach that can extract the major brain networks and quantify their configuration during various psychological tasks. The method is validated in two extensive datasets (n = 500 and n = 81) by model-based generation of synthetic activity maps from recombination of shared network topographies. To study a use case, we formally revisited the poorly understood difference between neural activity underlying idling versus goal-directed behavior. We demonstrate that task-specific neural activity patterns can be explained by plausible combinations of resting-state networks. The possibility of decomposing a mental task into the relative contributions of major brain networks, the "network co-occurrence architecture" of a given task, opens an alternative access to the neural substrates of human cognition.
The population genetics of the alpha-2 globin locus of orangutans (Pongo pygmaeus).
Steiper, Michael E; Wolfe, Nathan D; Karesh, William B; Kilbourn, Annelisa M; Bosi, Edwin J; Ruvolo, Maryellen
2005-03-01
In this study, the molecular population genetics of the orangutan's alpha-2 globin (HBA2) gene were investigated in order to test for the action of natural selection. Haplotypes from 28 orangutan chromosomes were collected from a 1.46-kilobase region of the alpha-2 globin locus. While many aspects of the data were consistent with neutrality, the observed heterogeneous distribution of polymorphisms was inconsistent with neutral expectations. Furthermore, a single amino acid variant, found in both the Bornean and the Sumatran orangutan subspecies, was associated with different alternative synonymous variants in each subspecies, suggesting that the allele may have spread separately through the two subspecies after two distinct origination events. This variant is not in Hardy-Weinberg equilibrium (HWE). These observations are consistent with neutral models that incorporate population structure and models that invoke selection. The orangutan Plasmodium parasite is a plausible selective agent that may underlie the variation at alpha-2 globin in orangutans.
Uncertainty analysis of least-cost modeling for designing wildlife linkages.
Beier, Paul; Majka, Daniel R; Newell, Shawn L
2009-12-01
Least-cost models for focal species are widely used to design wildlife corridors. To evaluate the least-cost modeling approach used to develop 15 linkage designs in southern California, USA, we assessed robustness of the largest and least constrained linkage. Species experts parameterized models for eight species with weights for four habitat factors (land cover, topographic position, elevation, road density) and resistance values for each class within a factor (e.g., each class of land cover). Each model produced a proposed corridor for that species. We examined the extent to which uncertainty in factor weights and class resistance values affected two key conservation-relevant outputs, namely, the location and modeled resistance to movement of each proposed corridor. To do so, we compared the proposed corridor to 13 alternative corridors created with parameter sets that spanned the plausible ranges of biological uncertainty in these parameters. Models for five species were highly robust (mean overlap 88%, little or no increase in resistance). Although the proposed corridors for the other three focal species overlapped as little as 0% (mean 58%) of the alternative corridors, resistance in the proposed corridors for these three species was rarely higher than resistance in the alternative corridors (mean difference was 0.025 on a scale of 1 to 10; worst difference was 0.39). As long as the model had the correct rank order of resistance values and factor weights, our results suggest that the predicted corridor is robust to uncertainty. The three carnivore focal species, alone or in combination, were not effective umbrellas for the other focal species. The carnivore corridors failed to overlap the predicted corridors of most other focal species and provided relatively high resistance for the other focal species (mean increase of 2.7 resistance units). Least-cost modelers should conduct uncertainty analysis so that decision-makers can appreciate the potential impact of model uncertainty on conservation decisions. Our approach to uncertainty analysis (which can be called a worst-case scenario approach) is appropriate for complex models in which distribution of the input parameters cannot be specified.
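A minimal sketch of the underlying least-cost machinery, assuming a Dijkstra-style accumulated-cost surface on a raster of resistance values; grid values and the corridor cutoff are toy numbers, not the linkage designs' parameters.

```python
# Minimal sketch of least-cost corridor modeling: accumulate cell resistances
# from each endpoint patch with Dijkstra's algorithm, sum the two surfaces,
# and keep the cheapest swath as the corridor.
import heapq
import numpy as np

def cost_surface(resistance, sources):
    """Accumulated least cost from any source cell, 4-neighbour moves."""
    rows, cols = resistance.shape
    dist = np.full((rows, cols), np.inf)
    heap = [(0.0, rc) for rc in sources]
    for _, (r, c) in heap:
        dist[r, c] = 0.0
    heapq.heapify(heap)
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if d > dist[r, c]:
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + 0.5 * (resistance[r, c] + resistance[nr, nc])
                if nd < dist[nr, nc]:
                    dist[nr, nc] = nd
                    heapq.heappush(heap, (nd, (nr, nc)))
    return dist

rng = np.random.default_rng(0)
res = rng.uniform(1, 10, (60, 60))                # resistance from habitat factors
corridor_cost = cost_surface(res, [(0, 0)]) + cost_surface(res, [(59, 59)])
corridor = corridor_cost <= corridor_cost.min() * 1.05   # cheapest 5% swath
print("corridor cells:", int(corridor.sum()))
```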
NASA Astrophysics Data System (ADS)
Zhang, Jiaxin; Shields, Michael D.
2018-01-01
This paper addresses the problem of uncertainty quantification and propagation when data for characterizing probability distributions are scarce. We propose a methodology wherein the full uncertainty associated with probability model form and parameter estimation is retained and efficiently propagated. This is achieved by applying the information-theoretic multimodel inference method to identify plausible candidate probability densities and the associated probability that each candidate is the best model in the Kullback-Leibler sense. The joint parameter densities for each plausible model are then estimated using Bayes' rule. We then propagate this full set of probability models by estimating an optimal importance sampling density that is representative of all plausible models, propagating this density, and reweighting the samples according to each of the candidate probability models. This is in contrast with conventional methods that try to identify a single probability model that encapsulates the full uncertainty caused by lack of data and consequently underestimate uncertainty. The result is a complete probabilistic description of both aleatory and epistemic uncertainty achieved with several orders of magnitude reduction in computational cost. It is shown how the model can be updated to adaptively accommodate added data and added candidate probability models. The method is applied to uncertainty analysis of plate buckling strength, where it is demonstrated how dataset size affects the confidence (or lack thereof) we can place in statistical estimates of response when data are lacking.
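A minimal sketch of the propagation step described above: samples are drawn once from a mixture importance density spanning the plausible candidate distributions, pushed through the response function once, and then reweighted under each candidate model. The candidate models, model probabilities, and response function are toy choices.

```python
# Minimal sketch: propagate one importance sample through a response function
# and reweight it under each plausible candidate distribution.
import numpy as np
from scipy import stats

candidates = [  # (model probability, candidate distribution)
    (0.5, stats.norm(loc=10.0, scale=1.0)),
    (0.3, stats.lognorm(s=0.1, scale=10.0)),
    (0.2, stats.norm(loc=10.5, scale=1.5)),
]
n = 20000
probs = np.array([p for p, _ in candidates])
# Importance sampling density: the probability-weighted mixture of candidates
comp = np.random.default_rng(0).choice(len(candidates), size=n, p=probs)
x = np.concatenate([d.rvs(size=(comp == k).sum())
                    for k, (_, d) in enumerate(candidates)])
mix_pdf = sum(p * d.pdf(x) for p, d in candidates)

response = 0.05 * x**2            # stand-in for an expensive model, run once
for k, (p, d) in enumerate(candidates):
    w = d.pdf(x) / mix_pdf        # reweight the same samples under candidate k
    w /= w.sum()
    print(f"model {k}: P={p:.1f}  E[response] = {np.dot(w, response):.3f}")
```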
Testing adaptive toolbox models: a Bayesian hierarchical approach.
Scheibehenne, Benjamin; Rieskamp, Jörg; Wagenmakers, Eric-Jan
2013-01-01
Many theories of human cognition postulate that people are equipped with a repertoire of strategies to solve the tasks they face. This theoretical framework of a cognitive toolbox provides a plausible account of intra- and interindividual differences in human behavior. Unfortunately, it is often unclear how to rigorously test the toolbox framework. How can a toolbox model be quantitatively specified? How can the number of toolbox strategies be limited to prevent uncontrolled strategy sprawl? How can a toolbox model be formally tested against alternative theories? The authors show how these challenges can be met by using Bayesian inference techniques. By means of parameter recovery simulations and the analysis of empirical data across a variety of domains (i.e., judgment and decision making, children's cognitive development, function learning, and perceptual categorization), the authors illustrate how Bayesian inference techniques allow toolbox models to be quantitatively specified, strategy sprawl to be contained, and toolbox models to be rigorously tested against competing theories. The authors demonstrate that their approach applies at the individual level but can also be generalized to the group level with hierarchical Bayesian procedures. The suggested Bayesian inference techniques represent a theoretical and methodological advancement for toolbox theories of cognition and behavior.
NASA Technical Reports Server (NTRS)
Murray, William R.
1990-01-01
An approach is described to student modeling for intelligent tutoring systems based on an explicit representation of the tutor's beliefs about the student and the arguments for and against those beliefs (called endorsements). A lexicographic comparison of arguments, sorted according to evidence reliability, provides a principled means of determining those beliefs that are considered true, false, or uncertain. Each of these beliefs is ultimately justified by underlying assessment data. The endorsement-based approach to student modeling is particularly appropriate for tutors controlled by instructional planners. These tutors place greater demands on a student model than opportunistic tutors. Numerical calculi approaches are less well-suited because it is difficult to correctly assign numbers for evidence reliability and rule plausibility. It may also be difficult to interpret final results and provide suitable combining functions. When numeric measures of uncertainty are used, arbitrary numeric thresholds are often required for planning decisions. Such an approach is inappropriate when robust context-sensitive planning decisions must be made. A TMS-based implementation of the endorsement-based approach to student modeling is presented, this approach is compared to alternatives, and a project history is provided describing the evolution of this approach.
Handbook of Forecasting Techniques. Part 2. Description of 31 Techniques
1977-08-01
a discipline, or some other coherent group. Panels have often produced good results, but care must be taken to avoid bandwagon effects, blockage of... "bandwagon" effect often occurs in panels, so that one person's viewpoint overwhelms the opinions of others and/or plausible alternatives never get proper... as an ancient one, however. Since Newton, the western world has increasingly acknowledged the universality of cause-effect explanations, with cause
Future Battles and the Development of Military Concepts
2013-08-22
Land Battle concept dating from the Cold War era. The author maintains that such an approach is tied to old ways of thinking; the world has changed... the current world economic and social state, along with anticipated future flash points around the globe; a new military operational concept titled... project power, let alone rival U.S. dominance on the high seas. An alternate and more plausible future is a world that will require frequent
Quark matter or new particles?
NASA Technical Reports Server (NTRS)
Michel, F. Curtis
1988-01-01
It has been argued that compression of nuclear matter to somewhat higher densities may lead to the formation of stable quark matter. A plausible alternative, which leads to radically new astrophysical scenarios, is that the stability of quark matter simply represents the stability of new particles compounded of quarks. A specific example is the SU(3)-symmetric version of the alpha particle, composed of spin-zero pairs of each of the baryon octet (an 'octet' particle).
Laszlo, Sarah; Plaut, David C
2012-03-01
The Parallel Distributed Processing (PDP) framework has significant potential for producing models of cognitive tasks that approximate how the brain performs the same tasks. To date, however, there has been relatively little contact between PDP modeling and data from cognitive neuroscience. In an attempt to advance the relationship between explicit, computational models and physiological data collected during the performance of cognitive tasks, we developed a PDP model of visual word recognition which simulates key results from the ERP reading literature, while simultaneously being able to successfully perform lexical decision, a benchmark task for reading models. Simulations reveal that the model's success depends on the implementation of several neurally plausible features in its architecture which are sufficiently domain-general to be relevant to cognitive modeling more generally. Copyright © 2011 Elsevier Inc. All rights reserved.
An Evolutionary Comparison of the Handicap Principle and Hybrid Equilibrium Theories of Signaling
Kane, Patrick; Zollman, Kevin J. S.
2015-01-01
The handicap principle has come under significant challenge both from empirical studies and from theoretical work. As a result, a number of alternative explanations for honest signaling have been proposed. This paper compares the evolutionary plausibility of one such alternative, the “hybrid equilibrium,” to the handicap principle. We utilize computer simulations to compare these two theories as they are instantiated in Maynard Smith’s Sir Philip Sidney game. We conclude that, when both types of communication are possible, evolution is unlikely to lead to handicap signaling and is far more likely to result in the partially honest signaling predicted by hybrid equilibrium theory. PMID:26348617
Is evaluating complementary and alternative medicine equivalent to evaluating the absurd?
Greasley, Pete
2010-06-01
Complementary and alternative therapies such as reflexology and acupuncture have been the subject of numerous evaluations, clinical trials, and systematic reviews, yet the empirical evidence in support of their efficacy remains equivocal. The empirical evaluation of a therapy would normally assume a plausible rationale regarding the mechanism of action. However, examination of the historical background and underlying principles for reflexology, iridology, acupuncture, auricular acupuncture, and some herbal medicines, reveals a rationale founded on the principle of analogical correspondences, which is a common basis for magical thinking and pseudoscientific beliefs such as astrology and chiromancy. Where this is the case, it is suggested that subjecting these therapies to empirical evaluation may be tantamount to evaluating the absurd.
New methods in hydrologic modeling and decision support for culvert flood risk under climate change
NASA Astrophysics Data System (ADS)
Rosner, A.; Letcher, B. H.; Vogel, R. M.; Rees, P. S.
2015-12-01
Assessing culvert flood vulnerability under climate change poses an unusual combination of challenges. We seek a robust method of planning for an uncertain future, and therefore must consider a wide range of plausible future conditions. Culverts in our case study area, northwestern Massachusetts, USA, are predominantly found in small, ungaged basins. The need to predict flows both at numerous sites and under numerous plausible climate conditions requires a statistical model with low data and computational requirements. We present a statistical streamflow model that is driven by precipitation and temperature, allowing us to predict flows without reliance on reference gages of observed flows. The hydrological analysis is used to determine each culvert's risk of failure under current conditions. We also explore the hydrological response to a range of plausible future climate conditions. These results are used to determine the tolerance of each culvert to future increases in precipitation. In a decision support context, current flood risk as well as tolerance to potential climate changes are used to provide a robust assessment and prioritization for culvert replacements.
Moore, C.T.; Conroy, M.J.
2006-01-01
Stochastic and structural uncertainties about forest dynamics present challenges in the management of ephemeral habitat conditions for endangered forest species. Maintaining critical foraging and breeding habitat for the endangered red-cockaded woodpecker (Picoides borealis) requires an uninterrupted supply of old-growth forest. We constructed and optimized a dynamic forest growth model for the Piedmont National Wildlife Refuge (Georgia, USA) with the objective of perpetuating a maximum stream of old-growth forest habitat. Our model accommodates stochastic disturbances and hardwood succession rates, and uncertainty about model structure. We produced a regeneration policy that was indexed by current forest state and by current weight of evidence among alternative model forms. We used adaptive stochastic dynamic programming, which anticipates that model probabilities, as well as forest states, may change through time, with consequent evolution of the optimal decision for any given forest state. In light of considerable uncertainty about forest dynamics, we analyzed a set of competing models incorporating extreme, but plausible, parameter values. Under any of these models, forest silviculture practices currently recommended for the creation of woodpecker habitat are suboptimal. We endorse fully adaptive approaches to the management of endangered species habitats in which predictive modeling, monitoring, and assessment are tightly linked.
Iijima, Takatoshi; Hidaka, Chiharu; Iijima, Yoko
2016-08-01
Alternative pre-mRNA splicing is a fundamental mechanism that generates molecular diversity from a single gene. In the central nervous system (CNS), key neural developmental steps are thought to be controlled by alternative splicing decisions, including the molecular diversity underlying synaptic wiring, plasticity, and remodeling. Significant progress has been made in understanding the molecular mechanisms and functions of alternative pre-mRNA splicing in neurons through studies in invertebrate systems; however, recent studies have begun to uncover the potential role of neuronal alternative splicing in the mammalian CNS. This article provides an overview of recent findings regarding the regulation and function of neuronal alternative splicing. In particular, we focus on the spatio-temporal regulation of neurexin, a synaptic adhesion molecule, by neuronal cell type-specific factors and neuronal activity, which are thought to be especially important for characterizing neural development and function within the mammalian CNS. Notably, there is increasing evidence that implicates the dysregulation of neuronal splicing events in several neurological disorders. Therefore, understanding the detailed mechanisms of neuronal alternative splicing in the mammalian CNS may provide plausible treatment strategies for these diseases. Copyright © 2016 Elsevier Ireland Ltd and Japan Neuroscience Society. All rights reserved.
Counterfactual thinking in patients with amnesia
Mullally, Sinéad L; Maguire, Eleanor A
2014-01-01
We often engage in counterfactual (CF) thinking, which involves reflecting on “what might have been.” Creating alternative versions of reality seems to have parallels with recollecting the past and imagining the future in requiring the simulation of internally generated models of complex events. Given that episodic memory and imagining the future are impaired in patients with hippocampal damage and amnesia, we wondered whether successful CF thinking also depends upon the integrity of the hippocampus. Here using two nonepisodic CF thinking tasks, we found that patients with bilateral hippocampal damage and amnesia performed comparably with matched controls. They could deconstruct reality, add in and recombine elements, change relations between temporal sequences of events, enabling them to determine plausible alternatives of complex episodes. A difference between the patients and control participants was evident, however, in the patients' subtle avoidance of CF simulations that required the construction of an internal spatial representation. Overall, our findings suggest that mental simulation in the form of nonepisodic CF thinking does not seem to depend upon the hippocampus unless there is the added requirement for construction of a coherent spatial scene within which to play out scenarios. © 2014 The Authors. Hippocampus Published by Wiley Periodicals, Inc. PMID:24978690
Liberal rationalism and medical decision-making.
Savulescu, Julian
1997-04-01
I contrast Robert Veatch's recent liberal vision of medical decision-making with a more rationalist liberal model. According to Veatch, physicians are biased in their determination of what is in their patient's overall interests in favour of their medical interests. Because of the extent of this bias, we should abandon the practice of physicians offering what they guess to be the best treatment option. Patients should buddy up with physicians who share the same values -- 'deep value pairing'. The goal of choice is maximal promotion of patient values. I argue that if subjectivism about value and valuing is true, this move is plausible. However, if objectivism about value is true -- that there really are states which are good for people regardless of whether they desire to be in them -- then we should accept a more rationalist liberal alternative. According to this alternative, what is required to decide which course is best is rational dialogue between physicians and patients, both about the patient's circumstances and her values, and not the seeking out of people, physicians or others, who share the same values. Rational discussion requires that physicians be reasonable and empathic. I describe one possible account of a reasonable physician.
NASA Astrophysics Data System (ADS)
Pakyuz-Charrier, Evren; Lindsay, Mark; Ogarko, Vitaliy; Giraud, Jeremie; Jessell, Mark
2018-04-01
Three-dimensional (3-D) geological structural modeling aims to determine geological information in a 3-D space using structural data (foliations and interfaces) and topological rules as inputs. This is necessary in any project in which the properties of the subsurface matter; they express our understanding of geometries in depth. For that reason, 3-D geological models have a wide range of practical applications including but not restricted to civil engineering, the oil and gas industry, the mining industry, and water management. These models, however, are fraught with uncertainties originating from the inherent flaws of the modeling engines (working hypotheses, interpolator's parameterization) and the inherent lack of knowledge in areas where there are no observations combined with input uncertainty (observational, conceptual and technical errors). Because 3-D geological models are often used for impactful decision-making, it is critical that all 3-D geological models provide accurate estimates of uncertainty. This paper's focus is set on the effect of structural input data measurement uncertainty propagation in implicit 3-D geological modeling. This aim is achieved using Monte Carlo simulation for uncertainty estimation (MCUE), a stochastic method which samples from predefined disturbance probability distributions that represent the uncertainty of the original input data set. MCUE is used to produce hundreds to thousands of altered unique data sets. The altered data sets are used as inputs to produce a range of plausible 3-D models. The plausible models are then combined into a single probabilistic model as a means to propagate uncertainty from the input data to the final model. In this paper, several improved methods for MCUE are proposed. The methods pertain to distribution selection for input uncertainty, sample analysis and statistical consistency of the sampled distribution. Pole vector sampling is proposed as a more rigorous alternative than dip vector sampling for planar features and the use of a Bayesian approach to disturbance distribution parameterization is suggested. The influence of incorrect disturbance distributions is discussed and propositions are made and evaluated on synthetic and realistic cases to address the cited issues. The distribution of the errors of the observed data (i.e., scedasticity) is shown to affect the quality of prior distributions for MCUE. Results demonstrate that the proposed workflows improve the reliability of uncertainty estimation and diminish the occurrence of artifacts.
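The following toy sketch illustrates the MCUE workflow in the simplest possible setting (the editor's construction, not the authors' code): structural inputs are perturbed with assumed disturbance distributions, a trivial planar-contact "modeling engine" is run for each altered data set, and the plausible models are merged into a per-cell probability grid. All coordinates, dips, and sigma values are invented.

```python
import numpy as np

rng = np.random.default_rng(2)

# One interface observation (x0, z0) and one dip measurement, with assumed 1-sigma errors
x0_obs, z0_obs, dip_obs = 50.0, -120.0, 12.0       # metres, metres, degrees
sigma_xy, sigma_z, sigma_dip = 2.0, 5.0, 3.0       # disturbance distribution parameters

x = np.linspace(0.0, 100.0, 101)                   # 2-D section, 101 x 81 cells
z = np.linspace(-200.0, 0.0, 81)
X, Z = np.meshgrid(x, z)

n_sims = 1000
below = np.zeros_like(X)
for _ in range(n_sims):
    # Sample one altered data set from the disturbance distributions
    x0 = rng.normal(x0_obs, sigma_xy)
    z0 = rng.normal(z0_obs, sigma_z)
    dip = rng.normal(dip_obs, sigma_dip)
    # Stand-in modeling engine: a planar contact through (x0, z0) with the sampled dip
    contact_z = z0 + np.tan(np.radians(dip)) * (X - x0)
    below += (Z < contact_z)

prob_unit_A = below / n_sims          # probability that each cell lies below the contact
# Cells where the unit assignment is uncertain (probability far from 0 or 1)
uncertain = np.mean((prob_unit_A > 0.05) & (prob_unit_A < 0.95))
print(f"fraction of section with ambiguous unit assignment: {uncertain:.1%}")
```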
Interaction in Spoken Word Recognition Models: Feedback Helps.
Magnuson, James S; Mirman, Daniel; Luthra, Sahil; Strauss, Ted; Harris, Harlan D
2018-01-01
Human perception, cognition, and action require fast integration of bottom-up signals with top-down knowledge and context. A key theoretical perspective in cognitive science is the interactive activation hypothesis: forward and backward flow in bidirectionally connected neural networks allows humans and other biological systems to approximate optimal integration of bottom-up and top-down information under real-world constraints. An alternative view is that online feedback is neither necessary nor helpful; purely feed forward alternatives can be constructed for any feedback system, and online feedback could not improve processing and would preclude veridical perception. In the domain of spoken word recognition, the latter view was apparently supported by simulations using the interactive activation model, TRACE, with and without feedback: as many words were recognized more quickly without feedback as were recognized faster with feedback. However, these simulations used only a small set of words and did not address a primary motivation for interaction: making a model robust in noise. We conducted simulations using hundreds of words, and found that the majority were recognized more quickly with feedback than without. More importantly, as we added noise to inputs, accuracy and recognition times were better with feedback than without. We follow these simulations with a critical review of recent arguments that online feedback in interactive activation models like TRACE is distinct from other potentially helpful forms of feedback. We conclude that in addition to providing the benefits demonstrated in our simulations, online feedback provides a plausible means of implementing putatively distinct forms of feedback, supporting the interactive activation hypothesis.
Flood hydrology for Dry Creek, Lake County, Northwestern Montana
Parrett, C.; Jarrett, R.D.
2004-01-01
Dry Creek drains about 22.6 square kilometers of rugged mountainous terrain upstream from Tabor Dam in the Mission Range near St. Ignatius, Montana. Because of uncertainty about plausible peak discharges and concerns regarding the ability of the Tabor Dam spillway to safely convey these discharges, the flood hydrology for Dry Creek was evaluated on the basis of three hydrologic and geologic methods. The first method involved determining an envelope line relating flood discharge to drainage area on the basis of regional historical data and calculating a 500-year flood for Dry Creek using a regression equation. The second method involved paleoflood methods to estimate the maximum plausible discharge for 35 sites in the study area. The third method involved rainfall-runoff modeling for the Dry Creek basin in conjunction with regional precipitation information to determine plausible peak discharges. All of these methods resulted in estimates of plausible peak discharges that are substantially less than those predicted by the more generally applied probable maximum flood technique. Copyright ASCE 2004.
Control of finite critical behaviour in a small-scale social system
NASA Astrophysics Data System (ADS)
Daniels, Bryan C.; Krakauer, David C.; Flack, Jessica C.
2017-02-01
Many adaptive systems sit near a tipping or critical point. For systems near a critical point small changes to component behaviour can induce large-scale changes in aggregate structure and function. Criticality can be adaptive when the environment is changing, but entails reduced robustness through sensitivity. This tradeoff can be resolved when criticality can be tuned. We address the control of finite measures of criticality using data on fight sizes from an animal society model system (Macaca nemestrina, n=48). We find that a heterogeneous, socially organized system, like homogeneous, spatial systems (flocks and schools), sits near a critical point; the contributions individuals make to collective phenomena can be quantified; there is heterogeneity in these contributions; and distance from the critical point (DFC) can be controlled through biologically plausible mechanisms exploiting heterogeneity. We propose two alternative hypotheses for why a system decreases the distance from the critical point.
NASA Astrophysics Data System (ADS)
Steger, Stefan; Brenning, Alexander; Bell, Rainer; Petschko, Helene; Glade, Thomas
2016-06-01
Empirical models are frequently applied to produce landslide susceptibility maps for large areas. Subsequent quantitative validation results are routinely used as the primary criteria to infer the validity and applicability of the final maps or to select one of several models. This study hypothesizes that such direct deductions can be misleading. The main objective was to explore discrepancies between the predictive performance of a landslide susceptibility model and the geomorphic plausibility of subsequent landslide susceptibility maps while a particular emphasis was placed on the influence of incomplete landslide inventories on modelling and validation results. The study was conducted within the Flysch Zone of Lower Austria (1,354 km²) which is known to be highly susceptible to landslides of the slide-type movement. Sixteen susceptibility models were generated by applying two statistical classifiers (logistic regression and generalized additive model) and two machine learning techniques (random forest and support vector machine) separately for two landslide inventories of differing completeness and two predictor sets. The results were validated quantitatively by estimating the area under the receiver operating characteristic curve (AUROC) with single holdout and spatial cross-validation technique. The heuristic evaluation of the geomorphic plausibility of the final results was supported by findings of an exploratory data analysis, an estimation of odds ratios and an evaluation of the spatial structure of the final maps. The results showed that maps generated by different inventories, classifiers and predictors appeared different, while holdout validation revealed similarly high predictive performances. Spatial cross-validation proved useful to expose spatially varying inconsistencies of the modelling results while additionally providing evidence for slightly overfitted machine learning-based models. However, the highest predictive performances were obtained for maps that explicitly expressed geomorphically implausible relationships indicating that the predictive performance of a model might be misleading in the case that a predictor systematically relates to a spatially consistent bias of the inventory. Furthermore, we observed that random forest-based maps displayed spatial artifacts. The most plausible susceptibility map of the study area showed smooth prediction surfaces while the underlying model revealed a high predictive capability and was generated with an accurate landslide inventory and predictors that did not directly describe a bias. However, none of the presented models was found to be completely unbiased. This study showed that high predictive performances cannot be equated with a high plausibility and applicability of subsequent landslide susceptibility maps. We suggest that greater emphasis should be placed on identifying confounding factors and biases in landslide inventories. A joint discussion between modelers and decision makers of the spatial pattern of the final susceptibility maps in the field might increase their acceptance and applicability.
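To make the quantitative-validation step concrete, the sketch below fits a logistic-regression susceptibility model to synthetic data and scores it with holdout AUROC; the synthetic inventory includes a road-distance reporting bias so that a high AUROC can coexist with a geomorphically implausible map, in the spirit of the abstract's argument. Predictor names and coefficients are illustrative assumptions, not the study's data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
n = 4000
slope = rng.uniform(0, 45, n)               # hypothetical predictors
flysch = rng.integers(0, 2, n)              # 1 = Flysch lithology
dist_road = rng.uniform(0, 2000, n)

# Synthetic "truth": slides are more likely on steep Flysch slopes; the road-distance
# term mimics a mapping bias (slides near roads are more likely to be *reported*),
# i.e., an incomplete inventory.
logit = -4 + 0.08 * slope + 1.2 * flysch - 0.001 * dist_road
p = 1 / (1 + np.exp(-logit))
y = (rng.random(n) < p).astype(int)

X = np.column_stack([slope, flysch, dist_road])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
auroc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"holdout AUROC = {auroc:.2f}")
# A high AUROC here partly rewards the road-distance bias, illustrating why a high
# score need not mean a plausible susceptibility map.
```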
Miconi, Thomas
2017-01-01
Neural activity during cognitive tasks exhibits complex dynamics that flexibly encode task-relevant variables. Chaotic recurrent networks, which spontaneously generate rich dynamics, have been proposed as a model of cortical computation during cognitive tasks. However, existing methods for training these networks are either biologically implausible, and/or require a continuous, real-time error signal to guide learning. Here we show that a biologically plausible learning rule can train such recurrent networks, guided solely by delayed, phasic rewards at the end of each trial. Networks endowed with this learning rule can successfully learn nontrivial tasks requiring flexible (context-dependent) associations, memory maintenance, nonlinear mixed selectivities, and coordination among multiple outputs. The resulting networks replicate complex dynamics previously observed in animal cortex, such as dynamic encoding of task features and selective integration of sensory inputs. We conclude that recurrent neural networks offer a plausible model of cortical dynamics during both learning and performance of flexible behavior. DOI: http://dx.doi.org/10.7554/eLife.20899.001 PMID:28230528
NASA Astrophysics Data System (ADS)
Ćirković, Milan M.; Perović, Slobodan
2018-05-01
We historically trace various non-conventional explanations for the origin of the cosmic microwave background and discuss their merit, while analyzing the dynamics of their rejection, as well as the relevant physical and methodological reasons for it. It turns out that there have been many such unorthodox interpretations; not only those developed in the context of theories rejecting the relativistic ("Big Bang") paradigm entirely (e.g., by Alfvén, Hoyle and Narlikar) but also those coming from the camp of original thinkers firmly entrenched in the relativistic milieu (e.g., by Rees, Ellis, Rowan-Robinson, Layzer and Hively). In fact, the orthodox interpretation has only incrementally won out against the alternatives over the course of the three decades of its multi-stage development. While on the whole, none of the alternatives to the hot Big Bang scenario is persuasive today, we discuss the epistemic ramifications of establishing orthodoxy and eliminating alternatives in science, an issue recently discussed by philosophers and historians of science for other areas of physics. Finally, we single out some plausible and possibly fruitful ideas offered by the alternatives.
Selection, calibration, and validation of models of tumor growth.
Lima, E A B F; Oden, J T; Hormuth, D A; Yankeelov, T E; Almeida, R C
2016-11-01
This paper presents general approaches for addressing some of the most important issues in predictive computational oncology concerned with developing classes of predictive models of tumor growth. First, the process of developing mathematical models of vascular tumors evolving in the complex, heterogeneous, macroenvironment of living tissue; second, the selection of the most plausible models among these classes, given relevant observational data; third, the statistical calibration and validation of models in these classes, and finally, the prediction of key Quantities of Interest (QOIs) relevant to patient survival and the effect of various therapies. The most challenging aspect of this endeavor is that all of these issues often involve confounding uncertainties: in observational data, in model parameters, in model selection, and in the features targeted in the prediction. Our approach can be referred to as "model agnostic" in that no single model is advocated; rather, a general approach that explores powerful mixture-theory representations of tissue behavior while accounting for a range of relevant biological factors is presented, which leads to many potentially predictive models. Then representative classes are identified which provide a starting point for the implementation of the Occam Plausibility Algorithm (OPAL), which enables the modeler to select the most plausible models (for given data) and to determine if the model is a valid tool for predicting tumor growth and morphology (in vivo). All of these approaches account for uncertainties in the model, the observational data, the model parameters, and the target QOI. We demonstrate these processes by comparing a list of models for tumor growth, including reaction-diffusion models, phase-fields models, and models with and without mechanical deformation effects, for glioma growth measured in murine experiments. Examples are provided that exhibit quite acceptable predictions of tumor growth in laboratory animals while demonstrating successful implementations of OPAL.
Rubincam, Clara
2017-04-01
This article highlights how African men and women in South Africa account for the plausibility of alternative beliefs about the origins of HIV and the existence of a cure. This study draws on the notion of a "street-level epistemology of trust" (knowledge generated by individuals through their everyday observations and experiences) to account for individuals' trust or mistrust of official claims versus alternative explanations about HIV and AIDS. Focus group respondents describe how past experiences, combined with observations about the power of scientific developments and perceptions of disjunctures in information, fuel their uncertainty and skepticism about official claims. HIV prevention campaigns may be strengthened by drawing on experiential aspects of HIV and AIDS to lend credibility to scientific claims, while recognizing that some doubts about the trustworthiness of scientific evidence are a form of skeptical engagement rather than of outright rejection.
NASA Astrophysics Data System (ADS)
Hay, C.; Creveling, J. R.; Huybers, P. J.
2016-12-01
Excursions in the stable carbon isotopic composition of carbonate rocks (δ13Ccarb) can facilitate correlation of Precambrian and Phanerozoic sedimentary successions at a higher temporal resolution than radiometric and biostratigraphic frameworks typically afford. Within the bounds of litho- and biostratigraphic constraints, stratigraphers often correlate isotopic patterns between distant stratigraphic sections through visual alignment of local maxima and minima of isotopic values. The reproducibility of this method can prove challenging and, thus, evaluating the statistical robustness of intrabasinal composite carbon isotope curves, and global correlations to these reference curves, remains difficult. To assess the reproducibility of stratigraphic alignment of δ13Ccarb data, and correlations between carbon isotope excursions, we employ a numerical dynamic time warping methodology that stretches and squeezes the time axis of a record to obtain an optimal correlation (in a least-squares sense) between time-uncertain series of data. In particular, we assess various alignments between series of Early Cambrian δ13Ccarb data with respect to plausible matches. We first show that an alignment of these records obtained visually, and published previously, is broadly reproducible using dynamic time warping. Alternative alignments with similar goodness of fit are also obtainable, and their stratigraphic plausibility is discussed. This approach should be generalizable to an algorithm for the purposes of developing a library of plausible alignments between multiple time-uncertain stratigraphic records.
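A minimal dynamic time warping sketch (synthetic excursion curves, not the Cambrian data) showing how a least-squares optimal warping path between two time-uncertain series can be computed and backtracked; penalties or windowing could then be used to enumerate competing alignments for plausibility assessment.

```python
import numpy as np

def dtw_align(a, b):
    """Return the total least-squares alignment cost and the optimal warping path."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = (a[i - 1] - b[j - 1]) ** 2
            D[i, j] = d + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    # Backtrack the warping path from the end of both series
    path, i, j = [], n, m
    while (i, j) != (0, 0):
        path.append((i - 1, j - 1))
        step = np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]])
        i, j = (i - 1, j - 1) if step == 0 else (i - 1, j) if step == 1 else (i, j - 1)
    return D[n, m], path[::-1]

# Two sections recording "the same" excursion with different sedimentation rates
depth_a = np.linspace(0, 1, 60)
depth_b = np.linspace(0, 1, 45)
section_a = -2 + 6 * np.exp(-((depth_a - 0.5) / 0.12) ** 2)
section_b = -2 + 6 * np.exp(-((depth_b - 0.35) / 0.20) ** 2) + 0.2

cost, path = dtw_align(section_a, section_b)
print(f"alignment cost = {cost:.2f}; first/last tie points: {path[0]}, {path[-1]}")
```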
The association between caffeine and cognitive decline: examining alternative causal hypotheses.
Ritchie, K; Ancelin, M L; Amieva, H; Rouaud, O; Carrière, I
2014-04-01
Numerous studies suggest that higher coffee consumption may reduce the rate of aging-related cognitive decline in women. It is thus potentially a cheap and widely available candidate for prevention programs provided its mechanism can be adequately understood. The assumed effect is that of reduced amyloid deposition; however, alternative pathways, notably reduced depression and type 2 diabetes risk, have not been considered. A population study of 1,193 elderly persons examining depressive symptomatology, caffeine consumption, fasting glucose levels, type 2 diabetes onset, serum amyloid, and factors known to affect cognitive performance was used to explore alternative causal models. Higher caffeine consumption was found to be associated with decreased risk of incident diabetes in men (HR = 0.64; 95% CI 0.42-0.97) and increased risk in women (HR = 1.51; 95% CI 1.08-2.11). No association was found with incident depression. While in the total sample lower Aβ42/Aβ40 ratios (OR = 1.36, 95% CI 1.05-1.77, p = 0.02) were found in high caffeine consumers, this failed to reach significance when the analyses were stratified by gender. We found no evidence that reduced risk of cognitive decline in women with high caffeine consumption is moderated or confounded by diabetes or depression. The evidence of an association with plasma beta amyloid could not be clearly demonstrated. Insufficient proof of causal mechanisms currently precludes the recommendation of coffee consumption as a public health measure. Further research should focus on the high estrogen content of coffee as a plausible alternative explanation.
Spinal circuits can accommodate interaction torques during multijoint limb movements.
Buhrmann, Thomas; Di Paolo, Ezequiel A
2014-01-01
The dynamic interaction of limb segments during movements that involve multiple joints creates torques in one joint due to motion about another. Evidence shows that such interaction torques are taken into account during the planning or control of movement in humans. Two alternative hypotheses could explain the compensation of these dynamic torques. One involves the use of internal models to centrally compute predicted interaction torques and their explicit compensation through anticipatory adjustment of descending motor commands. The alternative, based on the equilibrium-point hypothesis, claims that descending signals can be simple and related to the desired movement kinematics only, while spinal feedback mechanisms are responsible for the appropriate creation and coordination of dynamic muscle forces. Partial supporting evidence exists in each case. However, until now no model has explicitly shown, in the case of the second hypothesis, whether peripheral feedback is really sufficient on its own for coordinating the motion of several joints while at the same time accommodating intersegmental interaction torques. Here we propose a minimal computational model to examine this question. Using a biomechanics simulation of a two-joint arm controlled by spinal neural circuitry, we show for the first time that it is indeed possible for the neuromusculoskeletal system to transform simple descending control signals into muscle activation patterns that accommodate interaction forces depending on their direction and magnitude. This is achieved without the aid of any central predictive signal. Even though the model makes various simplifications and abstractions compared to the complexities involved in the control of human arm movements, the finding lends plausibility to the hypothesis that some multijoint movements can in principle be controlled even in the absence of internal models of intersegmental dynamics or learned compensatory motor signals.
Model validity and frequency band selection in operational modal analysis
NASA Astrophysics Data System (ADS)
Au, Siu-Kui
2016-12-01
Experimental modal analysis aims at identifying the modal properties (e.g., natural frequencies, damping ratios, mode shapes) of a structure using vibration measurements. Two basic questions are encountered when operating in the frequency domain: Is there a mode near a particular frequency? If so, how much spectral data near the frequency can be included for modal identification without incurring significant modeling error? For data with high signal-to-noise (s/n) ratios these questions can be addressed using empirical tools such as singular value spectrum. Otherwise they are generally open and can be challenging, e.g., for modes with low s/n ratios or close modes. In this work these questions are addressed using a Bayesian approach. The focus is on operational modal analysis, i.e., with 'output-only' ambient data, where identification uncertainty and modeling error can be significant and their control is most demanding. The approach leads to 'evidence ratios' quantifying the relative plausibility of competing sets of modeling assumptions. The latter involves modeling the 'what-if-not' situation, which is non-trivial but is resolved by systematic consideration of alternative models and using maximum entropy principle. Synthetic and field data are considered to investigate the behavior of evidence ratios and how they should be interpreted in practical applications.
Sofaer, Neema
2014-11-01
A common reason for giving research participants post-trial access (PTA) to the trial intervention appeals to reciprocity, the principle, stated most generally, that if one person benefits a second, the second should reciprocate: benefit the first in return. Many authors consider it obvious that reciprocity supports PTA. Yet their reciprocity principles differ, with many authors apparently unaware of alternative versions. This article is the first to gather the range of reciprocity principles. It finds that: (1) most are false. (2) The most plausible principle, which is also problematic, applies only when participants experience significant net risks or burdens. (3) Seldom does reciprocity support PTA for participants or give researchers stronger reason to benefit participants than equally needy non-participants. (4) Reciprocity fails to explain the common view that it is bad when participants in a successful trial have benefited from the trial intervention but lack PTA to it. © 2013 John Wiley & Sons Ltd.
An Improved Nested Sampling Algorithm for Model Selection and Assessment
NASA Astrophysics Data System (ADS)
Zeng, X.; Ye, M.; Wu, J.; WANG, D.
2017-12-01
Multimodel strategy is a general approach for treating model structure uncertainty in recent research. The unknown groundwater system is represented by several plausible conceptual models. Each alternative conceptual model is assigned a weight which represents the plausibility of that model. In the Bayesian framework, the posterior model weight is computed as the product of the model prior weight and the marginal likelihood (also termed model evidence). As a result, estimating marginal likelihoods is crucial for reliable model selection and assessment in multimodel analysis. The nested sampling estimator (NSE) is a newly proposed algorithm for marginal likelihood estimation. The implementation of NSE comprises searching the parameter space from the low-likelihood area to the high-likelihood area gradually, and this evolution is carried out iteratively via a local sampling procedure. Thus, the efficiency of NSE is dominated by the strength of the local sampling procedure. Currently, the Metropolis-Hastings (M-H) algorithm and its variants are often used for local sampling in NSE. However, M-H is not an efficient sampling algorithm for high-dimensional or complex likelihood functions. To improve the performance of NSE, it is feasible to integrate a more efficient and elaborate sampling algorithm, DREAMzs, into the local sampling. In addition, to overcome the computational burden of the large number of repeated model executions in marginal likelihood estimation, an adaptive sparse grid stochastic collocation method is used to build surrogates for the original groundwater model.
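A bare-bones nested sampling sketch for a one-parameter toy model (uniform prior, Gaussian likelihood), written by the editor as an assumption-laden illustration rather than the authors' code: the constrained local sampler here is a simple random-walk Metropolis step, which is exactly the component the abstract proposes to replace with DREAMzs.

```python
import numpy as np

rng = np.random.default_rng(5)
y = rng.normal(3.0, 1.0, size=20)        # observed data
lo, hi = -10.0, 10.0                     # uniform prior bounds on the parameter mu

def loglike(mu):
    return -0.5 * np.sum((y - mu) ** 2) - 0.5 * y.size * np.log(2 * np.pi)

n_live, n_iter = 200, 2000
live = rng.uniform(lo, hi, n_live)
live_ll = np.array([loglike(m) for m in live])

logZ, logX_prev = -np.inf, 0.0           # running evidence and log prior volume
for i in range(1, n_iter + 1):
    worst = int(np.argmin(live_ll))
    logX = -i / n_live
    log_w = np.log(np.exp(logX_prev) - np.exp(logX))
    logZ = np.logaddexp(logZ, live_ll[worst] + log_w)
    logX_prev = logX
    # Local sampling: random walk from a surviving live point, constrained to
    # likelihoods above the current threshold (the piece DREAMzs would replace)
    L_min = live_ll[worst]
    cur = live[rng.choice(np.delete(np.arange(n_live), worst))]
    cur_ll = loglike(cur)
    for _ in range(30):
        prop = cur + rng.normal(0.0, 0.5)
        pll = loglike(prop)
        if lo <= prop <= hi and pll > L_min:
            cur, cur_ll = prop, pll
    live[worst], live_ll[worst] = cur, cur_ll

# Add the contribution of the remaining live points and report the marginal likelihood
shift = live_ll.max()
logZ = np.logaddexp(logZ, np.log(np.mean(np.exp(live_ll - shift))) + shift + logX_prev)
print(f"estimated log marginal likelihood: {logZ:.2f}")
```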
NASA Astrophysics Data System (ADS)
Wiebe, K.; Lotze-Campen, H.; Bodirsky, B.; Kavallari, A.; Mason-d'Croz, D.; van der Mensbrugghe, D.; Robinson, S.; Sands, R.; Tabeau, A.; Willenbockel, D.; Islam, S.; van Meijl, H.; Mueller, C.; Robertson, R.
2014-12-01
Previous studies have combined climate, crop and economic models to examine the impact of climate change on agricultural production and food security, but results have varied widely due to differences in models, scenarios and data. Recent work has examined (and narrowed) these differences through systematic model intercomparison using a high-emissions pathway to highlight the differences. New work extends that analysis to cover a range of plausible socioeconomic scenarios and emission pathways. Results from three general circulation models are combined with one crop model and five global economic models to examine the global and regional impacts of climate change on yields, area, production, prices and trade for coarse grains, rice, wheat, oilseeds and sugar to 2050. Results show that yield impacts vary with changes in population, income and technology as well as emissions, but are reduced in all cases by endogenous changes in prices and other variables.
Comparing supply and demand models for future photovoltaic power generation in the USA
Basore, Paul A.; Cole, Wesley J.
2018-02-22
We explore the plausible range of future deployment of photovoltaic generation capacity in the USA using a supply-focused model based on supply-chain growth constraints and a demand-focused model based on minimizing the overall cost of the electricity system. Both approaches require assumptions based on previous experience and anticipated trends. For each of the models, we assign plausible ranges for the key assumptions and then compare the resulting PV deployment over time. Each model was applied to 2 different future scenarios: one in which PV market penetration is ultimately constrained by the uncontrolled variability of solar power and one in which low-cost energy storage or some equivalent measure largely alleviates this constraint. The supply-focused and demand-focused models are in substantial agreement, not just in the long term, where deployment is largely determined by the assumed market penetration constraints, but also in the interim years. For the future scenario without low-cost energy storage or equivalent measures, the 2 models give an average plausible range of PV generation capacity in the USA of 150 to 530 GWdc in 2030 and 260 to 810 GWdc in 2040. With low-cost energy storage or equivalent measures, the corresponding ranges are 160 to 630 GWdc in 2030 and 280 to 1200 GWdc in 2040. The latter range is enough to supply 10% to 40% of US electricity demand in 2040, based on current demand growth.
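Purely as an illustration of a supply-side growth-constraint model (the numbers and growth rules below are the editor's assumptions, not the paper's), annual PV shipments grow at a bounded rate but taper as cumulative capacity approaches an assumed market-penetration ceiling; raising the ceiling stands in for the low-cost-storage scenario.

```python
def deploy(ceiling_gw, annual_2018=11.0, capacity_2018=60.0, max_growth=0.15):
    """Cumulative PV capacity by year under a simple supply-chain growth constraint."""
    capacity, annual = capacity_2018, annual_2018
    out = {}
    for year in range(2019, 2041):
        headroom = max(0.0, 1.0 - capacity / ceiling_gw)   # taper near the ceiling
        annual *= 1.0 + max_growth * headroom              # bounded annual growth
        capacity += annual
        out[year] = capacity
    return out

no_storage = deploy(ceiling_gw=800.0)      # penetration limited by PV variability
with_storage = deploy(ceiling_gw=1500.0)   # low-cost storage relaxes the limit
for year in (2030, 2040):
    print(f"{year}: {no_storage[year]:.0f} GW (no storage) vs "
          f"{with_storage[year]:.0f} GW (with storage)")
```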
Advanced Extravehicular Activity Breakout Group Summary
NASA Technical Reports Server (NTRS)
Kosmo, Joseph J.; Perka, Alan; Walz, Carl; Cobb, Sharon; Hanford, Anthony; Eppler, Dean
2005-01-01
This viewgraph document summarizes the workings of the Advanced Extravehicular Activity (AEVA) Breakout group in a Martian environment. The group was tasked with: identifying potential contaminants and pathways for AEVA systems with respect to forward and backward contamination; identifying plausible mitigation alternatives and obstacles for pertinent missions; identifying topics that require further research and technology development and discussing development strategies with uncertain Planetary Protection (PP) requirements; identifying PP requirements that impose the greatest mission/development costs; and identifying PP requirements/topics that require further definition.
Gestalt isomorphism and the primacy of subjective conscious experience: a Gestalt Bubble model.
Lehar, Steven
2003-08-01
A serious crisis is identified in theories of neurocomputation, marked by a persistent disparity between the phenomenological or experiential account of visual perception and the neurophysiological level of description of the visual system. In particular, conventional concepts of neural processing offer no explanation for the holistic global aspects of perception identified by Gestalt theory. The problem is paradigmatic and can be traced to contemporary concepts of the functional role of the neural cell, known as the Neuron Doctrine. In the absence of an alternative neurophysiologically plausible model, I propose a perceptual modeling approach, to model the percept as experienced subjectively, rather than modeling the objective neurophysiological state of the visual system that supposedly subserves that experience. A Gestalt Bubble model is presented to demonstrate how the elusive Gestalt principles of emergence, reification, and invariance can be expressed in a quantitative model of the subjective experience of visual consciousness. That model in turn reveals a unique computational strategy underlying visual processing, which is unlike any algorithm devised by man, and certainly unlike the atomistic feed-forward model of neurocomputation offered by the Neuron Doctrine paradigm. The perceptual modeling approach reveals the primary function of perception as that of generating a fully spatial virtual-reality replica of the external world in an internal representation. The common objections to this "picture-in-the-head" concept of perceptual representation are shown to be ill founded.
Prada, A F; Chu, M L; Guzman, J A; Moriasi, D N
2017-05-15
Evaluating the effectiveness of agricultural land management practices in minimizing environmental impacts using models is challenged by the presence of inherent uncertainties during the model development stage. One issue faced during the model development stage is the uncertainty involved in model parameterization. Using a single optimized set of parameters (one snapshot) to represent baseline conditions of the system limits the applicability and robustness of the model to properly represent future or alternative scenarios. The objective of this study was to develop a framework that facilitates model parameter selection while evaluating uncertainty to assess the impacts of land management practices at the watershed scale. The model framework was applied to the Lake Creek watershed located in southwestern Oklahoma, USA. A two-step probabilistic approach was implemented to parameterize the Agricultural Policy/Environmental eXtender (APEX) model using global uncertainty and sensitivity analysis to estimate the full spectrum of total monthly water yield (WYLD) and total monthly Nitrogen loads (N) in the watershed under different land management practices. Twenty-seven models were found to represent the baseline scenario in which uncertainty of up to 29% and 400% in WYLD and N, respectively, is plausible. Changing the land cover to pasture manifested the highest decrease in N to up to 30% for a full pasture coverage while changing to full winter wheat cover can increase the N up to 11%. The methodology developed in this study was able to quantify the full spectrum of system responses, the uncertainty associated with them, and the most important parameters that drive their variability. Results from this study can be used to develop strategic decisions on the risks and tradeoffs associated with different management alternatives that aim to increase productivity while also minimizing their environmental impacts. Copyright © 2017 Elsevier Ltd. All rights reserved.
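The two-step idea can be sketched with a toy runoff model (entirely the editor's construction, with made-up parameters): broadly sample the parameter space, retain every parameter set that reproduces the baseline observations within a tolerance, and carry the whole retained ensemble into an alternative land-cover run so that the scenario result is reported as a range rather than a single number.

```python
import numpy as np

rng = np.random.default_rng(6)
rain = rng.gamma(2.0, 15.0, size=120)                 # monthly rainfall, mm

def water_yield(rain, curve, loss, cover):
    """Toy monthly water-yield model; 'cover' scales losses for a land-cover scenario."""
    return np.maximum(rain - loss * cover, 0.0) * curve

# Synthetic "observations" for the baseline land cover
obs = water_yield(rain, curve=0.45, loss=12.0, cover=1.0) + rng.normal(0, 2.0, rain.size)

# Step 1: broad sampling of the parameter space
n = 20000
curve = rng.uniform(0.2, 0.8, n)
loss = rng.uniform(5.0, 25.0, n)
rmse = np.array([np.sqrt(np.mean((water_yield(rain, c, l, 1.0) - obs) ** 2))
                 for c, l in zip(curve, loss)])

# Step 2: keep all "plausible" parameter sets, not just the single best one
keep = rmse < np.percentile(rmse, 2)
base = np.array([water_yield(rain, c, l, 1.0).sum() for c, l in zip(curve[keep], loss[keep])])
past = np.array([water_yield(rain, c, l, 0.7).sum() for c, l in zip(curve[keep], loss[keep])])
change = 100 * (past - base) / base
print(f"{keep.sum()} plausible parameter sets; yield change under the pasture scenario: "
      f"{change.min():.0f}% to {change.max():.0f}%")
```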
Basic research on design analysis methods for rotorcraft vibrations
NASA Technical Reports Server (NTRS)
Hanagud, S.
1991-01-01
The objective of the present work was to develop a method for identifying physically plausible finite element system models of airframe structures from test data. The assumed models were based on linear elastic behavior with general (nonproportional) damping. Physical plausibility of the identified system matrices was insured by restricting the identification process to designated physical parameters only and not simply to the elements of the system matrices themselves. For example, in a large finite element model the identified parameters might be restricted to the moduli for each of the different materials used in the structure. In the case of damping, a restricted set of damping values might be assigned to finite elements based on the material type and on the fabrication processes used. In this case, different damping values might be associated with riveted, bolted and bonded elements. The method itself is developed first, and several approaches are outlined for computing the identified parameter values. The method is applied first to a simple structure for which the 'measured' response is actually synthesized from an assumed model. Both stiffness and damping parameter values are accurately identified. The true test, however, is the application to a full-scale airframe structure. In this case, a NASTRAN model and actual measured modal parameters formed the basis for the identification of a restricted set of physically plausible stiffness and damping parameters.
Barry, Dwight; McDonald, Shea
2013-01-01
Climate change could significantly influence seasonal streamflow and water availability in the snowpack-fed watersheds of Washington, USA. Descriptions of snowpack decline often use linear ordinary least squares (OLS) models to quantify this change. However, the region's precipitation is known to be related to climate cycles. If snowpack decline is more closely related to these cycles, an OLS model cannot account for this effect, and thus both descriptions of trends and estimates of decline could be inaccurate. We used intervention analysis to determine whether snow water equivalent (SWE) in 25 long-term snow courses within the Olympic and Cascade Mountains are more accurately described by OLS (to represent gradual change), stationary (to represent no change), or step-stationary (to represent climate cycling) models. We used Bayesian information-theoretic methods to determine these models' relative likelihood, and we found 90 models that could plausibly describe the statistical structure of the 25 snow courses' time series. Posterior model probabilities of the 29 "most plausible" models ranged from 0.33 to 0.91 (mean = 0.58, s = 0.15). The majority of these time series (55%) were best represented as step-stationary models with a single breakpoint at 1976/77, coinciding with a major shift in the Pacific Decadal Oscillation. However, estimates of SWE decline differed by as much as 35% between statistically plausible models of a single time series. This ambiguity is a critical problem for water management policy. Approaches such as intervention analysis should become part of the basic analytical toolkit for snowpack or other climatic time series data.
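A hedged illustration of the model-comparison step on a synthetic SWE series (not the snow-course data): linear-trend, stationary, and step-stationary models are scored with BIC and converted to approximate posterior model probabilities.

```python
import numpy as np

rng = np.random.default_rng(7)
years = np.arange(1945, 2006)
# Synthetic April-1 SWE with a step down at the 1976/77 PDO shift
swe = np.where(years < 1977, 90.0, 72.0) + rng.normal(0, 12.0, years.size)
n = swe.size

def gauss_bic(resid, k):
    """BIC for a Gaussian model with k parameters, given its residuals."""
    sigma2 = np.mean(resid ** 2)
    loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1.0)
    return -2 * loglik + k * np.log(n)

# Stationary: constant mean (2 params: mean, variance)
bic_stat = gauss_bic(swe - swe.mean(), 2)
# OLS linear trend (3 params: intercept, slope, variance)
slope, intercept = np.polyfit(years, swe, 1)
bic_ols = gauss_bic(swe - (intercept + slope * years), 3)
# Step-stationary with breakpoint at 1976/77 (3 params: two means, variance)
pre, post = swe[years < 1977], swe[years >= 1977]
bic_step = gauss_bic(np.concatenate([pre - pre.mean(), post - post.mean()]), 3)

bics = {"stationary": bic_stat, "OLS trend": bic_ols, "step-stationary": bic_step}
rel = {m: np.exp(-0.5 * (b - min(bics.values()))) for m, b in bics.items()}
z = sum(rel.values())
for m in bics:
    print(f"{m:16s} BIC={bics[m]:7.1f}  posterior prob ~ {rel[m] / z:.2f}")
```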
Random regression analyses using B-splines to model growth of Australian Angus cattle
Meyer, Karin
2005-01-01
Regression on the basis function of B-splines has been advocated as an alternative to orthogonal polynomials in random regression analyses. Basic theory of splines in mixed model analyses is reviewed, and estimates from analyses of weights of Australian Angus cattle from birth to 820 days of age are presented. Data comprised 84 533 records on 20 731 animals in 43 herds, with a high proportion of animals with 4 or more weights recorded. Changes in weights with age were modelled through B-splines of age at recording. A total of thirteen analyses, considering different combinations of linear, quadratic and cubic B-splines and up to six knots, were carried out. Results showed good agreement for all ages with many records, but fluctuated where data were sparse. On the whole, analyses using B-splines appeared more robust against "end-of-range" problems and yielded more consistent and accurate estimates of the first eigenfunctions than previous, polynomial analyses. A model fitting quadratic B-splines, with knots at 0, 200, 400, 600 and 821 days and a total of 91 covariance components, appeared to be a good compromise between detailedness of the model, number of parameters to be estimated, plausibility of results, and fit, measured as residual mean square error. PMID:16093011
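For readers unfamiliar with B-spline covariables, the sketch below builds a quadratic B-spline design matrix over age with knots at 0, 200, 400, 600, and 821 days (the knot placement mentioned above). It covers only the basis-construction step, not the mixed-model estimation itself.

```python
import numpy as np
from scipy.interpolate import BSpline

degree = 2                                   # quadratic B-splines
interior = [200.0, 400.0, 600.0]
lo, hi = 0.0, 821.0
# Full knot vector with repeated boundary knots
knots = np.r_[[lo] * (degree + 1), interior, [hi] * (degree + 1)]
n_basis = len(knots) - degree - 1            # number of basis functions (here 6)

ages = np.linspace(0, 820, 9)                # example ages at weighing
design = np.empty((ages.size, n_basis))
for j in range(n_basis):
    coef = np.zeros(n_basis)
    coef[j] = 1.0                            # pick out the j-th basis function
    design[:, j] = BSpline(knots, coef, degree, extrapolate=False)(ages)

print(np.round(design, 3))                   # rows ~ ages, columns ~ random-regression covariables
print("rows sum to 1:", np.allclose(design.sum(axis=1), 1.0))
```

Each row of this matrix serves as the set of covariables for one weight record; the local support of the basis is what gives B-spline regressions their robustness against end-of-range artifacts compared with global polynomials.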
The coexistence of alternative and scientific conceptions in physics
NASA Astrophysics Data System (ADS)
Ozdemir, Omer F.
The purpose of this study was to inquire about the simultaneous coexistence of alternative and scientific conceptions in the domain of physics. This study was particularly motivated by several arguments put forward in opposition to the Conceptual Change Model. In the simplest form, these arguments state that people construct different domains of knowledge and different modes of perception in different situations. Therefore, holding different conceptualizations is unavoidable and expecting a replacement in an individual's conceptual structure is not plausible in terms of instructional practices. The following research questions were generated to inquire about this argument: (1) Do individuals keep their alternative conceptions after they have acquired scientific conceptions? (2) Assuming that individuals who acquired scientific conceptions also have alternative conceptions, how are these different conceptions nested in their conceptual structure? (3) What kind of knowledge, skills, and reasoning are necessary to transfer scientific principles instead of alternative ones in the construction of a valid model? Analysis of the data collected from the non-physics group indicated that the nature of alternative conceptions is framed by two types of reasoning: reasoning by mental simulation and semiformal reasoning. Analysis of the data collected from the physics group revealed that mental images or scenes feeding reasoning by mental simulation had not disappeared after the acquisition of scientific conceptions. The analysis of data also provided enough evidence to conclude that alternative principles feeding semiformal reasoning have not necessarily disappeared after the acquisition of scientific conceptions. However, in regard to semiformal reasoning, compartmentalization was not as clear as the case demonstrated in reasoning by mental simulation; instead semiformal and scientific reasoning are intertwined in a way that the components of semiformal reasoning can easily take their place among the components of scientific reasoning. In spite of the fact that the coexistence of multiple conceptions might obstruct the transfer of scientific conceptions in problem-solving situations, several factors stimulating the use of scientific conceptions were noticed explicitly. These factors were categorized as follows: (a) the level of individuals' domain specific knowledge in the corresponding field, (b) the level of individuals' knowledge about the process of science (how science generates its knowledge claims), (c) the level of individuals' awareness of different types of reasoning and conceptions, and (d) the context in which the problem is situated. (Abstract shortened by UMI.)
Shizgal, Peter
2012-01-01
Almost 80 years ago, Lionel Robbins proposed a highly influential definition of the subject matter of economics: the allocation of scarce means that have alternative ends. Robbins confined his definition to human behavior, and he strove to separate economics from the natural sciences in general and from psychology in particular. Nonetheless, I extend his definition to the behavior of non-human animals, rooting my account in psychological processes and their neural underpinnings. Some historical developments are reviewed that render such a view more plausible today than would have been the case in Robbins' time. To illustrate a neuroeconomic perspective on decision making in non-human animals, I discuss research on the rewarding effect of electrical brain stimulation. Central to this discussion is an empirically based, functional/computational model of how the subjective intensity of the electrical reward is computed and combined with subjective costs so as to determine the allocation of time to the pursuit of reward. Some successes achieved by applying the model are discussed, along with limitations, and evidence is presented regarding the roles played by several different neural populations in processes posited by the model. I present a rationale for marshaling convergent experimental methods to ground psychological and computational processes in the activity of identified neural populations, and I discuss the strengths, weaknesses, and complementarity of the individual approaches. I then sketch some recent developments that hold great promise for advancing our understanding of structure-function relationships in neuroscience in general and in the neuroeconomic study of decision making in particular.
Spectral features of tidal disruption candidates and alternative origins for such transient flares
NASA Astrophysics Data System (ADS)
Saxton, Curtis J.; Perets, Hagai B.; Baskin, Alexei
2018-03-01
UV and optically selected candidates for stellar tidal disruption events (TDEs) often exhibit broad spectral features (He II emission, H α emission, or absorption lines) on a blackbody-like continuum (10^4 K ≲ T ≲ 10^5 K). The lines presumably arise from TDE debris or circumnuclear clouds photoionized by the flare. Line velocities however are much lower than expected from a stellar disruption by a supermassive black hole (SMBH), and are somewhat faster than expected for the broad line region (BLR) clouds of a persistently active galactic nucleus (AGN). The distinctive spectral states are not strongly related to observed luminosity and velocity, nor to SMBH mass estimates. We use exhaustive photoionization modelling to map the domain of fluxes and cloud properties that yield (e.g.) an He-overbright state where a large He II(4686 Å)/H α line ratio creates an illusion of helium enrichment. Although observed line ratios occur in a plausible minority of cases, AGN-like illumination cannot reproduce the observed equivalent widths. We therefore propose to explain these properties by a light-echo photoionization model: the initial flash of a hot blackbody (detonation) excites BLR clouds, which are then seen superimposed on continuum from a later, expanded, cooled stage of the luminous source. The implied cloud mass is substellar, which may be inconsistent with a TDE. Given these and other inconsistencies with TDE models (e.g. host-galaxy distribution) we suggest also considering alternative origins for these nuclear flares, which we briefly discuss (e.g. nuclear supernovae and starved/subluminous AGNs).
Prior probability and feature predictability interactively bias perceptual decisions
Dunovan, Kyle E.; Tremel, Joshua J.; Wheeler, Mark E.
2014-01-01
Anticipating a forthcoming sensory experience facilitates perception for expected stimuli but also hinders perception for less likely alternatives. Recent neuroimaging studies suggest that expectation biases arise from feature-level predictions that enhance early sensory representations and facilitate evidence accumulation for contextually probable stimuli while suppressing alternatives. Reasonably then, the extent to which prior knowledge biases subsequent sensory processing should depend on the precision of expectations at the feature level as well as the degree to which expected features match those of an observed stimulus. In the present study we investigated how these two sources of uncertainty modulated pre- and post-stimulus bias mechanisms in the drift-diffusion model during a probabilistic face/house discrimination task. We tested several plausible models of choice bias, concluding that predictive cues led to a bias in both the starting-point and rate of evidence accumulation favoring the more probable stimulus category. We further tested the hypotheses that prior bias in the starting-point was conditional on the feature-level uncertainty of category expectations and that dynamic bias in the drift-rate was modulated by the match between expected and observed stimulus features. Starting-point estimates suggested that subjects formed a constant prior bias in favor of the face category, which exhibits less feature-level variability, that was strengthened or weakened by trial-wise predictive cues. Furthermore, we found that the gain on face/house evidence was increased for stimuli with less ambiguous features and that this relationship was enhanced by valid category expectations. These findings offer new evidence that bridges psychological models of decision-making with recent predictive coding theories of perception. PMID:24978303
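A minimal drift-diffusion sketch (not the authors' fitting code) contrasting the two bias mechanisms discussed here, a shifted starting point versus an increased drift gain, simulated with Euler-Maruyama steps; all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_ddm(n_trials, drift, start_bias=0.0, bound=1.0, noise=1.0, dt=0.001):
    """Return the proportion of 'face' (upper-bound) choices.
    start_bias shifts the starting point; drift carries any rate (gain) bias."""
    upper_hits = 0
    for _ in range(n_trials):
        x = start_bias
        while abs(x) < bound:
            x += drift * dt + noise * np.sqrt(dt) * rng.normal()
        upper_hits += x >= bound
    return upper_hits / n_trials

# Neutral evidence: a prior (starting-point) bias alone already favours 'face'
print("prior bias only   :", simulate_ddm(2000, drift=0.0, start_bias=0.3))
# Valid cue: starting-point bias plus an increased drift gain toward 'face'
print("prior + drift bias:", simulate_ddm(2000, drift=1.0, start_bias=0.3))
```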
Goodpaster, Jason D.; Weber, Adam Z.
2017-01-01
Electrochemical reduction of CO2 using renewable sources of electrical energy holds promise for converting CO2 to fuels and chemicals. Since this process is complex and involves a large number of species and physical phenomena, a comprehensive understanding of the factors controlling product distribution is required. While the most plausible reaction pathway is usually identified from quantum-chemical calculation of the lowest free-energy pathway, this approach can be misleading when coverages of adsorbed species determined for alternative mechanisms differ significantly, since elementary reaction rates depend on the product of the rate coefficient and the coverage of species involved in the reaction. Moreover, cathode polarization can influence the kinetics of CO2 reduction. Here, we present a multiscale framework for ab initio simulation of the electrochemical reduction of CO2 over an Ag(110) surface. A continuum model for species transport is combined with a microkinetic model for the cathode reaction dynamics. Free energies of activation for all elementary reactions are determined from density functional theory calculations. Using this approach, three alternative mechanisms for CO2 reduction were examined. The rate-limiting step in each mechanism is **COOH formation at higher negative potentials. However, only via the multiscale simulation was it possible to identify the mechanism that leads to a dependence of the rate of CO formation on the partial pressure of CO2 that is consistent with experiments. Simulations based on this mechanism also describe the dependence of the H2 and CO current densities on cathode voltage, in strikingly good agreement with experimental observation. PMID:28973926
Climate change on the Colorado River: a method to search for robust management strategies
NASA Astrophysics Data System (ADS)
Keefe, R.; Fischbach, J. R.
2010-12-01
The Colorado River is a principal source of water for the seven Basin States, providing approximately 16.5 maf per year to users in the southwestern United States and Mexico. Though the dynamics of the river ensure Upper Basin users a reliable supply of water, the three Lower Basin states (California, Nevada, and Arizona) are in danger of delivery interruptions as Upper Basin demand increases and climate change threatens to reduce future streamflows. In light of the recent drought and uncertain effects of climate change on Colorado River flows, we evaluate the performance of a suite of policies modeled after the shortage sharing agreement adopted in December 2007 by the Department of the Interior. We build on the current literature by using a simplified model of the Lower Colorado River to consider future streamflow scenarios given climate change uncertainty. We also generate different scenarios of parametric consumptive use growth in the Upper Basin and evaluate alternate management strategies in light of these uncertainties. Uncertainty associated with climate change is represented with a multi-model ensemble from the literature, using a nearest neighbor perturbation to increase the size of the ensemble. We use Robust Decision Making to compare near-term or long-term management strategies across an ensemble of plausible future scenarios with the goal of identifying one or more approaches that are robust to alternate assumptions about the future. This method entails using search algorithms to quantitatively identify vulnerabilities that may threaten a given strategy (including the current operating policy) and characterize key tradeoffs between strategies under different scenarios.
Shizgal, Peter
2011-01-01
Almost 80 years ago, Lionel Robbins proposed a highly influential definition of the subject matter of economics: the allocation of scarce means that have alternative ends. Robbins confined his definition to human behavior, and he strove to separate economics from the natural sciences in general and from psychology in particular. Nonetheless, I extend his definition to the behavior of non-human animals, rooting my account in psychological processes and their neural underpinnings. Some historical developments are reviewed that render such a view more plausible today than would have been the case in Robbins’ time. To illustrate a neuroeconomic perspective on decision making in non-human animals, I discuss research on the rewarding effect of electrical brain stimulation. Central to this discussion is an empirically based, functional/computational model of how the subjective intensity of the electrical reward is computed and combined with subjective costs so as to determine the allocation of time to the pursuit of reward. Some successes achieved by applying the model are discussed, along with limitations, and evidence is presented regarding the roles played by several different neural populations in processes posited by the model. I present a rationale for marshaling convergent experimental methods to ground psychological and computational processes in the activity of identified neural populations, and I discuss the strengths, weaknesses, and complementarity of the individual approaches. I then sketch some recent developments that hold great promise for advancing our understanding of structure–function relationships in neuroscience in general and in the neuroeconomic study of decision making in particular. PMID:22363253
Giant impactors - Plausible sizes and populations
NASA Technical Reports Server (NTRS)
Hartmann, William K.; Vail, S. M.
1986-01-01
The largest sizes of planetesimals required to explain spin properties of planets are investigated in the context of the impact-trigger hypothesis of lunar origin. Solar system models with different large impactor sources are constructed and stochastic variations in obliquities and rotation periods resulting from each source are studied. The present study finds it highly plausible that Earth was struck by a body of about 0.03-0.12 Earth masses with enough energy and angular momentum to dislodge mantle material and form the present Earth-Moon system.
SOME USES OF MODELS OF QUANTITATIVE GENETIC SELECTION IN SOCIAL SCIENCE.
Weight, Michael D; Harpending, Henry
2017-01-01
The theory of selection of quantitative traits is widely used in evolutionary biology, agriculture and other related fields. The fundamental model known as the breeder's equation is simple, robust over short time scales, and it is often possible to estimate plausible parameters. In this paper it is suggested that the results of this model provide useful yardsticks for the description of social traits and the evaluation of transmission models. The differences on a standard personality test between samples of Old Order Amish and Indiana rural young men from the same county and the decline of homicide in Medieval Europe are used as illustrative examples of the overall approach. It is shown that the decline of homicide is unremarkable under a threshold model while the differences between rural Amish and non-Amish young men are too large to be a plausible outcome of simple genetic selection in which assortative mating by affiliation is equivalent to truncation selection.
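A hedged sketch of the yardstick idea: the breeder's equation R = h²S with truncation selection, used to ask how many generations would be needed to shift a trait mean by a given number of phenotypic standard deviations. The heritability, selected fraction, and target difference below are illustrative, not the paper's estimates.

```python
import numpy as np
from scipy.stats import norm

def selection_intensity(p):
    """Mean deviation (in SD units) of the selected fraction p under truncation selection."""
    z = norm.ppf(1.0 - p)            # truncation point on the standard normal
    return norm.pdf(z) / p

def generations_needed(target_sd, h2, p_selected):
    """Generations of truncation selection to move the mean by target_sd phenotypic SDs."""
    response_per_gen = h2 * selection_intensity(p_selected)   # breeder's equation R = h^2 * S
    return target_sd / response_per_gen

# Illustrative numbers: a 1.5 SD group difference, heritability 0.4,
# and affiliation acting like retaining the top 80% each generation.
print("selection intensity i:", round(selection_intensity(0.8), 3))
print("generations required :", round(generations_needed(1.5, 0.4, 0.8), 1))
```

Differences reachable in a handful of generations are unremarkable under such a model; differences requiring implausibly many generations point to transmission mechanisms other than simple genetic selection.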
Isolating the anthropogenic component of Arctic warming
Chylek, Petr; Hengartner, Nicholas; Lesins, Glen; ...
2014-05-28
Structural equation modeling is used in statistical applications as both confirmatory and exploratory modeling to test models and to suggest the most plausible explanation for a relationship between the independent and the dependent variables. Although structural analysis cannot prove causation, it can suggest the most plausible set of factors that influence the observed variable. Here, we apply structural model analysis to the annual mean Arctic surface air temperature from 1900 to 2012 to find the most effective set of predictors and to isolate the anthropogenic component of the recent Arctic warming by subtracting the effects of natural forcing and variability from the observed temperature. We also find that anthropogenic greenhouse gases and aerosols radiative forcing and the Atlantic Multidecadal Oscillation internal mode dominate Arctic temperature variability. Finally, our structural model analysis of observational data suggests that about half of the recent Arctic warming of 0.64 K/decade may have anthropogenic causes.
van den Berg, Ronald; Roerdink, Jos B T M; Cornelissen, Frans W
2010-01-22
An object in the peripheral visual field is more difficult to recognize when surrounded by other objects. This phenomenon is called "crowding". Crowding places a fundamental constraint on human vision that limits performance on numerous tasks. It has been suggested that crowding results from spatial feature integration necessary for object recognition. However, in the absence of convincing models, this theory has remained controversial. Here, we present a quantitative and physiologically plausible model for spatial integration of orientation signals, based on the principles of population coding. Using simulations, we demonstrate that this model coherently accounts for fundamental properties of crowding, including critical spacing, "compulsory averaging", and a foveal-peripheral anisotropy. Moreover, we show that the model predicts increased responses to correlated visual stimuli. Altogether, these results suggest that crowding has little immediate bearing on object recognition but is a by-product of a general, elementary integration mechanism in early vision aimed at improving signal quality.
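A toy version of the population-coding account (not the authors' model): pool Gaussian orientation-tuned responses to a target and a flanker and decode with a population vector. The decoded orientation lands between the two stimuli, reproducing "compulsory averaging"; the tuning width and stimulus orientations are arbitrary.

```python
import numpy as np

prefs = np.linspace(-90, 90, 181)                    # preferred orientations (deg)

def pop_response(theta, kappa=0.002):
    """Gaussian orientation tuning of the whole population to a stimulus at theta."""
    return np.exp(-kappa * (prefs - theta) ** 2)

def decode(resp):
    """Population-vector decoder on the orientation double-angle circle."""
    ang = np.deg2rad(2 * prefs)
    return 0.5 * np.rad2deg(np.arctan2(np.sum(resp * np.sin(ang)),
                                       np.sum(resp * np.cos(ang))))

target, flanker = -20.0, 20.0
pooled = pop_response(target) + pop_response(flanker)    # spatial integration within one field
print("decoded orientation:", round(decode(pooled), 1))  # ~0 deg: the average of target and flanker
```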
NASA Astrophysics Data System (ADS)
Mauritsen, Thorsten; Stevens, Bjorn
2015-05-01
Equilibrium climate sensitivity to a doubling of CO2 falls between 2.0 and 4.6 K in current climate models, and they suggest a weak increase in global mean precipitation. Inferences from the observational record, however, place climate sensitivity near the lower end of this range and indicate that models underestimate some of the changes in the hydrological cycle. These discrepancies raise the possibility that important feedbacks are missing from the models. A controversial hypothesis suggests that the dry and clear regions of the tropical atmosphere expand in a warming climate and thereby allow more infrared radiation to escape to space. This so-called iris effect could constitute a negative feedback that is not included in climate models. We find that inclusion of such an effect in a climate model moves the simulated responses of both temperature and the hydrological cycle to rising atmospheric greenhouse gas concentrations closer to observations. Alternative suggestions for shortcomings of models -- such as aerosol cooling, volcanic eruptions or insufficient ocean heat uptake -- may explain a slow observed transient warming relative to models, but not the observed enhancement of the hydrological cycle. We propose that, if precipitating convective clouds are more likely to cluster into larger clouds as temperatures rise, this process could constitute a plausible physical mechanism for an iris effect.
NASA Astrophysics Data System (ADS)
Pasari, S.; Kundu, D.; Dikshit, O.
2012-12-01
Earthquake recurrence interval is one of the important ingredients towards probabilistic seismic hazard assessment (PSHA) for any location. Exponential, gamma, Weibull and lognormal distributions are quite established probability models in this recurrence interval estimation. However, they have certain shortcomings too. Thus, it is imperative to search for some alternative sophisticated distributions. In this paper, we introduce a three-parameter (location, scale and shape) exponentiated exponential distribution and investigate the scope of this distribution as an alternative to the afore-mentioned distributions in earthquake recurrence studies. This distribution is a particular member of the exponentiated Weibull distribution. Despite its complicated form, it is widely accepted in medical and biological applications. Furthermore, it shares many physical properties with the gamma and Weibull families. Unlike the gamma distribution, the hazard function of the generalized exponential distribution can be easily computed even if the shape parameter is not an integer. To examine the plausibility of this model, a complete and homogeneous earthquake catalogue of 20 events (M ≥ 7.0) spanning the period 1846 to 1995 from the North-East Himalayan region (20-32 deg N and 87-100 deg E) has been used. The model parameters are estimated using maximum likelihood estimator (MLE) and method of moment estimator (MOME). No geological or geophysical evidence has been considered in this calculation. The estimated conditional probability becomes quite high after about a decade for an elapsed time of 17 years (i.e. 2012). Moreover, this study shows that the generalized exponential distribution fits the above data events more closely compared to the conventional models and hence it is tentatively concluded that the generalized exponential distribution can be effectively considered in earthquake recurrence studies.
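A minimal sketch of fitting the exponentiated (generalized) exponential distribution by maximum likelihood and computing a conditional recurrence probability. The interval data below are invented, not the Himalayan catalogue, and the starting values are arbitrary; a careful analysis would also treat the location parameter more cautiously than this direct optimization does.

```python
import numpy as np
from scipy.optimize import minimize

def neg_loglik(params, t):
    """Negative log-likelihood of the exponentiated exponential (location mu, rate lam, shape alpha)."""
    mu, lam, alpha = params
    if lam <= 0 or alpha <= 0 or np.any(t <= mu):
        return np.inf
    z = t - mu
    return -np.sum(np.log(alpha) + np.log(lam) - lam * z
                   + (alpha - 1) * np.log1p(-np.exp(-lam * z)))

def cdf(t, mu, lam, alpha):
    return np.where(t <= mu, 0.0, (1.0 - np.exp(-lam * (t - mu))) ** alpha)

# Hypothetical recurrence intervals (years) between large events -- not the real catalogue
intervals = np.array([4.2, 6.8, 7.5, 9.1, 10.3, 12.0, 13.4, 15.2, 18.7, 21.5])

res = minimize(neg_loglik, x0=[1.0, 0.1, 2.0], args=(intervals,), method="Nelder-Mead")
mu, lam, alpha = res.x

# Conditional probability of an event within the next 10 years, given 17 years elapsed
elapsed, window = 17.0, 10.0
p_cond = ((cdf(elapsed + window, mu, lam, alpha) - cdf(elapsed, mu, lam, alpha))
          / (1.0 - cdf(elapsed, mu, lam, alpha)))
print("MLE (location, rate, shape):", np.round(res.x, 3))
print("P(event within 10 yr | 17 yr elapsed):", round(float(p_cond), 3))
```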
Morality Principles for Risk Modelling: Needs and Links with the Origins of Plausible Inference
NASA Astrophysics Data System (ADS)
Solana-Ortega, Alberto; Solana, Vicente
2009-12-01
In comparison with the foundations of probability calculus, the inescapable and controversial issue of how to assign probabilities has only recently become a matter of formal study. The introduction of information as a technical concept was a milestone, but the most promising entropic assignment methods still face unsolved difficulties, manifesting the incompleteness of plausible inference theory. In this paper we examine the situation faced by risk analysts in the critical field of extreme events modelling, where the former difficulties are especially visible, due to scarcity of observational data, the large impact of these phenomena and the obligation to assume professional responsibilities. To respond to the claim for a sound framework to deal with extremes, we propose a metafoundational approach to inference, based on a canon of extramathematical requirements. We highlight their strong moral content, and show how this emphasis in morality, far from being new, is connected with the historic origins of plausible inference. Special attention is paid to the contributions of Caramuel, a contemporary of Pascal, unfortunately ignored in the usual mathematical accounts of probability.
Pharmacologic Hemostatic Agents in Total Joint Arthroplasty-A Cost-Effectiveness Analysis.
Ramkumar, Dipak B; Ramkumar, Niveditta; Tapp, Stephanie J; Moschetti, Wayne E
2018-03-03
Total knee and hip arthroplasties can be associated with substantial blood loss, affecting morbidity and even mortality. Two pharmacological antifibrinolytics, ε-aminocaproic acid (EACA) and tranexamic acid (TXA), have been used to minimize perioperative blood loss, but both have associated morbidity. Given the added cost of these medications and the risks associated with them, a cost-effectiveness analysis was undertaken to ascertain the best strategy. A cost-effectiveness model was constructed using the payoffs of cost (in United States dollars) and effectiveness (quality-adjusted life expectancy, in days). The medical literature was used to ascertain various complications, their probabilities, utility values, and direct medical costs associated with various health states. A time horizon of 10 years and a willingness-to-pay threshold of $100,000 was used. The total costs were $459.77, $951.22, and $1174.87, and the effectiveness (quality-adjusted life expectancy, in days) was 3411.19, 3248.02, and 3342.69 for TXA, no pharmacologic hemostatic agent, and EACA, respectively. Because TXA is less expensive and more effective than the competing alternatives, it was the favored strategy. One-way sensitivity analyses for probability of transfusion and myocardial infarction for all 3 strategies revealed that TXA remains the dominant strategy across all clinically plausible values. TXA, when compared with no pharmacologic hemostatic agent and with EACA, is the most cost-effective strategy to minimize intraoperative blood loss in hip and knee total joint arthroplasties. These findings are robust to sensitivity analyses using clinically plausible probabilities. Copyright © 2018 Elsevier Inc. All rights reserved.
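The decision logic reduces to comparing expected cost and effectiveness and checking dominance against the willingness-to-pay threshold. The sketch below uses the payoff values quoted in the abstract; the net-monetary-benefit formulation and the QALY conversion are a simplified reconstruction, not the authors' model.

```python
# Payoffs reported in the abstract: (cost in USD, effectiveness in quality-adjusted life days)
strategies = {
    "TXA":           (459.77, 3411.19),
    "No hemostatic": (951.22, 3248.02),
    "EACA":          (1174.87, 3342.69),
}
WTP_PER_QALY = 100_000.0          # willingness-to-pay threshold (USD per quality-adjusted life year)

def net_monetary_benefit(cost, qald):
    return (qald / 365.25) * WTP_PER_QALY - cost

for name, (cost, qald) in strategies.items():
    print(f"{name:14s} cost=${cost:8.2f}  QALDs={qald:7.2f}  NMB=${net_monetary_benefit(cost, qald):,.0f}")

# TXA has the lowest cost AND the highest effectiveness, so it dominates the
# alternatives outright; no incremental cost-effectiveness ratio is needed.
best = max(strategies, key=lambda s: net_monetary_benefit(*strategies[s]))
print("Preferred strategy:", best)
```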
Geophysical survey of the proposed Tsenkher impact structure, Gobi Altai, Mongolia
NASA Astrophysics Data System (ADS)
Ormö, Jens; Gomez-Ortiz, David; Komatsu, Goro; Bayaraa, Togookhuu; Tserendug, Shoovdor
2010-03-01
We have performed forward magnetic and gravity modeling of data obtained during the 2007 expedition to the 3.7 km diameter, circular Tsenkher structure, Mongolia, in order to evaluate the cause of its formation. Extensive occurrences of brecciated rocks, mainly in the form of an ejecta blanket outside the elevated rim of the structure, support an explosive origin (e.g., cosmic impact, explosive volcanism). The host rocks in the area are mainly weakly magnetic, silica-rich sandstones, and siltstones. A near absence of surface exposures of volcanic rocks makes any major volcanic structures (e.g., caldera) unlikely. Likewise, the magnetic models exclude any large, subsurface, intrusive body. This is supported by an 8 mGal gravity low over the structure indicating a subsurface low density body. Instead, the best fit is achieved for a bowl-shaped structure with a slight central rise as expected for an impact crater of this size in a mainly sedimentary target. The structure can be either root-less (i.e., impact crater) or rooted with a narrow feeder dyke with relatively higher magnetic susceptibility and density (i.e., volcanic maar crater). The geophysical signature, the solitary appearance, the predominantly sedimentary setting, and the comparably large size of the Tsenkher structure favor the impact crater alternative. However, until mineralogical/geochemical evidence for an impact is presented, the maar alternative remains plausible although exceptional as it would make the Tsenkher structure one of the largest in the world in an unusual setting for maar craters.
Neural Underpinnings of Decision Strategy Selection: A Review and a Theoretical Model.
Wichary, Szymon; Smolen, Tomasz
2016-01-01
In multi-attribute choice, decision makers use decision strategies to arrive at the final choice. What are the neural mechanisms underlying decision strategy selection? The first goal of this paper is to provide a literature review on the neural underpinnings and cognitive models of decision strategy selection and thus set the stage for a neurocognitive model of this process. The second goal is to outline such a unifying, mechanistic model that can explain the impact of noncognitive factors (e.g., affect, stress) on strategy selection. To this end, we review the evidence for the factors influencing strategy selection, the neural basis of strategy use and the cognitive models of this process. We also present the Bottom-Up Model of Strategy Selection (BUMSS). The model assumes that the use of the rational Weighted Additive strategy and the boundedly rational heuristic Take The Best can be explained by one unifying, neurophysiologically plausible mechanism, based on the interaction of the frontoparietal network, orbitofrontal cortex, anterior cingulate cortex and the brainstem nucleus locus coeruleus. According to BUMSS, there are three processes that form the bottom-up mechanism of decision strategy selection and lead to the final choice: (1) cue weight computation, (2) gain modulation, and (3) weighted additive evaluation of alternatives. We discuss how these processes might be implemented in the brain, and how this knowledge allows us to formulate novel predictions linking strategy use and neural signals.
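A minimal sketch of the two strategies BUMSS spans, Weighted Additive (WADD) and Take The Best (TTB), written as functions over the same cue matrix; the cue values, validities, and weighting rule are illustrative and deliberately chosen so that the two strategies disagree.

```python
import numpy as np

# Rows: choice alternatives, columns: binary cues ordered by validity (most valid first)
cues = np.array([[0, 1, 1, 1],
                 [1, 0, 0, 0]])
validities = np.array([0.9, 0.8, 0.7, 0.6])
weights = validities - 0.5            # one common way to derive cue weights

def wadd(cues, weights):
    """Weighted Additive: integrate all cues and pick the highest weighted sum."""
    return int(np.argmax(cues @ weights))

def ttb(cues):
    """Take The Best: search cues in validity order, stop at the first that discriminates."""
    for c in range(cues.shape[1]):
        col = cues[:, c]
        if col.max() != col.min():
            return int(np.argmax(col))
    return 0                          # guess if no cue discriminates

print("WADD chooses alternative", wadd(cues, weights))   # integrates everything -> alternative 0
print("TTB  chooses alternative", ttb(cues))             # stops at the most valid cue -> alternative 1
```

In BUMSS terms, gain modulation by affect or stress would effectively flatten or sharpen the cue weights, shifting behaviour between these two extremes.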
Vehicle lightweighting energy use impacts in U.S. light-duty vehicle fleet
DOE Office of Scientific and Technical Information (OSTI.GOV)
Das, Sujit; Graziano, Diane; Upadhyayula, Venkata K. K.
In this article, we estimate the potential energy benefits of lightweighting the light-duty vehicle fleet from both vehicle manufacturing and use perspectives using plausible lightweight vehicle designs involving several alternative lightweight materials, low- and high-end estimates of vehicle manufacturing energy, conventional and alternative powertrains, and two different market penetration scenarios for alternative powertrain light-duty vehicles at the fleet level. Cumulative life cycle energy savings (through 2050) across the nine material scenarios based on the conventional powertrain in the U.S. vehicle fleet range from -29 to 94 billion GJ, with the greatest savings achieved by multi-material vehicles that select different lightweight materials to meet specific design purposes. Lightweighting alternative-powertrain vehicles could produce significant energy savings in the U.S. vehicle fleet, although their improved powertrain efficiencies lessen the energy savings opportunities for lightweighting. A maximum level of cumulative energy savings of lightweighting the U.S. light-duty vehicle through 2050 is estimated to be 66.1 billion GJ under the conventional-vehicle dominated business-as-usual penetration scenario.
Piantadosi, Steven T.; Hayden, Benjamin Y.
2015-01-01
Economists often model choices as if decision-makers assign each option a scalar value variable, known as utility, and then select the option with the highest utility. It remains unclear whether as-if utility models describe real mental and neural steps in choice. Although choices alone cannot prove the existence of a utility stage, utility transformations are often taken to provide the most parsimonious or psychologically plausible explanation for choice data. Here, we show that it is possible to mathematically transform a large set of common utility-stage two-option choice models (specifically, ones in which dimensions can be decomposed into additive functions) into a heuristic model (specifically, a dimensional prioritization heuristic) that has no utility computation stage. We then show that under a range of plausible assumptions, both classes of model predict similar neural responses. These results highlight the difficulties in using neuroeconomic data to infer the existence of a value stage in choice. PMID:25914613
Enhanced timing abilities in percussionists generalize to rhythms without a musical beat.
Cameron, Daniel J; Grahn, Jessica A
2014-01-01
The ability to entrain movements to music is arguably universal, but it is unclear how specialized training may influence this. Previous research suggests that percussionists have superior temporal precision in perception and production tasks. Such superiority may be limited to temporal sequences that resemble real music or, alternatively, may generalize to musically implausible sequences. To test this, percussionists and nonpercussionists completed two tasks that used rhythmic sequences varying in musical plausibility. In the beat tapping task, participants tapped with the beat of a rhythmic sequence over 3 stages: finding the beat (as an initial sequence played), continuation of the beat (as a second sequence was introduced and played simultaneously), and switching to a second beat (the initial sequence finished, leaving only the second). The meters of the two sequences were either congruent or incongruent, as were their tempi (minimum inter-onset intervals). In the rhythm reproduction task, participants reproduced rhythms of four types, ranging from high to low musical plausibility: Metric simple rhythms induced a strong sense of the beat, metric complex rhythms induced a weaker sense of the beat, nonmetric rhythms had no beat, and jittered nonmetric rhythms also had no beat as well as low temporal predictability. For both tasks, percussionists performed more accurately than nonpercussionists. In addition, both groups were better with musically plausible than implausible conditions. Overall, the percussionists' superior abilities to entrain to, and reproduce, rhythms generalized to musically implausible sequences.
Nolan, Francis; Jeon, Hae-Sung
2014-12-19
Is speech rhythmic? In the absence of evidence for a traditional view that languages strive to coordinate either syllables or stress-feet with regular time intervals, we consider the alternative that languages exhibit contrastive rhythm subsisting merely in the alternation of stronger and weaker elements. This is initially plausible, particularly for languages with a steep 'prominence gradient', i.e. a large disparity between stronger and weaker elements; but we point out that alternation is poorly achieved even by a 'stress-timed' language such as English, and, historically, languages have conspicuously failed to adopt simple phonological remedies that would ensure alternation. Languages seem more concerned to allow 'syntagmatic contrast' between successive units and to use durational effects to support linguistic functions than to facilitate rhythm. Furthermore, some languages (e.g. Tamil, Korean) lack the lexical prominence which would most straightforwardly underpin prominence of alternation. We conclude that speech is not incontestably rhythmic, and may even be antirhythmic. However, its linguistic structure and patterning allow the metaphorical extension of rhythm in varying degrees and in different ways depending on the language, and it is this analogical process which allows speech to be matched to external rhythms. © 2014 The Author(s) Published by the Royal Society. All rights reserved.
Nolan, Francis; Jeon, Hae-Sung
2014-01-01
Is speech rhythmic? In the absence of evidence for a traditional view that languages strive to coordinate either syllables or stress-feet with regular time intervals, we consider the alternative that languages exhibit contrastive rhythm subsisting merely in the alternation of stronger and weaker elements. This is initially plausible, particularly for languages with a steep ‘prominence gradient’, i.e. a large disparity between stronger and weaker elements; but we point out that alternation is poorly achieved even by a ‘stress-timed’ language such as English, and, historically, languages have conspicuously failed to adopt simple phonological remedies that would ensure alternation. Languages seem more concerned to allow ‘syntagmatic contrast’ between successive units and to use durational effects to support linguistic functions than to facilitate rhythm. Furthermore, some languages (e.g. Tamil, Korean) lack the lexical prominence which would most straightforwardly underpin prominence of alternation. We conclude that speech is not incontestably rhythmic, and may even be antirhythmic. However, its linguistic structure and patterning allow the metaphorical extension of rhythm in varying degrees and in different ways depending on the language, and it is this analogical process which allows speech to be matched to external rhythms. PMID:25385774
DOT National Transportation Integrated Search
2002-01-01
Business models and cost recovery are the critical factors for determining the sustainability of the traveler information service, and 511. In March 2001 the Policy Committee directed the 511 Working Group to investigate plausible business models and...
Vasconcelos, A G; Almeida, R M; Nobre, F F
2001-08-01
This paper introduces an approach that includes non-quantitative factors for the selection and assessment of multivariate complex models in health. A goodness-of-fit based methodology combined with fuzzy multi-criteria decision-making approach is proposed for model selection. Models were obtained using the Path Analysis (PA) methodology in order to explain the interrelationship between health determinants and the post-neonatal component of infant mortality in 59 municipalities of Brazil in the year 1991. Socioeconomic and demographic factors were used as exogenous variables, and environmental, health service and agglomeration as endogenous variables. Five PA models were developed and accepted by statistical criteria of goodness-of fit. These models were then submitted to a group of experts, seeking to characterize their preferences, according to predefined criteria that tried to evaluate model relevance and plausibility. Fuzzy set techniques were used to rank the alternative models according to the number of times a model was superior to ("dominated") the others. The best-ranked model explained above 90% of the endogenous variables variation, and showed the favorable influences of income and education levels on post-neonatal mortality. It also showed the unfavorable effect on mortality of fast population growth, through precarious dwelling conditions and decreased access to sanitation. It was possible to aggregate expert opinions in model evaluation. The proposed procedure for model selection allowed the inclusion of subjective information in a clear and systematic manner.
Modeling volcano growth on the Island of Hawaii: deep-water perspectives
Lipman, Peter W.; Calvert, Andrew T.
2013-01-01
Recent ocean-bottom geophysical surveys, dredging, and dives, which complement surface data and scientific drilling at the Island of Hawaii, document that evolutionary stages during volcano growth are more diverse than previously described. Based on combining available composition, isotopic age, and geologically constrained volume data for each of the component volcanoes, this overview provides the first integrated models for overall growth of any Hawaiian island. In contrast to prior morphologic models for volcano evolution (preshield, shield, postshield), growth increasingly can be tracked by age and volume (magma supply), defining waxing alkalic, sustained tholeiitic, and waning alkalic stages. Data and estimates for individual volcanoes are used to model changing magma supply during successive compositional stages, to place limits on volcano life spans, and to interpret composite assembly of the island. Volcano volumes vary by an order of magnitude; peak magma supply also varies sizably among edifices but is challenging to quantify because of uncertainty about volcano life spans. Three alternative models are compared: (1) near-constant volcano propagation, (2) near-equal volcano durations, (3) high peak-tholeiite magma supply. These models define inconsistencies with prior geodynamic models, indicate that composite growth at Hawaii peaked ca. 800–400 ka, and demonstrate a lower current rate. Recent age determinations for Kilauea and Kohala define a volcano propagation rate of 8.6 cm/yr that yields plausible inception ages for other volcanoes of the Kea trend. In contrast, a similar propagation rate for the less-constrained Loa trend would require inception of Loihi Seamount in the future and ages that become implausibly large for the older volcanoes. An alternative rate of 10.6 cm/yr for Loa-trend volcanoes is reasonably consistent with ages and volcano spacing, but younger Loa volcanoes are offset from the Kea trend in age-distance plots. Variable magma flux at the Island of Hawaii, and longer-term growth of the Hawaiian chain as discrete islands rather than a continuous ridge, may record pulsed magma flow in the hotspot/plume source.
NASA Astrophysics Data System (ADS)
Han, B.; Flores, A. N.; Benner, S. G.
2017-12-01
In semiarid and arid regions where water supply is intensively managed, future water scarcity is a product of complex interactions between climate change and human activities. Evaluating future water scarcity under alternative scenarios of climate change, therefore, necessitates modeling approaches that explicitly represent the coupled biophysical and social processes responsible for the redistribution of water in these regions. At regional scales a particular challenge lies in adequately capturing not only the central tendencies of change in projections of climate change, but also the associated plausible range of variability in those projections. This study develops a framework that combines a stochastic weather generator, historical climate observations, and statistically downscaled General Circulation Model (GCM) projections. The method generates a large ensemble of daily climate realizations, avoiding deficiencies of using a few or mean values of individual GCM realizations. Three climate change scenario groups reflecting the historical, RCP4.5, and RCP8.5 future projections are developed. Importantly, the model explicitly captures the spatiotemporally varying irrigation activities as constrained by local water rights in a rapidly growing, semi-arid human-environment system in southwest Idaho. We use this modeling framework to project water use and scarcity patterns under the three future climate change scenarios. The model is built using the Envision alternative futures modeling framework. Climate projections for the region show future increases in both precipitation and temperature, especially under the RCP8.5 scenario. The increase of temperature has a direct influence on the increase of the irrigation water use and water scarcity, while the influence of increased precipitation on water use is less clear. The predicted changes are potentially useful in identifying areas in the watershed particularly sensitive to water scarcity, the relative importance of changes in precipitation versus temperature as a driver of scarcity, and potential shortcomings of the current water management framework in the region.
Kong, Deguo; MacLeod, Matthew; Cousins, Ian T
2014-09-01
The effect of projected future changes in temperature, wind speed, precipitation and particulate organic carbon on concentrations of persistent organic chemicals in the Baltic Sea regional environment is evaluated using the POPCYCLING-Baltic multimedia chemical fate model. Steady-state concentrations of hypothetical perfectly persistent chemicals with property combinations that encompass the entire plausible range for non-ionizing organic substances are modelled under two alternative climate change scenarios (IPCC A2 and B2) and compared to a baseline climate scenario. The contributions of individual climate parameters are deduced in model experiments in which only one of the four parameters is changed from the baseline scenario. Of the four selected climate parameters, temperature is the most influential, and wind speed is least. Chemical concentrations in the Baltic region are projected to change by factors of up to 3.0 compared to the baseline climate scenario. For chemicals with property combinations similar to legacy persistent organic pollutants listed by the Stockholm Convention, modelled concentration ratios between two climate change scenarios and the baseline scenario range from factors of 0.5 to 2.0. This study is a first step toward quantitatively assessing climate change-induced changes in the environmental concentrations of persistent organic chemicals in the Baltic Sea region. Copyright © 2014 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Lucchitta, B. K.
1998-01-01
The Pathfinder spacecraft landed successfully at the mouth of the outflow channels Ares and Tiu Valles, returning a wealth of information about the surrounding landscape. One goal of the mission was to ascertain that catastrophic floods formed the outflow channels, the prevailing hypothesis for their origin. The follow-up reports on the mission proclaim that observations are "consistent" with an origin by catastrophic flood; no alternative mechanisms for channel origin are considered. Thus, the impression is given that the problem of channel origin has been solved. Yet none of the observations are diagnostic of origin by catastrophic floods. Other origins are possible but have been ignored, for instance, origin as liquefaction mudflows, debris flows, mass flows, or ice flows. Here I will examine landing site observations that have been used to infer origin by catastrophic flooding and suggest alternative origins. Finally, I will highlight some new observations from Antarctica that make an ice-flow mechanism plausible for the origin of some of the outflow channels.
Telles, Guilherme P; Araújo, Graziela S; Walter, Maria E M T; Brigido, Marcelo M; Almeida, Nalvo F
2018-05-16
In phylogenetic reconstruction the result is a tree where all taxa are leaves and internal nodes are hypothetical ancestors. In a live phylogeny, both ancestral and living taxa may coexist, leading to a tree where internal nodes may be living taxa. The well-known Neighbor-Joining heuristic is largely used for phylogenetic reconstruction. We present Live Neighbor-Joining, a heuristic for building a live phylogeny. We have investigated Live Neighbor-Joining on datasets of viral genomes, a plausible scenario for its application, which allowed the construction of alternative hypotheses for the relationships among viruses that embrace both ancestral and descendant taxa. We also applied Live Neighbor-Joining to a set of bacterial genomes and to sets of images and texts. Non-biological data may be better explored visually when their relationship in terms of content similarity is represented by means of a phylogeny. Our experiments have shown interesting alternative phylogenetic hypotheses for RNA virus genomes, bacterial genomes and alternative relationships among images and texts, illustrating a wide range of scenarios where Live Neighbor-Joining may be used.
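For context, here is one iteration of the standard Neighbor-Joining criterion (the Q matrix and pair selection) that Live Neighbor-Joining builds on; the distance matrix is an arbitrary textbook-style example, and the "living ancestor" placement rule itself is not implemented in this sketch.

```python
import numpy as np

def nj_step(d, names):
    """One Neighbor-Joining iteration: pick the pair minimizing Q and report branch lengths."""
    n = d.shape[0]
    r = d.sum(axis=1)
    q = (n - 2) * d - r[:, None] - r[None, :]
    np.fill_diagonal(q, np.inf)
    i, j = np.unravel_index(np.argmin(q), q.shape)
    li = 0.5 * d[i, j] + (r[i] - r[j]) / (2 * (n - 2))
    lj = d[i, j] - li
    return names[i], names[j], li, lj

# Arbitrary example distances between five sequences
names = ["A", "B", "C", "D", "E"]
d = np.array([[0, 5, 9, 9, 8],
              [5, 0, 10, 10, 9],
              [9, 10, 0, 8, 7],
              [9, 10, 8, 0, 3],
              [8, 9, 7, 3, 0]], dtype=float)

a, b, la, lb = nj_step(d, names)
print(f"join {a} and {b}; branch lengths {la:.2f} and {lb:.2f}")
# Live Neighbor-Joining additionally allows a taxon whose branch length is (near) zero
# to be placed at the new internal node itself, i.e. as a living ancestor.
```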
Hormesis, epitaxy, the structure of liquid water, and the science of homeopathy.
Mastrangelo, Domenico
2007-01-01
According to the western medical establishment, homeopathy is both "unscientific" and "implausible". A short overview of its history and the methods it uses, however, easily reveals that homeopathy is a true science, fully grounded on the scientific method and on principles, such as, among others, the Arndt-Schultz law, hormesis, and epitaxy, whose plausibility has been clearly and definitely demonstrated in a number of scientific publications and reports. Through a review of the scientific literature, an explanation of the basic principles of homeopathy is proposed based on arguments and evidence of mainstream science to demonstrate that, in spite of the claims of conventional medicine, homeopathy is both scientific and plausible and that there is no reasonable justification for its rejection by the western medical establishment. Hopefully, this hurdle will be overcome by opening academic institutions to homeopathy to enlarge the horizons of medical practice, recover the value of the human relationship with the patient, and through all this, offer the sick a real alternative and the concrete perspective of an improved quality of life.
Bredel, Markus; Ferrarese, Roberto; Harsh, Griffith R.; Yadav, Ajay K.; Bug, Eva; Maticzka, Daniel; Reichardt, Wilfried; Masilamani, Anie P.; Dai, Fangping; Kim, Hyunsoo; Hadler, Michael; Scholtens, Denise M.; Yu, Irene L.Y.; Beck, Jürgen; Srinivasasainagendra, Vinodh; Costa, Fabrizio; Baxan, Nicoleta; Pfeifer, Dietmar; Elverfeldt, Dominik v.; Backofen, Rolf; Weyerbrock, Astrid; Duarte, Christine W.; He, Xiaolin; Prinz, Marco; Chandler, James P.; Vogel, Hannes; Chakravarti, Arnab; Rich, Jeremy N.; Carro, Maria S.
2014-01-01
BACKGROUND: Tissue-specific alternative splicing is known to be critical to emergence of tissue identity during development, yet its role in malignant transformation is undefined. Tissue-specific splicing involves evolutionary-conserved, alternative exons, which represent only a minority of total alternative exons. Many, however, have functional features that influence activity in signaling pathways to profound biological effect. Given that tissue-specific splicing has a determinative role in brain development and the enrichment of genes containing tissue-specific exons for proteins with roles in signaling and development, it is thus plausible that changes in such exons could rewire normal neurogenesis towards malignant transformation. METHODS: We used integrated molecular genetic and cell biology analyses, computational biology, animal modeling, and clinical patient profiles to characterize the effect of aberrant splicing of a brain-enriched alternative exon in the membrane-binding tumor suppressor Annexin A7 (ANXA7) on oncogene regulation and brain tumorigenesis. RESULTS: We show that aberrant splicing of a tissue-specific cassette exon in ANXA7 diminishes endosomal targeting and consequent termination of the signal of the EGFR oncoprotein during brain tumorigenesis. Splicing of this exon is mediated by the ribonucleoprotein Polypyrimidine Tract-Binding Protein 1 (PTBP1), which is normally repressed during brain development but, we find, is excessively expressed in glioblastomas through either gene amplification or loss of a neuron-specific microRNA, miR-124. Silencing of PTBP1 attenuates both malignancy and angiogenesis in a stem cell-derived glioblastoma animal model characterized by a high native propensity to generate tumor endothelium or vascular pericytes to support tumor growth. We show that EGFR amplification and PTBP1 overexpression portend a similarly poor clinical outcome, further highlighting the importance of PTBP1-mediated activation of EGFR. CONCLUSIONS: Our data illustrate how anomalous splicing of a tissue-regulated exon in a constituent of an oncogenic signaling pathway eliminates its tumor suppressor function and promotes tumorigenesis. This paradigm of malignant glial transformation as a consequence of tissue-specific alternative exon splicing in a tumor suppressor, may have widespread applicability in explaining how changes in critical tissue-specific regulatory mechanisms reprogram normal development to oncogenesis. SECONDARY CATEGORY: n/a.
Of paradox and plausibility: the dynamic of change in medical law.
Harrington, John
2014-01-01
This article develops a model of change in medical law. Drawing on systems theory, it argues that medical law participates in a dynamic of 'deparadoxification' and 'reparadoxification' whereby the underlying contingency of the law is variously concealed through plausible argumentation, or revealed by critical challenge. Medical law is, thus, thoroughly rhetorical. An examination of the development of the law on abortion and on the sterilization of incompetent adults shows that plausibility is achieved through the deployment of substantive common sense and formal stylistic devices. It is undermined where these elements are shown to be arbitrary and constructed. In conclusion, it is argued that the politics of medical law are constituted by this antagonistic process of establishing and challenging provisionally stable normative regimes. © The Author [2014]. Published by Oxford University Press; all rights reserved. For Permissions, please email: journals.permissions@oup.com.
Utilization of Prosodic Information in Syntactic Ambiguity Resolution
2010-01-01
Two self paced listening experiments examined the role of prosodic phrasing in syntactic ambiguity resolution. In Experiment 1, the stimuli consisted of early closure sentences (e.g., “While the parents watched, the child sang a song.”) containing transitive-biased subordinate verbs paired with plausible direct objects or intransitive-biased subordinate verbs paired with implausible direct objects. Experiment 2 also contained early closure sentences with transitively and intransitive-biased subordinate verbs, but the subordinate verbs were always followed by plausible direct objects. In both experiments, there were two prosodic conditions. In the subject-biased prosodic condition, an intonational phrase boundary marked the clausal boundary following the subordinate verb. In the object-biased prosodic condition, the clause boundary was unmarked. The results indicate that lexical and prosodic cues interact at the subordinate verb and plausibility further affects processing at the ambiguous noun. Results are discussed with respect to models of the role of prosody in sentence comprehension. PMID:20033849
Theories and models on the biology of cells in space
NASA Technical Reports Server (NTRS)
Todd, P.; Klaus, D. M.
1996-01-01
A wide variety of observations on cells in space, admittedly made under constraining and unnatural conditions in many cases, have led to experimental results that were surprising or unexpected. Reproducibility, freedom from artifacts, and plausibility must be considered in all cases, even when results are not surprising. The papers in the symposium on 'Theories and Models on the Biology of Cells in Space' are dedicated to the subject of the plausibility of cellular responses to gravity -- inertial accelerations between 0 and 9.8 m/s^2 and higher. The mechanical phenomena inside the cell, the gravitactic locomotion of single eukaryotic and prokaryotic cells, and the effects of inertial unloading on cellular physiology are addressed in theoretical and experimental studies.
Foo, Mathias; Sawlekar, Rucha; Kulkarni, Vishwesh V; Bates, Declan G
2016-08-01
The use of abstract chemical reaction networks (CRNs) as a modelling and design framework for the implementation of computing and control circuits using enzyme-free, entropy driven DNA strand displacement (DSD) reactions is starting to garner widespread attention in the area of synthetic biology. Previous work in this area has demonstrated the theoretical plausibility of using this approach to design biomolecular feedback control systems based on classical proportional-integral (PI) controllers, which may be constructed from CRNs implementing gain, summation and integrator operators. Here, we propose an alternative design approach that utilises the abstract chemical reactions involved in cellular signalling cycles to implement a biomolecular controller - termed a signalling-cycle (SC) controller. We compare the performance of the PI and SC controllers in closed-loop with a nonlinear second-order chemical process. Our results show that the SC controller outperforms the PI controller in terms of both performance and robustness, and also requires fewer abstract chemical reactions to implement, highlighting its potential usefulness in the construction of biomolecular control circuits.
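At the CRN abstraction level, the gain, summation, and integration operators amount to simple mass-action dynamics. The sketch below integrates an idealized PI loop around a nonlinear second-order process as ODEs; it is not the DSD-level implementation, and the gains, rate constants, and the non-negativity clipping (which a real CRN handles via dual-rail species) are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

kp, ki = 2.0, 1.0          # proportional and integral gains (illustrative)
k1, k2 = 1.0, 0.5          # rate constants of a nonlinear second-order process
r = 1.0                    # set-point (reference concentration)

def closed_loop(t, s):
    x1, y, z = s
    e = r - y                              # error signal (summation operator)
    u = max(kp * e + z, 0.0)               # control input; clipped at 0 since it is a concentration
    dx1 = u - k1 * x1                      # first process species
    dy = k1 * x1 - k2 * y * y              # second species with nonlinear (second-order) removal
    dz = ki * e                            # integral action (integrator operator)
    return [dx1, dy, dz]

sol = solve_ivp(closed_loop, (0.0, 40.0), [0.0, 0.0, 0.0], max_step=0.05)
print("output approaches set-point:", round(sol.y[1, -1], 3), "vs r =", r)
```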
A new perspective on the perceptual selectivity of attention under load.
Giesbrecht, Barry; Sy, Jocelyn; Bundesen, Claus; Kyllingsbaek, Søren
2014-05-01
The human attention system helps us cope with a complex environment by supporting the selective processing of information relevant to our current goals. Understanding the perceptual, cognitive, and neural mechanisms that mediate selective attention is a core issue in cognitive neuroscience. One prominent model of selective attention, known as load theory, offers an account of how task demands determine when information is selected and an account of the efficiency of the selection process. However, load theory has several critical weaknesses that suggest that it is time for a new perspective. Here we review the strengths and weaknesses of load theory and offer an alternative biologically plausible computational account that is based on the neural theory of visual attention. We argue that this new perspective provides a detailed computational account of how bottom-up and top-down information is integrated to provide efficient attentional selection and allocation of perceptual processing resources. © 2014 New York Academy of Sciences.
Electrochemical impedance spectroscopy of lithium-titanium disulfide rechargeable cells
NASA Technical Reports Server (NTRS)
Narayanan, S. R.; Shen, D. H.; Surampudi, S.; Attia, A. I.; Halpert, G.
1993-01-01
The two-terminal alternating current impedance of Li/TiS2 rechargeable cells was studied as a function of frequency, state-of-charge, and extended cycling. Analysis based on a plausible equivalent circuit model for the Li/TiS2 cell leads to evaluation of kinetic parameters for the various physicochemical processes occurring at the electrode/electrolyte interfaces. To investigate the causes of cell degradation during extended cycling, the parameters evaluated for cells cycled 5 times were compared with the parameters of cells cycled over 600 times. The findings are that the combined ohmic resistance of the electrolyte and electrodes suffers a tenfold increase after extended cycling, while the charge-transfer resistance and diffusional impedance at the TiS2/electrolyte interface are not significantly affected. The results reflect the morphological change and increase in area of the anode due to cycling. The study also shows that overdischarge of a cathode-limited cell causes a decrease in the diffusion coefficient of the lithium ion in the cathode.
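The abstract does not specify the equivalent circuit; as an illustration of the kind of model typically fitted to such spectra, here is a hedged Python sketch of a Randles-type circuit (ohmic resistance in series with a double-layer capacitance in parallel with a charge-transfer resistance plus a Warburg diffusional element). The topology and all parameter values are assumptions for illustration, not the fitted values from the study.

```python
import numpy as np

# Impedance of a Randles-type cell: ohmic resistance in series with a
# double-layer capacitance in parallel with (charge-transfer resistance +
# Warburg diffusional impedance). All parameter values are illustrative.
R_ohm, R_ct, C_dl, sigma_w = 0.5, 2.0, 20e-6, 5.0   # ohm, ohm, F, ohm*s^-0.5

def cell_impedance(freq_hz):
    w = 2 * np.pi * freq_hz
    z_warburg = sigma_w * (1 - 1j) / np.sqrt(w)       # semi-infinite diffusion
    z_faradaic = R_ct + z_warburg
    z_parallel = 1.0 / (1j * w * C_dl + 1.0 / z_faradaic)
    return R_ohm + z_parallel

freqs = np.logspace(-2, 4, 7)                          # 10 mHz .. 10 kHz
for f, z in zip(freqs, cell_impedance(freqs)):
    print(f"{f:10.2f} Hz   Re(Z)={z.real:7.3f}  -Im(Z)={-z.imag:7.3f} ohm")
```

Fitting such a circuit to spectra measured at different cycle counts is how changes in ohmic, charge-transfer, and diffusional contributions are separated.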
NASA Astrophysics Data System (ADS)
Balčiūnas, Sergejus; Ivanov, Maksim; Grigalaitis, Robertas; Banys, Juras; Amorín, Harvey; Castro, Alicia; Algueró, Miguel
2018-05-01
The broadband dielectric properties of high sensitivity piezoelectric 0.36BiScO3-0.64PbTiO3 ceramics with average grain sizes from 1.6 μm down to 26 nm were investigated in the 100-500 K temperature range. The grain size dependence of the dielectric permittivity was analysed within the effective medium approximation. It was found that the generalised core-shell (or brick wall) model correctly explains the size dependence down to the nanoscale. For the first time, the grain bulk and boundary properties were obtained without making any assumptions of values of the parameters or simplifications. Two contributions to the dielectric permittivity of the grain bulk are described. The first is the size-independent one, which follows the Curie-Weiss law. The second one is shown to plausibly follow Kittel's law. This seems to suggest the unexpected persistence of mobile ferroelectric domains at the nanoscale (26 nm grains). Alternative explanations are discussed.
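As a rough illustration of the generalised core-shell (brick-wall) picture described above, the following Python sketch combines a Curie-Weiss grain-bulk permittivity with a grain-boundary layer in series; the Curie constant, Curie-Weiss temperature, boundary thickness, and boundary permittivity are assumed values, not the parameters obtained in the study.

```python
import numpy as np

# Brick-wall (core-shell) series mixing: grain bulk and grain boundary act as
# capacitors in series along the field direction. Parameter values are invented.
C_curie, T0 = 1.5e5, 600.0        # Curie constant (K) and Curie-Weiss temperature (K)
eps_gb = 150.0                    # grain-boundary permittivity (assumed T-independent)

def eps_effective(T, grain_size_nm, boundary_nm=2.0):
    eps_g = C_curie / (T - T0)                           # Curie-Weiss grain-bulk permittivity
    f_gb = boundary_nm / (grain_size_nm + boundary_nm)   # boundary thickness fraction
    f_g = 1.0 - f_gb
    return 1.0 / (f_g / eps_g + f_gb / eps_gb)           # series (brick-wall) combination

for size in (1600.0, 200.0, 26.0):                       # grain sizes in nm
    print(f"grain {size:7.1f} nm -> eps_eff(700 K) = {eps_effective(700.0, size):8.1f}")
```

The sketch reproduces the qualitative trend of suppressed effective permittivity as the grain-boundary fraction grows at small grain size.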
DOE Office of Scientific and Technical Information (OSTI.GOV)
Binder, Tobias; Covi, Laura; Kamada, Ayuki
Dark Matter (DM) models providing possible alternative solutions to the small-scale crisis of the standard cosmology are nowadays of growing interest. We consider DM interacting with light hidden fermions via well-motivated fundamental operators showing the resultant matter power spectrum is suppressed on subgalactic scales within a plausible parameter region. Our basic description of the evolution of cosmological perturbations relies on a fully consistent first principles derivation of a perturbed Fokker-Planck type equation, generalizing existing literature. The cosmological perturbation of the Fokker-Planck equation is presented for the first time in two different gauges, where the results transform into each other according to the rules of gauge transformation. Furthermore, our focus lies on a derivation of a broadly applicable and easily computable collision term showing important phenomenological differences to other existing approximations. As one of the main results and concerning the small-scale crisis, we show the equal importance of vector and scalar boson mediated interactions between the DM and the light fermions.
Cognitive cost as dynamic allocation of energetic resources
Christie, S. Thomas; Schrater, Paul
2015-01-01
While it is widely recognized that thinking is somehow costly, involving cognitive effort and producing mental fatigue, these costs have alternatively been assumed to exist, treated as the brain's assessment of lost opportunities, or suggested to be metabolic but with implausible biological bases. We present a model of cognitive cost based on the novel idea that the brain senses and plans for longer-term allocation of metabolic resources by purposively conserving brain activity. We identify several distinct ways the brain might control its metabolic output, and show how a control-theoretic model that models decision-making with an energy budget can explain cognitive effort avoidance in terms of an optimal allocation of limited energetic resources. The model accounts for both subject responsiveness to reward and the detrimental effects of hypoglycemia on cognitive function. A critical component of the model is using astrocytic glycogen as a plausible basis for limited energetic reserves. Glycogen acts as an energy buffer that can temporarily support high neural activity beyond the rate supported by blood glucose supply. The published dynamics of glycogen depletion and repletion are consonant with a broad array of phenomena associated with cognitive cost. Our model thus subsumes both the “cost/benefit” and “limited resource” models of cognitive cost while retaining valuable contributions of each. We discuss how the rational control of metabolic resources could underpin the control of attention, working memory, cognitive look ahead, and model-free vs. model-based policy learning. PMID:26379482
Cognitive cost as dynamic allocation of energetic resources.
Christie, S Thomas; Schrater, Paul
2015-01-01
While it is widely recognized that thinking is somehow costly, involving cognitive effort and producing mental fatigue, these costs have alternatively been assumed to exist, treated as the brain's assessment of lost opportunities, or suggested to be metabolic but with implausible biological bases. We present a model of cognitive cost based on the novel idea that the brain senses and plans for longer-term allocation of metabolic resources by purposively conserving brain activity. We identify several distinct ways the brain might control its metabolic output, and show how a control-theoretic model that models decision-making with an energy budget can explain cognitive effort avoidance in terms of an optimal allocation of limited energetic resources. The model accounts for both subject responsiveness to reward and the detrimental effects of hypoglycemia on cognitive function. A critical component of the model is using astrocytic glycogen as a plausible basis for limited energetic reserves. Glycogen acts as an energy buffer that can temporarily support high neural activity beyond the rate supported by blood glucose supply. The published dynamics of glycogen depletion and repletion are consonant with a broad array of phenomena associated with cognitive cost. Our model thus subsumes both the "cost/benefit" and "limited resource" models of cognitive cost while retaining valuable contributions of each. We discuss how the rational control of metabolic resources could underpin the control of attention, working memory, cognitive look ahead, and model-free vs. model-based policy learning.
Pharmacometric Models for Characterizing the Pharmacokinetics of Orally Inhaled Drugs.
Borghardt, Jens Markus; Weber, Benjamin; Staab, Alexander; Kloft, Charlotte
2015-07-01
During the last decades, the importance of modeling and simulation in clinical drug development, with the goal to qualitatively and quantitatively assess and understand mechanisms of pharmacokinetic processes, has strongly increased. However, this increase could not equally be observed for orally inhaled drugs. The objectives of this review are to understand the reasons for this gap and to demonstrate the opportunities that mathematical modeling of pharmacokinetics of orally inhaled drugs offers. To achieve these objectives, this review (i) discusses pulmonary physiological processes and their impact on the pharmacokinetics after drug inhalation, (ii) provides a comprehensive overview of published pharmacokinetic models, (iii) categorizes these models into physiologically based pharmacokinetic (PBPK) and (clinical data-derived) empirical models, (iv) explores both their (mechanistic) plausibility, and (v) addresses critical aspects of different pharmacometric approaches pertinent for drug inhalation. In summary, pulmonary deposition, dissolution, and absorption are highly complex processes and may represent the major challenge for modeling and simulation of PK after oral drug inhalation. Challenges in relating systemic pharmacokinetics with pulmonary efficacy may be another factor contributing to the limited number of existing pharmacokinetic models for orally inhaled drugs. Investigations comprising in vitro experiments, clinical studies, and more sophisticated mathematical approaches are considered to be necessary for elucidating these highly complex pulmonary processes. With this additional knowledge, the PBPK approach might gain additional attractiveness. Currently, (semi-)mechanistic modeling offers an alternative to generate and investigate hypotheses and to more mechanistically understand the pulmonary and systemic pharmacokinetics after oral drug inhalation including the impact of pulmonary diseases.
ERIC Educational Resources Information Center
Bacharach, Samuel; Bamberger, Peter
1992-01-01
Survey data from 215 nurses (10 male) and 430 civil engineers (10 female) supported the plausibility of occupation-specific models (positing direct paths between role stressors, antecedents, and consequences) compared to generic models. A weakness of generic models is the tendency to ignore differences in occupational structure and culture. (SK)
Further Studies into Synthetic Image Generation using CameoSim
2011-08-01
In preparation of the validation effort, a study of BRDF models has been completed, which includes the physical plausibility of the models and how measured data are treated across the visible to shortwave infrared.
van den Berg, Ronald; Roerdink, Jos B. T. M.; Cornelissen, Frans W.
2010-01-01
An object in the peripheral visual field is more difficult to recognize when surrounded by other objects. This phenomenon is called “crowding”. Crowding places a fundamental constraint on human vision that limits performance on numerous tasks. It has been suggested that crowding results from spatial feature integration necessary for object recognition. However, in the absence of convincing models, this theory has remained controversial. Here, we present a quantitative and physiologically plausible model for spatial integration of orientation signals, based on the principles of population coding. Using simulations, we demonstrate that this model coherently accounts for fundamental properties of crowding, including critical spacing, “compulsory averaging”, and a foveal-peripheral anisotropy. Moreover, we show that the model predicts increased responses to correlated visual stimuli. Altogether, these results suggest that crowding has little immediate bearing on object recognition but is a by-product of a general, elementary integration mechanism in early vision aimed at improving signal quality. PMID:20098499
DOE Office of Scientific and Technical Information (OSTI.GOV)
Singh, Meenesh R.; Goodpaster, Jason D.; Weber, Adam Z.
Electrochemical reduction of CO2 using renewable sources of electrical energy holds promise for converting CO2 to fuels and chemicals. Since this process is complex and involves a large number of species and physical phenomena, a comprehensive understanding of the factors controlling product distribution is required. While the most plausible reaction pathway is usually identified from quantum-chemical calculation of the lowest free-energy pathway, this approach can be misleading when coverages of adsorbed species determined for alternative mechanisms differ significantly, since elementary reaction rates depend on the product of the rate coefficient and the coverage of species involved in the reaction. Moreover, cathode polarization can influence the kinetics of CO2 reduction. In this work, we present a multiscale framework for ab initio simulation of the electrochemical reduction of CO2 over an Ag(110) surface. A continuum model for species transport is combined with a microkinetic model for the cathode reaction dynamics. Free energies of activation for all elementary reactions are determined from density functional theory calculations. Using this approach, three alternative mechanisms for CO2 reduction were examined. The rate-limiting step in each mechanism is **COOH formation at higher negative potentials. However, only via the multiscale simulation was it possible to identify the mechanism that leads to a dependence of the rate of CO formation on the partial pressure of CO2 that is consistent with experiments. Additionally, simulations based on this mechanism also describe the dependence of the H2 and CO current densities on cathode voltage that are in strikingly good agreement with experimental observation.
Singh, Meenesh R.; Goodpaster, Jason D.; Weber, Adam Z.; ...
2017-10-02
Electrochemical reduction of CO2 using renewable sources of electrical energy holds promise for converting CO2 to fuels and chemicals. Since this process is complex and involves a large number of species and physical phenomena, a comprehensive understanding of the factors controlling product distribution is required. While the most plausible reaction pathway is usually identified from quantum-chemical calculation of the lowest free-energy pathway, this approach can be misleading when coverages of adsorbed species determined for alternative mechanisms differ significantly, since elementary reaction rates depend on the product of the rate coefficient and the coverage of species involved in the reaction. Moreover, cathode polarization can influence the kinetics of CO2 reduction. In this work, we present a multiscale framework for ab initio simulation of the electrochemical reduction of CO2 over an Ag(110) surface. A continuum model for species transport is combined with a microkinetic model for the cathode reaction dynamics. Free energies of activation for all elementary reactions are determined from density functional theory calculations. Using this approach, three alternative mechanisms for CO2 reduction were examined. The rate-limiting step in each mechanism is **COOH formation at higher negative potentials. However, only via the multiscale simulation was it possible to identify the mechanism that leads to a dependence of the rate of CO formation on the partial pressure of CO2 that is consistent with experiments. Additionally, simulations based on this mechanism also describe the dependence of the H2 and CO current densities on cathode voltage that are in strikingly good agreement with experimental observation.
Sulfidic Anion Concentrations on Early Earth for Surficial Origins-of-Life Chemistry.
Ranjan, Sukrit; Todd, Zoe R; Sutherland, John D; Sasselov, Dimitar D
2018-04-08
A key challenge in origin-of-life studies is understanding the environmental conditions on early Earth under which abiogenesis occurred. While some constraints do exist (e.g., zircon evidence for surface liquid water), relatively few constraints exist on the abundances of trace chemical species, which are relevant to assessing the plausibility and guiding the development of postulated prebiotic chemical pathways which depend on these species. In this work, we combine literature photochemistry models with simple equilibrium chemistry calculations to place constraints on the plausible range of concentrations of sulfidic anions (HS-, HSO3-, SO3(2-)) available in surficial aquatic reservoirs on early Earth due to outgassing of SO2 and H2S and their dissolution into small shallow surface water reservoirs like lakes. We find that this mechanism could have supplied prebiotically relevant levels of SO2-derived anions, but not H2S-derived anions. Radiative transfer modeling suggests UV light would have remained abundant on the planet surface for all but the largest volcanic explosions. We apply our results to the case study of the proposed prebiotic reaction network of Patel et al. (2015) and discuss the implications for improving its prebiotic plausibility. In general, epochs of moderately high volcanism could have been especially conducive to cyanosulfidic prebiotic chemistry. Our work can be similarly applied to assess and improve the prebiotic plausibility of other postulated surficial prebiotic chemistries that are sensitive to sulfidic anions, and our methods adapted to study other atmospherically derived trace species. Key Words: Early Earth-Origin of life-Prebiotic chemistry-Volcanism-UV radiation-Planetary environments. Astrobiology 18, xxx-xxx.
Scenario analysis of the future of medicines.
Leufkens, H.; Haaijer-Ruskamp, F.; Bakker, A.; Dukes, G.
1994-01-01
Planning future policy for medicines poses difficult problems. The main players in the drug business have their own views as to how the world around them functions and how the future of medicines should be shaped. In this paper we show how a scenario analysis can provide a powerful teaching device to readjust people's preconceptions. Scenarios are plausible, not probable or preferable, portraits of alternative futures. A series of four alternative scenarios was constructed: "sobriety in sufficiency," "risk avoidance," "technology on demand," and "free market unfettered." Each scenario was drawn as a narrative, documented quantitatively wherever possible, that described the world as it might be if particular trends were to dominate development. The medical community and health policy makers may use scenarios to take a long term view in order to be prepared adequately for the future. PMID:7987110
The Variance Reaction Time Model
ERIC Educational Resources Information Center
Sikstrom, Sverker
2004-01-01
The variance reaction time model (VRTM) is proposed to account for various recognition data on reaction time, the mirror effect, receiver-operating-characteristic (ROC) curves, etc. The model is based on simple and plausible assumptions within a neural network: VRTM is a two layer neural network where one layer represents items and one layer…
ERIC Educational Resources Information Center
Cangelosi, Angelo; Riga, Thomas
2006-01-01
The grounding of symbols in computational models of linguistic abilities is one of the fundamental properties of psychologically plausible cognitive models. In this article, we present an embodied model for the grounding of language in action based on epigenetic robots. Epigenetic robotics is one of the new cognitive modeling approaches to…
Nakagawa, Fumiyo; van Sighem, Ard; Thiebaut, Rodolphe; Smith, Colette; Ratmann, Oliver; Cambiano, Valentina; Albert, Jan; Amato-Gauci, Andrew; Bezemer, Daniela; Campbell, Colin; Commenges, Daniel; Donoghoe, Martin; Ford, Deborah; Kouyos, Roger; Lodwick, Rebecca; Lundgren, Jens; Pantazis, Nikos; Pharris, Anastasia; Quinten, Chantal; Thorne, Claire; Touloumi, Giota; Delpech, Valerie; Phillips, Andrew
2016-03-01
It is important not only to collect epidemiologic data on HIV but to also fully utilize such information to understand the epidemic over time and to help inform and monitor the impact of policies and interventions. We describe and apply a novel method to estimate the size and characteristics of HIV-positive populations. The method was applied to data on men who have sex with men living in the UK and to a pseudo dataset to assess performance for different data availability. The individual-based simulation model was calibrated using an approximate Bayesian computation-based approach. In 2013, 48,310 (90% plausibility range: 39,900-45,560) men who have sex with men were estimated to be living with HIV in the UK, of whom 10,400 (6,160-17,350) were undiagnosed. There were an estimated 3,210 (1,730-5,350) infections per year on average between 2010 and 2013. Sixty-two percent of the total HIV-positive population are thought to have viral load <500 copies/ml. In the pseudo-epidemic example, HIV estimates have narrower plausibility ranges and are closer to the true number, the greater the data availability to calibrate the model. We demonstrate that our method can be applied to settings with less data, however plausibility ranges for estimates will be wider to reflect greater uncertainty of the data used to fit the model.
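The abstract names an approximate Bayesian computation-based calibration without giving its exact form; the following toy Python sketch shows the generic ABC rejection idea (simulate from sampled parameters, accept those reproducing an observed summary within a tolerance, report a plausibility range). The one-parameter toy model, prior, tolerance, and all numbers are invented and bear no relation to the study's detailed individual-based model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy ABC rejection: calibrate an unknown annual infection rate so that a
# simple simulation reproduces an 'observed' diagnosed count.
observed_diagnosed = 38000
years, diag_prob = 20, 0.08            # assumed epidemic duration and annual diagnosis prob.

def simulate(infection_rate):
    undiagnosed, diagnosed = 0.0, 0.0
    for _ in range(years):
        undiagnosed += infection_rate
        newly = undiagnosed * diag_prob
        undiagnosed -= newly
        diagnosed += newly
    return diagnosed

accepted = []
for _ in range(20000):
    rate = rng.uniform(500, 6000)       # prior on annual infections
    if abs(simulate(rate) - observed_diagnosed) < 2000:   # acceptance tolerance
        accepted.append(rate)

lo, med, hi = np.percentile(accepted, [5, 50, 95])
print(f"plausibility range for annual infections: {lo:.0f}-{hi:.0f} (median {med:.0f})")
```

The width of the accepted range shrinks as more (and richer) observed summaries are used to constrain the simulation, which is the behaviour reported for the pseudo-epidemic example.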
Modeling steam pressure under martian lava flows
Dundas, Colin M.; Keszthelyi, Laszlo P.
2013-01-01
Rootless cones on Mars are a valuable indicator of past interactions between lava and water. However, the details of the lava–water interactions are not fully understood, limiting the ability to use these features to infer new information about past water on Mars. We have developed a model for the pressurization of a dry layer of porous regolith by melting and boiling ground ice in the shallow subsurface. This model builds on previous models of lava cooling and melting of subsurface ice. We find that for reasonable regolith properties and ice depths of decimeters, explosive pressures can be reached. However, the energy stored within such lags is insufficient to excavate thick flows unless they draw steam from a broader region than the local eruption site. These results indicate that lag pressurization can drive rootless cone formation under favorable circumstances, but in other instances molten fuel–coolant interactions are probably required. We use the model results to consider a range of scenarios for rootless cone formation in Athabasca Valles. Pressure buildup by melting and boiling ice under a desiccated lag is possible in some locations, consistent with the expected distribution of ice implanted from atmospheric water vapor. However, it is uncertain whether such ice has existed in the vicinity of Athabasca Valles in recent history. Plausible alternative sources include surface snow or an aqueous flood shortly before the emplacement of the lava flow.
Compositional Evolution of Saturn's Rings Due to Meteoroid Bombardment
NASA Technical Reports Server (NTRS)
Cuzzi, J.; Estrada, P.; Young, Richard E. (Technical Monitor)
1997-01-01
In this paper we address the question of compositional evolution in planetary ring systems subsequent to meteoroid bombardment. The huge surface area to mass ratio of planetary rings ensures that this is an important process, even with current uncertainties on the meteoroid flux. We develop a new model which includes both direct deposition of extrinsic meteoritic "pollutants" and ballistic transport of the increasingly polluted ring material as impact ejecta. Our study includes detailed radiative transfer modeling of ring particle spectral reflectivities based on refractive indices of realistic constituents. Voyager data have shown that the lower optical depth regions in Saturn's rings (the C ring and Cassini Division) have darker and less red particles than the optically thicker A and B rings. These coupled structural-compositional groupings have never been explained; we present and explore the hypothesis that global scale color and compositional differences in the main rings of Saturn arise naturally from extrinsic meteoroid bombardment of a ring system which was initially composed primarily, but not entirely, of water ice. We find that the regional color and albedo differences can be understood if all ring material was initially identical (primarily water ice, based on other data, but colored by tiny amounts of intrinsic reddish, plausibly organic, absorber) and then evolved entirely by addition and mixing of extrinsic, nearly neutrally colored, plausibly carbonaceous material. We further demonstrate that the detailed radial profile of color across the abrupt B ring - C ring boundary can constrain key unknown parameters in the model. Using alternate sets of parameter values, we estimate the duration of the exposure of this part of the rings, at least, to the extrinsic meteoroid flux to be on the order of 10(exp 8) years. This conclusion is easily extended by inference to the Cassini Division and its surroundings as well. This geologically young "age" is compatible with timescales estimated elsewhere based on the evolution of ring structure due to ballistic transport, and also with other "short timescales" estimated on the grounds of gravitational torques. However, uncertainty in the flux of interplanetary debris and in the ejecta yield may preclude ruling out a ring age as old as the solar system at this time.
Jiang, Ping; Chiba, Ryosuke; Takakusaki, Kaoru; Ota, Jun
2016-01-01
The development of a physiologically plausible computational model of a neural controller that can realize a human-like biped stance is important for a large number of potential applications, such as assisting device development and designing robotic control systems. In this paper, we develop a computational model of a neural controller that can maintain a musculoskeletal model in a standing position, while incorporating a 120-ms neurological time delay. Unlike previous studies that have used an inverted pendulum model, a musculoskeletal model with seven joints and 70 muscular-tendon actuators is adopted to represent the human anatomy. Our proposed neural controller is composed of both feed-forward and feedback controls. The feed-forward control corresponds to the constant activation input necessary for the musculoskeletal model to maintain a standing posture. This compensates for gravity and regulates stiffness. The developed neural controller model can replicate two salient features of the human biped stance: (1) physiologically plausible muscle activations for quiet standing; and (2) selection of a low active stiffness for low energy consumption. PMID:27655271
Multilevel models for estimating incremental net benefits in multinational studies.
Grieve, Richard; Nixon, Richard; Thompson, Simon G; Cairns, John
2007-08-01
Multilevel models (MLMs) have been recommended for estimating incremental net benefits (INBs) in multicentre cost-effectiveness analysis (CEA). However, these models have assumed that the INBs are exchangeable and that there is a common variance across all centres. This paper examines the plausibility of these assumptions by comparing various MLMs for estimating the mean INB in a multinational CEA. The results showed that the MLMs that assumed the INBs were exchangeable and had a common variance led to incorrect inferences. The MLMs that included covariates to allow for systematic differences across the centres, and estimated different variances in each centre, made more plausible assumptions, fitted the data better and led to more appropriate inferences. We conclude that the validity of assumptions underlying MLMs used in CEA need to be critically evaluated before reliable conclusions can be drawn. Copyright 2006 John Wiley & Sons, Ltd.
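As a minimal illustration of a patient-level net-benefit regression with centre effects (a far simpler specification than the centre-specific-variance models the paper favours), here is a hedged Python sketch using statsmodels; the willingness-to-pay value and all simulated data are invented.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Simulated multicentre cost-effectiveness data (all numbers invented).
lam = 20000.0                                   # willingness to pay per QALY
n_centres, n_per = 10, 60
rows = []
for c in range(n_centres):
    centre_shift = rng.normal(0, 1500)          # between-centre variation in net benefit
    for _ in range(n_per):
        treat = rng.integers(0, 2)
        qaly = rng.normal(0.70 + 0.02 * treat, 0.10)
        cost = rng.normal(4000 + 300 * treat, 800)
        rows.append({"centre": c, "treat": treat,
                     "nb": lam * qaly - cost + centre_shift})
df = pd.DataFrame(rows)

# Random-intercept MLM: the 'treat' coefficient estimates the mean INB.
fit = smf.mixedlm("nb ~ treat", df, groups=df["centre"]).fit()
print(fit.summary())
```

Relaxing the common-variance and exchangeability assumptions, as the paper recommends, means adding centre-level covariates and allowing the residual variance to differ by centre rather than relying on this random-intercept form.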
NASA Astrophysics Data System (ADS)
Jansen, Peter A.; Watter, Scott
2012-03-01
Connectionist language modelling typically has difficulty with syntactic systematicity, or the ability to generalise language learning to untrained sentences. This work develops an unsupervised connectionist model of infant grammar learning. Following the semantic bootstrapping hypothesis, the network distils word category using a developmentally plausible infant-scale database of grounded sensorimotor conceptual representations, as well as a biologically plausible semantic co-occurrence activation function. The network then uses this knowledge to acquire an early benchmark clausal grammar using correlational learning, and further acquires separate conceptual and grammatical category representations. The network displays strongly systematic behaviour indicative of the general acquisition of the combinatorial systematicity present in the grounded infant-scale language stream, outperforms previous contemporary models that contain primarily noun and verb word categories, and successfully generalises broadly to novel untrained sensorimotor grounded sentences composed of unfamiliar nouns and verbs. Limitations as well as implications for later grammar learning are discussed.
Moriarty, John; McVicar, Duncan; Higgins, Kathryn
2016-08-01
Peer effects in adolescent cannabis use are difficult to estimate, due in part to the lack of appropriate data on behaviour and social ties. This paper exploits survey data that have many desirable properties and have not previously been used for this purpose. The data set, collected from teenagers in three annual waves from 2002 to 2004, contains longitudinal information about friendship networks within schools (N = 5020). We exploit these data on network structure to estimate peer effects on adolescents from their nominated friends within school using two alternative approaches to identification. First, we present a cross-sectional instrumental variable (IV) estimate of peer effects that exploits network structure at the second degree, i.e. using information on friends of friends who are not themselves ego's friends to instrument for the cannabis use of friends. Second, we present an individual fixed effects estimate of peer effects using the full longitudinal structure of the data. Both innovations allow a greater degree of control for correlated effects than is commonly the case in the substance-use peer effects literature, improving our chances of obtaining estimates of peer effects that can be plausibly interpreted as causal. Both estimates suggest positive peer effects of non-trivial magnitude, although the IV estimate is imprecise. Furthermore, when we specify identical models with behaviour and characteristics of randomly selected school peers in place of friends', we find effectively zero effect from these 'placebo' peers, lending credence to our main estimates. We conclude that cross-sectional data can be used to estimate plausible positive peer effects on cannabis use where network structure information is available and appropriately exploited. Copyright © 2016 Elsevier Ltd. All rights reserved.
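A hedged Python sketch of the first identification strategy, friends-of-friends as an instrument, using a manual two-stage least squares on simulated data; the real analysis uses observed school networks and fuller covariate adjustment, and the standard errors from a manual second stage are not corrected.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)

# Toy two-stage least squares: instrument friends' cannabis use with the
# behaviour of friends-of-friends (who are not ego's own friends).
# All data are simulated for illustration.
n = 2000
fof_use = rng.binomial(1, 0.3, n)                   # friends-of-friends' use (instrument)
confound = rng.normal(0, 1, n)                      # unobserved correlated effect
friend_use = 0.25 * fof_use + 0.3 * confound + rng.normal(0, 1, n)   # endogenous regressor
ego_use = 0.4 * friend_use + 0.5 * confound + rng.normal(0, 1, n)

# Stage 1: predict friends' use from the instrument.
stage1 = sm.OLS(friend_use, sm.add_constant(fof_use)).fit()
friend_hat = stage1.fittedvalues

# Stage 2: regress ego's use on the predicted (exogenous part of) friends' use.
stage2 = sm.OLS(ego_use, sm.add_constant(friend_hat)).fit()
naive = sm.OLS(ego_use, sm.add_constant(friend_use)).fit()
print(f"naive OLS peer effect: {naive.params[1]:.2f}  (biased upward by the confounder)")
print(f"2SLS peer effect:      {stage2.params[1]:.2f}  (true value 0.4)")
```

The contrast between the naive and instrumented estimates illustrates why correlated effects must be controlled before peer effects can be plausibly interpreted as causal.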
Atmospheric hydrogen peroxide and Eoarchean iron formations.
Pecoits, E; Smith, M L; Catling, D C; Philippot, P; Kappler, A; Konhauser, K O
2015-01-01
It is widely accepted that photosynthetic bacteria played a crucial role in Fe(II) oxidation and the precipitation of iron formations (IF) during the Late Archean-Early Paleoproterozoic (2.7-2.4 Ga). It is less clear whether microbes similarly caused the deposition of the oldest IF at ca. 3.8 Ga, which would imply photosynthesis having already evolved by that time. Abiological alternatives, such as the direct oxidation of dissolved Fe(II) by ultraviolet radiation, may have occurred, but their importance has been discounted in environments where the injection of high concentrations of dissolved iron directly into the photic zone led to chemical precipitation reactions that overwhelmed photooxidation rates. However, an outstanding possibility remains with respect to photochemical reactions occurring in the atmosphere that might generate hydrogen peroxide (H2O2), a recognized strong oxidant for ferrous iron. Here, we modeled the amount of H2O2 that could be produced in an Eoarchean atmosphere using updated solar fluxes and plausible CO2, O2, and CH4 mixing ratios. Irrespective of the atmospheric simulations, the upper limit of H2O2 rainout was calculated to be <10^6 molecules cm^-2 s^-1. Using conservative Fe(III) sedimentation rates predicted for submarine hydrothermal settings in the Eoarchean, we demonstrate that the flux of H2O2 was insufficient by several orders of magnitude to account for IF deposition (requiring ~10^11 H2O2 molecules cm^-2 s^-1). This finding further constrains the plausible Fe(II) oxidation mechanisms in Eoarchean seawater, leaving, in our opinion, anoxygenic phototrophic Fe(II)-oxidizing micro-organisms the most likely mechanism responsible for Earth's oldest IF. © 2014 John Wiley & Sons Ltd.
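The abstract's central order-of-magnitude argument can be written as a one-line ratio of the required to the available H2O2 flux:

```latex
\frac{\Phi_{\mathrm{H_2O_2}}^{\text{required}}}{\Phi_{\mathrm{H_2O_2}}^{\text{rainout}}}
  \approx \frac{10^{11}\ \mathrm{molecules\ cm^{-2}\ s^{-1}}}{10^{6}\ \mathrm{molecules\ cm^{-2}\ s^{-1}}}
  = 10^{5},
```

i.e. a shortfall of roughly five orders of magnitude, which is the basis for discounting atmospheric H2O2 as the dominant Fe(II) oxidant.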
Functional response models to estimate feeding rates of wading birds
Collazo, J.A.; Gilliam, J.F.; Miranda-Castro, L.
2010-01-01
Forager (predator) abundance may mediate feeding rates in wading birds. Yet, when modeled, feeding rates are typically derived from the purely prey-dependent Holling Type II (HoII) functional response model. Estimates of feeding rates are necessary to evaluate wading bird foraging strategies and their role in food webs; thus, models that incorporate predator dependence warrant consideration. Here, data collected in a mangrove swamp in Puerto Rico in 1994 were reanalyzed, reporting feeding rates for mixed-species flocks after comparing fits of the HoII model, as used in the original work, to the Beddington-DeAngelis (BD) and Crowley-Martin (CM) predator-dependent models. Model CM received the most support (AICc wi = 0.44), but models BD and HoII were plausible alternatives (ΔAICc ≤ 2). Results suggested that feeding rates were constrained by predator abundance. Reductions in rates were attributed to interference, which was consistent with the independently observed increase in aggression as flock size increased (P < 0.05). Substantial discrepancies between the CM and HoII models were possible depending on flock sizes used to model feeding rates. However, inferences derived from the HoII model, as used in the original work, were sound. While Holling's Type II and other purely prey-dependent models have fostered advances in wading bird foraging ecology, evaluating models that incorporate predator dependence could lead to a more adequate description of data and processes of interest. The mechanistic bases used to derive the models used here lead to biologically interpretable results and advance understanding of wading bird foraging ecology.
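For reference, the standard forms of the three functional response models compared above can be written and evaluated as in the Python sketch below; the attack rate, handling time, interference coefficient, and densities are invented values, not the fitted parameters from the Puerto Rico data.

```python
import numpy as np

# Standard forms of the three functional response models named in the abstract;
# a = attack rate, h = handling time, c = interference coefficient,
# N = prey density, P = predator (flock) density. Parameter values invented.
a, h, c = 0.8, 0.05, 0.4

def holling_II(N, P):
    return a * N / (1 + a * h * N)                      # prey-dependent only

def beddington_deangelis(N, P):
    return a * N / (1 + a * h * N + c * P)              # additive interference term

def crowley_martin(N, P):
    return a * N / ((1 + a * h * N) * (1 + c * P))      # multiplicative interference

N = 50.0
for P in (1, 5, 20):                                    # flock sizes
    print(f"P={P:2d}  HoII={holling_II(N, P):6.2f}  "
          f"BD={beddington_deangelis(N, P):6.2f}  CM={crowley_martin(N, P):6.2f}")
```

The divergence between the predator-dependent predictions and the HoII prediction grows with flock size, which is the source of the "substantial discrepancies" noted in the abstract.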
Spertus, Jacob V; Normand, Sharon-Lise T
2018-04-23
High-dimensional data provide many potential confounders that may bolster the plausibility of the ignorability assumption in causal inference problems. Propensity score methods are powerful causal inference tools, which are popular in health care research and are particularly useful for high-dimensional data. Recent interest has surrounded a Bayesian treatment of propensity scores in order to flexibly model the treatment assignment mechanism and summarize posterior quantities while incorporating variance from the treatment model. We discuss methods for Bayesian propensity score analysis of binary treatments, focusing on modern methods for high-dimensional Bayesian regression and the propagation of uncertainty. We introduce a novel and simple estimator for the average treatment effect that capitalizes on conjugacy of the beta and binomial distributions. Through simulations, we show the utility of horseshoe priors and Bayesian additive regression trees paired with our new estimator, while demonstrating the importance of including variance from the treatment regression model. An application to cardiac stent data with almost 500 confounders and 9000 patients illustrates approaches and facilitates comparison with existing alternatives. As measured by a falsifiability endpoint, we improved confounder adjustment compared with past observational research of the same problem. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
2018-01-01
Qualitative risk assessment frameworks, such as the Productivity Susceptibility Analysis (PSA), have been developed to rapidly evaluate the risks of fishing to marine populations and prioritize management and research among species. Despite being applied to over 1,000 fish populations, and despite an ongoing debate about the most appropriate method to convert biological and fishery characteristics into an overall measure of risk, the assumptions and predictive capacity of these approaches have not been evaluated. Several interpretations of the PSA were mapped to a conventional age-structured fisheries dynamics model to evaluate the performance of the approach under a range of assumptions regarding exploitation rates and measures of biological risk. The results demonstrate that the underlying assumptions of these qualitative risk-based approaches are inappropriate, and the expected performance is poor for a wide range of conditions. The information required to score a fishery using a PSA-type approach is comparable to that required to populate an operating model and evaluate the population dynamics within a simulation framework. In addition to providing a more credible characterization of complex system dynamics, the operating model approach is transparent, reproducible and can evaluate alternative management strategies over a range of plausible hypotheses for the system. PMID:29856869
Angular momentum transfer in primordial discs and the rotation of the first stars
NASA Astrophysics Data System (ADS)
Hirano, Shingo; Bromm, Volker
2018-05-01
We investigate the rotation velocity of the first stars by modelling the angular momentum transfer in the primordial accretion disc. Assessing the impact of magnetic braking, we consider the transition in angular momentum transport mode at the Alfvén radius, from the dynamically dominated free-fall accretion to the magnetically dominated solid-body one. The accreting protostar at the centre of the primordial star-forming cloud rotates with close to breakup speed in the case without magnetic fields. Considering a physically motivated model for small-scale turbulent dynamo amplification, we find that stellar rotation speed quickly declines if a large fraction of the initial turbulent energy is converted to magnetic energy (≳ 0.14). Alternatively, if the dynamo process were inefficient, for amplification due to flux freezing, stars would become slow rotators if the pre-galactic magnetic field strength is above a critical value, ≃10^-8.2 G, evaluated at a scale of n_H = 1 cm^-3, which is significantly higher than plausible cosmological seed values (~10^-15 G). Because of the rapid decline of the stellar rotational speed over a narrow range in model parameters, the first stars encounter a bimodal fate: rapid rotation at almost the breakup level, or the near absence of any rotation.
NASA Astrophysics Data System (ADS)
Pohlman, Matthew Michael
The study of heat transfer and fluid flow in a vertical Bridgman device is motivated by current industrial difficulties in growing crystals with as few defects as possible. For example, Gallium Arsenide (GaAs) is of great interest to the semiconductor industry but remains an uneconomical alternative to silicon because of the manufacturing problems. This dissertation is a two-dimensional study of the fluid in an idealized Bridgman device. The model nonlinear PDEs are discretized using second order finite differencing. Newton's method solves the resulting nonlinear discrete equations. The large sparse linear systems involving the Jacobian are solved iteratively using the Generalized Minimum Residual method (GMRES). By adapting fast direct solvers for elliptic equations with simple boundary conditions, a good preconditioner is developed, which is essential for GMRES to converge quickly. Trends of the fluid flow and heat transfer for typical ranges of the physical parameters are determined. Also, the sizes of the terms in the mathematical model are found by numerical investigation, in order to determine which terms are in balance as the physical parameters vary. The results suggest the plausibility of simpler asymptotic solutions.
Balancing Selection in Species with Separate Sexes: Insights from Fisher’s Geometric Model
Connallon, Tim; Clark, Andrew G.
2014-01-01
How common is balancing selection, and what fraction of phenotypic variance is attributable to balanced polymorphisms? Despite decades of research, answers to these questions remain elusive. Moreover, there is no clear theoretical prediction about the frequency with which balancing selection is expected to arise within a population. Here, we use an extension of Fisher’s geometric model of adaptation to predict the probability of balancing selection in a population with separate sexes, wherein polymorphism is potentially maintained by two forms of balancing selection: (1) heterozygote advantage, where heterozygous individuals at a locus have higher fitness than homozygous individuals, and (2) sexually antagonistic selection (a.k.a. intralocus sexual conflict), where the fitness of each sex is maximized by different genotypes at a locus. We show that balancing selection is common under biologically plausible conditions and that sex differences in selection or sex-by-genotype effects of mutations can each increase opportunities for balancing selection. Although heterozygote advantage and sexual antagonism represent alternative mechanisms for maintaining polymorphism, they mutually exist along a balancing selection continuum that depends on population and sex-specific parameters of selection and mutation. Sexual antagonism is the dominant mode of balancing selection across most of this continuum. PMID:24812306
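For orientation, the classical single-locus condition for the first mechanism, heterozygote advantage, can be stated compactly (a standard population-genetics result, not the paper's geometric-model derivation); s and t are the selection coefficients against the two homozygotes:

```latex
w_{AA} = 1 - s,\quad w_{Aa} = 1,\quad w_{aa} = 1 - t \;\;(s, t > 0)
\;\;\Longrightarrow\;\;
\hat{p}_{A} = \frac{t}{s + t},
```

a stable internal equilibrium that exists whenever both homozygotes are at a fitness disadvantage relative to the heterozygote.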
The promise of complementarity: Using the methods of foresight for health workforce planning.
Rees, Gareth H; Crampton, Peter; Gauld, Robin; MacDonell, Stephen
2018-05-01
Health workforce planning aims to meet a health system's needs with a sustainable and fit-for-purpose workforce, although its efficacy is reduced in conditions of uncertainty. This PhD breakthrough article offers foresight as a means of addressing this uncertainty and models its complementarity in the context of the health workforce planning problem. The article summarises the findings of a two-case multi-phase mixed method study that incorporates actor analysis, scenario development and policy Delphi. This reveals a few dominant actors of considerable influence who are in conflict over a few critical workforce issues. Using these to augment normative scenarios, developed from existing clinically developed model of care visions, a number of exploratory alternative descriptions of future workforce situations are produced for each case. Their analysis reveals that these scenarios are a reasonable facsimile of plausible futures, though some are favoured over others. Policy directions to support these favoured aspects can also be identified. This novel approach offers workforce planners and policy makers some guidance on the use of complementary data, methods to overcome the limitations of conventional workforce forecasting and a framework for exploring the complexities and ambiguities of a health workforce's evolution.
Welch, Catherine A; Petersen, Irene; Bartlett, Jonathan W; White, Ian R; Marston, Louise; Morris, Richard W; Nazareth, Irwin; Walters, Kate; Carpenter, James
2014-01-01
Most implementations of multiple imputation (MI) of missing data are designed for simple rectangular data structures ignoring temporal ordering of data. Therefore, when applying MI to longitudinal data with intermittent patterns of missing data, some alternative strategies must be considered. One approach is to divide data into time blocks and implement MI independently at each block. An alternative approach is to include all time blocks in the same MI model. With increasing numbers of time blocks, this approach is likely to break down because of co-linearity and over-fitting. The new two-fold fully conditional specification (FCS) MI algorithm addresses these issues, by only conditioning on measurements, which are local in time. We describe and report the results of a novel simulation study to critically evaluate the two-fold FCS algorithm and its suitability for imputation of longitudinal electronic health records. After generating a full data set, approximately 70% of selected continuous and categorical variables were made missing completely at random in each of ten time blocks. Subsequently, we applied a simple time-to-event model. We compared efficiency of estimated coefficients from a complete records analysis, MI of data in the baseline time block and the two-fold FCS algorithm. The results show that the two-fold FCS algorithm maximises the use of data available, with the gain relative to baseline MI depending on the strength of correlations within and between variables. Using this approach also increases plausibility of the missing at random assumption by using repeated measures over time of variables whose baseline values may be missing. PMID:24782349
Using Dirichlet Priors to Improve Model Parameter Plausibility
ERIC Educational Resources Information Center
Rai, Dovan; Gong, Yue; Beck, Joseph E.
2009-01-01
Student modeling is a widely used approach to make inference about a student's attributes like knowledge, learning, etc. If we wish to use these models to analyze and better understand student learning there are two problems. First, a model's ability to predict student performance is at best weakly related to the accuracy of any one of its…
Neural Underpinnings of Decision Strategy Selection: A Review and a Theoretical Model
Wichary, Szymon; Smolen, Tomasz
2016-01-01
In multi-attribute choice, decision makers use decision strategies to arrive at the final choice. What are the neural mechanisms underlying decision strategy selection? The first goal of this paper is to provide a literature review on the neural underpinnings and cognitive models of decision strategy selection and thus set the stage for a neurocognitive model of this process. The second goal is to outline such a unifying, mechanistic model that can explain the impact of noncognitive factors (e.g., affect, stress) on strategy selection. To this end, we review the evidence for the factors influencing strategy selection, the neural basis of strategy use and the cognitive models of this process. We also present the Bottom-Up Model of Strategy Selection (BUMSS). The model assumes that the use of the rational Weighted Additive strategy and the boundedly rational heuristic Take The Best can be explained by one unifying, neurophysiologically plausible mechanism, based on the interaction of the frontoparietal network, orbitofrontal cortex, anterior cingulate cortex and the brainstem nucleus locus coeruleus. According to BUMSS, there are three processes that form the bottom-up mechanism of decision strategy selection and lead to the final choice: (1) cue weight computation, (2) gain modulation, and (3) weighted additive evaluation of alternatives. We discuss how these processes might be implemented in the brain, and how this knowledge allows us to formulate novel predictions linking strategy use and neural signals. PMID:27877103
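To make the two strategies named above concrete, here is a hedged Python sketch of textbook Weighted Additive (WADD) and Take The Best (TTB) choices between two options; the cue values and validities are invented, and validities are used directly as WADD weights for simplicity.

```python
import numpy as np

# Textbook formulations of the two strategies covered by the model:
# WADD integrates all cues weighted by validity; TTB decides on the single
# most valid cue that discriminates. Cue values and validities are invented.
cue_validities = np.array([0.90, 0.75, 0.65, 0.55])   # ordered, most valid first
option_a = np.array([1, 0, 1, 0])                     # binary cue values
option_b = np.array([0, 1, 1, 1])

def wadd_choice(a, b, weights):
    score_a, score_b = np.dot(weights, a), np.dot(weights, b)
    return "A" if score_a > score_b else "B"

def ttb_choice(a, b, validities):
    for i in np.argsort(-validities):                 # inspect cues in validity order
        if a[i] != b[i]:                              # first discriminating cue decides
            return "A" if a[i] > b[i] else "B"
    return "guess"

print("WADD picks:", wadd_choice(option_a, option_b, cue_validities))
print("TTB  picks:", ttb_choice(option_a, option_b, cue_validities))
```

With these invented cues the two strategies disagree, which is exactly the kind of case where a mechanism for strategy selection, such as the gain modulation proposed in BUMSS, determines the final choice.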
Compact continuum brain model for human electroencephalogram
NASA Astrophysics Data System (ADS)
Kim, J. W.; Shin, H.-B.; Robinson, P. A.
2007-12-01
A low-dimensional, compact brain model has recently been developed based on physiologically based mean-field continuum formulation of electric activity of the brain. The essential feature of the new compact model is a second order time-delayed differential equation that has physiologically plausible terms, such as rapid corticocortical feedback and delayed feedback via extracortical pathways. Due to its compact form, the model facilitates insight into complex brain dynamics via standard linear and nonlinear techniques. The model successfully reproduces many features of previous models and experiments. For example, experimentally observed typical rhythms of electroencephalogram (EEG) signals are reproduced in a physiologically plausible parameter region. In the nonlinear regime, onsets of seizures, which often develop into limit cycles, are illustrated by modulating model parameters. It is also shown that a hysteresis can occur when the system has multiple attractors. As a further illustration of this approach, power spectra of the model are fitted to those of sleep EEGs of two subjects (one with apnea, the other with narcolepsy). The model parameters obtained from the fittings show good matches with previous literature. Our results suggest that the compact model can provide a theoretical basis for analyzing complex EEG signals.
Expanding the Role of Connectionism in SLA Theory
ERIC Educational Resources Information Center
Language Learning, 2013
2013-01-01
In this article, I explore how connectionism might expand its role in second language acquisition (SLA) theory by showing how some symbolic models of bilingual and second language lexical memory can be reduced to a biologically realistic (i.e., neurally plausible) connectionist model. This integration or hybridization of the two models follows the…
ERIC Educational Resources Information Center
Laszlo, Sarah; Plaut, David C.
2012-01-01
The Parallel Distributed Processing (PDP) framework has significant potential for producing models of cognitive tasks that approximate how the brain performs the same tasks. To date, however, there has been relatively little contact between PDP modeling and data from cognitive neuroscience. In an attempt to advance the relationship between…
Conservation status of polar bears (Ursus maritimus) in relation to projected sea-ice declines
NASA Astrophysics Data System (ADS)
Laidre, K. L.; Regehr, E. V.; Akcakaya, H. R.; Amstrup, S. C.; Atwood, T.; Lunn, N.; Obbard, M.; Stern, H. L., III; Thiemann, G.; Wiig, O.
2016-12-01
Loss of Arctic sea ice due to climate change is the most serious threat to polar bears (Ursus maritimus) throughout their circumpolar range. We performed a data-based sensitivity analysis with respect to this threat by evaluating the potential response of the global polar bear population to projected sea-ice conditions. We conducted 1) an assessment of generation length for polar bears; 2) development of a standardized sea-ice metric representing important habitat characteristics for the species; and 3) population projections over three generations, using computer simulation and statistical models representing alternative relationships between sea ice and polar bear abundance. Using three separate approaches, the median percent change in mean global population size for polar bears between 2015 and 2050 ranged from -4% (95% CI = -62%, 50%) to -43% (95% CI = -76%, -20%). Results highlight the potential for large reductions in the global population if sea-ice loss continues. They also highlight the large amount of uncertainty in statistical projections of polar bear abundance and the sensitivity of projections to plausible alternative assumptions. The median probability of a reduction in the mean global population size of polar bears greater than 30% over three generations was approximately 0.71 (range 0.20-0.95). The median probability of a reduction greater than 50% was approximately 0.07 (range 0-0.35), and the probability of a reduction greater than 80% was negligible.
Ward-Smith, A J
1995-06-01
Modern methods of biomechanics are applied to examine some of the outstanding feats of jumping that have been reported in the literature from classical times. It is concluded that these feats could not have been achieved using the current long-jump prescription, with the take-off and landing areas in the same horizontal plane. A possible explanation is that the landing area was some 5.5 m or more below the take-off area. Alternatively, and more plausibly, the jump could have been similar to the modern triple jump.
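The quantitative basis of the "landing area below take-off" explanation is the standard projectile range formula with a drop h between take-off and landing level (v is take-off speed, θ the take-off angle, g the gravitational acceleration):

```latex
R(h) \;=\; \frac{v\cos\theta}{g}\left(v\sin\theta + \sqrt{v^{2}\sin^{2}\theta + 2gh}\right),
\qquad R(0) \;=\; \frac{v^{2}\sin 2\theta}{g},
```

so a drop of roughly 5.5 m or more lengthens the measured jump substantially relative to a level landing, which is how the reported classical distances become physically attainable.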
SUPERNOVAE, NEUTRON STARS, AND TWO KINDS OF NEUTRINO
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chiu, H.Y.
1962-08-15
The role of neutrinos in the core of a star that has undergone a supernova explosion is discussed. The existence of neutron stars, the Schwarzschild singularity in general relativity, and the meaning of conservation of baryons in the neighborhood of a Schwarzschild singularity are also considered. The problem of detection of neutron stars is discussed. It is concluded that neutron stars are the most plausible alternative for the remnant of the core of a supernova. The neutrino emission processes are divided into two groups: the neutrino associated with the meson (mu) and the production of electron neutrinos. (C.E.S.)
NASA Astrophysics Data System (ADS)
Karmalkar, A.; Sexton, D.; Murphy, J.
2017-12-01
We present exploratory work towards developing an efficient strategy to select variants of a state-of-the-art but expensive climate model suitable for climate projection studies. The strategy combines information from a set of idealized perturbed parameter ensemble (PPE) and CMIP5 multi-model ensemble (MME) experiments, and uses two criteria as the basis for selecting model variants for a PPE suitable for future projections: a) acceptable model performance at two different timescales, and b) maintaining diversity in the model response to climate change. We demonstrate that there is a strong relationship between model errors at weather and climate timescales for a variety of key variables. This relationship is used to filter out parts of parameter space that do not give credible simulations of historical climate, while minimizing the impact on ranges in forcings and feedbacks that drive model responses to climate change. We use statistical emulation to explore the parameter space thoroughly, and demonstrate that about 90% can be filtered out without affecting diversity in global-scale climate change responses. This leads to identification of plausible parts of parameter space from which model variants can be selected for projection studies.
Tainio, Marko; Tuomisto, Jouni T; Hänninen, Otto; Ruuskanen, Juhani; Jantunen, Matti J; Pekkanen, Juha
2007-01-01
Background: The estimation of health impacts often involves uncertain input variables and assumptions which have to be incorporated into the model structure. These uncertainties may have significant effects on the results obtained with the model and, thus, on decision making. Fine particles (PM2.5) are believed to cause major health impacts, and, consequently, uncertainties in their health impact assessment have clear relevance to policy-making. We studied the effects of various uncertain input variables by building a life-table model for fine particles. Methods: Life-expectancy of the Helsinki metropolitan area population and the change in life-expectancy due to fine particle exposures were predicted using a life-table model. A number of parameter and model uncertainties were estimated. Sensitivity analysis for input variables was performed by calculating rank-order correlations between input and output variables. The studied model uncertainties were (i) plausibility of mortality outcomes and (ii) lag, and the studied parameter uncertainties were (iii) exposure-response coefficients for different mortality outcomes and (iv) exposure estimates for different age groups. The monetary value of the years-of-life-lost and the relative importance of the uncertainties related to monetary valuation were predicted to compare the relative importance of the monetary valuation on the health effect uncertainties. Results: The magnitude of the health effects costs depended mostly on discount rate, exposure-response coefficient, and plausibility of the cardiopulmonary mortality. Other mortality outcomes (lung cancer, other non-accidental and infant mortality) and lag had only minor impact on the output. The results highlight the importance of the uncertainties associated with cardiopulmonary mortality in the fine particle impact assessment when compared with other uncertainties. Conclusion: When estimating life-expectancy, the estimates used for cardiopulmonary exposure-response coefficient, discount rate, and plausibility require careful assessment, while complicated lag estimates can be omitted without this having any major effect on the results. PMID:17714598
Tainio, Marko; Tuomisto, Jouni T; Hänninen, Otto; Ruuskanen, Juhani; Jantunen, Matti J; Pekkanen, Juha
2007-08-23
The estimation of health impacts often involves uncertain input variables and assumptions which have to be incorporated into the model structure. These uncertainties may have significant effects on the results obtained with the model and, thus, on decision making. Fine particles (PM2.5) are believed to cause major health impacts, and, consequently, uncertainties in their health impact assessment have clear relevance to policy-making. We studied the effects of various uncertain input variables by building a life-table model for fine particles. Life-expectancy of the Helsinki metropolitan area population and the change in life-expectancy due to fine particle exposures were predicted using a life-table model. A number of parameter and model uncertainties were estimated. Sensitivity analysis for input variables was performed by calculating rank-order correlations between input and output variables. The studied model uncertainties were (i) plausibility of mortality outcomes and (ii) lag, and the studied parameter uncertainties were (iii) exposure-response coefficients for different mortality outcomes and (iv) exposure estimates for different age groups. The monetary value of the years-of-life-lost and the relative importance of the uncertainties related to monetary valuation were predicted to compare the relative importance of the monetary valuation on the health effect uncertainties. The magnitude of the health effects costs depended mostly on discount rate, exposure-response coefficient, and plausibility of the cardiopulmonary mortality. Other mortality outcomes (lung cancer, other non-accidental and infant mortality) and lag had only minor impact on the output. The results highlight the importance of the uncertainties associated with cardiopulmonary mortality in the fine particle impact assessment when compared with other uncertainties. When estimating life-expectancy, the estimates used for cardiopulmonary exposure-response coefficient, discount rate, and plausibility require careful assessment, while complicated lag estimates can be omitted without this having any major effect on the results.
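A minimal Python sketch of the life-table logic both abstracts describe: scale age-specific mortality hazards by a log-linear relative risk from an exposure-response model and compare period life expectancies. The baseline rates, exposure-response coefficient, and exposure change are invented, and lag and discounting are omitted.

```python
import numpy as np

# Minimal period life-table sketch: scale baseline mortality hazards by a
# relative risk and compare life expectancies. All numbers are illustrative.
ages = np.arange(0, 101)
base_hazard = 0.0001 * np.exp(0.085 * ages)          # toy Gompertz-like mortality rates

def life_expectancy(hazard):
    surv = np.concatenate([[1.0], np.cumprod(np.exp(-hazard))])
    return np.trapz(surv)                            # years, with 1-year age steps

beta = 0.006                                         # per microgram/m^3 (illustrative)
delta_pm = 10.0                                      # change in PM2.5 exposure
rr = np.exp(beta * delta_pm)                         # relative risk applied to all ages

le_base = life_expectancy(base_hazard)
le_exposed = life_expectancy(base_hazard * rr)
print(f"baseline LE {le_base:.2f} y, exposed LE {le_exposed:.2f} y, "
      f"loss {le_base - le_exposed:.2f} y")
```

In the full assessment this calculation is repeated while sampling the uncertain inputs (coefficient, exposure, plausibility weights), which is how the rank-order sensitivity results quoted above are obtained.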
Scenario planning: a tool for academic health sciences libraries.
Ludwig, Logan; Giesecke, Joan; Walton, Linda
2010-03-01
Review the International Campaign to Revitalise Academic Medicine (ICRAM) Future Scenarios as a potential starting point for developing scenarios to envisage plausible futures for health sciences libraries. At an educational workshop, 15 groups, each composed of four to seven Association of Academic Health Sciences Libraries (AAHSL) directors and AAHSL/NLM Fellows, created plausible stories using the five ICRAM scenarios. Participants created 15 plausible stories regarding the roles played by health sciences librarians, how libraries are used, and their physical properties in response to changes in technology, scholarly communication, learning environments, and health care economics. Libraries are affected by many forces, including economic pressures, curriculum, and changes in technology, health care delivery, and scholarly communication business models. The future is likely to contain some, although not all, of the ICRAM scenario elements, and each element that comes to pass will affect health sciences libraries. The AAHSL groups identified common features across their scenarios to draw lessons for the present. The hope is that other groups will find the scenarios useful in thinking about academic health sciences library futures.
The Central Role of Recognition in Auditory Perception: A Neurobiological Model
ERIC Educational Resources Information Center
McLachlan, Neil; Wilson, Sarah
2010-01-01
The model presents neurobiologically plausible accounts of sound recognition (including absolute pitch), neural plasticity involved in pitch, loudness and location information integration, and streaming and auditory recall. It is proposed that a cortical mechanism for sound identification modulates the spectrotemporal response fields of inferior…
The challenge of risk characterization: current practice and future directions.
Gray, G M; Cohen, J T; Graham, J D
1993-01-01
Risk characterization is perhaps the most important part of risk assessment. As currently practiced, risk characterizations do not convey the degree of uncertainty in a risk estimate to risk managers, Congress, the press, and the public. Here, we use a framework put forth by an ad hoc study group of industry and government scientists and academics to critique the risk characterizations contained in two risk assessments of gasoline vapor. After discussing the strengths and weaknesses of each assessment's risk characterization, we detail an alternative approach that conveys estimates in the form of a probability distribution. The distributional approach can make use of all relevant scientific data and knowledge, including alternative data sets and all plausible mechanistic theories of carcinogenesis. As a result, this approach facilitates better public health decisions than current risk characterization procedures. We discuss methodological issues, as well as strengths and weaknesses of the distributional approach. PMID:8020444
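A minimal sketch of the distributional approach described above: the risk estimate is expressed as a Monte Carlo distribution over alternative data sets and mechanistic theories rather than as a single point value. All potencies, exposures, and weights below are invented placeholders, not values from the gasoline-vapor assessments:

```python
# Hedged sketch: risk characterization as a probability distribution obtained by
# sampling over alternative data sets and mechanistic assumptions (toy values).
import numpy as np

rng = np.random.default_rng(1)
n = 10000

# Two alternative dose-response data sets (hypothetical unit-risk values).
potency = np.where(rng.random(n) < 0.5,
                   rng.lognormal(mean=np.log(1e-6), sigma=0.5, size=n),
                   rng.lognormal(mean=np.log(3e-6), sigma=0.5, size=n))

# Alternative mechanistic theories: some simulations assume a threshold
# mechanism under which low-dose risk is effectively zero (assumed weight 0.3).
threshold_mechanism = rng.random(n) < 0.3
exposure = rng.lognormal(mean=np.log(0.2), sigma=0.4, size=n)  # toy exposure

risk = potency * exposure
risk[threshold_mechanism] = 0.0

print("median risk:", np.median(risk))
print("95th percentile:", np.percentile(risk, 95))
print("probability risk exceeds 1e-6:", np.mean(risk > 1e-6))
```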
Pedestrian evacuation modeling to reduce vehicle use for distant tsunami evacuations in Hawaiʻi
Wood, Nathan J.; Jones, Jamie; Peters, Jeff; Richards, Kevin
2018-01-01
Tsunami waves that arrive hours after generation elsewhere pose logistical challenges to emergency managers due to the perceived abundance of time and inclination of evacuees to use vehicles. We use coastal communities on the island of Oʻahu (Hawaiʻi, USA) to demonstrate regional evacuation modeling that can identify where successful pedestrian-based evacuations are plausible and where vehicle use could be discouraged. The island of Oʻahu has two tsunami-evacuation zones (standard and extreme), which provides the opportunity to examine if recommended travel modes vary based on zone. Geospatial path distance models are applied to estimate population exposure as a function of pedestrian travel time and speed out of evacuation zones. The use of the extreme zone triples the number of residents, employees, and facilities serving at-risk populations that would be encouraged to evacuate and reduces the percentage of residents that could evacuate in less than 15 min at a plausible speed (from 98% to 76%, with similar percentages for employees). Areas with lengthy evacuations are concentrated in the North Shore region for the standard zone but found all around the Oʻahu coastline for the extreme zone. The use of the extreme zone results in a 26% increase in the number of hotel visitors that would be encouraged to evacuate, and a 76% increase in the number of them that may require more than 15 min. Modeling can identify where pedestrian evacuations are plausible; however, there are logistical and behavioral issues that warrant attention before localized evacuation procedures may be realistic.
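The core exposure calculation, population reachable within a given pedestrian travel time, can be sketched as follows. This is a simplified illustration that assumes travel time = path distance / speed; the study itself used geospatial path-distance models, and the distances, populations, and walking speed below are made up:

```python
# Hedged sketch: share of population able to evacuate on foot within 15 minutes,
# given path distances to the evacuation-zone edge (all values hypothetical).
import numpy as np

rng = np.random.default_rng(2)
path_distance_m = rng.uniform(100, 4000, size=1000)   # distance to zone edge
population      = rng.integers(1, 50, size=1000)      # people per location

speed_m_per_min = 67.0          # assumed "plausible" walking speed (~4 km/h)
travel_min = path_distance_m / speed_m_per_min

within_15 = travel_min <= 15.0
share = population[within_15].sum() / population.sum()
print(f"share of residents able to evacuate on foot in <=15 min: {share:.0%}")
```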
On heart rate variability and autonomic activity in homeostasis and in systemic inflammation.
Scheff, Jeremy D; Griffel, Benjamin; Corbett, Siobhan A; Calvano, Steve E; Androulakis, Ioannis P
2014-06-01
Analysis of heart rate variability (HRV) is a promising diagnostic technique due to the noninvasive nature of the measurements involved and established correlations with disease severity, particularly in inflammation-linked disorders. However, the complexities underlying the interpretation of HRV complicate understanding the mechanisms that cause variability. Despite this, such interpretations are often found in the literature. In this paper we explored mathematical modeling of the relationship between the autonomic nervous system and the heart, incorporating basic mechanisms such as perturbing mean values of oscillating autonomic activities and saturating signal transduction pathways to explore their impacts on HRV. We focused our analysis on human endotoxemia, a well-established, controlled experimental model of systemic inflammation that provokes changes in HRV representative of acute stress. By contrasting modeling results with published experimental data and analyses, we found that even a simple model linking the autonomic nervous system and the heart confounds the interpretation of HRV changes in human endotoxemia. Multiple plausible alternative hypotheses, encoded in a model-based framework, equally reconciled experimental results. In total, our work illustrates how conventional assumptions about the relationships between autonomic activity and frequency-domain HRV metrics break down, even in a simple model. This underscores the need for further experimental work towards unraveling the underlying mechanisms of autonomic dysfunction and HRV changes in systemic inflammation. Understanding the extent of information encoded in HRV signals is critical in appropriately analyzing prior and future studies. Copyright © 2014 Elsevier Inc. All rights reserved.
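For readers unfamiliar with the frequency-domain HRV metrics mentioned above, a minimal sketch of how LF and HF power are conventionally computed from an RR-interval series (uniform resampling followed by Welch's method) is given below. The synthetic RR series and the band limits follow common convention and are not taken from the paper:

```python
# Hedged sketch: LF and HF band power from an RR-interval series via Welch's
# method. The RR series is synthetic and purely illustrative.
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(3)
rr = 0.8 + 0.05 * rng.standard_normal(600)      # RR intervals in seconds (toy)
t = np.cumsum(rr)                               # beat times

fs = 4.0                                        # resampling frequency (Hz)
t_uniform = np.arange(t[0], t[-1], 1.0 / fs)
rr_uniform = np.interp(t_uniform, t, rr)

f, pxx = welch(rr_uniform - rr_uniform.mean(), fs=fs, nperseg=256)

def band_power(f, pxx, lo, hi):
    mask = (f >= lo) & (f < hi)
    return np.trapz(pxx[mask], f[mask])

lf = band_power(f, pxx, 0.04, 0.15)   # low-frequency band
hf = band_power(f, pxx, 0.15, 0.40)   # high-frequency band
print(f"LF = {lf:.2e} s^2, HF = {hf:.2e} s^2, LF/HF = {lf / hf:.2f}")
```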
Cross-lagged relationships between workplace demands, control, support, and sleep problems.
Hanson, Linda L Magnusson; Åkerstedt, Torbjörn; Näswall, Katharina; Leineweber, Constanze; Theorell, Töres; Westerlund, Hugo
2011-10-01
Sleep problems are experienced by a large part of the population. Work characteristics are potential determinants, but limited longitudinal evidence is available to date, and reverse causation is a plausible alternative. This study examines longitudinal, bidirectional relationships between work characteristics and sleep problems. Prospective cohort/two-wave panel. Sweden. 3065 working men and women approximately representative of the Swedish workforce who responded to the 2006 and 2008 waves of the Swedish Longitudinal Occupational Survey of Health (SLOSH). N/A. Bidirectional relationships between, on the one hand, workplace demands, decision authority, and support, and, on the other hand, sleep disturbances (reflecting lack of sleep continuity) and awakening problems (reflecting feelings of being insufficiently restored), were investigated by structural equation modeling. All factors were modeled as latent variables and adjusted for gender, age, marital status, education, alcohol consumption, and job change. Concerning sleep disturbances, the best fitting models were the "forward" causal model for demands and the "reverse" causal model for support. Regarding awakening problems, reciprocal models fitted the data best. Cross-lagged analyses indicate a weak relationship between demands at Time 1 and sleep disturbances at Time 2, a "reverse" relationship from support at T1 to sleep disturbances at T2, and bidirectional associations between work characteristics and awakening problems. In contrast to an earlier study on demands, control, sleep quality, and fatigue, this study suggests reverse and reciprocal relationships, in addition to the commonly hypothesized causal relationships, between work characteristics and sleep problems based on a 2-year time lag.
Lindeman, Meghan I H; Zengel, Bettina; Skowronski, John J
2017-07-01
The affect associated with negative (or unpleasant) memories typically tends to fade faster than the affect associated with positive (or pleasant) memories, a phenomenon called the fading affect bias (FAB). We conducted a study to explore the mechanisms related to the FAB. A retrospective recall procedure was used to obtain three self-report measures (memory vividness, rehearsal frequency, affective fading) for both positive events and negative events. Affect for positive events faded less than affect for negative events, and positive events were recalled more vividly than negative events. The perceived vividness of an event (memory vividness) and the extent to which an event has been rehearsed (rehearsal frequency) were explored as possible mediators of the relation between event valence and affect fading. Additional models conceived of affect fading and rehearsal frequency as contributors to a memory's vividness. Results suggested that memory vividness was a plausible mediator of the relation between an event's valence and affect fading. Rehearsal frequency was also a plausible mediator of this relation, but only via its effects on memory vividness. Additional modelling results suggested that affect fading and rehearsal frequency were both plausible mediators of the relation between an event's valence and the event's rated memory vividness.
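The mediation logic described above (event valence affecting affect fading through memory vividness) can be sketched with the product-of-coefficients approach on simulated data. The effect sizes below are invented for illustration and do not reproduce the study's models:

```python
# Hedged sketch: a*b mediation estimate on simulated data, standing in for the
# vividness-mediation models described above (all coefficients are toy values).
import numpy as np

rng = np.random.default_rng(4)
n = 500
valence = rng.integers(0, 2, n).astype(float)        # 0 = negative, 1 = positive
vividness = 0.5 * valence + rng.standard_normal(n)   # a-path (toy)
fading = -0.4 * vividness + 0.1 * valence + rng.standard_normal(n)

def ols(predictors, y):
    # Ordinary least squares with an intercept; returns coefficient vector.
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

a = ols([valence], vividness)[1]              # valence -> vividness
b = ols([valence, vividness], fading)[2]      # vividness -> fading, valence held
indirect = a * b
print(f"a = {a:.2f}, b = {b:.2f}, indirect effect (a*b) = {indirect:.2f}")
```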
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seager, S.; Bains, W.; Hu, R.
Biosignature gas detection is one of the ultimate future goals for exoplanet atmosphere studies. We have created a framework for linking biosignature gas detectability to biomass estimates, including atmospheric photochemistry and biological thermodynamics. The new framework is intended to liberate predictive atmosphere models from requiring fixed, Earth-like biosignature gas source fluxes. New biosignature gases can be considered with a check that the biomass estimate is physically plausible. We have validated the models on terrestrial production of NO, H2S, CH4, CH3Cl, and DMS. We have applied the models to propose NH3 as a biosignature gas on a 'cold Haber World', a planet with an N2-H2 atmosphere, and to demonstrate why gases such as CH3Cl must have too large of a biomass to be a plausible biosignature gas on planets with Earth or early-Earth-like atmospheres orbiting a Sun-like star. To construct the biomass models, we developed a functional classification of biosignature gases, and found that gases (such as CH4, H2S, and N2O) produced from life that extracts energy from chemical potential energy gradients will always have false positives because geochemistry has the same gases to work with as life does, and gases (such as DMS and CH3Cl) produced for secondary metabolic reasons are far less likely to have false positives but because of their highly specialized origin are more likely to be produced in small quantities. The biomass model estimates are valid to one or two orders of magnitude; the goal is an independent approach to testing whether a biosignature gas is plausible rather than a precise quantification of atmospheric biosignature gases and their corresponding biomasses.
Vectorial Representations of Meaning for a Computational Model of Language Comprehension
ERIC Educational Resources Information Center
Wu, Stephen Tze-Inn
2010-01-01
This thesis aims to define and extend a line of computational models for text comprehension that are humanly plausible. Since natural language is human by nature, computational models of human language will always be just that--models. To the degree that they miss out on information that humans would tap into, they may be improved by considering…
Resolving Conflicts Between Syntax and Plausibility in Sentence Comprehension
Andrews, Glenda; Ogden, Jessica E.; Halford, Graeme S.
2017-01-01
Comprehension of plausible and implausible object- and subject-relative clause sentences with and without prepositional phrases was examined. Undergraduates read each sentence then evaluated a statement as consistent or inconsistent with the sentence. Higher acceptance of consistent than inconsistent statements indicated reliance on syntactic analysis. Higher acceptance of plausible than implausible statements reflected reliance on semantic plausibility. There was greater reliance on semantic plausibility and lesser reliance on syntactic analysis for more complex object-relatives and sentences with prepositional phrases than for less complex subject-relatives and sentences without prepositional phrases. Comprehension accuracy and confidence were lower when syntactic analysis and semantic plausibility yielded conflicting interpretations. The conflict effect on comprehension was significant for complex sentences but not for less complex sentences. Working memory capacity predicted resolution of the syntax-plausibility conflict in more and less complex items only when sentences and statements were presented sequentially. Fluid intelligence predicted resolution of the conflict in more and less complex items under sequential and simultaneous presentation. Domain-general processes appear to be involved in resolving syntax-plausibility conflicts in sentence comprehension. PMID:28458748
A model of TMS-induced I-waves in motor cortex.
Rusu, Cătălin V; Murakami, Max; Ziemann, Ulf; Triesch, Jochen
2014-01-01
Transcranial magnetic stimulation (TMS) allows neural activity to be manipulated non-invasively, and much research is trying to exploit this ability in clinical and basic research settings. In a standard TMS paradigm, single-pulse stimulation over motor cortex produces repetitive responses in descending motor pathways called I-waves. However, the details of how TMS induces neural activity patterns in cortical circuits to produce these responses remain poorly understood. According to a traditional view, I-waves are due to repetitive synaptic inputs to pyramidal neurons in layer 5 (L5) of motor cortex, but the potential origin of such repetitive inputs is unclear. Here we aim to test the plausibility of an alternative mechanism behind D- and I-wave generation through computational modeling. This mechanism relies on the broad distribution of conduction delays of synaptic inputs arriving at different parts of L5 cells' dendritic trees and their spike generation mechanism. Our model consists of a detailed L5 pyramidal cell and a population of layer 2 and 3 (L2/3) neurons projecting onto it with synapses exhibiting short-term depression. I-waves are simulated as superpositions of spike trains from a large population of L5 cells. Our model successfully reproduces all basic characteristics of I-waves observed in epidural responses during in vivo recordings of conscious humans. In addition, it shows how the complex morphology of L5 neurons might play an important role in the generation of I-waves. In the model, later I-waves are formed due to inputs to distal synapses, while earlier ones are driven by synapses closer to the soma. Finally, the model offers an explanation for the inhibition and facilitation effects in paired-pulse stimulation protocols. In contrast to previous models, which required either neural oscillators or chains of inhibitory interneurons acting upon L5 cells, our model is fully feed-forward without lateral connections or loops. It parsimoniously explains findings from a range of experiments and should be considered as a viable alternative explanation of the generating mechanism of I-waves. Copyright © 2014 Elsevier Inc. All rights reserved.
Explicit B-spline regularization in diffeomorphic image registration
Tustison, Nicholas J.; Avants, Brian B.
2013-01-01
Diffeomorphic mappings are central to image registration due largely to their topological properties and success in providing biologically plausible solutions to deformation and morphological estimation problems. Popular diffeomorphic image registration algorithms include those characterized by time-varying and constant velocity fields, and symmetrical considerations. Prior information in the form of regularization is used to enforce transform plausibility taking the form of physics-based constraints or through some approximation thereof, e.g., Gaussian smoothing of the vector fields [a la Thirion's Demons (Thirion, 1998)]. In the context of the original Demons' framework, the so-called directly manipulated free-form deformation (DMFFD) (Tustison et al., 2009) can be viewed as a smoothing alternative in which explicit regularization is achieved through fast B-spline approximation. This characterization can be used to provide B-spline “flavored” diffeomorphic image registration solutions with several advantages. Implementation is open source and available through the Insight Toolkit and our Advanced Normalization Tools (ANTs) repository. A thorough comparative evaluation with the well-known SyN algorithm (Avants et al., 2008), implemented within the same framework, and its B-spline analog is performed using open labeled brain data and open source evaluation tools. PMID:24409140
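The contrast drawn above, implicit Gaussian smoothing of the deformation field versus explicit B-spline regularization, can be illustrated in one dimension. This sketch is not the ANTs/DMFFD implementation; it simply compares Gaussian filtering of a noisy displacement profile with a smoothing B-spline fit, using assumed smoothing parameters:

```python
# Hedged sketch: Gaussian (Demons-style) smoothing of a 1D displacement field
# versus an explicit smoothing B-spline approximation of the same field.
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.interpolate import splrep, splev

rng = np.random.default_rng(5)
x = np.linspace(0, 1, 200)
displacement = np.sin(2 * np.pi * x) + 0.3 * rng.standard_normal(x.size)  # noisy field

gauss_reg = gaussian_filter1d(displacement, sigma=5)       # implicit regularization
tck = splrep(x, displacement, s=len(x) * 0.3**2)           # explicit B-spline fit
bspline_reg = splev(x, tck)

rms = float(np.sqrt(np.mean((gauss_reg - bspline_reg) ** 2)))
print("RMS difference between the two regularized fields:", round(rms, 4))
```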
Gapped Spectral Dictionaries and Their Applications for Database Searches of Tandem Mass Spectra*
Jeong, Kyowon; Kim, Sangtae; Bandeira, Nuno; Pevzner, Pavel A.
2011-01-01
Generating all plausible de novo interpretations of a peptide tandem mass (MS/MS) spectrum (Spectral Dictionary) and quickly matching them against the database represent a recently emerged alternative approach to peptide identification. However, the sizes of the Spectral Dictionaries quickly grow with the peptide length making their generation impractical for long peptides. We introduce Gapped Spectral Dictionaries (all plausible de novo interpretations with gaps) that can be easily generated for any peptide length thus addressing the limitation of the Spectral Dictionary approach. We show that Gapped Spectral Dictionaries are small thus opening a possibility of using them to speed-up MS/MS searches. Our MS-GappedDictionary algorithm (based on Gapped Spectral Dictionaries) enables proteogenomics applications (such as searches in the six-frame translation of the human genome) that are prohibitively time consuming with existing approaches. MS-GappedDictionary generates gapped peptides that occupy a niche between accurate but short peptide sequence tags and long but inaccurate full length peptide reconstructions. We show that, contrary to conventional wisdom, some high-quality spectra do not have good peptide sequence tags and introduce gapped tags that have advantages over the conventional peptide sequence tags in MS/MS database searches. PMID:21444829
Protoenzymes: the case of hyperbranched polyesters
NASA Astrophysics Data System (ADS)
Mamajanov, Irena; Cody, George D.
2017-11-01
Enzymes are biopolymeric complexes that catalyse biochemical reactions and shape metabolic pathways. Enzymes usually work with small molecule cofactors that actively participate in reaction mechanisms and complex, usually globular, polymeric structures capable of specific substrate binding, encapsulation and orientation. Moreover, the globular structures of enzymes possess cavities with modulated microenvironments, facilitating the progression of reaction(s). The globular structure is ensured by long folded protein or RNA strands. Synthesis of such elaborate complexes has proven difficult under prebiotically plausible conditions. Here we explore the possibility that catalysis may have been performed by alternative polymeric structures, namely hyperbranched polymers. Hyperbranched polymers are relatively complex structures that can be synthesized under prebiotically plausible conditions; their globular structure is ensured by virtue of their architecture rather than folding. In this study, we probe the ability of tertiary amine-bearing hyperbranched polyesters to form hydrophobic pockets as a reaction-promoting medium for the Kemp elimination reaction. Our results show that polyesters formed upon reaction between glycerol, triethanolamine and organic acid containing hydrophobic groups, i.e. adipic and methylsuccinic acid, are capable of increasing the rate of Kemp elimination by a factor of up to 3 over monomeric triethanolamine. This article is part of the themed issue 'Reconceptualizing the origins of life'.
Dynamic causal modelling: a critical review of the biophysical and statistical foundations.
Daunizeau, J; David, O; Stephan, K E
2011-09-15
The goal of dynamic causal modelling (DCM) of neuroimaging data is to study experimentally induced changes in functional integration among brain regions. This requires (i) biophysically plausible and physiologically interpretable models of neuronal network dynamics that can predict distributed brain responses to experimental stimuli and (ii) efficient statistical methods for parameter estimation and model comparison. These two key components of DCM have been the focus of more than thirty methodological articles since the seminal work of Friston and colleagues published in 2003. In this paper, we provide a critical review of the current state-of-the-art of DCM. We inspect the properties of DCM in relation to the most common neuroimaging modalities (fMRI and EEG/MEG) and the specificity of inference on neural systems that can be made from these data. We then discuss both the plausibility of the underlying biophysical models and the robustness of the statistical inversion techniques. Finally, we discuss potential extensions of the current DCM framework, such as stochastic DCMs, plastic DCMs and field DCMs. Copyright © 2009 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Kwakkel, Jan; Haasnoot, Marjolijn
2015-04-01
In response to climate and socio-economic change, there is an increasing call in various policy domains for robust plans or policies, that is, plans or policies that perform well in a very large range of plausible futures. In the literature, a wide range of alternative robustness metrics can be found. The relative merit of these alternative conceptualizations of robustness has, however, received less attention. Evidently, different robustness metrics can result in different plans or policies being adopted. This paper investigates the consequences of several robustness metrics on decision making, illustrated here by the design of a flood risk management plan. A fictitious case, inspired by a river reach in the Netherlands, is used. The performance of this system in terms of casualties, damages, and costs for flood and damage mitigation actions is explored using a time horizon of 100 years, and accounting for uncertainties pertaining to climate change and land use change. A set of candidate policy options is specified up front. This set of options includes dike raising, dike strengthening, creating more space for the river, and flood proof building and evacuation options. The overarching aim is to design an effective flood risk mitigation strategy that is designed from the outset to be adapted over time in response to how the future actually unfolds. To this end, the plan will be based on the dynamic adaptive policy pathway approach (Haasnoot, Kwakkel et al. 2013) being used in the Dutch Delta Program. The policy problem is formulated as a multi-objective robust optimization problem (Kwakkel, Haasnoot et al. 2014). We solve the multi-objective robust optimization problem using several alternative robustness metrics, including both satisficing robustness metrics and regret-based robustness metrics. Satisficing robustness metrics focus on the performance of candidate plans across a large ensemble of plausible futures. Regret-based robustness metrics compare the performance of a candidate plan with the performance of other candidate plans across a large ensemble of plausible futures. Initial results suggest that the simplest satisficing metric, inspired by the signal-to-noise ratio, results in very risk-averse solutions. Other satisficing metrics, which handle the average performance and the dispersion around the average separately, provide substantial additional insights into the trade-off between the average performance and the dispersion around this average. In contrast, the regret-based metrics enhance insight into the relative merits of candidate plans, while being less clear on the average performance or the dispersion around this performance. These results suggest that it is beneficial to use multiple robustness metrics when doing a robust decision analysis study. Haasnoot, M., J. H. Kwakkel, W. E. Walker and J. Ter Maat (2013). "Dynamic Adaptive Policy Pathways: A New Method for Crafting Robust Decisions for a Deeply Uncertain World." Global Environmental Change 23(2): 485-498. Kwakkel, J. H., M. Haasnoot and W. E. Walker (2014). "Developing Dynamic Adaptive Policy Pathways: A computer-assisted approach for developing adaptive strategies for a deeply uncertain world." Climatic Change.
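The two families of robustness metrics discussed above can be made concrete with a small sketch: a satisficing metric (fraction of plausible futures in which damages stay below a threshold) versus a regret-based metric (worst-case regret relative to the best-performing plan in each future). The plans, futures, and damage values are synthetic and purely illustrative, not the flood-risk case itself:

```python
# Hedged sketch: satisficing vs. regret-based robustness for candidate plans
# evaluated over an ensemble of plausible futures (all values are toy data).
import numpy as np

rng = np.random.default_rng(6)
n_plans, n_futures = 4, 1000

# damages[i, j]: damage of plan i in plausible future j (toy values).
damages = rng.lognormal(mean=np.log([[1.0], [0.8], [1.2], [0.9]]),
                        sigma=[[0.2], [0.6], [0.1], [0.4]],
                        size=(n_plans, n_futures))

threshold = 1.5
satisficing = (damages < threshold).mean(axis=1)   # higher is better

best_per_future = damages.min(axis=0)
max_regret = (damages - best_per_future).max(axis=1)   # lower is better

for i in range(n_plans):
    print(f"plan {i}: satisficing = {satisficing[i]:.2f}, "
          f"max regret = {max_regret[i]:.2f}")
```

Note that the two metrics can rank the same plans differently, which is the point made above about using multiple robustness metrics in a robust decision analysis.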
A Synchronization Account of False Recognition
ERIC Educational Resources Information Center
Johns, Brendan T.; Jones, Michael N.; Mewhort, Douglas J. K.
2012-01-01
We describe a computational model to explain a variety of results in both standard and false recognition. A key attribute of the model is that it uses plausible semantic representations for words, built through exposure to a linguistic corpus. A study list is encoded in the model as a gist trace, similar to the proposal of fuzzy trace theory…
NASA Astrophysics Data System (ADS)
Keane, J. T.; Johnson, B. C.; Matsuyama, I.; Siegler, M. A.
2018-04-01
New geophysical data and numerical models reveal that basin-scale impacts routinely caused the Moon to tumble (non principal axis rotation) early in its history — plausibly driving magnetic fields, erasing primordial volatiles, and more.
A plausible and consistent model is developed to obtain a quantitative description of the gradual disappearance of hexavalent chromium (Cr(VI)) from groundwater in a small-scale field tracer test and in batch kinetic experiments using aquifer sediments under similar chemical cond...
MODELING WILDLIFE RESPONSE TO LANDSCAPE CHANGE IN OREGON'S WILLAMETTE RIVER BASIN
The PATCH simulation model was used to predict the response of 17 wildlife species to three plausible scenarios of habitat change in Oregon's Willamette River Basin. This 30 thousand square-kilometer basin comprises about 12% of the state of Oregon, encompasses extensive f...
Hong, Cheng William; Mamidipalli, Adrija; Hooker, Jonathan C.; Hamilton, Gavin; Wolfson, Tanya; Chen, Dennis H.; Dehkordy, Soudabeh Fazeli; Middleton, Michael S.; Reeder, Scott B.; Loomba, Rohit; Sirlin, Claude B.
2017-01-01
Background Proton density fat fraction (PDFF) estimation requires spectral modeling of the hepatic triglyceride (TG) signal. Deviations in the TG spectrum may occur, leading to bias in PDFF quantification. Purpose To investigate the effects of varying six-peak TG spectral models on PDFF estimation bias. Study Type Retrospective secondary analysis of prospectively acquired clinical research data. Population Forty-four adults with biopsy-confirmed nonalcoholic steatohepatitis. Field Strength/Sequence Confounder-corrected chemical-shift-encoded 3T MRI (using a 2D multiecho gradient-recalled echo technique with magnitude reconstruction) and MR spectroscopy. Assessment In each patient, 61 pairs of colocalized MRI-PDFF and MRS-PDFF values were estimated: one pair used the standard six-peak spectral model, the other 60 were six-peak variants calculated by adjusting spectral model parameters over their biologically plausible ranges. MRI-PDFF values calculated using each variant model and the standard model were compared, and the agreement between MRI-PDFF and MRS-PDFF was assessed. Statistical Tests MRS-PDFF and MRI-PDFF were summarized descriptively. Bland–Altman (BA) analyses were performed between PDFF values calculated using each variant model and the standard model. Linear regressions were performed between BA biases and mean PDFF values for each variant model, and between MRI-PDFF and MRS-PDFF. Results Using the standard model, mean MRS-PDFF of the study population was 17.9±8.0% (range: 4.1–34.3%). The difference between the highest and lowest mean variant MRI-PDFF values was 1.5%. Relative to the standard model, the model with the greatest absolute BA bias overestimated PDFF by 1.2%. Bias increased with increasing PDFF (P < 0.0001 for 59 of the 60 variant models). MRI-PDFF and MRS-PDFF agreed closely for all variant models (R2=0.980, P < 0.0001). Data Conclusion Over a wide range of hepatic fat content, PDFF estimation is robust across the biologically plausible range of TG spectra. Although absolute estimation bias increased with higher PDFF, its magnitude was small and unlikely to be clinically meaningful. Level of Evidence 3 Technical Efficacy Stage 2 PMID:28851124
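The Bland-Altman comparison used above can be sketched as follows: bias, limits of agreement, and a regression of the differences on the mean PDFF to check whether bias grows with fat fraction. The simulated values below are placeholders, not the study data:

```python
# Hedged sketch: Bland-Altman bias and limits of agreement between PDFF values
# from a variant spectral model and the standard model (simulated values).
import numpy as np

rng = np.random.default_rng(7)
pdff_standard = rng.uniform(4, 34, size=44)                   # % fat fraction (toy)
pdff_variant = pdff_standard * 1.03 + rng.normal(0, 0.3, 44)  # slight bias (toy)

diff = pdff_variant - pdff_standard
mean = (pdff_variant + pdff_standard) / 2.0

bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)
slope, intercept = np.polyfit(mean, diff, 1)   # does bias grow with PDFF?

print(f"bias = {bias:.2f}%, limits of agreement = ({bias - loa:.2f}, {bias + loa:.2f})%")
print(f"bias vs. mean PDFF slope = {slope:.3f} %/%")
```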
Synek, Alexander; Pahr, Dieter H
2018-06-01
A micro-finite element-based method to estimate the bone loading history based on bone architecture was recently presented in the literature. However, a thorough investigation of the parameter sensitivity and plausibility of this method to predict joint loads is still missing. The goals of this study were (1) to analyse the parameter sensitivity of the joint load predictions at one proximal femur and (2) to assess the plausibility of the results by comparing load predictions of ten proximal femora to in vivo hip joint forces measured with instrumented prostheses (available from www.orthoload.com ). Joint loads were predicted by optimally scaling the magnitude of four unit loads (inclined [Formula: see text] to [Formula: see text] with respect to the vertical axis) applied to micro-finite element models created from high-resolution computed tomography scans ([Formula: see text]m voxel size). Parameter sensitivity analysis was performed by varying a total of nine parameters and showed that predictions of the peak load directions (range 10[Formula: see text]-[Formula: see text]) are more robust than the predicted peak load magnitudes (range 2344.8-4689.5 N). Comparing the results of all ten femora with the in vivo loading data of ten subjects showed that peak loads are plausible both in terms of the load direction (in vivo: [Formula: see text], predicted: [Formula: see text]) and magnitude (in vivo: [Formula: see text], predicted: [Formula: see text]). Overall, this study suggests that micro-finite element-based joint load predictions are both plausible and robust in terms of the predicted peak load direction, but predicted load magnitudes should be interpreted with caution.
Newberry Volcano EGS Demonstration Stimulation Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Trenton T. Cladouhos, Matthew Clyne, Maisie Nichols,; Susan Petty, William L. Osborn, Laura Nofziger
2011-10-23
As a part of Phase I of the Newberry Volcano EGS Demonstration project, several data sets were collected to characterize the rock volume around the well. Fracture, fault, stress, and seismicity data have been collected by borehole televiewer, LiDAR elevation maps, and microseismic monitoring. Well logs and cuttings from the target well (NWG 55-29) and core from a nearby core hole (USGS N-2) have been analyzed to develop geothermal, geochemical, mineralogical and strength models of the rock matrix, altered zones, and fracture fillings (see Osborn et al., this volume). These characterization data sets provide inputs to models used to plan and predict EGS reservoir creation and productivity. One model used is AltaStim, a stochastic fracture and flow software model developed by AltaRock. The software's purpose is to model and visualize EGS stimulation scenarios and provide guidance for final planning. The process of creating an AltaStim model requires synthesis of geologic observations at the well, the modeled stress conditions, and the stimulation plan. Any geomechanical model of an EGS stimulation will require many assumptions and unknowns; thus, the model developed here should not be considered a definitive prediction, but a plausible outcome given reasonable assumptions. AltaStim is a tool for understanding the effect of known constraints, assumptions, and conceptual models on plausible outcomes.
Application of Adverse Outcome Pathways to U.S. EPA’s Endocrine Disruptor Screening Program
Noyes, Pamela D.; Casey, Warren M.; Dix, David J.
2017-01-01
Background: The U.S. EPA’s Endocrine Disruptor Screening Program (EDSP) screens and tests environmental chemicals for potential effects in estrogen, androgen, and thyroid hormone pathways, and it is one of the only regulatory programs designed around chemical mode of action. Objectives: This review describes the EDSP’s use of adverse outcome pathway (AOP) and toxicity pathway frameworks to organize and integrate diverse biological data for evaluating the endocrine activity of chemicals. Using these frameworks helps to establish biologically plausible links between endocrine mechanisms and apical responses when those end points are not measured in the same assay. Results: Pathway frameworks can facilitate a weight of evidence determination of a chemical’s potential endocrine activity, identify data gaps, aid study design, direct assay development, and guide testing strategies. Pathway frameworks also can be used to evaluate the performance of computational approaches as alternatives for low-throughput and animal-based assays and predict downstream key events. In cases where computational methods can be validated based on performance, they may be considered as alternatives to specific assays or end points. Conclusions: A variety of biological systems affect apical end points used in regulatory risk assessments, and without mechanistic data, an endocrine mode of action cannot be determined. Because the EDSP was designed to consider mode of action, toxicity pathway and AOP concepts are a natural fit. Pathway frameworks have diverse applications to endocrine screening and testing. An estrogen pathway example is presented, and similar approaches are being used to evaluate alternative methods and develop predictive models for androgen and thyroid pathways. https://doi.org/10.1289/EHP1304 PMID:28934726
COLLABORATION ON NHEERL EPIDEMIOLOGY STUDIES
This task will continue ORD's efforts to develop a biologically plausible, quantitative health risk model for particulate matter (PM) based on epidemiological, toxicological, and mechanistic studies using matched exposure assessments. The NERL, in collaboration with the NHEERL, ...
NASA Astrophysics Data System (ADS)
Ishihara, T.
2003-12-01
The existence of magnetic anomalies along east-west trending fracture zones in the north Pacific is well known. These anomalies are particularly prominent in the Cretaceous magnetic quiet zone, where no comparable anomalies are observed other than those associated with the Hawaiian Ridge and the Musician Seamounts in a newly compiled magnetic anomaly map. Model calculation was conducted using old magnetic and bathymetric data collected in the Cretaceous magnetic quiet zone. Two-dimensional simple models along north-south lines, which cross the Mendocino, Pioneer, Murray, Molokai and Clarion Fracture Zones, were constructed in order to clarify the sources of these magnetic anomalies. In these model calculations, it was assumed that the source bodies have normal remanent magnetizations with their inclinations of about
Bays, Rebecca B; Zabrucky, Karen M; Gagne, Phill
2012-01-01
In the current study we examined whether prevalence information and imagery encoding influence participants' general plausibility, personal plausibility, belief, and memory ratings for suggested childhood events. Results showed decreases in general and personal plausibility ratings for low prevalence events when encoding instructions were not elaborate; however, instructions to repeatedly imagine suggested events elicited personal plausibility increases for low-prevalence events, evidence that elaborate imagery negated the effect of our prevalence manipulation. We found no evidence of imagination inflation or false memory construction. We discuss critical differences in researchers' manipulations of plausibility and imagery that may influence results of false memory studies in the literature. In future research investigators should focus on the specific nature of encoding instructions when examining the development of false memories.
Real-time physics-based 3D biped character animation using an inverted pendulum model.
Tsai, Yao-Yang; Lin, Wen-Chieh; Cheng, Kuangyou B; Lee, Jehee; Lee, Tong-Yee
2010-01-01
We present a physics-based approach to generate 3D biped character animation that can react to dynamical environments in real time. Our approach utilizes an inverted pendulum model to online adjust the desired motion trajectory from the input motion capture data. This online adjustment produces a physically plausible motion trajectory adapted to dynamic environments, which is then used as the desired motion for the motion controllers to track in dynamics simulation. Rather than using Proportional-Derivative controllers whose parameters usually cannot be easily set, our motion tracking adopts a velocity-driven method which computes joint torques based on the desired joint angular velocities. Physically correct full-body motion of the 3D character is computed in dynamics simulation using the computed torques and dynamical model of the character. Our experiments demonstrate that tracking motion capture data with real-time response animation can be achieved easily. In addition, physically plausible motion style editing, automatic motion transition, and motion adaptation to different limb sizes can also be generated without difficulty.
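The velocity-driven tracking idea described above, torques computed from the error between desired and current joint angular velocities rather than from hand-tuned PD position gains, can be sketched on a single rigid link. The inertia, gain, and trajectory below are assumptions; the paper's controller acts on a full 3D biped inside a dynamics simulator:

```python
# Hedged sketch: a velocity-driven joint torque controller on one rigid link,
# integrated with explicit Euler (all physical parameters are toy values).
import numpy as np

dt, steps = 0.001, 2000
inertia = 0.05          # kg m^2 (toy link)
k_vel = 2.0             # velocity gain (assumed)

theta, omega = 0.0, 0.0
for i in range(steps):
    t = i * dt
    omega_desired = 2.0 * np.cos(2 * np.pi * t)   # desired trajectory velocity
    torque = k_vel * (omega_desired - omega)      # velocity-driven torque
    omega += (torque / inertia) * dt              # forward dynamics (Euler step)
    theta += omega * dt

print(f"final joint angle after {steps * dt:.1f} s: {theta:.3f} rad")
```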
The Prospects of Whole Brain Emulation within the next Half-Century
NASA Astrophysics Data System (ADS)
Eth, Daniel; Foust, Juan-Carlos; Whale, Brandon
2013-12-01
Whole Brain Emulation (WBE), the theoretical technology of modeling a human brain in its entirety on a computer, with thoughts, feelings, memories, and skills intact, is a staple of science fiction. Recently, proponents of WBE have suggested that it will be realized in the next few decades. In this paper, we investigate the plausibility of WBE being developed in the next 50 years (by 2063). We identify four essential requisite technologies: scanning the brain, translating the scan into a model, running the model on a computer, and simulating an environment and body. Additionally, we consider the cultural and social effects of WBE. We find the two most uncertain factors for WBE's future to be the development of advanced miniscule probes that can amass neural data in vivo and the degree to which the culture surrounding WBE becomes cooperative or competitive. We identify four plausible scenarios from these uncertainties and suggest the most likely scenario to be one in which WBE is realized, and the technology is used for moderately cooperative ends.
Reconnaissance of the Hydrogeology of Ta'u, American Samoa
Izuka, Scot K.
2005-01-01
Analysis of existing data and information collected on a reconnaissance field visit supports a conceptual model of ground-water occurrence in Ta'u, American Samoa, in which a thin freshwater lens exists in a predominantly high-permeability aquifer that receives high rates of recharge. Because the freshwater lens is thin throughout most of the island, the productivity of wells, especially those near the coast where the lens is the thinnest, is likely to be limited by saltwater intrusion. The landfill in northwestern Ta'u is closer to the north coast of the island than to any of the existing or proposed well sites. Although this may indicate that ground water beneath the landfill would flow away from the existing and proposed well sites, this interpretation may change depending on the hydraulic properties of a fault and rift zone in the area. Of four plausible scenarios tested with a numerical ground-water flow model, only one scenario indicated that ground water from beneath the landfill would flow toward the existing and proposed well sites; the analysis does not, however, assess which of the four scenarios is most plausible. The analysis also does not consider the change in flow paths that will result from ground-water withdrawals, dispersion of contaminants during transport by ground water, other plausible hydrogeologic scenarios, transport of contaminants by surface-water flow, or that sources of contamination other than the landfill may exist. Accuracy of the hydrologic interpretations in this study is limited by the relatively sparse data available for Ta'u. Understanding water resources on Ta'u can be advanced by monitoring rainfall, stream-flow, evaporation, ground-water withdrawals, and water quality, and with accurate surveys of measuring point elevations for all wells and careful testing of well-performance. Assessing the potential for contaminants in the landfill to reach existing and proposed well sites can be improved with additional information on the landfill itself (history, construction, contents, water chemistry), surface-water flow directions, spatial distribution of ground-water levels, and the quality of water in nearby wells. Monitoring water levels and chemistry in one or more monitoring wells between the landfill and existing or proposed wells can provide a means to detect movement of contaminants before they reach production wells. Steps that can be implemented in the short term include analyzing water in the landfill and monitoring of water chemistry and water levels in all existing and new production wells. Placing future wells farther inland may mitigate saltwater intrusion problems, but the steep topography of Ta'u limits the feasibility of this approach. Alternative solutions include distributing ground-water withdrawal among several shallow-penetrating, low-yield wells.
ERIC Educational Resources Information Center
Conley, Sharon; You, Sukkyung
2014-01-01
A previous study examined role stress in relation to work outcomes; in this study, we added job structuring antecedents to a model of role stress and examined the moderating effects of locus of control. Structural equation modeling was used to assess the plausibility of our conceptual model, which specified hypothesized linkages among teachers'…
ERIC Educational Resources Information Center
Dombrowski, Stefan C.; Golay, Philippe; McGill, Ryan J.; Canivez, Gary L.
2018-01-01
Bayesian structural equation modeling (BSEM) was used to investigate the latent structure of the Differential Ability Scales-Second Edition core battery using the standardization sample normative data for ages 7-17. Results revealed plausibility of a three-factor model, consistent with publisher theory, expressed as either a higher-order (HO) or a…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Farrell, Kathryn, E-mail: kfarrell@ices.utexas.edu; Oden, J. Tinsley, E-mail: oden@ices.utexas.edu; Faghihi, Danial, E-mail: danial@ices.utexas.edu
A general adaptive modeling algorithm for selection and validation of coarse-grained models of atomistic systems is presented. A Bayesian framework is developed to address uncertainties in parameters, data, and model selection. Algorithms for computing output sensitivities to parameter variances, model evidence and posterior model plausibilities for given data, and for computing what are referred to as Occam Categories in reference to a rough measure of model simplicity, make up components of the overall approach. Computational results are provided for representative applications.
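The notion of posterior model plausibility used above can be sketched directly from Bayes' rule: given an evidence value for each candidate coarse-grained model and prior plausibilities, the posteriors follow by normalization. The log-evidence values below are placeholders, not outputs of the cited framework:

```python
# Hedged sketch: posterior model plausibilities from model evidences, assuming
# equal prior plausibilities (log-evidence values are toy numbers).
import numpy as np

log_evidence = np.array([-1052.3, -1050.1, -1049.8, -1061.7])  # placeholders
prior = np.full(log_evidence.size, 1.0 / log_evidence.size)

# Posterior plausibility is proportional to prior * evidence; work in log
# space and subtract the maximum for numerical stability.
log_post = np.log(prior) + log_evidence
log_post -= log_post.max()
posterior = np.exp(log_post)
posterior /= posterior.sum()

for i, p in enumerate(posterior):
    print(f"model {i}: posterior plausibility = {p:.3f}")
```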
NASA Astrophysics Data System (ADS)
Langousis, Andreas; Kaleris, Vassilios; Xeygeni, Vagia; Magkou, Foteini
2017-04-01
Assessing the availability of groundwater reserves at a regional level requires accurate and robust hydraulic head estimation at multiple locations of an aquifer. To that end, one needs groundwater observation networks that can provide sufficient information to estimate the hydraulic head at unobserved locations. The density of such networks is largely influenced by the spatial distribution of the hydraulic conductivity in the aquifer, and it is usually determined through trial-and-error, by solving the groundwater flow based on a properly selected set of alternative but physically plausible geologic structures. In this work, we use (a) dimensional analysis and (b) a pulse-based stochastic model for simulation of synthetic aquifer structures to calculate the distribution of the absolute error in hydraulic head estimation as a function of the standardized distance from the nearest measuring locations. The resulting distributions are proved to encompass all possible small-scale structural dependencies, exhibiting characteristics (bounds, multi-modal features etc.) that can be explained using simple geometric arguments. The obtained results are promising, pointing towards the direction of establishing design criteria based on large-scale geologic maps.
Conroy, M.J.; Runge, M.C.; Nichols, J.D.; Stodola, K.W.; Cooper, R.J.
2011-01-01
The broad physical and biological principles behind climate change and its potential large scale ecological impacts on biota are fairly well understood, although likely responses of biotic communities at fine spatio-temporal scales are not, limiting the ability of conservation programs to respond effectively to climate change outside the range of human experience. Much of the climate debate has focused on attempts to resolve key uncertainties in a hypothesis-testing framework. However, conservation decisions cannot await resolution of these scientific issues and instead must proceed in the face of uncertainty. We suggest that conservation should proceed in an adaptive management framework, in which decisions are guided by predictions under multiple, plausible hypotheses about climate impacts. Under this plan, monitoring is used to evaluate the response of the system to climate drivers, and management actions (perhaps experimental) are used to confront testable predictions with data, in turn providing feedback for future decision making. We illustrate these principles with the problem of mitigating the effects of climate change on terrestrial bird communities in the southern Appalachian Mountains, USA. © 2010 Elsevier Ltd.
ERIC Educational Resources Information Center
Smangs, Mattias
2010-01-01
This article explores the plausibility of the conflicting theoretical assumptions underlying the main criminological perspectives on juvenile delinquents, their peer relations and social skills: the social ability model, represented by Sutherland's theory of differential associations, and the social disability model, represented by Hirschi's…
ERIC Educational Resources Information Center
Paetkau, Mark
2007-01-01
One of my goals as an instructor is to teach students critical thinking skills. This paper presents an example of a student-led discussion of heat conduction at the first-year level. Heat loss from a human head is calculated using conduction and radiation models. The results of these plausible (but wrong) models of heat transfer contradict what…
A Defect Structure for 6-Line Ferrihydrite Nanoparticles (Invited)
NASA Astrophysics Data System (ADS)
Gilbert, B.; Spagnoli, D.; Fakra, S.; Petkov, V.; Penn, R. L.; Banfield, J. F.; Waychunas, G.
2010-12-01
Ferrihydrite is an environmental iron oxyhydroxide mineral that is only found in the form of nanoscale particles yet exerts significant impacts on the biogeochemistry of soils, sediments and surface waters. This material has remained poorly characterized due to significant experimental challenges in determining stoichiometry and structure. In a breakthrough, Michel et al., Science 316, 1726 (2007), showed that real-space pair distribution function (PDF) data from ferrihydrite samples with a range of particle sizes could be modeled by a single newly proposed crystal phase. However, ambiguity remained as to the relationship between this model and real ferrihydrite structure because that model does not perfectly reproduce the reciprocal-space X-ray diffraction data (XRD). Subsequently, Michel et al. PNAS 107, 2787 (2010), demonstrated that ferrihydrite could be thermally coarsened to form an annealed nanomaterial for which both XRD and PDF data are reproduced by a refined version of their original structure. These findings confirmed that the Michel et al. structure is a true mineral phase, but do not resolve the question of how to adequately describe the structure of ferrihydrite nanoparticles formed by low-temperature precipitation in surface waters. There is agreement that a model based upon a single unit cell cannot capture the structural diversity present in real nanoparticles, which can include defects, vacancies and disorder, particularly surface strain. However, for the Michel et al. model of ferrihydrite the disagreement between simulated and experimental XRD is significant, indicating either that the underlying structural model is incorrect; that the assumption that a single phase is sufficient to describe the nanomaterial is not valid; or that ferrihydrite nanoparticles possess an unusually large amount of disorder that must be characterized. Thus, quantitative tests of explicit structural configurations are essential to understand the real nanoparticle disorder and to test the correctness of an underlying phase described by a single unit cell. We reviewed the crystal chemistry of the Michel et al. structure and alternatives and developed hypotheses for plausible structural defects. We developed a novel reverse Monte Carlo (RMC) algorithm that generates defects and disorder within full-nanoparticle structural models and simulates the corresponding PDF and wide-angle XRD patterns for comparison with experimental data. This successfully generated full-nanoparticle structures that are in agreement with both real- and reciprocal-space X-ray scattering data. RMC-derived structures may be incorrect, and are not unique, and must therefore be evaluated for chemical plausibility as emphasized by Manceau, Clay Minerals 44, 19 (2009). Nevertheless, the results show that the inclusion of disorder and defects in full-nanoparticle model of ferrihydrite can resolve the discrepancy between XRD and PDF results found for a model based upon a single unit cell.
A model of proto-object based saliency
Russell, Alexander F.; Mihalaş, Stefan; von der Heydt, Rudiger; Niebur, Ernst; Etienne-Cummings, Ralph
2013-01-01
Organisms use the process of selective attention to optimally allocate their computational resources to the instantaneously most relevant subsets of a visual scene, ensuring that they can parse the scene in real time. Many models of bottom-up attentional selection assume that elementary image features, like intensity, color and orientation, attract attention. Gestalt psychologists, however, argue that humans perceive whole objects before they analyze individual features. This is supported by recent psychophysical studies that show that objects predict eye-fixations better than features. In this report we present a neurally inspired algorithm of object based, bottom-up attention. The model rivals the performance of state of the art non-biologically plausible feature based algorithms (and outperforms biologically plausible feature based algorithms) in its ability to predict perceptual saliency (eye fixations and subjective interest points) in natural scenes. The model achieves this by computing saliency as a function of proto-objects that establish the perceptual organization of the scene. All computational mechanisms of the algorithm have direct neural correlates, and our results provide evidence for the interface theory of attention. PMID:24184601
Dependence of prevalence of contiguous pathways in proteins on structural complexity.
Thayer, Kelly M; Galganov, Jesse C; Stein, Avram J
2017-01-01
Allostery is a regulatory mechanism in proteins where an effector molecule binds distal from an active site to modulate its activity. Allosteric signaling may occur via a continuous path of residues linking the active and allosteric sites, which has been suggested by large conformational changes evident in crystal structures. An alternate possibility is that the signal occurs in the realm of ensemble dynamics via an energy landscape change. While the latter was first proposed on theoretical grounds, increasing evidence suggests that such a control mechanism is plausible. A major difficulty for testing the two methods is the ability to definitively determine that a residue is directly involved in allosteric signal transduction. Statistical Coupling Analysis (SCA) is a method that has been successful at predicting pathways, and experimental tests involving mutagenesis or domain substitution provide the best available evidence of signaling pathways. However, ascertaining energetic pathways which need not be contiguous is far more difficult. To date, simple estimates of the statistical significance of a pathway in a protein remain to be established. The focus of this work is to estimate such benchmarks for the statistical significance of contiguous pathways for the null model of selecting residues at random. We found that when 20% of residues in proteins are randomly selected, contiguous pathways at the 6 Å cutoff level were found with success rates of 51% in PDZ, 30% in p53, and 3% in MutS. The results suggest that the significance of pathways may have system specific factors involved. Furthermore, the possible existence of false positives for contiguous pathways implies that signaling could be occurring via alternate routes including those consistent with the energetic landscape model.
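The null model described above, randomly selecting 20% of residues and asking whether the active and allosteric sites are joined by a contiguous path at a 6 Å cutoff, can be sketched as a breadth-first search over a distance-thresholded contact graph. The coordinates below are random stand-ins for C-alpha positions, so the resulting success rate is illustrative rather than comparable to the PDZ, p53, or MutS figures:

```python
# Hedged sketch: null-model rate of contiguous paths between two sites when 20%
# of "residues" are selected at random (coordinates are synthetic placeholders).
import numpy as np
from collections import deque

rng = np.random.default_rng(8)
n_res = 200
coords = rng.uniform(0, 40, size=(n_res, 3))   # toy stand-ins for C-alpha positions
active, allosteric = 0, n_res - 1
cutoff = 6.0                                   # contact cutoff in Angstroms

def contiguous_path_exists(selected):
    selected = set(int(j) for j in selected) | {active, allosteric}
    dists = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    queue, seen = deque([active]), {active}
    while queue:                               # breadth-first search over contacts
        i = queue.popleft()
        if i == allosteric:
            return True
        for j in selected:
            if j not in seen and dists[i, j] <= cutoff:
                seen.add(j)
                queue.append(j)
    return False

trials = 200
hits = sum(contiguous_path_exists(rng.choice(n_res, size=n_res // 5, replace=False))
           for _ in range(trials))
print(f"contiguous-path success rate under the null model: {hits / trials:.2f}")
```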
Polylysine as a functional biopolymer to couple gold nanorods to tumor-tropic cells.
Borri, Claudia; Centi, Sonia; Ratto, Fulvio; Pini, Roberto
2018-05-31
The delivery of plasmonic particles, such as gold nanorods, to the tumor microenvironment has attracted much interest in biomedical optics for topical applications such as the photoacoustic imaging and photothermal ablation of cancer. However, the systemic injection of free particles still runs up against a complex array of biological barriers, such as the reticuloendothelial system, that prevent their efficient biodistribution. In this context, the notion of exploiting the inherent features of tumor-tropic cells to create a Trojan horse is emerging as a plausible alternative. We report on a convenient approach to load cationic gold nanorods into murine macrophages that exhibit chemotactic sensitivity to track gradients of inflammatory stimuli. In particular, we compare a new model of poly-L-lysine-coated particles against two alternatives of cationic moieties that we have presented elsewhere, i.e. a small quaternary ammonium compound and an arginine-rich cell-penetrating peptide. Murine macrophages that are exposed to poly-L-lysine-coated gold nanorods at a dosage of 400 µM Au for 24 h undertake efficient uptake, i.e. around 3 pg Au per cell, retain the majority of their cargo until 24 h post-treatment and maintain around 90% of their pristine viability, chemotactic and pro-inflammatory functions. With respect to previous models of cationic coatings, poly-L-lysine is a competitive solution for the preparation of biological vehicles of gold nanorods, especially for applications that may require a longer life span of the Trojan horse, say in the order of 24 h. This biopolymer combines the cost-effectiveness of small molecules with the biocompatibility and efficiency of natural peptides and thus holds potential for translational developments.
Recchia, Gabriel; Sahlgren, Magnus; Kanerva, Pentti; Jones, Michael N.
2015-01-01
Circular convolution and random permutation have each been proposed as neurally plausible binding operators capable of encoding sequential information in semantic memory. We perform several controlled comparisons of circular convolution and random permutation as means of encoding paired associates as well as encoding sequential information. Random permutations outperformed convolution with respect to the number of paired associates that can be reliably stored in a single memory trace. Performance was equal on semantic tasks when using a small corpus, but random permutations were ultimately capable of achieving superior performance due to their higher scalability to large corpora. Finally, “noisy” permutations in which units are mapped to other units arbitrarily (no one-to-one mapping) perform nearly as well as true permutations. These findings increase the neurological plausibility of random permutations and highlight their utility in vector space models of semantics. PMID:25954306
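For readers unfamiliar with the two binding operators, the following self-contained Python sketch (with an invented toy vocabulary and dimensionality) illustrates how circular convolution and random permutation each encode a pair or an ordered sequence in a single vector:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 1024                                   # dimensionality of the semantic vectors

def cconv(a, b):
    """Circular convolution binding (holographic reduced representations)."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def ccorr(a, b):
    """Approximate inverse: circular correlation recovers b from cconv(a, b)."""
    return np.real(np.fft.ifft(np.conj(np.fft.fft(a)) * np.fft.fft(b)))

# Random environment vectors for three toy words
vecs = {w: rng.normal(0, 1 / np.sqrt(d), d) for w in ["dog", "chase", "cat"]}

# Binding by convolution: store the pair (dog, cat), then probe with "dog"
trace_conv = cconv(vecs["dog"], vecs["cat"])
probe = ccorr(vecs["dog"], trace_conv)
best = max(vecs, key=lambda w: np.dot(probe, vecs[w]) /
           (np.linalg.norm(probe) * np.linalg.norm(vecs[w])))
print("convolution retrieval:", best)          # expected: "cat"

# Binding by random permutation: order is encoded by permuting one operand
perm = rng.permutation(d)
trace_perm = vecs["dog"] + vecs["cat"][perm]   # encodes "dog before cat"
inv = np.argsort(perm)                         # inverse permutation for decoding
print("permuted trace decodes 'cat':",
      np.dot(trace_perm[inv], vecs["cat"]) > np.dot(trace_perm[inv], vecs["dog"]))
```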
Multiple regimes of robust patterns between network structure and biodiversity
NASA Astrophysics Data System (ADS)
Jover, Luis F.; Flores, Cesar O.; Cortez, Michael H.; Weitz, Joshua S.
2015-12-01
Ecological networks such as plant-pollinator and host-parasite networks have structured interactions that define who interacts with whom. The structure of interactions also shapes ecological and evolutionary dynamics. Yet, there is significant ongoing debate as to whether certain structures, e.g., nestedness, contribute positively, negatively or not at all to biodiversity. We contend that examining variation in life history traits is key to disentangling the potential relationship between network structure and biodiversity. Here, we do so by analyzing a dynamic model of virus-bacteria interactions across a spectrum of network structures. Consistent with prior studies, we find plausible parameter domains exhibiting strong, positive relationships between nestedness and biodiversity. Yet, the same model can exhibit negative relationships between nestedness and biodiversity when examined in a distinct, plausible region of parameter space. We discuss steps towards identifying when network structure could, on its own, drive the resilience, sustainability, and even conservation of ecological communities.
[Scenario analysis--a method for long-term planning].
Stavem, K
2000-01-10
Scenarios are known from the film industry, as detailed descriptions of films. This has given its name to scenario analysis, a method for long-term planning based on descriptions of composite pictures of the future. This article is an introduction to the scenario method. Scenarios describe plausible, not necessarily probable, developments. They focus on problems and questions that decision makers must be aware of and prepare to deal with, and on the consequences of alternative decisions. Scenarios are used in corporate and governmental planning, and they can usefully complement traditional planning and extrapolation from past experience. The method is particularly useful in a rapidly changing world with shifting external conditions.
Western municipal water conservation policy: The case of disaggregated demand
NASA Astrophysics Data System (ADS)
Burness, Stuart; Chermak, Janie; Krause, Kate
2005-03-01
We investigate aspects of the felicity of both incentive-based and command and control policies in effecting municipal water conservation goals. When demand can be disaggregated according to uses or users, our results suggest that policy efforts be focused on the submarket wherein demand is more elastic. Under plausible consumer parameters, a household production function approach to water utilization prescribes the nature of demand elasticities in alternative uses and squares nicely with empirical results from the literature. An empirical example illustrates. Overall, given data and other informational limitations, extant institutional structures, and in situ technology, our analysis suggests a predisposition for command and control policies over incentive-based tools.
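A back-of-the-envelope illustration of the disaggregation argument, using constant-elasticity demand and purely hypothetical elasticities and quantities (not the study's estimates):

```python
# Under constant-elasticity demand Q = A * P**eps, the same price increase
# removes more water where demand is more elastic, so conservation policy is
# better aimed at the elastic submarket. All numbers are hypothetical.
def new_demand(q0, price_ratio, elasticity):
    return q0 * price_ratio ** elasticity

q_indoor, q_outdoor = 100.0, 100.0        # baseline use (e.g., kgal per household per year)
eps_indoor, eps_outdoor = -0.1, -0.7      # outdoor (discretionary) use assumed more elastic

price_ratio = 1.25                        # a 25% price increase
savings_indoor = q_indoor - new_demand(q_indoor, price_ratio, eps_indoor)
savings_outdoor = q_outdoor - new_demand(q_outdoor, price_ratio, eps_outdoor)
print(savings_indoor, savings_outdoor)    # the elastic submarket yields most of the savings
```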
On the RNG theory of turbulence
NASA Technical Reports Server (NTRS)
Lam, S. H.
1992-01-01
The Yakhot and Orszag (1986) renormalization group (RNG) theory of turbulence has generated a number of scaling law constants in reasonable quantitative agreement with experiments. The theory itself is highly mathematical, and its assumptions and approximations are not easily appreciated. The present paper reviews the RNG theory and recasts it in more conventional terms using a distinctly different viewpoint. A new formulation based on an alternative interpretation of the origin of the random force is presented, showing that the artificially introduced epsilon in the original theory is an adjustable parameter, thus offering a plausible explanation for the remarkable record of quantitative success of the so-called epsilon-expansion procedure.
Winkler, Daniel; Zischg, Jonatan; Rauch, Wolfgang
2018-01-01
For communicating urban flood risk to authorities and the public, a realistic three-dimensional visual display is frequently more suitable than detailed flood maps. Virtual reality could also serve to plan short-term flooding interventions. We introduce here an alternative approach for simulating three-dimensional flooding dynamics in large- and small-scale urban scenes by drawing on methods from computer graphics. This approach, denoted 'particle in cell', is a particle-based CFD method that aims to produce physically plausible results rather than fully accurate flow dynamics. We demonstrate the approach on the real flooding event of July 2016 in Innsbruck.
The moments of inertia of Mars
NASA Technical Reports Server (NTRS)
Bills, Bruce G.
1989-01-01
The mean moment of inertia of Mars is, at present, very poorly constrained. The generally accepted value of 0.365 MR² is obtained by assuming that the observed second degree gravity field can be decomposed into a hydrostatic oblate spheroid and a nonhydrostatic prolate spheroid with an equatorial axis of symmetry. An alternative decomposition is advocated in the present analysis. If the nonhydrostatic component is a maximally triaxial ellipsoid (intermediate moment exactly midway between greatest and least), the hydrostatic component is consistent with a mean moment of 0.345 MR². The plausibility of this decomposition is supported by statistical arguments and comparison with the earth, moon and Venus.
NASA Astrophysics Data System (ADS)
Farrell, Kathryn; Oden, J. Tinsley
2014-07-01
Coarse-grained models of atomic systems, created by aggregating groups of atoms into molecules to reduce the number of degrees of freedom, have been used for decades in important scientific and technological applications. In recent years, interest in developing a more rigorous theory for coarse graining and in assessing the predictivity of coarse-grained models has arisen. In this work, Bayesian methods for the calibration and validation of coarse-grained models of atomistic systems in thermodynamic equilibrium are developed. For specificity, only configurational models of systems in canonical ensembles are considered. Among major challenges in validating coarse-grained models are (1) the development of validation processes that lead to information essential in establishing confidence in the model's ability to predict key quantities of interest and (2), above all, the determination of the coarse-grained model itself; that is, the characterization of the molecular architecture and the choice of interaction potentials, and thus parameters, that best fit available data. The all-atom model is treated as the "ground truth," and it provides the basis with respect to which properties of the coarse-grained model are compared. This base all-atom model is characterized by an appropriate statistical mechanics framework, in this work by canonical ensembles involving only configurational energies. The all-atom model thus supplies data for Bayesian calibration and validation methods for the molecular model. To address the first challenge, we develop priors based on the maximum entropy principle and likelihood functions based on Gaussian approximations of the uncertainties in the parameter-to-observation error. To address challenge (2), we introduce the notion of model plausibilities as a means for model selection. This methodology provides a powerful approach toward constructing coarse-grained models which are most plausible for given all-atom data. We demonstrate the theory and methods through applications to representative atomic structures and we discuss extensions to the validation process for molecular models of polymer structures encountered in certain semiconductor nanomanufacturing processes. The powerful method of model plausibility as a means for selecting interaction potentials for coarse-grained models is discussed in connection with a coarse-grained hexane molecule. Discussions of how all-atom information is used to construct priors are contained in an appendix.
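The model-plausibility idea can be illustrated with a deliberately simple, hypothetical example: two candidate "coarse-grained" models for a scalar quantity of interest are each assigned a posterior plausibility proportional to their marginal likelihood (evidence) under synthetic stand-in "all-atom" data. The distributions, priors, and noise levels below are invented for illustration and are not the paper's formulation:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in for all-atom data: observations of a scalar quantity of interest
data = rng.normal(loc=1.2, scale=0.3, size=40)

def log_likelihood(theta, sigma):
    """Gaussian likelihood of the data given a model prediction theta."""
    return np.sum(-0.5 * ((data - theta) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi)))

def log_evidence(prior_lo, prior_hi, sigma, n_grid=2001):
    """Marginal likelihood under a uniform prior on theta, by grid quadrature."""
    thetas = np.linspace(prior_lo, prior_hi, n_grid)
    logL = np.array([log_likelihood(t, sigma) for t in thetas])
    m = logL.max()                       # log-sum-exp style stabilization
    return m + np.log(np.trapz(np.exp(logL - m), thetas)) - np.log(prior_hi - prior_lo)

# Two hypothetical coarse-grained models differing in prior range and noise level
logZ = {"CG-A": log_evidence(0.0, 2.0, sigma=0.3),
        "CG-B": log_evidence(-5.0, 5.0, sigma=0.6)}

# Posterior model plausibilities, assuming equal prior model probabilities
m0 = max(logZ.values())
weights = {k: np.exp(v - m0) for k, v in logZ.items()}
plaus = {k: w / sum(weights.values()) for k, w in weights.items()}
print(plaus)
```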
Friedberg, Mark W; Martsolf, Grant R; White, Chapin; Auerbach, David I; Kandrack, Ryan; Reid, Rachel O; Butcher, Emily; Yu, Hao; Hollands, Simon; Nie, Xiaoyu
2017-01-01
The Washington State legislature has recently considered several policy options to address a perceived shortage of primary care physicians in rural Washington. These policy options include opening the new Elson S. Floyd College of Medicine at Washington State University in 2017; increasing the number of primary care residency positions in the state; expanding educational loan-repayment incentives to encourage primary care physicians to practice in rural Washington; increasing Medicaid payment rates for primary care physicians in rural Washington; and encouraging the adoption of alternative models of primary care, such as medical homes and nurse-managed health centers, that reallocate work from physicians to nurse practitioners (NPs) and physician assistants (PAs). RAND Corporation researchers projected the effects that these and other policy options could have on the state's rural primary care workforce through 2025. They project a 7-percent decrease in the number of rural primary care physicians and a 5-percent decrease in the number of urban ones. None of the policy options modeled in this study, on its own, will offset this expected decrease by relying on physicians alone. However, combinations of these strategies or partial reallocation of rural primary care services to NPs and PAs via such new practice models as medical homes and nurse-managed health centers are plausible options for preserving the overall availability of primary care services in rural Washington through 2025.
Meta-analysis of two studies in the presence of heterogeneity with applications in rare diseases.
Friede, Tim; Röver, Christian; Wandel, Simon; Neuenschwander, Beat
2017-07-01
Random-effects meta-analyses are used to combine evidence of treatment effects from multiple studies. Since treatment effects may vary across trials due to differences in study characteristics, heterogeneity in treatment effects between studies must be accounted for to achieve valid inference. The standard model for random-effects meta-analysis assumes approximately normal effect estimates and a normal random-effects model. However, standard methods based on this model ignore the uncertainty in estimating the between-trial heterogeneity. In the special setting of only two studies and in the presence of heterogeneity, we investigate here alternatives such as the Hartung-Knapp-Sidik-Jonkman method (HKSJ), the modified Knapp-Hartung method (mKH, a variation of the HKSJ method) and Bayesian random-effects meta-analyses with priors covering plausible heterogeneity values; R code to reproduce the examples is presented in an appendix. The properties of these methods are assessed by applying them to five examples from various rare diseases and by a simulation study. Whereas the standard method based on normal quantiles has poor coverage, the HKSJ and mKH generally lead to very long, and therefore inconclusive, confidence intervals. The Bayesian intervals on the whole show satisfying properties and offer a reasonable compromise between these two extremes. © 2016 The Authors. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
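The article's appendix provides R code to reproduce its examples; as an independent illustration, here is a minimal Python sketch of the Hartung-Knapp-Sidik-Jonkman (HKSJ) interval for the two-study case, with invented effect estimates and standard errors:

```python
import numpy as np
from scipy import stats

def hksj_two_studies(y, se, alpha=0.05):
    """Hartung-Knapp-Sidik-Jonkman random-effects meta-analysis (sketch)."""
    k = len(y)
    v = se ** 2
    # DerSimonian-Laird estimate of the between-study variance tau^2
    w_fixed = 1.0 / v
    mu_fixed = np.sum(w_fixed * y) / np.sum(w_fixed)
    Q = np.sum(w_fixed * (y - mu_fixed) ** 2)
    c = np.sum(w_fixed) - np.sum(w_fixed ** 2) / np.sum(w_fixed)
    tau2 = max(0.0, (Q - (k - 1)) / c)
    # Random-effects weights and pooled estimate
    w = 1.0 / (v + tau2)
    mu = np.sum(w * y) / np.sum(w)
    # HKSJ variance estimator and t-based interval with k-1 degrees of freedom
    q = np.sum(w * (y - mu) ** 2) / (k - 1)
    se_mu = np.sqrt(q / np.sum(w))
    t = stats.t.ppf(1 - alpha / 2, df=k - 1)
    return mu, (mu - t * se_mu, mu + t * se_mu)

# Hypothetical log-odds-ratio estimates from two small trials
est, ci = hksj_two_studies(np.array([-0.35, -0.10]), np.array([0.18, 0.22]))
print(est, ci)
```

With only two studies the t quantile has a single degree of freedom, which is one reason HKSJ intervals tend to be very long in this setting, as noted above.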
Peters, Susan; Kromhout, Hans; Portengen, Lützen; Olsson, Ann; Kendzia, Benjamin; Vincent, Raymond; Savary, Barbara; Lavoué, Jérôme; Cavallo, Domenico; Cattaneo, Andrea; Mirabelli, Dario; Plato, Nils; Fevotte, Joelle; Pesch, Beate; Brüning, Thomas; Straif, Kurt; Vermeulen, Roel
2013-01-01
We describe the elaboration and sensitivity analyses of a quantitative job-exposure matrix (SYN-JEM) for respirable crystalline silica (RCS). The aim was to gain insight into the robustness of the SYN-JEM RCS estimates based on critical decisions taken in the elaboration process. SYN-JEM for RCS exposure consists of three axes (job, region, and year) based on estimates derived from a previously developed statistical model. To elaborate SYN-JEM, several decisions were taken: i.e. the application of (i) a single time trend; (ii) region-specific adjustments in RCS exposure; and (iii) a prior job-specific exposure level (by the semi-quantitative DOM-JEM), with an override of 0 mg/m³ for jobs a priori defined as non-exposed. Furthermore, we assumed that exposure levels reached a ceiling in 1960 and remained constant prior to this date. We applied SYN-JEM to the occupational histories of subjects from a large international pooled community-based case-control study. Cumulative exposure levels derived with SYN-JEM were compared with those from alternative models, described by Pearson correlation (Rp) and differences in unit of exposure (mg/m³-year). Alternative models concerned changes in application of job- and region-specific estimates and exposure ceiling, and omitting the a priori exposure ranking. Cumulative exposure levels for the study subjects ranged from 0.01 to 60 mg/m³-years, with a median of 1.76 mg/m³-years. Exposure levels derived from SYN-JEM and alternative models were overall highly correlated (Rp > 0.90), although somewhat lower when omitting the region estimate (Rp = 0.80) or not taking into account the assigned semi-quantitative exposure level (Rp = 0.65). Modification of the time trend (i.e. exposure ceiling at 1950 or 1970, or assuming a decline before 1960) caused the largest changes in absolute exposure levels (26-33% difference), but without changing the relative ranking (Rp = 0.99). Exposure estimates derived from SYN-JEM appeared to be plausible compared with (historical) levels described in the literature. Decisions taken in the development of SYN-JEM did not critically change the cumulative exposure levels. The influence of region-specific estimates needs to be explored in future risk analyses.
NASA Astrophysics Data System (ADS)
Renssen, Hans; Mairesse, Aurélien; Goosse, Hugues; Mathiot, Pierre; Heiri, Oliver; Roche, Didier M.; Nisancioglu, Kerim H.; Valdes, Paul J.
2016-04-01
The Younger Dryas cooling event disrupted the overall warming trend in the North Atlantic region during the last deglaciation. Climate change during the Younger Dryas was abrupt, and thus provides insights into the sensitivity of the climate system to perturbations. The sudden Younger Dryas cooling has traditionally been attributed to a shut-down of the Atlantic meridional overturning circulation by meltwater discharges. However, alternative explanations such as strong negative radiative forcing and a shift in atmospheric circulation have also been offered. In this study we investigate the importance of these different forcings in coupled climate model experiments constrained by data assimilation. We find that the Younger Dryas climate signal as registered in proxy evidence is best simulated using a combination of processes: a weakened Atlantic meridional overturning circulation, moderate negative radiative forcing and an altered atmospheric circulation. We conclude that none of the individual mechanisms alone provide a plausible explanation for the Younger Dryas cold period. We suggest that the triggers for abrupt climate changes like the Younger Dryas are more complex than suggested so far, and that studies on the response of the climate system to perturbations should account for this complexity. Reference: Renssen, H. et al. (2015) Multiple causes of the Younger Dryas cold period. Nature Geoscience 8, 946-949.
Harnessing graphical structure in Markov chain Monte Carlo learning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stolorz, P.E.; Chew P.C.
1996-12-31
The Monte Carlo method is recognized as a useful tool in learning and probabilistic inference methods common to many datamining problems. Generalized Hidden Markov Models and Bayes nets are especially popular applications. However, the presence of multiple modes in many relevant integrands and summands often renders the method slow and cumbersome. Recent mean field alternatives designed to speed things up have been inspired by experience gleaned from physics. The current work adopts an approach very similar to this in spirit, but focusses instead upon dynamic programming notions as a basis for producing systematic Monte Carlo improvements. The idea is to approximate a given model by a dynamic programming-style decomposition, which then forms a scaffold upon which to build successively more accurate Monte Carlo approximations. Dynamic programming ideas alone fail to account for non-local structure, while standard Monte Carlo methods essentially ignore all structure. However, suitably-crafted hybrids can successfully exploit the strengths of each method, resulting in algorithms that combine speed with accuracy. The approach relies on the presence of significant "local" information in the problem at hand. This turns out to be a plausible assumption for many important applications. Example calculations are presented, and the overall strengths and weaknesses of the approach are discussed.
Arenavirus budding resulting from viral-protein-associated cell membrane curvature
Schley, David; Whittaker, Robert J.; Neuman, Benjamin W.
2013-01-01
Viral replication occurs within cells, with release (and onward infection) primarily achieved through two alternative mechanisms: lysis, in which virions emerge as the infected cell dies and bursts open; or budding, in which virions emerge gradually from a still living cell by appropriating a small part of the cell membrane. Virus budding is a poorly understood process that challenges current models of vesicle formation. Here, a plausible mechanism for arenavirus budding is presented, building on recent evidence that viral proteins embed in the inner lipid layer of the cell membrane. Experimental results confirm that viral protein is associated with increased membrane curvature, whereas a mathematical model is used to show that localized increases in curvature alone are sufficient to generate viral buds. The magnitude of the protein-induced curvature is calculated from the size of the amphipathic region hypothetically removed from the inner membrane as a result of translation, with a change in membrane stiffness estimated from observed differences in virion deformation as a result of protein depletion. Numerical results are based on experimental data and estimates for three arenaviruses, but the mechanisms described are more broadly applicable. The hypothesized mechanism is shown to be sufficient to generate spontaneous budding that matches well both qualitatively and quantitatively with experimental observations. PMID:23864502
Balancing selection in species with separate sexes: insights from Fisher's geometric model.
Connallon, Tim; Clark, Andrew G
2014-07-01
How common is balancing selection, and what fraction of phenotypic variance is attributable to balanced polymorphisms? Despite decades of research, answers to these questions remain elusive. Moreover, there is no clear theoretical prediction about the frequency with which balancing selection is expected to arise within a population. Here, we use an extension of Fisher's geometric model of adaptation to predict the probability of balancing selection in a population with separate sexes, wherein polymorphism is potentially maintained by two forms of balancing selection: (1) heterozygote advantage, where heterozygous individuals at a locus have higher fitness than homozygous individuals, and (2) sexually antagonistic selection (a.k.a. intralocus sexual conflict), where the fitness of each sex is maximized by different genotypes at a locus. We show that balancing selection is common under biologically plausible conditions and that sex differences in selection or sex-by-genotype effects of mutations can each increase opportunities for balancing selection. Although heterozygote advantage and sexual antagonism represent alternative mechanisms for maintaining polymorphism, they mutually exist along a balancing selection continuum that depends on population and sex-specific parameters of selection and mutation. Sexual antagonism is the dominant mode of balancing selection across most of this continuum. Copyright © 2014 by the Genetics Society of America.
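As a toy illustration of how such probabilities can be computed, the following Monte Carlo sketch estimates the fraction of random mutations producing heterozygote advantage in a simple version of Fisher's geometric model; the Gaussian fitness function, additive (codominant) mutational effects, and all parameter values are illustrative assumptions rather than the paper's analysis, which additionally treats sex-specific selection:

```python
import numpy as np

rng = np.random.default_rng(3)

def prob_heterozygote_advantage(n_traits=2, z0=1.0, mut_sd=0.5, n_mut=100_000):
    """Monte Carlo estimate of the fraction of random mutations that create
    heterozygote advantage in a toy Fisher's geometric model.

    Assumptions (not from the paper): a single optimum at the origin, Gaussian
    fitness w = exp(-||phenotype||^2 / 2), and additive mutations so that the
    heterozygote expresses half of the mutational effect."""
    wild = np.zeros(n_traits)
    wild[0] = z0                                    # wild-type displaced from the optimum
    m = rng.normal(0.0, mut_sd, size=(n_mut, n_traits))

    def w(phen):
        return np.exp(-0.5 * np.sum(phen ** 2, axis=-1))

    w_AA = w(wild)                                  # wild-type homozygote
    w_Aa = w(wild + 0.5 * m)                        # heterozygote
    w_aa = w(wild + m)                              # mutant homozygote
    return np.mean((w_Aa > w_AA) & (w_Aa > w_aa))

print(prob_heterozygote_advantage())
```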
A rationale for long-lived quarks and leptons at the LHC: low energy flavour theory
NASA Astrophysics Data System (ADS)
Éboli, O. J. P.; Savoy, C. A.; Funchal, R. Zukanovich
2012-02-01
In the framework of gauged flavour symmetries, new fermions in parity symmetric representations of the standard model are generically needed for the compensation of mixed anomalies. The key point is that their masses are also protected by flavour symmetries and some of them are expected to lie way below the flavour symmetry breaking scale(s), which has to occur many orders of magnitude above the electroweak scale to be compatible with the available data from flavour changing neutral currents and CP violation experiments. We argue that, actually, some of these fermions would plausibly get masses within the LHC range. If they are taken to be heavy quarks and leptons, in (bi)-fundamental representations of the standard model symmetries, their mixings with the light ones are strongly constrained to be very small by electroweak precision data. The alternative chosen here is to exactly forbid such mixings by breaking of flavour symmetries into an exact discrete symmetry, the so-called proton-hexality, primarily suggested to avoid proton decay. As a consequence of the large value needed for the flavour breaking scale, those heavy particles are long-lived and rather appropriate for the current and future searches at the LHC for quasi-stable hadrons and leptons. In fact, the LHC experiments have already started to look for them.
Matamales, Miriam
2012-01-01
Synaptic activity can trigger gene expression programs that are required for the stable change of neuronal properties, a process that is essential for learning and memory. Currently, it is still unclear how the stimulation of dendritic synapses can be coupled to transcription in the nucleus in a timely way given that large distances can separate these two cellular compartments. Although several mechanisms have been proposed to explain long distance communication between synapses and the nucleus, the possible co-existence of these models and their relevance in physiological conditions remain elusive. One model suggests that synaptic activation triggers the translocation to the nucleus of certain transcription regulators localised at postsynaptic sites that function as synapto-nuclear messengers. Alternatively, it has been hypothesised that synaptic activity initiates propagating regenerative intracellular calcium waves that spread through dendrites into the nucleus where nuclear transcription machinery is thereby regulated. It has also been postulated that membrane depolarisation of voltage-gated calcium channels on the somatic membrane is sufficient to increase intracellular calcium concentration and activate transcription without the need for transported signals from distant synapses. Here I provide a critical overview of the suggested mechanisms for coupling synaptic stimulation to transcription, the underlying assumptions behind them and their plausible physiological significance. PMID:24327840
NASA Astrophysics Data System (ADS)
Mauritsen, T.; Stevens, B. B.
2015-12-01
Current climate models exhibit equilibrium climate sensitivities to a doubling of CO2 of 2.0-4.6 K and a weak increase of global mean precipitation. But inferences from the observational record place climate sensitivity near the lower end of the range, and indicate that models underestimate changes in certain aspects of the hydrological cycle under warming. Here we show that both these discrepancies can be explained by a controversial hypothesis of missing negative tropical feedbacks in climate models, known as the iris-effect: Expanding dry and clear regions in a warming climate yield a negative feedback as more infrared radiation can escape to space through this metaphorical opening iris. At the same time, the additional infrared cooling of the atmosphere must be balanced by latent heat release, thereby accelerating the hydrological cycle. Alternative suggestions of too little aerosol cooling, missing volcanic eruptions, or insufficient ocean heat uptake in models may explain a slow observed transient warming, but are not able to explain the observed enhanced hydrological cycle. We propose that a temperature-dependency of the extent to which precipitating convective clouds cluster or aggregate into larger clouds constitutes a plausible physical mechanism for the iris-effect. On a large scale, organized convective states are drier than disorganized convection and therefore radiate more in the longwave to space. Thus, if a warmer atmosphere can host more organized convection, then this represents one possible mechanism for an iris-effect. The challenges in modeling, understanding and possibly quantifying a temperature-dependency of convection are, however, substantial.
Encke, Jörg; Hemmert, Werner
2018-01-01
The mammalian auditory system is able to extract temporal and spectral features from sound signals at the two ears. One important cue for the localization of low-frequency sound sources in the horizontal plane is the interaural time difference (ITD), which is first analyzed in the medial superior olive (MSO) in the brainstem. Neural recordings of ITD tuning curves at various stages along the auditory pathway suggest that ITDs in the mammalian brainstem are not represented in the form of a Jeffress-type place code. An alternative is the hemispheric opponent-channel code, according to which ITDs are encoded as the difference in the responses of the MSO nuclei in the two hemispheres. In this study, we present a physiologically plausible, spiking neuron network model of the mammalian MSO circuit and apply two different methods of extracting ITDs from arbitrary sound signals. The network model is driven by a functional model of the auditory periphery and physiological models of the cochlear nucleus and the MSO. Using a linear opponent-channel decoder, we show that the network is able to detect changes in ITD with a precision down to 10 μs and that the sensitivity of the decoder depends on the slope of the ITD-rate functions. A second approach uses an artificial neuronal network to predict ITDs directly from the spiking output of the MSO and ANF model. Using this predictor, we show that the MSO-network is able to reliably encode static and time-dependent ITDs over a large frequency range, also for complex signals like speech.
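A stripped-down sketch of the opponent-channel idea (not the authors' network model): two mirrored, sigmoidal hemispheric ITD-rate functions generate Poisson spike counts, and a linear decoder maps the left-minus-right count difference back to ITD. The tuning-curve parameters, population size, and counting window are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)
R_MAX, SLOPE, T, N_POP = 100.0, 0.01, 0.05, 250   # Hz, 1/us, s, neurons per hemisphere

def hemispheric_rates(itd_us):
    """Mirror-image sigmoidal ITD-rate functions for the two MSO hemispheres."""
    left = R_MAX / (1.0 + np.exp(-SLOPE * itd_us))
    right = R_MAX / (1.0 + np.exp(SLOPE * itd_us))
    return left, right

def population_counts(itd_us):
    """Pooled Poisson spike counts of each hemisphere over one counting window."""
    left, right = hemispheric_rates(itd_us)
    return rng.poisson(left * T * N_POP), rng.poisson(right * T * N_POP)

# Train a linear opponent-channel decoder: ITD ~ a * (left - right) + b
train_itds = np.repeat(np.linspace(-150, 150, 31), 50)
diffs = np.array([np.subtract(*population_counts(itd)) for itd in train_itds])
a, b = np.polyfit(diffs, train_itds, 1)

# Decode a new stimulus with a true ITD of +50 microseconds
estimate = a * np.subtract(*population_counts(50.0)) + b
print(f"decoded ITD: {estimate:.1f} us (true: 50 us)")
```

In this toy version, as in the abstract, the decoder's precision depends on the slope of the ITD-rate functions and on how much spiking noise is pooled away.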
Simulation-based sensitivity analysis for non-ignorably missing data.
Yin, Peng; Shi, Jian Q
2017-01-01
Sensitivity analysis is popular for dealing with missing-data problems, particularly for non-ignorable missingness, where a full-likelihood method cannot be adopted. It analyses how sensitively the conclusions (output) depend on assumptions or parameters (input) about the missing data, i.e. the missing-data mechanism. We refer to models subject to this uncertainty as sensitivity models. To make conventional sensitivity analysis more useful in practice, we need to define simple and interpretable statistical quantities to assess the sensitivity models and support evidence-based analysis. We propose a novel approach in this paper that investigates the plausibility of each missing-data mechanism assumption by comparing datasets simulated from various MNAR models with the observed data non-parametrically, using K-nearest-neighbour distances. Some asymptotic theory is also provided. A key step of this method is to apply a plausibility evaluation system to each sensitivity parameter, selecting plausible values and rejecting unlikely ones, instead of considering all proposed values of the sensitivity parameters as in conventional sensitivity analysis. The method is generic and has been applied successfully to several specific models in this paper, including a meta-analysis model with publication bias, the analysis of incomplete longitudinal data, and mean estimation with non-ignorable missing data.
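A toy sketch of the screening idea (not the authors' exact procedure): data are simulated under a range of candidate sensitivity-parameter values for a simple MNAR mechanism, and each candidate is scored by the mean k-nearest-neighbour distance between observed and simulated values, with small discrepancies marking plausible values. The missingness model, parameter grid, and sample sizes are invented for illustration:

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(5)

def simulate_observed(delta, n=3000):
    """Outcomes y ~ N(0, 1) with MNAR missingness: P(observed) = logistic(1 - delta*y),
    so larger outcomes go missing more often when delta > 0. Returns observed values."""
    y = rng.normal(size=n)
    p_obs = 1.0 / (1.0 + np.exp(-(1.0 - delta * y)))
    return y[rng.uniform(size=n) < p_obs]

def knn_discrepancy(observed, simulated, k=5, m=1000):
    """Mean distance from observed points to their k-th nearest simulated neighbour,
    using an equal-sized simulated subsample so sample size does not distort the score."""
    sim = rng.choice(simulated, size=m, replace=False)
    d, _ = cKDTree(sim[:, None]).query(observed[:, None], k=k)
    return d[:, -1].mean()

# "Real" data generated under an unknown sensitivity parameter delta = 2.0
observed = simulate_observed(delta=2.0)

# Plausibility screen: smaller discrepancy marks a more plausible parameter value
for delta in [0.0, 1.0, 2.0, 3.0, 4.0]:
    disc = np.mean([knn_discrepancy(observed, simulate_observed(delta)) for _ in range(20)])
    print(f"delta = {delta:3.1f}   mean kNN discrepancy = {disc:.4f}")
```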
ERIC Educational Resources Information Center
Gunnoe, Marjorie Lindner; Mariner, Carrie Lea
Researchers who employ contextual models of parenting contend that it is not spanking per se, but rather the context in which spanking occurs and the meanings children ascribe to spanking, that predict child outcomes. This study proposed two plausible meanings that children may ascribe to spanking--a legitimate expression of parental authority or…
ERIC Educational Resources Information Center
Dougherty, Michael R.; Franco-Watkins, Ana M.; Thomas, Rick
2008-01-01
The theory of probabilistic mental models (PMM; G. Gigerenzer, U. Hoffrage, & H. Kleinbolting, 1991) has had a major influence on the field of judgment and decision making, with the most recent important modifications to PMM theory being the identification of several fast and frugal heuristics (G. Gigerenzer & D. G. Goldstein, 1996). These…
ERIC Educational Resources Information Center
Mavritsaki, Eirini; Heinke, Dietmar; Allen, Harriet; Deco, Gustavo; Humphreys, Glyn W.
2011-01-01
We present the case for a role of biologically plausible neural network modeling in bridging the gap between physiology and behavior. We argue that spiking-level networks can allow "vertical" translation between physiological properties of neural systems and emergent "whole-system" performance--enabling psychological results to be simulated from…
Interval Estimation of Revision Effect on Scale Reliability via Covariance Structure Modeling
ERIC Educational Resources Information Center
Raykov, Tenko
2009-01-01
A didactic discussion of a procedure for interval estimation of change in scale reliability due to revision is provided, which is developed within the framework of covariance structure modeling. The method yields ranges of plausible values for the population gain or loss in reliability of unidimensional composites, which results from deletion or…
Plausibility and the Theoreticians' Regress: Constructing the evolutionary fate of stars
NASA Astrophysics Data System (ADS)
Ipe, Alex Ike
2002-10-01
This project presents a case-study of a scientific controversy that occurred in theoretical astrophysics nearly seventy years ago following the conceptual discovery of a novel phenomenon relating to the evolution and structure of stellar matter, known as the limiting mass. The ensuing debate between the author of the finding, Subrahmanyan Chandrasekhar, and his primary critic, Arthur Stanley Eddington, witnessed both scientists trying to convince one another, as well as the astrophysical community, that their respective positions on the issue were correct. Since there was no independent criterion—that is, no observational evidence—at the time of the dispute that could have been drawn upon to test the validity of the limiting mass concept, a logical, objective resolution to the controversy was not possible. In this respect, I argue that the dynamics of the Chandrasekhar-Eddington debate succinctly resonates with Kennefick's notion of the Theoreticians' Regress. However, whereas this model predicts that such a regress can be broken if both parties in a dispute come to agree on who was in error and collaborate on a calculation whose technical foundation can be agreed to, I argue that a more pragmatic path by which the Theoreticians' Regress is broken is when one side in a dispute is able to construct its argument as being more plausible than that of its opponent, and is so successful in doing so that its opposition is subsequently forced to withdraw from the debate. In order to adequately deal with the construction of plausibility in the context of scientific controversies, I draw upon Harvey's Plausibility Model as well as Pickering's work on the role socio-cultural factors play in the resolution of intellectual disputes. It is believed that the ideas embedded in these social-relativist-constructivist perspectives provide the most parsimonious explanation as to the reasons for the genesis and ultimate closure of this particular scientific controversy.
Wilson, Tamara; Sleeter, Benjamin M.; Sherba, Jason T.; Cameron, Dick
2015-01-01
Human land use will increasingly contribute to habitat loss and water shortages in California, given future population projections and associated land-use demand. Understanding how land-use change may impact future water use and where existing protected areas may be threatened by land-use conversion will be important if effective, sustainable management approaches are to be implemented. We used a state-and-transition simulation modeling (STSM) framework to simulate spatially-explicit (1 km2) historical (1992-2010) and future (2011-2060) land-use change for 52 California counties within Mediterranean California ecoregions. Historical land use and land cover (LULC) change estimates were derived from the Farmland Mapping and Monitoring Program dataset and attributed with county-level agricultural water-use data from the California Department of Water Resources. Five future alternative land-use scenarios were developed and modeled using the historical land-use change estimates and land-use projections based on the Intergovernmental Panel on Climate Change's Special Report on Emission Scenarios A2 and B1 scenarios. Spatial land-use transition outputs across scenarios were combined to reveal scenario agreement and a land conversion threat index was developed to evaluate vulnerability of existing protected areas to proximal land conversion. By 2060, highest LULC conversion threats were projected to impact nearly 10,500 km2 of land area within 10 km of a protected area boundary and over 18,000 km2 of land area within essential habitat connectivity areas. Agricultural water use declined across all scenarios perpetuating historical drought-related land use from 2008-2010 and trends of annual cropland conversion into perennial woody crops. STSM is useful in analyzing land-use related impacts on water resource use as well as potential threats to existing protected land. Exploring a range of alternative, yet plausible, LULC change impacts will help to better inform resource management and mitigation strategies.
Howard, Ian S.; Messum, Piers
2014-01-01
Words are made up of speech sounds. Almost all accounts of child speech development assume that children learn the pronunciation of first language (L1) speech sounds by imitation, most claiming that the child performs some kind of auditory matching to the elements of ambient speech. However, there is evidence to support an alternative account and we investigate the non-imitative child behavior and well-attested caregiver behavior that this account posits using Elija, a computational model of an infant. Through unsupervised active learning, Elija began by discovering motor patterns, which produced sounds. In separate interaction experiments, native speakers of English, French and German then played the role of his caregiver. In their first interactions with Elija, they were allowed to respond to his sounds if they felt this was natural. We analyzed the interactions through phonemic transcriptions of the caregivers' utterances and found that they interpreted his output within the framework of their native languages. Their form of response was almost always a reformulation of Elija's utterance into well-formed sounds of L1. Elija retained those motor patterns to which a caregiver responded and formed associations between his motor pattern and the response it provoked. Thus in a second phase of interaction, he was able to parse input utterances in terms of the caregiver responses he had heard previously, and respond using his associated motor patterns. This capacity enabled the caregivers to teach Elija to pronounce some simple words in their native languages, by his serial imitation of the words' component speech sounds. Overall, our results demonstrate that the natural responses and behaviors of human subjects to infant-like vocalizations can take a computational model from a biologically plausible initial state through to word pronunciation. This provides support for an alternative to current auditory matching hypotheses for how children learn to pronounce. PMID:25333740
NASA Astrophysics Data System (ADS)
Miller, B. W.; Schuurman, G. W.; Symstad, A.; Fisichelli, N. A.; Frid, L.
2017-12-01
Managing natural resources in this era of anthropogenic climate change is fraught with uncertainties around how ecosystems will respond to management actions and a changing climate. Scenario planning (oftentimes implemented as a qualitative, participatory exercise for exploring multiple possible futures) is a valuable tool for addressing this challenge. However, this approach may face limits in resolving responses of complex systems to altered climate and management conditions, and may not provide the scientific credibility that managers often require to support actions that depart from current practice. Quantitative information on projected climate changes and ecological responses is rapidly growing and evolving, but this information is often not at a scale or in a form that is 'actionable' for resource managers. We describe a project that sought to create usable information for resource managers in the northern Great Plains by combining qualitative and quantitative methods. In particular, researchers, resource managers, and climate adaptation specialists co-produced a simulation model in conjunction with scenario planning workshops to inform natural resource management in southwest South Dakota. Scenario planning for a wide range of resources facilitated open-minded thinking about a set of divergent and challenging, yet relevant and plausible, climate scenarios and management alternatives that could be implemented in the simulation. With stakeholder input throughout the process, we built a simulation of key vegetation types, grazing, exotic plants, fire, and the effects of climate and management on rangeland productivity and composition. By simulating multiple land management jurisdictions, climate scenarios, and management alternatives, the model highlighted important tradeoffs between herd sizes and vegetation composition, and between the short- versus long-term costs of invasive species management. It also identified impactful uncertainties related to the effects of fire and grazing on vegetation. Ultimately, this integrative and iterative approach yielded counter-intuitive and surprising findings, and resulted in a more tractable set of possible futures for resource management planning.
Pilgrims sailing the Titanic: plausibility effects on memory for misinformation.
Hinze, Scott R; Slaten, Daniel G; Horton, William S; Jenkins, Ryan; Rapp, David N
2014-02-01
People rely on information they read even when it is inaccurate (Marsh, Meade, & Roediger, Journal of Memory and Language 49:519-536, 2003), but how ubiquitous is this phenomenon? In two experiments, we investigated whether this tendency to encode and rely on inaccuracies from text might be influenced by the plausibility of misinformation. In Experiment 1, we presented stories containing inaccurate plausible statements (e.g., "The Pilgrims' ship was the Godspeed"), inaccurate implausible statements (e.g., . . . the Titanic), or accurate statements (e.g., . . . the Mayflower). On a subsequent test of general knowledge, participants relied significantly less on implausible than on plausible inaccuracies from the texts but continued to rely on accurate information. In Experiment 2, we replicated these results with the addition of a think-aloud procedure to elicit information about readers' noticing and evaluative processes for plausible and implausible misinformation. Participants indicated more skepticism and less acceptance of implausible than of plausible inaccuracies. In contrast, they often failed to notice, completely ignored, and at times even explicitly accepted the misinformation provided by plausible lures. These results offer insight into the conditions under which reliance on inaccurate information occurs and suggest potential mechanisms that may underlie reported misinformation effects.
Phthalates impact human health: Epidemiological evidences and plausible mechanism of action.
Benjamin, Sailas; Masai, Eiji; Kamimura, Naofumi; Takahashi, Kenji; Anderson, Robin C; Faisal, Panichikkal Abdul
2017-10-15
Disregarding the rising alarm over the hazardous nature of various phthalates and their metabolites, the widespread usage of phthalates as plasticizers in plastics and as additives in innumerable consumer products continues due to their low cost, attractive properties, and lack of suitable alternatives. Globally, in silico computational, in vitro mechanistic, in vivo preclinical and limited clinical or epidemiological human studies showed that over a dozen phthalates and their metabolites ingested passively by man from the general environment, foods, drinks, breathing air, and routine household products cause various dysfunctions. Thus, this review addresses the health hazards posed by phthalates to children and adolescents, epigenetic modulation, reproductive toxicity in women and men; insulin resistance and type II diabetes; overweight and obesity, skeletal anomalies, allergy and asthma, cancer, etc., coupled with a description of the major phthalates and their general uses, phthalate exposure routes, biomonitoring and risk assessment, a special account of endocrine disruption; and finally, a plausible molecular cross-talk with a unique mechanism of action. This clinically focused comprehensive review on the hazards of phthalates would benefit the general population, academia, scientists, clinicians, environmentalists, and law or policy makers in deciding whether the usage of phthalates should continue without deceleration, be regulated by law, or be phased out from earth forever. Copyright © 2017. Published by Elsevier B.V.
The Effect of Complementary and Alternative Medicine on Subfertile Women with In Vitro Fertilization
Zhang, Yuehui; Fu, Yiman; Han, Fengjuan; Kuang, Hongying; Hu, Min; Wu, Xiaoke
2014-01-01
About 10–15% of couples have difficulty conceiving at some point in their reproductive lives and thus have to seek specialist fertility care. One of the most commonly used treatment options is in vitro fertilization (IVF) and its related expansions. Despite many recent technological advances, the average IVF live birth rate per single initiated cycle is still only 30%. Consequently, there is a need to find new therapies to improve the efficiency of the procedure. Many patients have turned to complementary and alternative medical (CAM) treatments as an adjuvant therapy to improve their chances of success when they undergo IVF treatment. At present, several CAM methods have been used in infertile couples undergoing IVF, with apparent benefits. However, biologically plausible mechanisms of the action of CAM for IVF have not been systematically reviewed. This review briefly summarizes the current progress of the impact of CAM on the outcomes of IVF and introduces the mechanisms. PMID:24527047
MinCD cell division proteins form alternating co-polymeric cytomotive filaments
Ghosal, Debnath; Trambaiolo, Daniel; Amos, Linda A.; Löwe, Jan
2014-01-01
During bacterial cell division, filaments of the tubulin-like protein FtsZ assemble at midcell to form the cytokinetic Z-ring. Its positioning is regulated by the oscillation of MinCDE proteins. MinC is activated by MinD through an unknown mechanism and prevents Z-ring assembly anywhere but midcell. Here, using X-ray crystallography, electron microscopy and in vivo analyses, we show that MinD activates MinC by forming a new class of alternating copolymeric filaments that show similarity to eukaryotic septin filaments. A non-polymerising mutation in MinD causes aberrant cell division in E. coli. MinCD copolymers bind to membrane, interact with FtsZ, and are disassembled by MinE. Imaging a functional msfGFP-MinC fusion protein in MinE-deleted cells reveals filamentous structures. EM imaging of our reconstitution of the MinCD-FtsZ interaction on liposome surfaces reveals a plausible mechanism for regulation of FtsZ ring assembly by MinCD copolymers. PMID:25500731
Toward a Responsibility-Catering Prioritarian Ethical Theory of Risk.
Wikman-Svahn, Per; Lindblom, Lars
2018-03-05
Standard tools used in societal risk management such as probabilistic risk analysis or cost-benefit analysis typically define risks in terms of only probabilities and consequences and assume a utilitarian approach to ethics that aims to maximize expected utility. The philosopher Carl F. Cranor has argued against this view by devising a list of plausible aspects of the acceptability of risks that points towards a non-consequentialist ethical theory of societal risk management. This paper revisits Cranor's list to argue that the alternative ethical theory responsibility-catering prioritarianism can accommodate the aspects identified by Cranor and that the elements in the list can be used to inform the details of how to view risks within this theory. An approach towards operationalizing the theory is proposed based on a prioritarian social welfare function that operates on responsibility-adjusted utilities. A responsibility-catering prioritarian ethical approach towards managing risks is a promising alternative to standard tools such as cost-benefit analysis.
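To make the proposed operationalization concrete, here is a deliberately small Python sketch of a prioritarian social welfare function applied to responsibility-adjusted utilities; the adjustment rule, the concave transform, and all numbers are hypothetical choices for illustration, not the authors' formulation:

```python
import numpy as np

def rc_prioritarian_swf(u, resp, u_ref=1.0, gamma=0.5):
    """Toy responsibility-catering prioritarian social welfare function.

    Hypothetical operationalization (not from the paper): the part of a person's
    utility shortfall below u_ref that they are themselves responsible for
    (fraction `resp`) is disregarded, and the resulting responsibility-adjusted
    utilities are aggregated with a strictly concave transform g(u) = u**gamma,
    which gives extra weight to benefits accruing to the worse-off."""
    u, resp = np.asarray(u, float), np.asarray(resp, float)
    adjusted = u + resp * np.maximum(u_ref - u, 0.0)
    return np.sum(adjusted ** gamma)

# Two hypothetical risk-policy options and the utilities of three affected groups
u_option_a = [0.9, 0.6, 0.3]
u_option_b = [0.8, 0.7, 0.5]
resp       = [0.0, 0.5, 0.0]    # fraction of each group's risk it knowingly chose itself

print(rc_prioritarian_swf(u_option_a, resp), rc_prioritarian_swf(u_option_b, resp))
```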
NASA Astrophysics Data System (ADS)
Farrow, Scott; Scott, Michael
2013-05-01
Floods are risky events ranging from small to catastrophic. Although expected flood damages are frequently used for economic policy analysis, alternative measures such as option price (OP) and cumulative prospect value exist. The empirical magnitude of these measures, whose theoretical preference is ambiguous, is investigated using case-study data from Baltimore City. The base-case OP measure increases mean willingness to pay over the expected damage value by about 3%, a figure that increases with greater risk aversion, is reduced by greater wealth, and is only slightly altered by higher limits of integration. The base measure based on cumulative prospect theory is about 46% less than expected damages, with estimates declining when alternative parameters are used. The method of aggregation is shown to be important in the cumulative prospect case, which can lead to an estimate up to 41% larger than expected damages. Expected damages remain a plausible, and the most easily computed, measure for analysts.
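The gap between expected damages and option price can be made concrete with a small numerical sketch under constant relative risk aversion; the wealth, damage, probability, and risk-aversion figures below are hypothetical and chosen only to make the wedge visible, so the resulting percentage differs from the study's roughly 3%:

```python
import numpy as np
from scipy.optimize import brentq

# Hypothetical two-state flood lottery: damage D occurs with probability p
W, D, p, rho = 200_000.0, 50_000.0, 0.02, 2.0   # wealth, damage, probability, CRRA coefficient

def u(c):
    """Constant-relative-risk-aversion utility."""
    return np.log(c) if rho == 1 else c ** (1 - rho) / (1 - rho)

expected_damages = p * D
# Option price: the certain payment that leaves expected utility unchanged
eu_risky = p * u(W - D) + (1 - p) * u(W)
option_price = brentq(lambda op: u(W - op) - eu_risky, 0.0, D)
print(expected_damages, round(option_price, 2))  # OP exceeds expected damages for a risk-averse household
```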
Kim, Sun-Young; Song, Insang
2017-07-01
The limited spatial coverage of the air pollution data available from regulatory air quality monitoring networks hampers national-scale epidemiological studies of air pollution. The present study aimed to develop a national-scale exposure prediction model for estimating annual average concentrations of PM10 and NO2 at residences in South Korea using regulatory monitoring data for 2010. Using hourly measurements of PM10 and NO2 at 277 regulatory monitoring sites, we calculated the annual average concentrations at each site. We also computed 322 geographic variables in order to represent plausible local and regional pollution sources. Using these data, we developed universal kriging models, including three summary predictors estimated by partial least squares (PLS). The model performance was evaluated with fivefold cross-validation. In sensitivity analyses, we compared our approach with two alternative approaches, which added regional interactions and replaced the PLS predictors with up to ten selected variables. Finally, we predicted the annual average concentrations of PM10 and NO2 at 83,463 centroids of residential census output areas in South Korea to investigate the population exposure to these pollutants and to compare the exposure levels between monitored and unmonitored areas. The means of the annual average concentrations of PM10 and NO2 for 2010, across regulatory monitoring sites in South Korea, were 51.63 μg/m³ (SD = 8.58) and 25.64 ppb (11.05), respectively. The universal kriging exposure prediction models yielded cross-validated R²s of 0.45 and 0.82 for PM10 and NO2, respectively. Compared to our model, the two alternative approaches gave consistent or worse performances. Population exposure levels in unmonitored areas were lower than in monitored areas. This is the first study that focused on developing a national-scale pointwise exposure prediction approach in South Korea, which will allow national exposure assessments and epidemiological research to answer policy-related questions and to draw comparisons among different countries. Copyright © 2017 Elsevier Ltd. All rights reserved.
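The two-stage structure (PLS summary predictors feeding a spatial model) can be sketched as follows; synthetic random data stand in for the monitoring sites and geographic covariates, and simple regression kriging with scikit-learn is used as a stand-in for the universal kriging actually fitted in the study:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(7)

# Hypothetical stand-ins for the monitoring data: 277 sites, 322 geographic covariates
n_sites, n_geo = 277, 322
coords = rng.uniform(0, 500, size=(n_sites, 2))          # site coordinates, km
X_geo = rng.normal(size=(n_sites, n_geo))                 # geographic covariates
y = 25 + 3 * X_geo[:, 0] - 2 * X_geo[:, 1] + 0.01 * coords[:, 0] + rng.normal(0, 2, n_sites)

# Step 1: compress the covariates into a few PLS summary predictors of the pollutant
pls = PLSRegression(n_components=3).fit(X_geo, y)
trend = pls.predict(X_geo).ravel()

# Step 2: krige the regression residuals on spatial coordinates
# (regression kriging, used here as a stand-in for universal kriging)
gp = GaussianProcessRegressor(kernel=RBF(length_scale=100.0) + WhiteKernel(1.0),
                              normalize_y=True).fit(coords, y - trend)

# Predict at unmonitored locations (e.g., census-output-area centroids)
new_coords = rng.uniform(0, 500, size=(5, 2))
new_X_geo = rng.normal(size=(5, n_geo))
pred = pls.predict(new_X_geo).ravel() + gp.predict(new_coords)
print(pred)
```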
Effects of a proposed rural dental school on regional dental workforce and access to care.
Wanchek, Tanya N; Rephann, Terance J
2013-01-01
Southwest Virginia is a rural, low-income region with a relatively small dentist workforce and poor oral health outcomes. The opening of a dental school in the region has been proposed by policy-makers as one approach to improving the size of the dentist workforce and oral health outcomes. A policy simulation was conducted to assess how a hypothetical dental school in rural Southwest Virginia would affect the availability of dentists and utilization levels of dental services. The simulation focuses on two channels through which the dental school would most likely affect the region. First, the number of graduates who are expected to remain in the region was varied, based on the extensiveness of the education pipeline used to attract local students. Second, the number of patients treated in the dental school clinic under different dental school clinical models, including the traditional model, a patient-centered clinic model and a community-based clinic model, was varied in the simulation to obtain a range of additional dentists and utilization rates under differing dental school models. Under a set of plausible assumptions, the low yield scenario (i.e. private school with a traditional clinic) would result in three additional dentists residing in the region and a total of 8,090 additional underserved patients receiving care. Under the high yield scenario (i.e. dental pipeline program with community-based clinics) nine new dentists would reside in the region and as many as 18,054 underserved patients would receive care. Even with the high yield scenario and the strong assumption that these patients would not otherwise access care, the utilization rate increases to 68.9% from its current 60.1%. While the new dental school in Southwest Virginia would increase the dentist workforce and utilization rates, the high cost combined with the continued low rate of dental utilization suggests that there may be more effective alternatives to improving oral health in rural areas. Alternative policies that have shown considerable promise in expanding access to disadvantaged populations include virtual dental homes, enhanced Medicaid reimbursement programs, and school-based dental care systems.
Incorporating Alternative Care Site Characteristics Into Estimates of Substitutable ED Visits.
Trueger, Nathan Seth; Chua, Kao-Ping; Hussain, Aamir; Liferidge, Aisha T; Pitts, Stephen R; Pines, Jesse M
2017-07-01
Several recent efforts to improve health care value have focused on reducing emergency department (ED) visits that potentially could be treated in alternative care sites (ie, primary care offices, retail clinics, and urgent care centers). Estimates of the number of these visits may depend on assumptions regarding the operating hours and functional capabilities of alternative care sites. However, methods to account for the variability in these characteristics have not been developed. Our objective was to develop methods to incorporate the variability in alternative care site characteristics into estimates of ED visit "substitutability." Our approach uses the range of hours and capabilities among alternative care sites to estimate lower and upper bounds of ED visit substitutability. We constructed "basic" and "extended" criteria that captured the plausible degree of variation in each site's hours and capabilities. To illustrate our approach, we analyzed data from 22,697 ED visits by adults in the 2011 National Hospital Ambulatory Medical Care Survey, defining a visit as substitutable if it was treat-and-release and met both the operating hours and functional capabilities criteria. Use of the combined basic hours/basic capabilities criteria and the extended hours/extended capabilities criteria generated lower and upper bounds of the estimates, respectively. Our criteria classified 5.5%-27.1%, 7.6%-20.4%, and 10.6%-46.0% of visits as substitutable in primary care offices, retail clinics, and urgent care centers, respectively. Alternative care sites vary widely in operating hours and functional capabilities. Methods such as ours may help incorporate this variability into estimates of ED visit substitutability.
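A minimal sketch of the classification rule implied above: a visit counts as substitutable only if it was treat-and-release and satisfies both the operating-hours and functional-capabilities criteria, with basic versus extended criteria giving the lower and upper bounds. The dictionary fields and the example site definition are illustrative, not the study's coding scheme.

```python
def substitutable(visit, site, extended=False):
    """True if a treat-and-release ED visit could plausibly have been handled at the site."""
    hours = site["extended_hours"] if extended else site["basic_hours"]
    capabilities = site["extended_capabilities"] if extended else site["basic_capabilities"]
    return (visit["treat_and_release"]
            and visit["arrival_hour"] in hours
            and visit["required_services"] <= capabilities)

visit = {"treat_and_release": True, "arrival_hour": 19, "required_services": {"x-ray"}}
retail_clinic = {"basic_hours": range(9, 18), "extended_hours": range(8, 21),
                 "basic_capabilities": {"point-of-care meds"},
                 "extended_capabilities": {"point-of-care meds", "x-ray"}}
print(substitutable(visit, retail_clinic))                 # lower-bound criteria: False
print(substitutable(visit, retail_clinic, extended=True))  # upper-bound criteria: True
```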
Eocene Paleoclimate: Incredible or Uncredible? Model data syntheses raise questions.
NASA Astrophysics Data System (ADS)
Huber, M.
2012-04-01
Reconstructions of Eocene paleoclimate have pushed on the boundaries of climate dynamics theory for generations. While significant improvements in theory and models have brought them closer to the proxy data, the data themselves have shifted considerably. Tropical temperatures and greenhouse gas concentrations are now reconstructed to be higher than once thought--in agreement with models--but many polar temperature reconstructions are even warmer than the eye-popping numbers from only a decade ago. These interpretations of subtropical-to-tropical polar conditions once again challenge models and theory. But the devil is, as always, in the details, and it is worthwhile to consider the range of potential uncertainties and biases in the paleoclimate record interpretations to evaluate the proposition that models and data may not materially disagree. It is necessary to ask whether current Eocene paleoclimate reconstructions are accurate enough to compellingly argue for a complete failure of climate models and theory. Careful consideration of Eocene model output and proxy data reveals that over most of the Earth the model agrees with the upper range of plausible tropical proxy data and the lower range of plausible high latitude proxy reconstructions. Implications for the sensitivity of global climate to greenhouse gas forcing are drawn for a range of potential Eocene climate scenarios ranging from a literal interpretation of one particular model to a literal interpretation of proxy data. Hope for a middle ground is found.
A Stochastic Model of Plausibility in Live Virtual Constructive Environments
2017-09-14
objective in virtual environment research and design is the maintenance of adequate consistency levels in the face of limited system resources such as...provides some commentary with regard to system design considerations and future research directions. II. SYSTEM MODEL DVEs are often designed as a...exceed the system’s requirements. Research into predictive models of virtual environment consistency is needed to provide designers the tools to
A One-System Theory Which is Not Propositional.
Witnauer, James E; Urcelay, Gonzalo P; Miller, Ralph R
2009-04-01
We argue that the propositional and link-based approaches to human contingency learning represent different levels of analysis because propositional reasoning requires a basis, which is plausibly provided by a link-based architecture. Moreover, in their attempt to compare two general classes of models (link-based and propositional), Mitchell et al. have referred to only two generic models and ignored the large variety of different models within each class.
Modeling Collective Animal Behavior with a Cognitive Perspective: A Methodological Framework
Weitz, Sebastian; Blanco, Stéphane; Fournier, Richard; Gautrais, Jacques; Jost, Christian; Theraulaz, Guy
2012-01-01
The last decades have seen an increasing interest in modeling collective animal behavior. Some studies try to reproduce as accurately as possible the collective dynamics and patterns observed in several animal groups with biologically plausible, individual behavioral rules. The objective is then essentially to demonstrate that the observed collective features may be the result of self-organizing processes involving quite simple individual behaviors. Other studies concentrate on the objective of establishing or enriching links between collective behavior research and cognitive or physiological research, which then requires that each individual rule be carefully validated. Here we discuss the methodological consequences of this additional requirement. Using the example of corpse clustering in ants, we first illustrate that it may be impossible to discriminate among alternative individual rules by considering only observational data collected at the group level. Six individual behavioral models are described: they are clearly distinct in terms of individual behaviors, they all reproduce satisfactorily the collective dynamics and distribution patterns observed in experiments, and we show theoretically that it is strictly impossible to discriminate two of these models even in the limit of an infinite amount of data whatever the accuracy level. A set of methodological steps is then listed and discussed as practical ways to partially overcome this problem. They involve complementary experimental protocols specifically designed to address the behavioral rules successively, conserving group-level data for the overall model validation. In this context, we highlight the importance of maintaining a sharp distinction between model enunciation, with explicit references to validated biological concepts, and formal translation of these concepts in terms of quantitative state variables and fittable functional dependences. Illustrative examples are provided of the benefits expected during the often long and difficult process of refining a behavioral model, designing adapted experimental protocols and inverting model parameters. PMID:22761685
Entrainment to the CIECAM02 and CIELAB colour appearance models in the human cortex.
Thwaites, Andrew; Wingfield, Cai; Wieser, Eric; Soltan, Andrew; Marslen-Wilson, William D; Nimmo-Smith, Ian
2018-04-01
In human visual processing, information from the visual field passes through numerous transformations before perceptual attributes such as colour are derived. The sequence of transforms involved in constructing perceptions of colour can be approximated by colour appearance models such as the CIE (2002) colour appearance model, abbreviated as CIECAM02. In this study, we test the plausibility of CIECAM02 as a model of colour processing by looking for evidence of its cortical entrainment. The CIECAM02 model predicts that colour is split into two opposing chromatic components, red-green and cyan-yellow (termed CIECAM02-a and CIECAM02-b respectively), and an achromatic component (termed CIECAM02-A). Entrainment of cortical activity to the outputs of these components was estimated using measurements of electro- and magnetoencephalographic (EMEG) activity, recorded while healthy subjects watched videos of dots changing colour. We find entrainment to chromatic component CIECAM02-a at approximately 35 ms latency bilaterally in occipital lobe regions, and entrainment to achromatic component CIECAM02-A at approximately 75 ms latency, also bilaterally in occipital regions. For comparison, transforms from a less physiologically plausible model (CIELAB) were also tested, with no significant entrainment found. Copyright © 2018 Elsevier Ltd. All rights reserved.
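One simple way to estimate latencies of the kind reported above is a lagged correlation between a CIECAM02 component time series and the recorded EMEG signal, taking the lag of peak correlation as the entrainment latency. This is a generic sketch, not the authors' analysis pipeline; array names and the 200 ms search window are assumptions.

```python
import numpy as np

def entrainment_latency(component, response, fs, max_lag_ms=200):
    """Lag (ms) at which the neural response correlates most strongly with the
    stimulus component (e.g., CIECAM02-a over time); both arrays sampled at fs Hz."""
    max_lag = int(max_lag_ms * fs / 1000)
    lags = np.arange(0, max_lag + 1)
    r = np.array([np.corrcoef(component[:len(component) - lag], response[lag:])[0, 1]
                  for lag in lags])
    best = lags[np.argmax(np.abs(r))]
    return best * 1000 / fs, r
```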
Shivkumar, Sabyasachi; Muralidharan, Vignesh; Chakravarthy, V. Srinivasa
2017-01-01
The basal ganglia circuit is an important subcortical system of the brain thought to be responsible for reward-based learning. The striatum, the largest nucleus of the basal ganglia, serves as an input port that maps cortical information. Microanatomical studies show that the striatum is a mosaic of specialized input-output structures called striosomes and regions of the surrounding matrix called the matrisomes. We have developed a computational model of the striatum using layered self-organizing maps to capture the center-surround structure seen experimentally and explain its functional significance. We believe that these structural components could build representations of state and action spaces in different environments. The striatum model is then integrated with other components of the basal ganglia, making it capable of solving reinforcement learning tasks. We have proposed a biologically plausible mechanism of action-based learning where the striosome biases the matrisome activity toward a preferred action. Several studies indicate that the striatum is critical in solving context-dependent problems. We build on this hypothesis, and the proposed model exploits the modularity of the striatum to efficiently solve such tasks. PMID:28680395
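For readers unfamiliar with the building block, the sketch below is a bare-bones self-organizing map training loop of the kind such striatal models are layered from; the grid size and the learning-rate and neighborhood schedules are arbitrary choices, and nothing here reproduces the striosome/matrisome architecture of the paper.

```python
import numpy as np

def train_som(data, grid=(10, 10), n_iter=2000, lr0=0.5, sigma0=3.0, seed=0):
    """Minimal self-organizing map: move the best-matching unit and its
    neighbors toward each randomly drawn input sample."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.random((h, w, data.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), axis=-1)
    for t in range(n_iter):
        x = data[rng.integers(len(data))]
        bmu = np.unravel_index(np.argmin(np.linalg.norm(weights - x, axis=-1)), (h, w))
        lr = lr0 * np.exp(-t / n_iter)
        sigma = sigma0 * np.exp(-t / n_iter)
        dist2 = np.sum((coords - np.array(bmu)) ** 2, axis=-1)
        weights += lr * np.exp(-dist2 / (2 * sigma ** 2))[..., None] * (x - weights)
    return weights
```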
Delta: a new web-based 3D genome visualization and analysis platform.
Tang, Bixia; Li, Feifei; Li, Jing; Zhao, Wenming; Zhang, Zhihua
2018-04-15
Delta is an integrative visualization and analysis platform to facilitate visually annotating and exploring the 3D physical architecture of genomes. Delta takes a Hi-C or ChIA-PET contact matrix as input and predicts the topologically associating domains and chromatin loops in the genome. It then generates a physical 3D model which represents the plausible consensus 3D structure of the genome. Delta features a highly interactive visualization tool which enhances the integration of genome topology/physical structure with extensive genome annotation by juxtaposing the 3D model with diverse genomic assay outputs. Finally, by visually comparing the 3D model of the β-globin gene locus and its annotation, we speculated about a plausible transitory interaction pattern in the locus. A literature survey found experimental evidence supporting this speculation. This served as an example of intuitive hypothesis testing with the help of Delta. Delta is freely accessible from http://delta.big.ac.cn, and the source code is available at https://github.com/zhangzhwlab/delta. zhangzhihua@big.ac.cn. Supplementary data are available at Bioinformatics online.
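To make the TAD-prediction step concrete, the sketch below computes a simple insulation score on a Hi-C contact matrix and takes local minima as candidate domain boundaries. This is a generic, widely used heuristic offered for illustration only; the abstract does not state which algorithm Delta actually implements, and the window size here is an assumption.

```python
import numpy as np

def insulation_score(contacts, window=5):
    """Mean contact frequency in a square sliding across the diagonal;
    low values suggest insulation between neighboring domains."""
    n = contacts.shape[0]
    score = np.full(n, np.nan)
    for i in range(window, n - window):
        score[i] = contacts[i - window:i, i + 1:i + window + 1].mean()
    return score

def candidate_boundaries(score):
    """Indices of local minima of the insulation profile."""
    return [i for i in range(1, len(score) - 1)
            if not np.isnan(score[i - 1:i + 2]).any()
            and score[i] < score[i - 1] and score[i] < score[i + 1]]
```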
Computational analyses in cognitive neuroscience: in defense of biological implausibility.
Dror, I E; Gallogly, D P
1999-06-01
Because cognitive neuroscience researchers attempt to understand the human mind by bridging behavior and brain, they expect computational analyses to be biologically plausible. In this paper, biologically implausible computational analyses are shown to have critical and essential roles in the various stages and domains of cognitive neuroscience research. Specifically, biologically implausible computational analyses can contribute to (1) understanding and characterizing the problem that is being studied, (2) examining the availability of information and its representation, and (3) evaluating and understanding the neuronal solution. In the context of the distinct types of contributions made by certain computational analyses, the biological plausibility of those analyses is altogether irrelevant. These biologically implausible models are nevertheless relevant and important for biologically driven research.
Plausibility Judgments in Conceptual Change and Epistemic Cognition
ERIC Educational Resources Information Center
Lombardi, Doug; Nussbaum, E. Michael; Sinatra, Gale M.
2016-01-01
Plausibility judgments rarely have been addressed empirically in conceptual change research. Recent research, however, suggests that these judgments may be pivotal to conceptual change about certain topics where a gap exists between what scientists and laypersons find plausible. Based on a philosophical and empirical foundation, this article…
Identifying Asteroidal Parent Bodies of the Meteorites: The Last Lap
NASA Technical Reports Server (NTRS)
Gaffey, M. J.
2000-01-01
Spectral studies of asteroids and dynamical models have converged to yield, at last, a clear view of asteroid-meteorite linkages. Plausible parent bodies for most meteorite types have either been identified or it has become evident where to search for them.
The penny pusher: a cellular model of lens growth.
Shi, Yanrong; De Maria, Alicia; Lubura, Snježana; Šikić, Hrvoje; Bassnett, Steven
2014-12-16
The mechanisms that regulate the number of cells in the lens and, therefore, its size and shape are unknown. We examined the dynamic relationship between proliferative behavior in the epithelial layer and macroscopic lens growth. The distribution of S-phase cells across the epithelium was visualized by confocal microscopy and cell populations were determined from orthographic projections of the lens surface. The number of S-phase cells in the mouse lens epithelium fell exponentially, to an asymptotic value of approximately 200 cells by 6 months. Mitosis became increasingly restricted to a 300-μm-wide swath of equatorial epithelium, the germinative zone (GZ), within which two peaks in labeling index were detected. Postnatally, the cell population increased to approximately 50,000 cells at 4 weeks of age. Thereafter, the number of cells declined, despite continued growth in lens dimensions. This apparently paradoxical observation was explained by a time-dependent increase in the surface area of cells at all locations. The cell biological measurements were incorporated into a physical model, the Penny Pusher. In this simple model, cells were considered to be of a single type, the proliferative behavior of which depended solely on latitude. Simulations using the Penny Pusher predicted the emergence of cell clones and were in good agreement with data obtained from earlier lineage-tracing studies. The Penny Pusher, a simple stochastic model, offers a useful conceptual framework for the investigation of lens growth mechanisms and provides a plausible alternative to growth models that postulate the existence of lens stem cells. Copyright 2015 The Association for Research in Vision and Ophthalmology, Inc.
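A toy rendering of the kind of latitude-dependent stochastic rule the Penny Pusher embodies: each cell divides with a probability set solely by its latitude, with a peak in the germinative zone, and daughters are displaced toward the equator. All geometry, probabilities, and parameter values below are invented for illustration and are not the published model's calibration.

```python
import numpy as np

rng = np.random.default_rng(1)

def division_prob(lat_deg, gz_center=80.0, gz_width=6.0, p_max=0.05):
    """Per-step S-phase probability, peaking in a narrow near-equatorial germinative
    zone (latitude in degrees from the anterior pole; 90 = equator)."""
    return p_max * np.exp(-((lat_deg - gz_center) / gz_width) ** 2)

def step(lat_deg):
    """Cells divide stochastically; daughters are nudged equator-ward ('pushed pennies')."""
    divides = rng.random(lat_deg.size) < division_prob(lat_deg)
    daughters = np.clip(lat_deg[divides] + rng.normal(1.0, 0.3, divides.sum()), 0.0, 90.0)
    return np.concatenate([lat_deg, daughters])

cells = rng.uniform(0.0, 90.0, 5000)   # initial epithelial cells by latitude
for _ in range(50):
    cells = step(cells)
print(cells.size)                       # population growth emerging from the latitude rule
```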
Rezaeian, Sanaz; Hartzell, Stephen; Sun, Xiaodan; Mendoza, Carlos
2017-01-01
Earthquake ground‐motion recordings are scarce in the central and eastern United States (CEUS) for large‐magnitude events and at close distances. We use two different simulation approaches, a deterministic physics‐based method and a site‐based stochastic method, to simulate ground motions over a wide range of magnitudes. Drawing on previous results for the modeling of recordings from the 2011 Mw 5.8 Mineral, Virginia, earthquake and using the 2001 Mw 7.6 Bhuj, India, earthquake as a tectonic analog for a large magnitude CEUS event, we are able to calibrate the two simulation methods over this magnitude range. Both models show a good fit to the Mineral and Bhuj observations from 0.1 to 10 Hz. Model parameters are then adjusted to obtain simulations for Mw 6.5, 7.0, and 7.6 events in the CEUS. Our simulations are compared with the 2014 U.S. Geological Survey weighted combination of existing ground‐motion prediction equations in the CEUS. The physics‐based simulations show comparable response spectral amplitudes and a fairly similar attenuation with distance. The site‐based stochastic simulations suggest a slightly faster attenuation of the response spectral amplitudes with distance for larger magnitude events and, as a result, slightly lower amplitudes at distances greater than 200 km. Both models are plausible alternatives and, given the few available data points in the CEUS, can be used to represent the epistemic uncertainty in modeling of postulated CEUS large‐magnitude events.
Is there a geometric module for spatial orientation? Insights from a rodent navigation model.
Sheynikhovich, Denis; Chavarriaga, Ricardo; Strösslin, Thomas; Arleo, Angelo; Gerstner, Wulfram
2009-07-01
Modern psychological theories of spatial cognition postulate the existence of a geometric module for reorientation. This concept is derived from experimental data showing that in rectangular arenas with distinct landmarks in the corners, disoriented rats often make diagonal errors, suggesting their preference for the geometric (arena shape) over the nongeometric (landmarks) cues. Moreover, sensitivity of hippocampal cell firing to changes in the environment layout was taken in support of the geometric module hypothesis. Using a computational model of rat navigation, the authors proposed and tested the alternative hypothesis that the influence of spatial geometry on both behavioral and neuronal levels can be explained by the properties of visual features that constitute local views of the environment. Their modeling results suggest that the pattern of diagonal errors observed in reorientation tasks can be understood by the analysis of sensory information processing that underlies the navigation strategy employed to solve the task. In particular, 2 navigation strategies were considered: (a) a place-based locale strategy that relies on a model of grid and place cells and (b) a stimulus-response taxon strategy that involves direct association of local views with action choices. The authors showed that the application of the 2 strategies in the reorientation tasks results in different patterns of diagonal errors, consistent with behavioral data. These results argue against the geometric module hypothesis by providing a simpler and biologically more plausible explanation for the related experimental data. Moreover, the same model also describes behavioral results in different types of water-maze tasks. Copyright (c) 2009 APA, all rights reserved.
Estimates of live-tree carbon stores in the Pacific Northwest are sensitive to model selection
Susanna L. Melson; Mark E. Harmon; Jeremy S. Fried; James B. Domingo
2011-01-01
Estimates of live-tree carbon stores are influenced by numerous uncertainties. One of them is model-selection uncertainty: one has to choose among multiple empirical equations and conversion factors that can be plausibly justified as locally applicable to calculate the carbon store from inventory measurements such as tree height and diameter at breast height (DBH)....
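A small illustration of the model-selection sensitivity the abstract describes: applying two different (entirely hypothetical) allometric equations to the same DBH measurements yields different stand-level carbon stores. The coefficients, DBH values, and carbon fraction are placeholders, not values from the study.

```python
import numpy as np

# Two hypothetical "locally applicable" allometric equations: biomass (kg) = a * DBH^b.
candidate_models = {
    "equation_A": lambda dbh: 0.112 * dbh ** 2.40,
    "equation_B": lambda dbh: 0.089 * dbh ** 2.55,
}

dbh_cm = np.array([12.0, 35.0, 61.0, 88.0])   # example inventory measurements (cm)
carbon_fraction = 0.5                          # assumed biomass-to-carbon conversion

for name, equation in candidate_models.items():
    stand_carbon_kg = carbon_fraction * equation(dbh_cm).sum()
    print(name, round(stand_carbon_kg, 1))     # spread across equations = selection uncertainty
```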
Dynamical simulation priors for human motion tracking.
Vondrak, Marek; Sigal, Leonid; Jenkins, Odest Chadwicke
2013-01-01
We propose a simulation-based dynamical motion prior for tracking human motion from video in presence of physical ground-person interactions. Most tracking approaches to date have focused on efficient inference algorithms and/or learning of prior kinematic motion models; however, few can explicitly account for the physical plausibility of recovered motion. Here, we aim to recover physically plausible motion of a single articulated human subject. Toward this end, we propose a full-body 3D physical simulation-based prior that explicitly incorporates a model of human dynamics into the Bayesian filtering framework. We consider the motion of the subject to be generated by a feedback “control loop” in which Newtonian physics approximates the rigid-body motion dynamics of the human and the environment through the application and integration of interaction forces, motor forces, and gravity. Interaction forces prevent physically impossible hypotheses, enable more appropriate reactions to the environment (e.g., ground contacts), and are produced from detected human-environment collisions. Motor forces actuate the body, ensure that proposed pose transitions are physically feasible, and are generated using a motion controller. For efficient inference in the resulting high-dimensional state space, we utilize an exemplar-based control strategy that reduces the effective search space of motor forces. As a result, we are able to recover physically plausible motion of human subjects from monocular and multiview video. We show, both quantitatively and qualitatively, that our approach performs favorably with respect to Bayesian filtering methods with standard motion priors.
Kentzoglanakis, Kyriakos; Poole, Matthew
2012-01-01
In this paper, we investigate the problem of reverse engineering the topology of gene regulatory networks from temporal gene expression data. We adopt a computational intelligence approach comprising swarm intelligence techniques, namely particle swarm optimization (PSO) and ant colony optimization (ACO). In addition, the recurrent neural network (RNN) formalism is employed for modeling the dynamical behavior of gene regulatory systems. More specifically, ACO is used for searching the discrete space of network architectures and PSO for searching the corresponding continuous space of RNN model parameters. We propose a novel solution construction process in the context of ACO for generating biologically plausible candidate architectures. The objective is to concentrate the search effort into areas of the structure space that contain architectures which are feasible in terms of their topological resemblance to real-world networks. The proposed framework is initially applied to the reconstruction of a small artificial network that has previously been studied in the context of gene network reverse engineering. Subsequently, we consider an artificial data set with added noise for reconstructing a subnetwork of the genetic interaction network of S. cerevisiae (yeast). Finally, the framework is applied to a real-world data set for reverse engineering the SOS response system of the bacterium Escherichia coli. Results demonstrate the relative advantage of utilizing problem-specific knowledge regarding biologically plausible structural properties of gene networks over conducting a problem-agnostic search in the vast space of network architectures.
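The recurrent neural network formalism mentioned above is commonly written as dx_i/dt = (sigma(sum_j w_ij x_j + b_i) - x_i) / tau_i; the sketch below simulates those dynamics for a toy three-gene network. The topology and parameter values are invented, and the ACO/PSO search itself is only indicated in comments.

```python
import numpy as np

def simulate_rnn_grn(W, b, tau, x0, dt=0.1, steps=200):
    """RNN gene-network dynamics: dx/dt = (sigmoid(W x + b) - x) / tau."""
    x = np.array(x0, dtype=float)
    trajectory = [x.copy()]
    for _ in range(steps):
        x = x + dt * (1.0 / (1.0 + np.exp(-(W @ x + b))) - x) / tau
        trajectory.append(x.copy())
    return np.array(trajectory)

# Hypothetical sparse 3-gene topology: an ACO-style search would propose which
# entries of W are nonzero, and PSO would then tune their continuous values
# against observed expression time series.
W = np.array([[0.0,  2.0, 0.0],
              [-1.5, 0.0, 0.0],
              [0.0,  1.0, 0.0]])
traj = simulate_rnn_grn(W, b=np.zeros(3), tau=np.ones(3), x0=[0.1, 0.2, 0.3])
```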
Plausible carrier transport model in organic-inorganic hybrid perovskite resistive memory devices
NASA Astrophysics Data System (ADS)
Park, Nayoung; Kwon, Yongwoo; Choi, Jaeho; Jang, Ho Won; Cha, Pil-Ryung
2018-04-01
We demonstrate thermally assisted hopping (TAH) as an appropriate carrier transport model for CH3NH3PbI3 resistive memories. Organic semiconductors, including organic-inorganic hybrid perovskites, have been previously speculated to follow the space-charge-limited conduction (SCLC) model. However, the SCLC model cannot reproduce the temperature dependence of experimental current-voltage curves. Instead, the TAH model with temperature-dependent trap densities and a constant trap level is shown to reproduce the experimental results well.
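For orientation, the two transport pictures being compared are often written in the textbook forms below (trap-free Mott-Gurney SCLC versus a generic field- and temperature-activated hopping current). These are standard expressions, not the specific parameterization fitted in the study.

```latex
J_{\mathrm{SCLC}} \;=\; \frac{9}{8}\,\varepsilon\mu\,\frac{V^{2}}{L^{3}},
\qquad
J_{\mathrm{TAH}} \;\propto\; \exp\!\left(-\frac{E_{a}}{k_{B}T}\right)\,
\sinh\!\left(\frac{qaF}{2k_{B}T}\right)
```

Here ε is the permittivity, μ the carrier mobility, L the film thickness, E_a the hopping activation energy, a the hopping distance, and F the electric field. In the trap-free SCLC form, temperature enters only through μ, whereas the hopping current is explicitly thermally activated, which is the contrast the abstract draws on.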
Computational modeling of peripheral pain: a commentary.
Argüello, Erick J; Silva, Ricardo J; Huerta, Mónica K; Avila, René S
2015-06-11
This commentary is intended to find possible explanations for the low impact of computational modeling on pain research. We discuss the main strategies that have been used in building computational models for the study of pain. The analysis suggests that traditional models lack biological plausibility at some levels, they do not provide clinically relevant results, and they cannot capture the stochastic character of neural dynamics. On this basis, we provide some suggestions that may be useful in building computational models of pain with a wider range of applications.
Heterogeneous dermatitis complaints after change in drinking water treatment: a case report
Weintraub, June M; Berger, Magdalena; Bhatia, Rajiv
2006-01-01
Background: The disinfectant monochloramine minimizes the formation of potentially hazardous and regulated byproducts, and many drinking water utilities are shifting to its use. Case presentation: After a drinking water utility serving 2.4 million people switched to monochloramine for residual disinfection, a small number of residents complained of dermatitis reactions. We interviewed 17 people about their symptoms. Skin appearance, symptoms, and exposures were heterogeneous. Five respondents had a history of hives or rash that preceded the switch to monochloramine. Conclusion: The complaints described were heterogeneous, and many of the respondents had underlying or preexisting conditions that would offer plausible alternative explanations for their symptoms. We did not recommend further study of these complaints. PMID:16764728
Using scenario analysis to determine managed care strategy.
Krentz, S E; Gish, R S
2000-09-01
In today's volatile healthcare environment, traditional planning tools are inadequate to guide financial managers of provider organizations in developing managed care strategies. These tools often disregard the uncertainty surrounding market forces such as employee benefit structure, the future of Medicare managed care, and the impact of consumer behavior. Scenario analysis overcomes this limitation by acknowledging the uncertain healthcare environment and articulating a set of plausible alternative futures, thus supplying financial executives with the perspective to craft strategies that can improve the market position of their organizations. By being alert for trigger points that might signal the rise of a specific scenario, financial managers can increase their preparedness for changes in market forces.
Source Effects and Plausibility Judgments When Reading about Climate Change
ERIC Educational Resources Information Center
Lombardi, Doug; Seyranian, Viviane; Sinatra, Gale M.
2014-01-01
Gaps between what scientists and laypeople find plausible may act as a barrier to learning complex and/or controversial socioscientific concepts. For example, individuals may consider scientific explanations that human activities are causing current climate change as implausible. This plausibility judgment may be due-in part-to individuals'…
Plausibility and Perspective Influence the Processing of Counterfactual Narratives
ERIC Educational Resources Information Center
Ferguson, Heather J.; Jayes, Lewis T.
2018-01-01
Previous research has established that readers' eye movements are sensitive to the difficulty with which a word is processed. One important factor that influences processing is the fit of a word within the wider context, including its plausibility. Here we explore the influence of plausibility in counterfactual language processing. Counterfactuals…
NASA Astrophysics Data System (ADS)
Kurosawa, Kosuke; Okamoto, Takaya; Genda, Hidenori
2018-02-01
Hypervelocity ejection of material by impact spallation is considered a plausible mechanism for material exchange between two planetary bodies. We have modeled the spallation process during vertical impacts over a range of impact velocities from 6 to 21 km/s using both grid- and particle-based hydrocode models. The Tillotson equations of state, which are able to treat the nonlinear dependence of density on pressure and thermal pressure in strongly shocked matter, were used to study the hydrodynamic-thermodynamic response after impacts. The effects of material strength and gravitational acceleration were not considered. A two-dimensional time-dependent pressure field within a 1.5-fold projectile radius from the impact point was investigated in cylindrical coordinates to address the generation of spalled material. A resolution test was also performed to reject ejected materials with peak pressures that were too low due to artificial viscosity. The relationship between ejection velocity v_eject and peak pressure P_peak was also derived. Our approach shows that "late-stage acceleration" in an ejecta curtain occurs due to the compressible nature of the ejecta, resulting in an ejection velocity that can be higher than the ideal maximum of the resultant particle velocity after passage of a shock wave. We also calculate the ejecta mass that can escape from a planet like Mars (i.e., v_eject > 5 km/s) that matches the petrographic constraints from Martian meteorites, and which occurs when P_peak = 30-50 GPa. Although the mass of such ejecta is limited to 0.1-1 wt% of the projectile mass in vertical impacts, this is sufficient for spallation to have been a plausible mechanism for the ejection of Martian meteorites. Finally, we propose that impact spallation is a plausible mechanism for the generation of tektites.
Empirical agreement in model validation.
Jebeile, Julie; Barberousse, Anouk
2016-04-01
Empirical agreement is often used as an important criterion when assessing the validity of scientific models. However, it is by no means a sufficient criterion as a model can be so adjusted as to fit available data even though it is based on hypotheses whose plausibility is known to be questionable. Our aim in this paper is to investigate into the uses of empirical agreement within the process of model validation. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Kelly, E. D.; Atakturk, K. R.; Catlos, E. J.; Lizzadro-McPherson, D. J.; Cemen, I.; Lovera, O. M.
2015-12-01
Pressure-temperature (P-T) paths derived from garnet chemical zoning and supported by thermal modeling record alternating burial and exhumation during Main Menderes Metamorphism in western Turkey. We studied six rocks along the Selimiye (Kayabükü) shear zone, three from the footwall (Çine nappe) and three from the hanging wall (Selimiye nappe). The shear zone bounds the southern Menderes Massif metamorphic core complex and has been suggested to record compression followed by extension. The rocks are lower-amphibolite facies garnet-bearing metapelites with nearly identical mineral suites. Retrograde overprinting hinders classical thermobarometry; to overcome this, preserved chemical zoning in garnet combined with a G-minimization approach was used to construct detailed P-T paths (e.g., 50 points in some paths). During continuous temperature increase, the Çine nappe paths show increasing, decreasing, and then increasing pressure (an N-shaped path) ending at 7-8 kbar and ~565-590 °C. The Selimiye nappe paths show a single increase in P-T ending at ~7.3 kbar and ~580 °C. Similar bulk-rock compositions in all samples and the separation by the shear zone suggest that garnets grew during distinct events in each nappe. The timing of garnet growth, and thus the P-T paths, is currently undetermined, as monazite inclusions in garnet appear secondary and complicated by excess common Pb. The Çine nappe N-shaped path describes alternations in burial and exhumation, possibly due to thrust motion along the shear zone. To demonstrate the physical plausibility of the P-T paths, a 2-D finite difference solution to the diffusion-advection equation was applied. The results of the thermal modeling suggest that thrusting, denudation, and renewed thrusting would produce similar changes in P-T to the N-shaped path. Thus, the Çine nappe N-shaped P-T path appears to record a gap in thrust motion along the Selimiye (Kayabükü) shear zone prior to ultimate unroofing of the massif.
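The thermal-modeling step mentioned above rests on the advection-diffusion equation; a generic explicit finite-difference update of a 2-D temperature field is sketched below. This is a textbook scheme offered for illustration (periodic boundaries, upwind advection), not the authors' crustal model or its boundary conditions, and all parameter names are placeholders.

```python
import numpy as np

def step_advect_diffuse(T, vx, vz, kappa, dx, dt):
    """One explicit step of dT/dt + v . grad(T) = kappa * laplacian(T) on a square grid.
    Upwind differences for advection (assumes vx, vz >= 0), centered for diffusion,
    periodic boundaries via np.roll; dt must satisfy the usual stability limits."""
    lap = (np.roll(T, 1, 0) + np.roll(T, -1, 0) +
           np.roll(T, 1, 1) + np.roll(T, -1, 1) - 4.0 * T) / dx ** 2
    dTdx = (T - np.roll(T, 1, 1)) / dx
    dTdz = (T - np.roll(T, 1, 0)) / dx
    return T + dt * (kappa * lap - vx * dTdx - vz * dTdz)
```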
Subsurface Scenarios: What are We Trying to Model?
In collaboration with the Lawrence Berkeley National Lab (George Moridis and team), and after a thorough review of the scientific literature and data and interviews with a selection of experts on the topic, a finite number of plausible scenarios were selected for more quantitative...
Embodied Design: Constructing Means for Constructing Meaning
ERIC Educational Resources Information Center
Abrahamson, Dor
2009-01-01
Design-based research studies are conducted as iterative implementation-analysis-modification cycles, in which emerging theoretical models and pedagogically plausible activities are reciprocally tuned toward each other as a means of investigating conjectures pertaining to mechanisms underlying content teaching and learning. Yet this approach, even…
NASA Astrophysics Data System (ADS)
Lee, Benjamin Seiyon; Haran, Murali; Keller, Klaus
2017-10-01
Storm surges are key drivers of coastal flooding, which generate considerable risks. Strategies to manage these risks can hinge on the ability to (i) project the return periods of extreme storm surges and (ii) detect potential changes in their statistical properties. There are several lines of evidence linking rising global average temperatures and increasingly frequent extreme storm surges. This conclusion is, however, subject to considerable structural uncertainty. This leads to two main questions: What are projections under various plausible statistical models? How long would it take to distinguish among these plausible statistical models? We address these questions by analyzing observed and simulated storm surge data. We find that (1) there is a positive correlation between global mean temperature rise and increasing frequencies of extreme storm surges; (2) there is considerable uncertainty underlying the strength of this relationship; and (3) if the frequency of storm surges is increasing, this increase can be detected within a multidecadal timescale (≈20 years from now).
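One of the "plausible statistical models" alluded to can be sketched as a nonstationary extreme-value fit in which the location parameter depends linearly on global mean temperature; the snippet below writes the negative log-likelihood and maximizes it numerically. This is an illustrative specification, not necessarily the family of models the authors compared, and the toy data and starting values are arbitrary.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import genextreme

def neg_log_lik(params, surge, temp):
    """GEV with location mu = mu0 + mu1 * (global mean temperature anomaly)."""
    mu0, mu1, log_sigma, shape = params
    mu = mu0 + mu1 * temp
    return -np.sum(genextreme.logpdf(surge, shape, loc=mu, scale=np.exp(log_sigma)))

# surge: annual maximum surge heights; temp: matching temperature anomalies (toy data)
temp = np.linspace(-0.2, 0.9, 80)
surge = genextreme.rvs(-0.1, loc=1.0 + 0.5 * temp, scale=0.3, random_state=0)

fit = minimize(neg_log_lik, x0=[1.0, 0.0, np.log(0.3), -0.1],
               args=(surge, temp), method="Nelder-Mead")
print(fit.x)   # a positive mu1 estimate indicates surge extremes shifting with warming
```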
Semantic and Plausibility Preview Benefit Effects in English: Evidence from Eye Movements
Schotter, Elizabeth R.; Jia, Annie
2016-01-01
Theories of preview benefit in reading hinge on integration across saccades and the idea that preview benefit is greater the more similar the preview and target are. Schotter (2013) reported preview benefit from a synonymous preview, but it is unclear whether this effect occurs because of similarity between the preview and target (integration), or because of contextual fit of the preview—synonyms satisfy both accounts. Studies in Chinese have found evidence for preview benefit for words that are unrelated to the target, but are contextually plausible (Yang, Li, Wang, Slattery, & Rayner, 2014; Yang, Wang, Tong, & Rayner, 2012), which is incompatible with an integration account but supports a contextual fit account. Here, we used plausible and implausible unrelated previews in addition to plausible synonym, antonym, and identical previews to further investigate these accounts for readers of English. Early reading measures were shorter for all plausible preview conditions compared to the implausible preview condition. In later reading measures, a benefit for the plausible unrelated preview condition was not observed. In a second experiment, we asked questions that probed whether the reader encoded the preview or target. Readers were more likely to report the preview when they had skipped the word and not regressed to it, and when the preview was plausible. Thus, under certain circumstances, the preview word is processed to a high level of representation (i.e., semantic plausibility) regardless of its relationship to the target, but its influence on reading is relatively short-lived, being replaced by the target word, when fixated. PMID:27123754
NASA Astrophysics Data System (ADS)
Farrell, Kathryn; Oden, J. Tinsley; Faghihi, Danial
2015-08-01
A general adaptive modeling algorithm for selection and validation of coarse-grained models of atomistic systems is presented. A Bayesian framework is developed to address uncertainties in parameters, data, and model selection. Algorithms for computing output sensitivities to parameter variances, model evidence and posterior model plausibilities for given data, and for computing what are referred to as Occam Categories in reference to a rough measure of model simplicity, make up components of the overall approach. Computational results are provided for representative applications.
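A compact expression of the plausibility computation described above: given (log) model evidences p(D | M_i) and prior model probabilities, the posterior plausibilities follow from Bayes' rule. The sketch assumes the evidences have already been computed by whatever quadrature or sampling scheme is appropriate; the example values are hypothetical.

```python
import numpy as np

def posterior_plausibilities(log_evidences, prior=None):
    """rho_i = p(M_i | D), proportional to p(D | M_i) * p(M_i), normalized over models."""
    log_ev = np.asarray(log_evidences, dtype=float)
    prior = np.ones_like(log_ev) / log_ev.size if prior is None else np.asarray(prior)
    log_post = log_ev + np.log(prior)
    log_post -= log_post.max()          # stabilize before exponentiating
    w = np.exp(log_post)
    return w / w.sum()

print(posterior_plausibilities([-105.2, -103.8, -110.4]))  # hypothetical log evidences
```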
ERIC Educational Resources Information Center
Gauld, Colin
1998-01-01
Reports that many students do not believe Newton's law of action and reaction and suggests ways in which its plausibility might be enhanced. Reviews how this law has been made more plausible over time by Newton and those who succeeded him. Contains 25 references. (DDR)
Plausibility Reappraisals and Shifts in Middle School Students' Climate Change Conceptions
ERIC Educational Resources Information Center
Lombardi, Doug; Sinatra, Gale M.; Nussbaum, E. Michael
2013-01-01
Plausibility is a central but under-examined topic in conceptual change research. Climate change is an important socio-scientific topic; however, many view human-induced climate change as implausible. When learning about climate change, students need to make plausibility judgments but they may not be sufficiently critical or reflective. The…
Using an agent-based model to simulate children's active travel to school.
Yang, Yong; Diez-Roux, Ana V
2013-05-26
Despite the multiple advantages of active travel to school, only a small percentage of US children and adolescents walk or bicycle to school. Intervention studies are in a relatively early stage and evidence of their effectiveness over long periods is limited. The purpose of this study was to illustrate the utility of agent-based models in exploring how various policies may influence children's active travel to school. An agent-based model was developed to simulate children's school travel behavior within a hypothetical city. The model was used to explore the plausible implications of policies targeting two established barriers to active school travel: long distance to school and traffic safety. The percent of children who walk to school was compared for various scenarios. To maximize the percent of children who walk to school the school locations should be evenly distributed over space and children should be assigned to the closest school. In the case of interventions to improve traffic safety, targeting a smaller area around the school with greater intensity may be more effective than targeting a larger area with less intensity. Despite the challenges they present, agent based models are a useful complement to other analytical strategies in studying the plausible impact of various policies on active travel to school.
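A toy version of the decision rule such an agent-based model might encode: each child agent walks only if the assigned school is close enough and the route is judged safe enough, and a policy is represented as a change in the safety field. Thresholds, distributions, and the policy effect size below are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def share_walking(dist_km, safety, max_dist_km=1.6, min_safety=0.5):
    """Fraction of agents who walk under a two-barrier rule (distance and traffic safety)."""
    return ((dist_km <= max_dist_km) & (safety >= min_safety)).mean()

n = 10_000
dist = rng.exponential(2.0, n)          # hypothetical distances to assigned school (km)
safety = rng.uniform(0.0, 1.0, n)       # hypothetical perceived route safety

baseline = share_walking(dist, safety)
# Policy scenario: intensive safety improvements within 0.8 km of the school.
safety_after = np.where(dist <= 0.8, np.minimum(safety + 0.3, 1.0), safety)
print(baseline, share_walking(dist, safety_after))
```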
Huxham, Mark; Emerton, Lucy; Kairo, James; Munyi, Fridah; Abdirizak, Hassan; Muriuki, Tabitha; Nunan, Fiona; Briers, Robert A
2015-07-01
Mangrove forests are under global pressure. Habitat destruction and degradation persist despite longstanding recognition of the important ecological functions of mangroves. Hence new approaches are needed to help stakeholders and policy-makers achieve sound management that is informed by the best science. Here we explore how the new policy concept of Climate Compatible Development (CCD) can be applied to achieve better outcomes. We use economic valuation approaches to combine socio-economic data, projections of forest cover based on quantitative risk mapping and storyline scenario building exercises to articulate the economic consequences of plausible alternative future scenarios for the mangrove forests of the South Kenya coast, as a case study of relevance to many other areas. Using data from 645 household surveys, 10 focus groups and 74 interviews conducted across four mangrove sites, and combining these with information on fish catches taken at three landing sites, a mangrove carbon trading project and published data allowed us to make a thorough (although still partial) economic valuation of the forests. This gave a current value of the South Coast mangroves of USD 6.5 million, or USD 1166 per hectare, with 59% of this value on average derived from regulating services. Quantitative risk mapping, projecting recent trends over the next twenty years, suggests a 43% loss of forest cover over that time with 100% loss at the most vulnerable sites. Much of the forest lost between 1992 and 2012 has not been replaced by high value alternative land uses, hence restoration of these areas is feasible and may not involve large opportunity costs. We invited thirty-eight stakeholders to develop plausible storyline scenarios reflecting Business as Usual (BAU) and CCD - which emphasises sustainable forest conservation and management - in twenty years' time, drawing on local and regional expert knowledge of relevant policy, social trends and cultures. Combining these scenarios with the quantitative projections and economic baseline allowed the modelling of likely value added and costs avoided under the CCD scenario. This suggests a net present value of more than US$20 million for adoption of CCD rather than BAU. This work adds to the economic evidence for mangrove conservation and helps to underline the importance of new real and emerging markets, such as for REDD+ projects, in making this case for carbon-rich coastal habitats. It demonstrates a policy tool - CCD - that can be used to engage stakeholders and help to co-ordinate policy across different sectors towards mangrove conservation. Copyright © 2015 Elsevier Ltd. All rights reserved.
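The value-added calculation above is, at heart, a comparison of discounted benefit streams under the two storylines; the sketch below shows the arithmetic. Apart from the roughly USD 1166 per hectare baseline and the 43% twenty-year loss quoted in the abstract, every number (area, discount rate, the linear shape of the decline, and treating the per-hectare figure as an annual flow) is a placeholder assumption rather than a study estimate.

```python
def npv(annual_flows, rate=0.05):
    """Net present value of a list of annual benefit flows."""
    return sum(flow / (1 + rate) ** year for year, flow in enumerate(annual_flows, start=1))

area_ha = 5600                 # placeholder area consistent with ~USD 6.5M at USD 1166/ha
value_per_ha = 1166            # per-hectare value from the abstract, treated here as annual

bau = [area_ha * value_per_ha * (1 - 0.43 * year / 20) for year in range(1, 21)]  # linear loss
ccd = [area_ha * value_per_ha for _ in range(20)]                                  # cover maintained
print(round(npv(ccd) - npv(bau)))    # value added by CCD relative to business as usual
```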
ARCHITECT: The architecture-based technology evaluation and capability tradeoff method
NASA Astrophysics Data System (ADS)
Griendling, Kelly A.
The use of architectures for the design, development, and documentation of system-of-systems engineering has become a common practice in recent years. This practice became mandatory in the defense industry in 2004 when the Department of Defense Architecture Framework (DoDAF) Promulgation Memo mandated that all Department of Defense (DoD) architectures must be DoDAF compliant. Despite this mandate, there has been significant confusion and a lack of consistency in the creation and the use of the architecture products. Products are typically created as static documents used for communication and documentation purposes that are difficult to change and do not support engineering design activities and acquisition decision making. At the same time, acquisition guidance has been recently reformed to move from the bottom-up approach of the Requirements Generation System (RGS) to the top-down approach mandated by the Joint Capabilities Integration and Development System (JCIDS), which requires the use of DoDAF to support acquisition. Defense agencies have had difficulty adjusting to this new policy, and are struggling to determine how to meet new acquisition requirements. This research has developed the Architecture-based Technology Evaluation and Capability Tradeoff (ARCHITECT) Methodology to respond to these challenges and address concerns raised about the defense acquisition process, particularly the time required to implement parts of the process, the need to evaluate solutions across capability and mission areas, and the need to use a rigorous, traceable, repeatable method that utilizes modeling and simulation to better substantiate early-phase acquisition decisions. The objective is to create a capability-based systems engineering methodology for the early phases of design and acquisition (specifically Pre-Milestone A activities) which improves agility in defense acquisition by (1) streamlining the development of key elements of JCIDS and DoDAF, (2) moving the creation of DoDAF products forward in the defense acquisition process, and (3) using DoDAF products for more than documentation by integrating them into the problem definition and analysis of alternatives phases and applying executable architecting. This research proposes and demonstrates the plausibility of a prescriptive methodology for developing executable DoDAF products which will explicitly support decision-making in the early phases of JCIDS. A set of criteria by which CBAs should be judged is proposed, and the methodology is developed with these criteria in mind. The methodology integrates existing tools and techniques for systems engineering and system of systems engineering with several new modeling and simulation tools and techniques developed as part of this research to fill gaps noted in prior CBAs. A suppression of enemy air defenses (SEAD) mission is used to demonstrate the application of ARCHITECT and to show the plausibility of the approach. For the SEAD study, metrics are derived and a gap analysis is performed. The study then identifies and quantitatively compares system and operational architecture alternatives for performing SEAD. A series of down-selections is performed to identify promising architectures, and these promising solutions are subject to further analysis where the impacts of force structure and network structure are examined.
While the numerical results of the SEAD study are notional and could not be applied to an actual SEAD CBA, the example served to highlight many of the salient features of the methodology. The SEAD study presented here enabled pre-Milestone A tradeoffs to be performed quantitatively across a large number of architectural alternatives in a traceable and repeatable manner. The alternatives considered included variations on operations, systems, organizational responsibilities (through the assignment of systems to tasks), network (or collaboration) structure, interoperability level, and force structure. All of the information used in the study is preserved in the environment, which is dynamic and allows for on-the-fly analysis. The assumptions used were consistent, which was assured through the use of a single file documenting all inputs, which was shared across all models. Furthermore, a model was made of the ARCHITECT methodology itself, and was used to demonstrate that even if the steps took twice as long to perform as they did in the case of the SEAD example, the methodology still provides the ability to conduct CBA analyses in less time than prior CBAs to date. Overall, it is shown that the ARCHITECT methodology results in an improvement over current CBAs in the criteria developed here.
NASA Astrophysics Data System (ADS)
Plumlee, G. S.; Morman, S. A.; Alpers, C. N.; Hoefen, T. M.; Meeker, G. P.
2010-12-01
Disasters commonly pose immediate threats to human safety, but can also produce hazardous materials (HM) that pose short- and long-term environmental-health threats. The U.S. Geological Survey (USGS) has helped assess potential environmental health characteristics of HM produced by various natural and anthropogenic disasters, such as the 2001 World Trade Center collapse, 2005 hurricanes Katrina and Rita, 2007-2009 southern California wildfires, various volcanic eruptions, and others. Building upon experience gained from these responses, we are now developing methods to anticipate plausible environmental and health implications of the 2008 Great Southern California ShakeOut scenario (which modeled the impacts of a 7.8 magnitude earthquake on the southern San Andreas fault, http://urbanearth.gps.caltech.edu/scenario08/), and the recent ARkStorm scenario (modeling the impacts of a major, weeks-long winter storm hitting nearly all of California, http://urbanearth.gps.caltech.edu/winter-storm/). Environmental-health impacts of various past earthquakes and extreme storms are first used to identify plausible impacts that could be associated with the disaster scenarios. Substantial insights can then be gleaned using a Geographic Information Systems (GIS) approach to link ShakeOut and ARkStorm effects maps with data extracted from diverse database sources containing geologic, hazards, and environmental information. This type of analysis helps constrain where potential geogenic (natural) and anthropogenic sources of HM (and their likely types of contaminants or pathogens) fall within areas of predicted ShakeOut-related shaking, firestorms, and landslides, and predicted ARkStorm-related precipitation, flooding, and winds. Because of uncertainties in the event models and many uncertainties in the databases used (e.g., incorrect location information, lack of detailed information on specific facilities, etc.) this approach should only be considered as the first of multiple steps toward a more quantitative, predictive approach to understanding the potential sources, types, environmental behavior, and health implications of HM predicted to result from these disaster scenarios. Although only a first step, this qualitative approach will help enhance planning for, mitigation of, and resilience to environmental-health consequences of future disasters. This qualitative approach also requires careful communication to stakeholders that does not sensationalize or overstate potential problems, but rather conveys plausible impacts and next steps to improve understanding of potential risks and their mitigation.
ERIC Educational Resources Information Center
Maxwell, Jane Carlisle; Pullum, Thomas W.
2001-01-01
Applied the capture-recapture model, through a Poisson regression to a time series of data for admissions to treatment from 1987 to 1996 to estimate the number of heroin addicts in Texas who are "at-risk" for treatment. The entire data set produced estimates that were lower and more plausible than those produced by drawing samples,…
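For readers unfamiliar with the general idea, the sketch below shows a simple zero-truncated Poisson (Zelterman-style) capture-recapture estimate from frequency-of-capture counts. It is a generic illustration, not the time-series Poisson regression specification used in the study, and the frequencies are invented.

```python
import numpy as np

def truncated_poisson_estimate(freq):
    """Estimate the total 'at-risk' population from counts of people seen exactly k times:
    freq[0] = seen once, freq[1] = seen twice, ...; uses lambda_hat = 2 * f2 / f1."""
    f1, f2 = freq[0], freq[1]
    lam = 2.0 * f2 / f1
    n_observed = sum(freq)
    return n_observed / (1.0 - np.exp(-lam))   # inflate for the never-observed

print(round(truncated_poisson_estimate([3200, 910, 260, 80])))  # illustrative frequencies
```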
A Multimodel Approach for Calculating Benchmark Dose
Ramon I. Garcia and R. Woodrow Setzer
In the assessment of dose response, a number of plausible dose-response models may give fits that are consistent with the data. If no dose response formulation had been speci...
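A minimal sketch of the multimodel idea: weight each plausible dose-response model by its Akaike weight and average the benchmark dose across models. The AIC values and per-model BMD estimates below are hypothetical; the study's own models and weighting details may differ.

```python
import numpy as np

def akaike_weights(aic):
    """Relative likelihoods of the candidate models given the data."""
    delta = np.asarray(aic) - np.min(aic)
    w = np.exp(-0.5 * delta)
    return w / w.sum()

aic = [102.4, 103.1, 105.9]   # hypothetical AICs of three fitted dose-response models
bmd = [0.82, 1.10, 1.45]      # hypothetical benchmark doses from those models (mg/kg-day)
weights = akaike_weights(aic)
print(float(np.dot(weights, bmd)))   # model-averaged benchmark dose
```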
Lv, Yun-Yun; He, Kai; Klaus, Sebastian; Brown, Rafe M; Li, Jia-Tang
2018-04-01
Currently, the genus Kurixalus comprises 14 species distributed in Southern, Southeast and East Asia. Because of their relatively low dispersal capability and intolerance of seawater, this group is ideal for the study of terrestrial range evolution, especially that portion of its range that extends into the island archipelagos of Southern Asia. We assembled a large dataset of mitochondrial and nuclear genes, and estimated phylogeny by maximum likelihood and Bayesian methods, and we explored the history of each species via divergence-time estimation based on fossil calibrations. A variety of ancestral-area reconstruction strategies were employed to estimate past changes of the species' geographical range, and to evaluate the impact of different abiotic barriers on range evolution. We found that frilled swamp treefrogs probably originated in Taiwan or South Vietnam in the Oligocene. Alternatively, the lineage leading to Kurixalus appendiculatus strongly supports a hypothesis of terrestrial connection between the Indian and Asian continents in the Oligocene. The outcome of both our divergence-time estimates and ancestral-area reconstruction suggests that the divergence between species from Indochina and Taiwan can probably be attributed to the opening of the South China Sea, approximately 33 million years ago. We could not find evidence for dispersal between mainland China and Taiwan Island. Formation of both the Mekong and Red River valleys did not have any impact on Kurixalus species diversification. However, the coincidence in timing between climate change and the availability of plausible dispersal routes from the Oligocene to the middle Miocene implies that Kurixalus diversification in Asia resulted from contemporaneous, climate-induced environmental upheaval (Late Oligocene Warming at 29 Ma; Mi-1 glaciation since 24.4-21.5 Ma; Mid-Miocene Climatic Optimum at 14 Ma), which alternately opened and closed dispersal routes. Copyright © 2017 Elsevier Inc. All rights reserved.
Investigation of the interaction between the atypical agonist c[YpwFG] and MOR.
Gentilucci, Luca; Squassabia, Federico; De Marco, Rossella; Artali, Roberto; Cardillo, Giuliana; Tolomelli, Alessandra; Spampinato, Santi; Bedini, Andrea
2008-05-01
Endogenous and exogenous opiates are currently considered the drugs of choice for treating different kinds of pain. However, their prolonged use produces several adverse symptoms, and in addition, many forms of pain are resistant to any kind of therapy. Therefore, the discovery of compounds active towards mu-opioid receptors (MORs) by alternative pharmacological mechanisms could be of value for developing novel classes of analgesics. There is evidence that some unusual molecules can bind opioid receptors, albeit lacking some of the typical opioid pharmacophoric features. In particular, the recent discovery of a few compounds that showed agonist behavior even in the absence of the primary pharmacophore, namely a protonable amine, led to a rediscussion of the importance of ionic interactions in stabilizing the ligand-receptor complex and in activating signal transduction. Very recently, we synthesized a library of cyclic analogs of the endogenous, MOR-selective agonist endomorphin-1 (YPWF-NH(2)), containing a Gly5 bridge between Tyr1 and Phe4. The cyclopeptide c[YpwFG] showed good affinity and agonist behavior. This atypical MOR agonist does not have the protonable Tyr amine. In order to gain more information about plausible mechanisms of interaction between c[YpwFG] and the opioid receptor, we synthesized a selected set of derivatives containing different bridges between Tyr1 and Phe4, and tested their affinities towards mu-opioid receptors. We performed conformational analysis of the cyclopeptides by NMR spectroscopy and molecular dynamics, and investigated plausible, unprecedented modes of interaction with the MOR by molecular docking. The successive quantum mechanics/molecular mechanics investigation of the complexes obtained by the molecular docking procedure furnished a more detailed description of the binding mode and the electronic properties of the ligands. The comparison with the binding mode of the potent agonist JOM-6 seems to indicate that the cyclic endomorphin-1 analogs interact with the receptor by way of an alternative mechanism, still maintaining the ability to activate the receptor.
Rautenberg, Tamlyn Anne; Zerwes, Ute; Lee, Way Seah
2018-01-01
Objective To perform cost utility (CU) and budget impact (BI) analyses augmented by scenario analyses of critical model structure components to evaluate racecadotril as adjuvant to oral rehydration solution (ORS) for children under 5 years with acute diarrhea in Malaysia. Methods A CU model was adapted to evaluate racecadotril plus ORS vs ORS alone for acute diarrhea in children younger than 5 years from a Malaysian public payer’s perspective. A bespoke BI analysis was undertaken in addition to detailed scenario analyses with respect to critical model structure components. Results According to the CU model, the intervention is less costly and more effective than comparator for the base case with a dominant incremental cost-effectiveness ratio of −RM 1,272,833/quality-adjusted life year (USD −312,726/quality-adjusted life year) in favor of the intervention. According to the BI analysis (assuming an increase of 5% market share per year for racecadotril+ORS for 5 years), the total cumulative incremental percentage reduction in health care expenditure for diarrhea in children is 0.136578%, resulting in a total potential cumulative cost savings of −RM 73,193,603 (USD −17,983,595) over a 5-year period. Results hold true across a range of plausible scenarios focused on critical model components. Conclusion Adjuvant racecadotril vs ORS alone is potentially cost-effective from a Malaysian public payer perspective subject to the assumptions and limitations of the model. BI analysis shows that this translates into potential cost savings for the Malaysian public health care system. Results hold true at evidence-based base case values and over a range of alternate scenarios. PMID:29588606
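The cost-utility arithmetic behind a "dominant" incremental cost-effectiveness ratio (ICER) can be sketched in a few lines. The figures below are hypothetical placeholders, not the study's inputs; only the sign logic (lower cost and more QALYs gained means dominance) is general.

```python
# Minimal sketch of the ICER arithmetic used in cost-utility analyses such as the
# one above. delta_cost and delta_qaly are hypothetical placeholders, not study data.

def icer(delta_cost: float, delta_qaly: float) -> float:
    """Incremental cost per quality-adjusted life year (QALY) gained."""
    return delta_cost / delta_qaly

delta_cost = -500.0   # intervention costs RM 500 less per patient (hypothetical)
delta_qaly = 0.0004   # intervention yields 0.0004 extra QALYs per patient (hypothetical)

ratio = icer(delta_cost, delta_qaly)
dominant = delta_cost < 0 and delta_qaly > 0  # cheaper and more effective
print(f"ICER = RM {ratio:,.0f} per QALY; dominant = {dominant}")
```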
NASA Astrophysics Data System (ADS)
Meng, Yan-Zhi; Geng, Jin-Jun; Zhang, Bin-Bin; Wei, Jun-Jie; Xiao, Di; Liu, Liang-Duan; Gao, He; Wu, Xue-Feng; Liang, En-Wei; Huang, Yong-Feng; Dai, Zi-Gao; Zhang, Bing
2018-06-01
The first gravitational-wave event from the merger of a binary neutron star system (GW170817) was detected recently. The associated short gamma-ray burst (GRB 170817A) has a low isotropic luminosity (∼10^47 erg s^−1) and a peak energy E_p ∼ 145 keV during the initial main emission between −0.3 and 0.4 s. The origin of this short GRB is still under debate, but a plausible interpretation is that it is due to the off-axis emission from a structured jet. We consider two possibilities. First, since the best-fit spectral model for the main pulse of GRB 170817A is a cutoff power law with a hard low-energy photon index (α = −0.62^{+0.49}_{−0.54}), we consider an off-axis photosphere model. We develop a theory of photosphere emission in a structured jet and find that such a model can reproduce a low-energy photon index that is softer than a blackbody through enhancing high-latitude emission. The model can naturally account for the observed spectrum. The best-fit Lorentz factor along the line of sight is ∼20, which demands that there is a significant delay between the merger and jet launching. Alternatively, we consider that the emission is produced via synchrotron radiation in an optically thin region in an expanding jet with decreasing magnetic fields. This model does not require a delay of jet launching but demands a larger bulk Lorentz factor along the line of sight. We perform Markov Chain Monte Carlo fitting to the data within the framework of both models and obtain good fitting results in both cases.
Mannocci, Laura; Roberts, Jason J; Miller, David L; Halpin, Patrick N
2017-06-01
As human activities expand beyond national jurisdictions to the high seas, there is an increasing need to consider anthropogenic impacts to species inhabiting these waters. The current scarcity of scientific observations of cetaceans in the high seas impedes the assessment of population-level impacts of these activities. We developed plausible density estimates to facilitate a quantitative assessment of anthropogenic impacts on cetacean populations in these waters. Our study region extended from a well-surveyed region within the U.S. Exclusive Economic Zone into a large region of the western North Atlantic sparsely surveyed for cetaceans. We modeled densities of 15 cetacean taxa with available line transect survey data and habitat covariates and extrapolated predictions to sparsely surveyed regions. We formulated models to reduce the extent of extrapolation beyond covariate ranges, and constrained them to model simple and generalizable relationships. To evaluate confidence in the predictions, we mapped where predictions were made outside sampled covariate ranges, examined alternate models, and compared predicted densities with maps of sightings from sources that could not be integrated into our models. Confidence levels in model results depended on the taxon and geographic area and highlighted the need for additional surveying in environmentally distinct areas. With application of necessary caution, our density estimates can inform management needs in the high seas, such as the quantification of potential cetacean interactions with military training exercises, shipping, fisheries, and deep-sea mining and be used to delineate areas of special biological significance in international waters. Our approach is generally applicable to other marine taxa and geographic regions for which management will be implemented but data are sparse. © 2016 The Authors. Conservation Biology published by Wiley Periodicals, Inc. on behalf of Society for Conservation Biology.
Antimicrobial peptides for the treatment of pulmonary tuberculosis, allies or foes?
Rivas-Santiago, Bruno; Torres-Juarez, Flor
2018-03-27
Tuberculosis is an ancient disease that has become a serious public health issue in recent years; although the rise in incidence has been controlled, deaths caused by Mycobacterium tuberculosis have been accentuated by the emergence of multi-drug-resistant strains and by comorbidity with diabetes mellitus and HIV. This situation threatens the World Health Organization (WHO) goal of eradicating tuberculosis by 2035. WHO has called for the creation of new drugs as an alternative for the treatment of pulmonary tuberculosis; among the plausible candidate molecules are the antimicrobial peptides (AMPs). These peptides have demonstrated remarkable efficacy in killing mycobacteria in vitro and in vivo in experimental models. Nevertheless, AMPs not only have antimicrobial activity but also a wide variety of functions such as angiogenesis, wound healing, immunomodulation and other well-described roles in human physiology. Therapeutic strategies for tuberculosis using AMPs must therefore be carefully thought out prior to clinical use, evaluating comorbidities, family history and risk factors for other diseases, since the broad range of AMP functions could lead to undesirable collateral effects. Copyright © Bentham Science Publishers; for any queries, please email epub@benthamscience.org.
NASA Astrophysics Data System (ADS)
Shipman, J. S.; Anderson, J. W.
2017-12-01
An ideal tool for ecologists and land managers to investigate the impacts of both projected environmental changes and policy alternatives is the creation of immersive, interactive, virtual landscapes. As a new frontier in visualizing and understanding geospatial data, virtual landscapes require a new toolbox for data visualization that includes traditional GIS tools and uncommon tools such as the Unity3d game engine. Game engines provide capabilities to not only explore data but to build and interact with dynamic models collaboratively. These virtual worlds can be used to display and illustrate data in ways that are often more understandable and plausible to both stakeholders and policy makers than traditional maps. Within this context we will present funded research that has been developed utilizing virtual landscapes for geographic visualization and decision support among varied stakeholders. We will highlight the challenges and lessons learned when developing interactive virtual environments that require large multidisciplinary team efforts with varied competences. The results will emphasize the importance of visualization and interactive virtual environments and the link with emerging research disciplines within Visual Analytics.
Hagstrum, Jonathan T; Manley, Geoffrey A
2015-10-01
Experienced homing pigeons with extirpated cochleae and lagenae were released from six sites in upstate New York and western Pennsylvania on 17 days between 1973 and 1975 by William T. Keeton and his co-workers at Cornell University. The previously unpublished data indicate that departure directions of the operated birds were significantly different from those of sham-operated control birds (314 total), indicating that aural cues play an important part in the pigeon's navigational system. Moreover, propagation modeling of infrasonic waves using meteorological data for the release days supports the possibility that control birds used infrasonic signals to determine their homeward direction. Local acoustic 'shadow' zones, therefore, could have caused initial disorientation of control birds at release sites where they were normally well oriented. Experimental birds plausibly employed an alternate 'route-reversal' strategy to return home perhaps using their ocular-based magnetic compass. We suggest, based on Keeton's results from another site of long-term disorientation, that experienced pigeons depend predominantly on infrasonic cues for initial orientation, and that surgical removal of their aural sense compelled them to switch to a secondary navigational strategy.
Social discounting involves modulation of neural value signals by temporoparietal junction
Strombach, Tina; Weber, Bernd; Hangebrauk, Zsofia; Kenning, Peter; Karipidis, Iliana I.; Tobler, Philippe N.; Kalenscher, Tobias
2015-01-01
Most people are generous, but not toward everyone alike: generosity usually declines with social distance between individuals, a phenomenon called social discounting. Despite the pervasiveness of social discounting, social distance between actors has been surprisingly neglected in economic theory and neuroscientific research. We used functional magnetic resonance imaging (fMRI) to study the neural basis of this process to understand the neural underpinnings of social decision making. Participants chose between selfish and generous alternatives, yielding either a large reward for the participant alone, or smaller rewards for the participant and another individual at a particular social distance. We found that generous choices engaged the temporoparietal junction (TPJ). In particular, TPJ activity scaled with the social-distance–dependent conflict between selfish and generous motives during prosocial choice, consistent with the idea that the TPJ promotes generosity by helping to overcome an egoism bias. Based on functional coupling data, we propose and provide evidence for a biologically plausible neural model according to which the TPJ supports social discounting by modulating basic neural value signals in the ventromedial prefrontal cortex to incorporate social-distance–dependent other-regarding preferences into an otherwise exclusively own-reward value representation. PMID:25605887
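A common way to quantify social discounting in the behavioral literature is a hyperbolic decline of generosity with social distance, v = V / (1 + k·D). The abstract does not state which functional form was used in this study, so the sketch below is only an illustration fitted to synthetic data.

```python
# Hedged sketch: fitting a hyperbolic social-discounting curve v = V / (1 + k*D)
# to (social distance, amount shared) data. The functional form is a common choice
# in the behavioral literature, not necessarily the one used in the paper above,
# and the data are synthetic.
import numpy as np
from scipy.optimize import curve_fit

def hyperbolic(distance, V, k):
    return V / (1.0 + k * distance)

distance = np.array([1, 2, 5, 10, 20, 50, 100], dtype=float)
shared = np.array([78, 70, 55, 40, 28, 15, 9], dtype=float)  # synthetic generosity values

(V_hat, k_hat), _ = curve_fit(hyperbolic, distance, shared, p0=(80.0, 0.05))
print(f"fitted undiscounted value V = {V_hat:.1f}, social discount rate k = {k_hat:.3f}")
```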
Sensitivity of projected long-term CO2 emissions across the Shared Socioeconomic Pathways
NASA Astrophysics Data System (ADS)
Marangoni, G.; Tavoni, M.; Bosetti, V.; Borgonovo, E.; Capros, P.; Fricko, O.; Gernaat, D. E. H. J.; Guivarch, C.; Havlik, P.; Huppmann, D.; Johnson, N.; Karkatsoulis, P.; Keppo, I.; Krey, V.; Ó Broin, E.; Price, J.; van Vuuren, D. P.
2017-01-01
Scenarios showing future greenhouse gas emissions are needed to estimate climate impacts and the mitigation efforts required for climate stabilization. Recently, the Shared Socioeconomic Pathways (SSPs) have been introduced to describe alternative social, economic and technical narratives, spanning a wide range of plausible futures in terms of challenges to mitigation and adaptation. Thus far the key drivers of the uncertainty in emissions projections have not been robustly disentangled. Here we assess the sensitivities of future CO2 emissions to key drivers characterizing the SSPs. We use six state-of-the-art integrated assessment models with different structural characteristics, and study the impact of five families of parameters, related to population, income, energy efficiency, fossil fuel availability, and low-carbon energy technology development. A recently developed sensitivity analysis algorithm allows us to parsimoniously compute both the direct and interaction effects of each of these drivers on cumulative emissions. The study reveals that the SSP assumptions about energy intensity and economic growth are the most important determinants of future CO2 emissions from energy combustion, both with and without a climate policy. Interaction terms between parameters are shown to be important determinants of the total sensitivities.
Analysis on the Fracture of Al-Cu Dissimilar Materials Friction Stir Welding Lap Joint
NASA Astrophysics Data System (ADS)
Sun, Hongyu; Zhou, Qi; Zhu, Jun; Peng, Yong
2017-12-01
Friction stir welding (FSW) is regarded as a more plausible alternative to other welding methods for Al-Cu dissimilar joining. However, the structure of an FSW joint is different from that of other joints. In this study, lap joints of 6061 aluminum alloy and commercially pure copper were produced by FSW, and the effects of rotation rate on macromorphology, microstructure and mechanical properties were investigated. In addition, a fracture J-integral model was used to analyze the effect of microstructure on the mechanical properties. The results revealed that macrodefect-free joints were obtained at a feed rate of 150 mm/min and a rotation rate of 1100 rpm, and that the failure load of the joint reached as high as 4.57 kN but only 2.91 kN at 900 rpm, where tunnel defects were identified. Particle-rich zones, composed of Cu particles dispersed in an Al matrix, and "flow tracks" were observed by EDS. The J-integral results showed that microdefects on the advancing side caused more severe stress concentration than microdefects located at the Al-Cu interface, resulting in fracture of the joints.
Vanegas, Carlos A; Aliaga, Daniel G; Benes, Bedrich; Waddell, Paul
2009-01-01
Urban simulation models and their visualization are used to help regional planning agencies evaluate alternative transportation investments, land use regulations, and environmental protection policies. Typical urban simulations provide spatially distributed data about number of inhabitants, land prices, traffic, and other variables. In this article, we build on a synergy of urban simulation, urban visualization, and computer graphics to automatically infer an urban layout for any time step of the simulation sequence. In addition to standard visualization tools, our method gathers data of the original street network, parcels, and aerial imagery and uses the available simulation results to infer changes to the original urban layout and produce a new and plausible layout for the simulation results. In contrast with previous work, our approach automatically updates the layout based on changes in the simulation data and thus can scale to a large simulation over many years. The method in this article offers a substantial step forward in building integrated visualization and behavioral simulation systems for use in community visioning, planning, and policy analysis. We demonstrate our method on several real cases using a 200 GB database for a 16,300 km2 area surrounding Seattle.
NASA Astrophysics Data System (ADS)
Im, Chang-Hwan; Jung, Hyun-Kyo; Fujimaki, Norio
2005-10-01
This paper proposes an alternative approach to enhance localization accuracy of MEG and EEG focal sources. The proposed approach assumes anatomically constrained spatio-temporal dipoles, initial positions of which are estimated from local peak positions of distributed sources obtained from a pre-execution of distributed source reconstruction. The positions of the dipoles are then adjusted on the cortical surface using a novel updating scheme named cortical surface scanning. The proposed approach has many advantages over the conventional ones: (1) as the cortical surface scanning algorithm uses spatio-temporal dipoles, it is robust with respect to noise; (2) it requires no a priori information on the numbers and initial locations of the activations; (3) as the locations of dipoles are restricted only on a tessellated cortical surface, it is physiologically more plausible than the conventional ECD model. To verify the proposed approach, it was applied to several realistic MEG/EEG simulations and practical experiments. From the several case studies, it is concluded that the anatomically constrained dipole adjustment (ANACONDA) approach will be a very promising technique to enhance accuracy of focal source localization which is essential in many clinical and neurological applications of MEG and EEG.
Teixeira, Andreia Sofia; Monteiro, Pedro T; Carriço, João A; Ramirez, Mário; Francisco, Alexandre P
2015-01-01
Trees, including minimum spanning trees (MSTs), are commonly used in phylogenetic studies. But, for the research community, it may be unclear that the presented tree is just a hypothesis, chosen from among many possible alternatives. In this scenario, it is important to quantify our confidence in both the trees and the branches/edges included in such trees. In this paper, we address this problem for MSTs by introducing a new edge betweenness metric for undirected and weighted graphs. This spanning edge betweenness metric is defined as the fraction of equivalent MSTs where a given edge is present. The metric provides a per-edge statistic that is similar to that of the bootstrap approach frequently used in phylogenetics to support the grouping of taxa. We provide methods for the exact computation of this metric based on the well-known Kirchhoff matrix tree theorem. Moreover, we implement and make available a module for the PHYLOViZ software and evaluate the proposed metric concerning both effectiveness and computational performance. Analysis of trees generated using multilocus sequence typing (MLST) data and the goeBURST algorithm revealed that the space of possible MSTs in real data sets is extremely large. Selection of the edge to be represented using bootstrap could lead to unreliable results since alternative edges are present in the same fraction of equivalent MSTs. The choice of the MST to be presented results from criteria implemented in the algorithm, which must be based on biologically plausible models.
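A minimal sketch of the matrix-tree idea behind such a metric, simplified to an unweighted graph (where every spanning tree is an MST): the betweenness of an edge is the number of spanning trees containing it, obtained by contracting that edge, divided by the total number of spanning trees. The graph below is a toy example, not MLST data, and the weighted-MST case handled in the paper requires additional bookkeeping.

```python
# Hedged sketch of spanning edge betweenness via Kirchhoff's matrix tree theorem,
# restricted to an unweighted graph. betweenness(e) = #spanning trees containing e
# / #spanning trees of G; the numerator equals the spanning-tree count of G with e
# contracted.
import numpy as np

def spanning_tree_count(adj):
    """Number of spanning trees from any cofactor of the graph Laplacian."""
    laplacian = np.diag(adj.sum(axis=1)) - adj
    return int(round(np.linalg.det(laplacian[1:, 1:])))

def contract(adj, u, v):
    """Merge vertex v into vertex u (edge contraction, dropping self-loops)."""
    n = adj.shape[0]
    keep = [i for i in range(n) if i != v]
    merged = adj.astype(float).copy()
    merged[u, :] += adj[v, :]
    merged[:, u] += adj[:, v]
    merged[u, u] = 0.0
    return merged[np.ix_(keep, keep)]

# Toy graph: 4-cycle 0-1-2-3-0 plus chord 0-2
adj = np.zeros((4, 4))
for a, b in [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]:
    adj[a, b] = adj[b, a] = 1.0

total = spanning_tree_count(adj)
for a, b in [(0, 1), (0, 2)]:
    with_edge = spanning_tree_count(contract(adj, a, b))
    print(f"edge ({a},{b}): spanning edge betweenness = {with_edge}/{total} = {with_edge/total:.2f}")
```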
Bottle, Alex; Ventura, Chiara Maria; Dharmarajan, Kumar; Aylin, Paul; Ieva, Francesca; Paganoni, Anna Maria
2018-06-01
Heart failure (HF) is a common, serious chronic condition with high morbidity, hospitalisation and mortality. The healthcare systems of England and the northern Italian region of Lombardy share important similarities and have comprehensive hospital administrative databases linked to the death register. We used them to compare admission for HF and mortality for patients between 2006 and 2012 (n = 37,185 for Lombardy, 234,719 for England) with multistate models. Despite close similarities in age, sex and common comorbidities of the two sets of patients, in Lombardy, HF admissions were longer and more frequent per patient than in England, but short- and medium-term mortality was much lower. English patients had more very short stays, but their very elderly also had longer stays than their Lombardy counterparts. Using a three-state model, the predicted total time spent in hospital showed large differences between the countries: women in England spent an average of 24 days if aged 65 at first admission and 19 days if aged 85; in Lombardy these figures were 68 and 27 days respectively. Eight-state models suggested disease progression that appeared similar in each country. Differences by region within England were modest, with London patients spending more time in hospital and having lower mortality than the rest of England. Whilst clinical practice differences plausibly explain these patterns, we cannot confidently disentangle the impact of alternatives such as coding, casemix, and the availability and use of non-hospital settings. We need to better understand the links between rehospitalisation frequency and mortality.
NASA Astrophysics Data System (ADS)
Altun, Zikri; Bleda, Erdi; Trindle, Carl
2017-09-01
Gas phase conversion of acetylene to benzene, assisted by a single metal cation such as Fe(+), Ru(+) and Rh(+), offers an attractive prospect for application of computational modelling techniques to catalytic processes. Gas phase processes are not complicated by environmental effects and the participation of a single metal atom is a significant simplification. Still the process is complex, owing to the possibility of several low-energy spin states and the abundance of alternative structures. By density functional theory modelling using recently developed models with range and dispersion corrections, we locate and characterise a number of extreme points on the FeC6H6(+) surface, some of which have not been described previously. These include eta-1, eta-2 and eta-3 complexes of Fe(+) with the C4H4 ring. We identify new FeC6H6(+) structures as well, which may be landmarks for the Fe(+)-catalysed production of benzene from acetylene. The Fe(+) benzene complex is the most stable species on the FeC6H6 cation surface. With the abundant energy of complexation available in the isolated gas phase species, detachment of the Fe(+) and production of benzene can be efficient. We address the issue raised by other investigators whether multi-configurational self-consistent field methods are essential to the proper description of these systems. We find that the relative energy of intrinsically multi-determinant doublets is strongly affected, but judge that the density functional theory (DFT) description provides more accurate estimates of energetics and a more plausible reaction path.
Dickinson, David
2013-09-01
Despite three decades of public health promotion based on the scientific explanation of HIV/AIDS, alternative explanations of the disease continue to circulate. While these are seen as counter-productive to health education efforts, what is rarely analysed is their plurality and their tenacity. This article analyses the 'AIDS myths' collected by African HIV/AIDS workplace peer educators during an action research project. These beliefs about HIV/AIDS are organised, in this article, around core ideas that form the basis of 'folk' and 'lay theories' of HIV/AIDS. These constitute non-scientific explanations of HIV/AIDS, with folk theories drawing on bodies of knowledge that are independent of HIV/AIDS while lay theories are generated in response to the disease. A categorisation of alternative beliefs about HIV/AIDS is presented which comprises three folk theories - African traditional beliefs, Christian theology, and racial conspiracy - and three lay theories, all focused on avoiding HIV infection. Using this schema, the article describes how the plausibility of these alternative theories of HIV/AIDS lies not in their scientific validity, but in the robustness of the core idea at the heart of each folk or lay theory. Folk and lay theories of HIV/AIDS are also often highly palatable in that they provide hope and comfort in terms of prevention, cure, and the allocation of blame. This study argues that there is coherence and value to these alternative HIV/AIDS beliefs which should not be dismissed as ignorance, idle speculation or simple misunderstandings. A serious engagement with folk and lay theories of HIV/AIDS helps explain the continued circulation of alternative beliefs of HIV/AIDS and the slow uptake of behavioural change messages around the disease.
Schmidt, C Q; Herbert, A P; Hocking, H G; Uhrín, D; Barlow, P N
2008-01-01
The 155-kDa glycoprotein, complement factor H (CFH), is a regulator of complement activation that is abundant in human plasma. Three-dimensional structures of over half the 20 complement control protein (CCP) modules in CFH have been solved in the context of single-, double- and triple-module segments. Proven binding sites for C3b occupy the N and C termini of this elongated molecule and may be brought together by a bend in CFH mediated by its central CCP modules. The C-terminal CCP 20 is key to the ability of the molecule to adhere to polyanionic markers on self-surfaces where CFH acts to regulate amplification of the alternative pathway of complement. The surface patch on CCP 20 that binds to model glycosaminoglycans has been mapped using nuclear magnetic resonance (NMR), as has a second glycosaminoglycan-binding patch on CCP 7. These patches include many of the residue positions at which sequence variations have been linked to three complement-mediated disorders: dense deposit disease, age-related macular degeneration and atypical haemolytic uraemic syndrome. In one plausible model, CCP 20 anchors CFH to self-surfaces via a C3b/polyanion composite binding site, CCP 7 acts as a ‘proof-reader’ to help discriminate self- from non-self patterns of sulphation, and CCPs 1–4 disrupt C3/C5 convertase formation and stability. PMID:18081691
Social support as a mediator between job control and psychological strain.
Blanch, Angel
2016-05-01
Social support is a key factor influencing health, and one of the main dimensions of the Demand-Control-Support (DCS) model within the occupational health field. The buffer hypothesis of the DCS model posits that job control and social support relieve the effects of a high job demand on health. This hypothesis has been evaluated in several studies to predict workers' health, even though it has yielded ambiguous and inconclusive results. This study evaluated whether social support mediated the effect of job demand or job control on job strain. This mediation mechanism might represent a plausible and coherent alternative to the buffer hypothesis that deserves to be analyzed within this field. Two models considering support as the mediator variable in the explanation of job strain were assessed with a group of administrative and technical workers (N = 281). While there was no evidence for support behaving as a mediator variable between demand and job strain, social support was a consistent mediator in the association of job control with job strain. The effect of job control on job strain was fully mediated by social support from supervisors and coworkers. The role of social support as a mediator implies that the prevention of psychosocial stressors in the workplace should place a stronger emphasis on improving social relationships at work. Copyright © 2016 Elsevier Ltd. All rights reserved.
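The mediation logic (job control to social support to strain) can be illustrated with the simple product-of-coefficients approach on synthetic data; the study's actual instruments and estimation details are not reproduced here.

```python
# Hedged illustration of product-of-coefficients mediation (X -> M -> Y) with
# synthetic data: X = job control, M = social support, Y = job strain.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 281
control = rng.normal(size=n)                              # job control (X)
support = 0.5 * control + rng.normal(size=n)              # social support (M), partly driven by X
strain = -0.4 * support + 0.0 * control + rng.normal(size=n)  # strain (Y), fully mediated here

# Path a: X -> M
a = sm.OLS(support, sm.add_constant(control)).fit().params[1]
# Paths b (M -> Y) and c' (direct X -> Y), estimated jointly
fit_y = sm.OLS(strain, sm.add_constant(np.column_stack([control, support]))).fit()
c_prime, b = fit_y.params[1], fit_y.params[2]

print(f"indirect effect a*b = {a * b:.3f}, direct effect c' = {c_prime:.3f}")
```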
Beauchamp, Kathryn G; Kahn, Lauren E; Berkman, Elliot T
2016-09-01
Inhibitory control (IC) is a critical neurocognitive skill for successfully navigating challenges across domains. Several studies have attempted to use training to improve neurocognitive skills such as IC, but few have found that training generalizes to performance on non-trained tasks. We used functional magnetic resonance imaging (fMRI) to investigate the effect of IC training on a related but untrained emotion regulation (ER) task with the goal of clarifying how training alters brain function and why its effects typically do not transfer across tasks. We suggest hypotheses for training-related changes in activation relevant to transfer effects: the strength model and several plausible alternatives (shifting priorities, stimulus-response automaticity, scaffolding). Sixty participants completed three weeks of IC training and underwent fMRI scanning before and after. The training produced pre- to post-training changes in neural activation during the ER task in the absence of behavioral changes. Specifically, individuals in the training group demonstrated reduced activation during ER in the left inferior frontal gyrus and supramarginal gyrus, key regions in the IC neural network. This result is less consistent with the strength model and more consistent with a motivational account. Implications for future work aiming to further pinpoint mechanisms of training transfer are discussed. © The Author (2016). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
An Alternative Approach to Analyze Ipsative Data. Revisiting Experiential Learning Theory.
Batista-Foguet, Joan M; Ferrer-Rosell, Berta; Serlavós, Ricard; Coenders, Germà; Boyatzis, Richard E
2015-01-01
The ritualistic use of statistical models regardless of the type of data actually available is a common practice across disciplines, which we dare to call a type zero error. Statistical models involve a series of assumptions whose existence is often neglected altogether; this is especially the case with ipsative data. This paper illustrates the consequences of this ritualistic practice within Kolb's Experiential Learning Theory (ELT) operationalized through its Learning Style Inventory (KLSI). We show how, using a methodology well known in other disciplines - compositional data analysis (CODA) and log-ratio transformations - KLSI data can be properly analyzed. In addition, the method has theoretical implications: a third dimension of the KLSI is unveiled, providing room for future research. This third dimension describes an individual's relative preference for learning by prehension rather than by transformation. Using a sample of international MBA students, we relate this dimension with another self-assessment instrument, the Philosophical Orientation Questionnaire (POQ), and with an observer-assessed instrument, the Emotional and Social Competency Inventory (ESCI-U). Both show plausible statistical relationships. An intellectual operating philosophy (IOP) is linked to a preference for prehension, whereas a pragmatic operating philosophy (POP) is linked to transformation. Self-management and social awareness competencies are linked to a learning preference for transforming knowledge, whereas relationship management and cognitive competencies are more related to approaching learning by prehension.
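A minimal sketch of the centered log-ratio (clr) step used in compositional data analysis; the ipsative scores below are hypothetical, and the paper's full CODA treatment involves more than this single transformation.

```python
# Minimal sketch of a centered log-ratio (clr) transformation, the kind of
# log-ratio step used in compositional data analysis (CODA) before applying
# standard statistical models. The KLSI-style ipsative scores are hypothetical.
import numpy as np

def clr(composition):
    """Centered log-ratio: log of each part divided by the geometric mean."""
    composition = np.asarray(composition, dtype=float)
    geometric_mean = np.exp(np.mean(np.log(composition)))
    return np.log(composition / geometric_mean)

scores = [30, 25, 25, 20]          # hypothetical parts of a fixed ipsative total
transformed = clr(scores)
print(transformed, transformed.sum())  # clr components always sum to ~0
```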
NASA Astrophysics Data System (ADS)
Halsig, Sebastian; Artz, Thomas; Iddink, Andreas; Nothnagel, Axel
2016-12-01
On their way through the atmosphere, radio signals are delayed and affected by bending and attenuation effects relative to a theoretical path in vacuum. In particular, the neutral part of the atmosphere contributes considerably to the error budget of space-geodetic observations. At the same time, space-geodetic techniques become more and more important in the understanding of the Earth's atmosphere, because atmospheric parameters can be linked to the water vapor content in the atmosphere. The tropospheric delay is usually taken into account by applying an adequate model for the hydrostatic component and by additionally estimating zenith wet delays for the highly variable wet component. Sometimes, the Ordinary Least Squares (OLS) approach leads to negative estimates, which would be equivalent to negative water vapor in the atmosphere and, of course, do not reflect meteorological and physical conditions in a plausible way. To cope with this phenomenon, we introduce an Inequality Constrained Least Squares (ICLS) method from the field of convex optimization and use inequality constraints to force the tropospheric parameters to be non-negative, allowing for a more realistic tropospheric parameter estimation in a meteorological sense. Because deficiencies in the a priori hydrostatic modeling are almost fully compensated by the tropospheric estimates, the ICLS approach urgently requires suitable a priori hydrostatic delays. In this paper, we briefly describe the ICLS method and validate its impact with regard to station positions.
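A minimal sketch of the inequality-constrained idea on a synthetic linear system (not the actual VLBI estimation setup): an unconstrained least-squares fit may return negative parameters, whereas a bounded solver keeps them non-negative.

```python
# Hedged sketch of inequality-constrained least squares: estimate parameters under
# a non-negativity constraint with scipy's bounded solver, on synthetic data.
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(1)
A = rng.normal(size=(50, 3))                  # synthetic design matrix
x_true = np.array([0.0, 0.8, 1.5])            # true non-negative parameters (zenith-wet-delay-like)
y = A @ x_true + 0.5 * rng.normal(size=50)    # noisy observations

ols = np.linalg.lstsq(A, y, rcond=None)[0]            # unconstrained: may go negative
icls = lsq_linear(A, y, bounds=(0.0, np.inf)).x       # constrained to be non-negative

print("OLS estimate :", np.round(ols, 3))
print("ICLS estimate:", np.round(icls, 3))
```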
Xu, Kesheng; Maidana, Jean P.; Caviedes, Mauricio; Quero, Daniel; Aguirre, Pablo; Orio, Patricio
2017-01-01
In this article, we describe and analyze the chaotic behavior of a conductance-based neuronal bursting model. This is a model with a reduced number of variables, yet it retains biophysical plausibility. Inspired by the activity of cold thermoreceptors, the model contains a persistent Sodium current, a Calcium-activated Potassium current and a hyperpolarization-activated current (Ih) that drive a slow subthreshold oscillation. Driven by this oscillation, a fast subsystem (fast Sodium and Potassium currents) fires action potentials in a periodic fashion. Depending on the parameters, this model can generate a variety of firing patterns that includes bursting, regular tonic and polymodal firing. Here we show that the transitions between different firing patterns are often accompanied by a range of chaotic firing, as suggested by an irregular, non-periodic firing pattern. To confirm this, we measure the maximum Lyapunov exponent of the voltage trajectories, and the Lyapunov exponent and Lempel-Ziv's complexity of the ISI time series. The four-variable slow system (without spiking) also generates chaotic behavior, and bifurcation analysis shows that this is often originated by period doubling cascades. Either with or without spikes, chaos is no longer generated when the Ih is removed from the system. As the model is biologically plausible with biophysically meaningful parameters, we propose it as a useful tool to understand chaotic dynamics in neurons. PMID:28344550
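One way to compute a Lempel-Ziv complexity for an inter-spike-interval (ISI) series is to binarize the ISIs and count the phrases of an LZ76-style parsing; the sketch below uses synthetic ISIs and a plain phrase count, whereas the paper may use a different binarization or normalization.

```python
# Hedged sketch of a Lempel-Ziv (LZ76-style) complexity estimate on a binarized
# ISI sequence. ISI values are synthetic and the normalization may differ from the paper's.
import numpy as np

def lempel_ziv_complexity(symbols: str) -> int:
    """Count the phrases of an exhaustive LZ76-style parsing of a symbol string."""
    i, count, n = 0, 0, len(symbols)
    while i < n:
        length = 1
        # Extend the candidate phrase while it already occurs earlier in the string
        while i + length <= n and symbols[i:i + length] in symbols[:i + length - 1]:
            length += 1
        count += 1
        i += length
    return count

rng = np.random.default_rng(2)
regular_isi = np.tile([0.02, 0.08], 200)            # synthetic periodic spike train
irregular_isi = rng.exponential(0.05, size=400)     # synthetic irregular spike train

for name, isi in [("regular", regular_isi), ("irregular", irregular_isi)]:
    binarized = ''.join('1' if x > np.median(isi) else '0' for x in isi)
    print(name, lempel_ziv_complexity(binarized))
```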
On the distinguishability of HRF models in fMRI.
Rosa, Paulo N; Figueiredo, Patricia; Silvestre, Carlos J
2015-01-01
Modeling the Hemodynamic Response Function (HRF) is a critical step in fMRI studies of brain activity, and it is often desirable to estimate HRF parameters with physiological interpretability. A biophysically informed model of the HRF can be described by a non-linear time-invariant dynamic system. However, the identification of this dynamic system may leave much uncertainty on the exact values of the parameters. Moreover, the high noise levels in the data may hinder the model estimation task. In this context, the estimation of the HRF may be seen as a problem of model falsification or invalidation, where we are interested in distinguishing among a set of eligible models of dynamic systems. Here, we propose a systematic tool to determine the distinguishability among a set of physiologically plausible HRF models. The concept of absolutely input-distinguishable systems is introduced and applied to a biophysically informed HRF model, by exploiting the structure of the underlying non-linear dynamic system. A strategy to model uncertainty in the input time-delay and magnitude is developed and its impact on the distinguishability of two physiologically plausible HRF models is assessed, in terms of the maximum noise amplitude above which it is not possible to guarantee the falsification of one model in relation to another. Finally, a methodology is proposed for the choice of the input sequence, or experimental paradigm, that maximizes the distinguishability of the HRF models under investigation. The proposed approach may be used to evaluate the performance of HRF model estimation techniques from fMRI data.
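For orientation only, the sketch below evaluates the widely used double-gamma canonical HRF; this is a simple parametric HRF, not the nonlinear biophysically informed dynamic system analyzed in the paper, and the parameter values are conventional defaults rather than estimates.

```python
# Hedged illustration: a double-gamma canonical HRF with conventional default
# parameters, shown only as a simple parametric HRF example.
import numpy as np
from scipy.stats import gamma

def double_gamma_hrf(t, peak_delay=6.0, undershoot_delay=16.0, ratio=1.0 / 6.0):
    """Canonical HRF as a peak gamma density minus a scaled undershoot gamma."""
    return gamma.pdf(t, peak_delay) - ratio * gamma.pdf(t, undershoot_delay)

t = np.arange(0.0, 32.0, 0.5)      # seconds
hrf = double_gamma_hrf(t)
hrf /= hrf.max()                    # normalize peak to 1 before convolving with a stimulus
print(f"peak response at t = {t[np.argmax(hrf)]:.1f} s")
```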
DOT National Transportation Integrated Search
2001-06-30
Freight movements within large metropolitan areas are much less studied and analyzed than personal travel. This casts doubt on the results of much conventional travel demand modeling and planning. With so much traffic overlooked, how plausible are th...
Testing Adaptive Toolbox Models: A Bayesian Hierarchical Approach
ERIC Educational Resources Information Center
Scheibehenne, Benjamin; Rieskamp, Jorg; Wagenmakers, Eric-Jan
2013-01-01
Many theories of human cognition postulate that people are equipped with a repertoire of strategies to solve the tasks they face. This theoretical framework of a cognitive toolbox provides a plausible account of intra- and interindividual differences in human behavior. Unfortunately, it is often unclear how to rigorously test the toolbox…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hanssen, Steef V.; Duden, Anna S.; Junginger, Martin
Several EU countries import wood pellets from the south-eastern United States. The imported wood pellets are (co-)fired in power plants with the aim of reducing overall greenhouse gas (GHG) emissions from electricity and meeting EU renewable energy targets. To assess whether GHG emissions are reduced and on what timescale, we construct the GHG balance of wood-pellet electricity. This GHG balance consists of supply chain and combustion GHG emissions, carbon sequestration during biomass growth, and avoided GHG emissions through replacing fossil electricity. We investigate wood pellets from four softwood feedstock types: small roundwood, commercial thinnings, harvest residues, and mill residues. Per feedstock, the GHG balance of wood-pellet electricity is compared against those of alternative scenarios. Alternative scenarios are combinations of alternative fates of the feedstock material, such as in-forest decomposition, or the production of paper or wood panels like oriented strand board (OSB). Alternative scenario composition depends on feedstock type and local demand for this feedstock. Results indicate that the GHG balance of wood-pellet electricity equals that of alternative scenarios within 0 to 21 years (the GHG parity time), after which wood-pellet electricity has sustained climate benefits. Parity times increase by a maximum of twelve years when varying key variables (emissions associated with paper and panels, soil carbon increase via feedstock decomposition, wood-pellet electricity supply chain emissions) within maximum plausible ranges. Using commercial thinnings, harvest residues or mill residues as feedstock leads to the shortest GHG parity times (0-6 years) and fastest GHG benefits from wood-pellet electricity. Here, we find shorter GHG parity times than previous studies, for we use a novel approach that differentiates feedstocks and considers alternative scenarios based on (combinations of) alternative feedstock fates, rather than on alternative land-uses. This novel approach is relevant for bioenergy derived from low-value feedstocks.
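The parity-time calculation can be illustrated with entirely synthetic numbers: the GHG parity time is the first year in which the cumulative emissions of the wood-pellet scenario drop to or below those of the alternative scenario.

```python
# Hedged sketch of a GHG parity-time calculation with synthetic annual emissions,
# not the study's values: pellets carry a large year-0 pulse (combustion plus supply
# chain) but thereafter benefit from regrowth and avoided fossil generation.
import numpy as np

years = np.arange(0, 41)
pellet_annual = np.where(years == 0, 10.0, -0.6)            # Mt CO2-eq per year (synthetic)
alternative_annual = np.full_like(years, -0.1, dtype=float)  # counterfactual fate (synthetic)

pellet_cum = np.cumsum(pellet_annual)
alternative_cum = np.cumsum(alternative_annual)

crossed = np.nonzero(pellet_cum <= alternative_cum)[0]
parity_year = int(crossed[0]) if crossed.size else None
print(f"GHG parity reached after {parity_year} years")
```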
Economic irrationality is optimal during noisy decision making.
Tsetsos, Konstantinos; Moran, Rani; Moreland, James; Chater, Nick; Usher, Marius; Summerfield, Christopher
2016-03-15
According to normative theories, reward-maximizing agents should have consistent preferences. Thus, when faced with alternatives A, B, and C, an individual preferring A to B and B to C should prefer A to C. However, it has been widely argued that humans can incur losses by violating this axiom of transitivity, despite strong evolutionary pressure for reward-maximizing choices. Here, adopting a biologically plausible computational framework, we show that intransitive (and thus economically irrational) choices paradoxically improve accuracy (and subsequent economic rewards) when decision formation is corrupted by internal neural noise. Over three experiments, we show that humans accumulate evidence over time using a "selective integration" policy that discards information about alternatives with momentarily lower value. This policy predicts violations of the axiom of transitivity when three equally valued alternatives differ circularly in their number of winning samples. We confirm this prediction in a fourth experiment reporting significant violations of weak stochastic transitivity in human observers. Crucially, we show that relying on selective integration protects choices against "late" noise that otherwise corrupts decision formation beyond the sensory stage. Indeed, we report that individuals with higher late noise relied more strongly on selective integration. These findings suggest that violations of rational choice theory reflect adaptive computations that have evolved in response to irreducible noise during neural information processing.
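A hedged simulation of the selective-integration idea: on each sample the momentarily lower-valued alternative is discarded before accumulation, and noise is injected late (after accumulation). With illustrative parameters, not the paper's, the policy improves accuracy under late noise.

```python
# Hedged simulation sketch of "selective integration": discard information about the
# momentarily lower-valued alternative, then add late noise after accumulation.
# All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(3)

def choose(samples_a, samples_b, late_noise_sd, selective):
    a, b = samples_a.copy(), samples_b.copy()
    if selective:
        a_loses = a < b
        a[a_loses] = 0.0       # discard A's sample where it is momentarily lower
        b[~a_loses] = 0.0      # discard B's sample where it is momentarily lower
    total_a = a.sum() + rng.normal(0.0, late_noise_sd)
    total_b = b.sum() + rng.normal(0.0, late_noise_sd)
    return 0 if total_a > total_b else 1

n_trials, n_samples = 5000, 20
hits = {True: 0, False: 0}
for _ in range(n_trials):
    samples_a = rng.normal(1.5, 1.0, n_samples)   # A is the objectively better option
    samples_b = rng.normal(1.0, 1.0, n_samples)
    for selective in (True, False):
        hits[selective] += (choose(samples_a, samples_b, 10.0, selective) == 0)

for selective in (True, False):
    print(f"selective integration={selective}: accuracy = {hits[selective] / n_trials:.3f}")
```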
Ever Enrolled Medicare Population Estimates from the MCBS Access to Care Files
Petroski, Jason; Ferraro, David; Chu, Adam
2014-01-01
Objective The Medicare Current Beneficiary Survey’s (MCBS) Access to Care (ATC) file is designed to provide timely access to information on the Medicare population, yet because of the survey’s complex sampling design and expedited processing it is difficult to use the file to make both “always-enrolled” and “ever-enrolled” estimates on the Medicare population. In this study, we describe the ATC file and sample design, and we evaluate and review various alternatives for producing “ever-enrolled” estimates. Methods We created “ever enrolled” estimates for key variables in the MCBS using three separate approaches. We tested differences between the alternative approaches for statistical significance and show the relative magnitude of difference between approaches. Results Even when estimates derived from the different approaches were statistically different, the magnitude of the difference was often sufficiently small so as to result in little practical difference among the alternate approaches. However, when considering more than just the estimation method, there are advantages to using certain approaches over others. Conclusion There are several plausible approaches to achieving “ever-enrolled” estimates in the MCBS ATC file; however, the most straightforward approach appears to be implementation and usage of a new set of “ever-enrolled” weights for this file. PMID:24991484
Could plant extracts have enabled hominins to acquire honey before the control of fire?
Kraft, Thomas S; Venkataraman, Vivek V
2015-08-01
Honey is increasingly recognized as an important food item in human evolution, but it remains unclear whether extinct hominins could have overcome the formidable collective stinging defenses of honey bees during honey acquisition. The utility of smoke for this purpose is widely recognized, but little research has explored alternative methods of sting deterrence such as the use of plant secondary compounds. To consider whether hominins could have used plant extracts as a precursor or alternative to smoke, we review the ethnographic, ethnobotanical, and plant chemical ecology literature to examine how humans use plants in combination with, and independently of, smoke during honey collection. Plant secondary compounds are diverse in their physiological and behavioral effects on bees and differ fundamentally from those of smoke. Plants containing these chemicals are widespread and prove to be remarkably effective in facilitating honey collection by honey hunters and beekeepers worldwide. While smoke may be superior as a deterrent to bees, plant extracts represent a plausible precursor or alternative to the use of smoke during honey collection by hominins. Smoke is a sufficient but not necessary condition for acquiring honey in amounts exceeding those typically obtained by chimpanzees, suggesting that significant honey consumption could have predated the control of fire. Copyright © 2015 Elsevier Ltd. All rights reserved.
[Costs of delivering allogenic blood in hospitals].
Hönemann, C; Bierbaum, M; Heidler, J; Doll, D; Schöffski, O
2013-05-01
In clinical practice there are medical and economic reasons against the thoughtless use of packed red blood cells (rbc). Therefore, in searching for alternatives (therapy of anemia), the total costs of allogeneic blood transfusions must be considered. Using a practical example, this article depicts the actual costs and possible alternatives from the point of view of a hospital in Germany. To determine the total costs of allogeneic blood transfusions, the actual resource consumption associated with blood transfusions was collated and analyzed at the St. Marien-Hospital in Vechta. The authors were able to show that the actual procurement costs (average 97 EUR) represent only 55 % of the total costs of 176 EUR. The additional expenses are allocated to personnel (78 %) and materials (22 %). Alternatives, such as i.v. iron substitution or stimulation of erythropoiesis, might be the more economical solution, especially if only purchase prices are compared and the total costs of allogeneic blood transfusions are not considered. Analyzing a single hospital limits generalization of the results; however, in the international context the results can be recognized as plausible. So far there have been no comprehensive studies on the true costs of blood preparations; therefore, this article represents a first starting point for closing this gap by conducting additional studies.
Qamar, A; LeBlanc, K; Semeniuk, O; Reznik, A; Lin, J; Pan, Y; Moewes, A
2017-10-13
We investigated the electronic structure of lead oxide (PbO), one of the most promising photoconductor materials for direct-conversion x-ray imaging detectors, using soft x-ray emission and absorption spectroscopy. Two structural configurations of thin PbO layers, namely the polycrystalline and the amorphous phase, were studied and compared with the properties of powdered α-PbO and β-PbO samples. In addition, we performed calculations within the framework of density functional theory and found an excellent agreement between the calculated and the measured absorption and emission spectra, which indicates high accuracy of our structural models. Our work provides strong evidence that the electronic structure of PbO layers, specifically the width of the band gap and the presence of additional interband and intraband states in both the conduction and valence bands, depends on the deposition conditions. We tested several model structures using DFT simulations to understand the origin of these states. The presence of O vacancies is the most plausible explanation for these additional electronic states. Several other plausible models were ruled out, including interstitial O, dislocated O and the presence of significant lattice stress in PbO.
DOE Office of Scientific and Technical Information (OSTI.GOV)
DeRosa, C.T.; Choudhury, H.; Schoeny, R.S.
Risk assessment can be thought of as a conceptual approach to bridge the gap between the available data and the ultimate goal of characterizing the risk or hazard associated with a particular environmental problem. To lend consistency to and to promote quality in the process, the US Environmental Protection Agency (EPA) published Guidelines for Risk Assessment of Carcinogenicity, Developmental Toxicity, Germ Cell Mutagenicity and Exposure Assessment, and Risk Assessment of Chemical Mixtures. The guidelines provide a framework for organizing the information, evaluating data, and for carrying out the risk assessment in a scientifically plausible manner. In the absence of sufficient scientific information or when abundant data are available, the guidelines provide alternative methodologies that can be employed in the risk assessment. 4 refs., 3 figs., 2 tabs.
Microbial oil - A plausible alternate resource for food and fuel application.
Bharathiraja, B; Sridharan, Sridevi; Sowmya, V; Yuvaraj, D; Praveenkumar, R
2017-06-01
Microbes can draw on low-priced substrates such as agricultural wastes and industrial effluents. A pragmatic approach towards an emerging field - the exploitation of microbial oils for biodiesel production, pharmaceutical and cosmetic applications, food additives and biopolymer production - will be of immense remunerative significance in the near future. Owing to their high free fatty acid and nutritive content, and to solvent extraction processes that are simpler than those for plant oils, microbial oils can substitute for plant oils in food applications. The purpose of this review is to evaluate the abundance of lipid production in native and standard micro-organisms and also to emphasize the vast array of applications, including food and fuel, attainable at maximum yield. Copyright © 2017 Elsevier Ltd. All rights reserved.
Current issues and perspectives in food safety and risk assessment.
Eisenbrand, G
2015-12-01
In this review, current issues and opportunities in food safety assessment are discussed. Food safety is considered an essential element inherent in global food security. Hazard characterization is pivotal within the continuum of risk assessment, but it may be conceived only within a very limited frame as a true alternative to risk assessment. Elucidation of the mode of action underlying a given hazard is vital to create a plausible basis for human toxicology evaluation. Risk assessment, to convey meaningful risk communication, must be based on appropriate and reliable consideration of both exposure and mode of action. New perspectives, provided by monitoring human exogenous and endogenous exposure biomarkers, are considered of great promise to support classical risk extrapolation from animal toxicology. © The Author(s) 2015.
van Gestel, Aukje; Severens, Johan L; Webers, Carroll A B; Beckers, Henny J M; Jansonius, Nomdo M; Schouten, Jan S A G
2010-01-01
Discrete event simulation (DES) modeling has several advantages over simpler modeling techniques in health economics, such as increased flexibility and the ability to model complex systems. Nevertheless, these benefits may come at the cost of reduced transparency, which may compromise the model's face validity and credibility. We aimed to produce a transparent report on the construction and validation of a DES model using a recently developed model of ocular hypertension and glaucoma. Current evidence of associations between prognostic factors and disease progression in ocular hypertension and glaucoma was translated into DES model elements. The model was extended to simulate treatment decisions and effects. Utility and costs were linked to disease status and treatment, and clinical and health economic outcomes were defined. The model was validated at several levels. The soundness of design and the plausibility of input estimates were evaluated in interdisciplinary meetings (face validity). Individual patients were traced throughout the simulation under a multitude of model settings to debug the model, and the model was run with a variety of extreme scenarios to compare the outcomes with prior expectations (internal validity). Finally, several intermediate (clinical) outcomes of the model were compared with those observed in experimental or observational studies (external validity), and the feasibility of evaluating hypothetical treatment strategies was tested. The model performed well in all validity tests. Analyses of hypothetical treatment strategies took about 30 minutes per cohort and led to plausible health-economic outcomes. There is added value in DES models for complex treatment strategies such as those in glaucoma. Achieving transparency in model structure and outcomes may require some effort in reporting and validating the model, but it is feasible.
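The basic mechanics of a discrete event simulation, a time-ordered event queue processed event by event, can be sketched generically; the states, rates, and horizon below are hypothetical and are not taken from the published glaucoma model.

```python
# Minimal DES sketch with a priority queue of events: simulated patients convert
# from "ocular hypertension" to "glaucoma" after random delays. All rates, states
# and the horizon are hypothetical illustrations.
import heapq
import random

random.seed(0)

def simulate(n_patients, horizon_years=20.0, mean_years_to_conversion=12.0):
    events = []  # (event_time, patient_id, event_type)
    for pid in range(n_patients):
        t_convert = random.expovariate(1.0 / mean_years_to_conversion)
        heapq.heappush(events, (t_convert, pid, "convert_to_glaucoma"))
    conversions = 0
    while events:
        time, pid, kind = heapq.heappop(events)
        if time > horizon_years:
            break                      # events are popped in time order, so all later ones are beyond the horizon
        if kind == "convert_to_glaucoma":
            conversions += 1
            # further events (progression, treatment switch, death) would be scheduled here
    return conversions

print("converted within 20 years:", simulate(n_patients=1000), "of 1000")
```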
Modelling Southern Ocean ecosystems: krill, the food-web, and the impacts of harvesting.
Hill, S L; Murphy, E J; Reid, K; Trathan, P N; Constable, A J
2006-11-01
The ecosystem approach to fisheries recognises the interdependence between harvested species and other ecosystem components. It aims to account for the propagation of the effects of harvesting through the food-web. The formulation and evaluation of ecosystem-based management strategies requires reliable models of ecosystem dynamics to predict these effects. The krill-based system in the Southern Ocean was the focus of some of the earliest models exploring such effects. It is also a suitable example for the development of models to support the ecosystem approach to fisheries because it has a relatively simple food-web structure and progress has been made in developing models of the key species and interactions, some of which has been motivated by the need to develop ecosystem-based management. Antarctic krill, Euphausia superba, is the main target species for the fishery and the main prey of many top predators. It is therefore critical to capture the processes affecting the dynamics and distribution of krill in ecosystem dynamics models. These processes include environmental influences on recruitment and the spatially variable influence of advection. Models must also capture the interactions between krill and its consumers, which are mediated by the spatial structure of the environment. Various models have explored predator-prey population dynamics with simplistic representations of these interactions, while others have focused on specific details of the interactions. There is now a pressing need to develop plausible and practical models of ecosystem dynamics that link processes occurring at these different scales. Many studies have highlighted uncertainties in our understanding of the system, which indicates future priorities in terms of both data collection and developing methods to evaluate the effects of these uncertainties on model predictions. We propose a modelling approach that focuses on harvested species and their monitored consumers and that evaluates model uncertainty by using alternative structures and functional forms in a Monte Carlo framework.
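To make the closing proposal concrete, the rough sketch below (an illustration, not the authors' model) draws alternative functional forms and parameter values at random in a Monte Carlo loop and reports an uncertainty interval for a consumer population driven by a krill-like prey series; the Holling forms and all parameter ranges are assumptions.

    import random

    # Two standard functional-response forms stand in for alternative model structures;
    # their use here and every parameter value is illustrative, not the authors' model.
    def holling_type_2(prey, a, h=0.2):
        return a * prey / (1.0 + a * h * prey)

    def holling_type_3(prey, a, h=0.2):
        return a * prey ** 2 / (1.0 + a * h * prey ** 2)

    def project_consumer(prey_series, response, a, conversion=0.1, mortality=0.05, p0=1.0):
        """Very simple consumer dynamics driven by a prey (krill) time series."""
        p = p0
        for prey in prey_series:
            p += conversion * response(prey, a) * p - mortality * p
        return p

    def monte_carlo(prey_series, n_draws=1000, seed=0):
        """Propagate structural and parameter uncertainty by random draws."""
        rng = random.Random(seed)
        finals = []
        for _ in range(n_draws):
            form = rng.choice([holling_type_2, holling_type_3])   # structural uncertainty
            a = rng.uniform(0.5, 1.5)                             # parameter uncertainty
            finals.append(project_consumer(prey_series, form, a))
        finals.sort()
        return finals[int(0.05 * n_draws)], finals[int(0.95 * n_draws)]

    rng = random.Random(42)
    prey = [1.0 + 0.3 * rng.random() for _ in range(30)]          # toy krill abundance series
    print("90% interval for final consumer abundance:", monte_carlo(prey))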
ERIC Educational Resources Information Center
Staub, Adrian; Rayner, Keith; Pollatsek, Alexander; Hyona, Jukka; Majewski, Helen
2007-01-01
Readers' eye movements were monitored as they read sentences containing noun-noun compounds that varied in frequency (e.g., elevator mechanic, mountain lion). The left constituent of the compound was either plausible or implausible as a head noun at the point at which it appeared, whereas the compound as a whole was always plausible. When the head…
Linden, Ariel
2018-05-11
Interrupted time series analysis (ITSA) is an evaluation methodology in which a single treatment unit's outcome is studied serially over time and the intervention is expected to "interrupt" the level and/or trend of that outcome. ITSA is commonly evaluated using methods which may produce biased results if model assumptions are violated. In this paper, treatment effects are alternatively assessed by using forecasting methods to closely fit the preintervention observations and then forecast the post-intervention trend. A treatment effect may be inferred if the actual post-intervention observations diverge from the forecasts by some specified amount. The forecasting approach is demonstrated using the effect of California's Proposition 99 for reducing cigarette sales. Three forecast models are fit to the preintervention series-linear regression (REG), Holt-Winters (HW) non-seasonal smoothing, and autoregressive moving average (ARIMA)-and forecasts are generated into the post-intervention period. The actual observations are then compared with the forecasts to assess intervention effects. The preintervention data were fit best by HW, followed closely by ARIMA. REG fit the data poorly. The actual post-intervention observations were above the forecasts in HW and ARIMA, suggesting no intervention effect, but below the forecasts in the REG (suggesting a treatment effect), thereby raising doubts about any definitive conclusion of a treatment effect. In a single-group ITSA, treatment effects are likely to be biased if the model is misspecified. Therefore, evaluators should consider using forecast models to accurately fit the preintervention data and generate plausible counterfactual forecasts, thereby improving causal inference of treatment effects in single-group ITSA studies. © 2018 John Wiley & Sons, Ltd.
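A hedged sketch of the forecasting approach, assuming numpy and statsmodels are available; the yearly figures below are invented stand-ins for the pre- and post-intervention sales series, not the California data.

    import numpy as np
    from statsmodels.tsa.holtwinters import ExponentialSmoothing
    from statsmodels.tsa.arima.model import ARIMA

    # Hypothetical yearly series standing in for pre/post-intervention cigarette sales.
    pre = np.array([123., 121., 118., 116., 113., 111., 110., 108., 105., 103.])
    post_actual = np.array([97., 94., 90., 87., 85.])
    h = len(post_actual)

    # Linear regression (REG) forecast of the pre-intervention trend
    t = np.arange(len(pre))
    slope, intercept = np.polyfit(t, pre, 1)
    reg_fc = intercept + slope * np.arange(len(pre), len(pre) + h)

    # Holt-Winters non-seasonal smoothing and ARIMA forecasts
    hw_fc = ExponentialSmoothing(pre, trend="add", seasonal=None).fit().forecast(h)
    arima_fc = ARIMA(pre, order=(1, 1, 1)).fit().forecast(h)

    for name, fc in [("REG", reg_fc), ("HW", hw_fc), ("ARIMA", arima_fc)]:
        gap = post_actual - np.asarray(fc)
        print(f"{name}: mean post-intervention divergence = {gap.mean():+.1f}")

The sign and size of the mean divergence for each model plays the role of the informal treatment-effect check described above: a sustained shortfall of the observations below a well-fitting forecast suggests an intervention effect.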
A combined radio and GeV γ-ray view of the 2012 and 2013 flares of Mrk 421
Hovatta, Talvikki; Petropoulou, M.; Richards, J. L.; ...
2015-03-09
In 2012 Markarian 421 underwent the largest flare ever observed in this blazar at radio frequencies. In the present study, we start exploring this unique event and compare it to a less extreme event in 2013. We use 15 GHz radio data obtained with the Owens Valley Radio Observatory 40-m telescope, 95 GHz millimetre data from the Combined Array for Research in Millimeter-Wave Astronomy, and GeV γ-ray data from the Fermi Gamma-ray Space Telescope. Here, the radio light curves during the flaring periods in 2012 and 2013 have very different appearances, in both shape and peak flux density. Assuming that the radio and γ-ray flares are physically connected, we attempt to model the most prominent sub-flares of the 2012 and 2013 activity periods by using the simplest possible theoretical framework. We first fit a one-zone synchrotron self-Compton (SSC) model to the less extreme 2013 flare and estimate parameters describing the emission region. We then model the major γ-ray and radio flares of 2012 using the same framework. The 2012 γ-ray flare shows two distinct spikes of similar amplitude, so we examine scenarios associating the radio flare with each spike in turn. In the first scenario, we cannot explain the sharp radio flare with a simple SSC model, but we can accommodate this by adding plausible time variations to the Doppler beaming factor. In the second scenario, a varying Doppler factor is not needed, but the SSC model parameters require fine-tuning. Both alternatives indicate that the sharp radio flare, if physically connected to the preceding γ-ray flares, can be reproduced only for a very specific choice of parameters.
The origin of blueshifted absorption features in the X-ray spectrum of PG 1211+143: outflow or disc
NASA Astrophysics Data System (ADS)
Gallo, L. C.; Fabian, A. C.
2013-07-01
In some radio-quiet active galactic nuclei (AGN), high-energy absorption features in the X-ray spectra have been interpreted as ultrafast outflows (UFOs) - highly ionized material (e.g. Fe XXV and Fe XXVI) ejected at mildly relativistic velocities. In some cases, these outflows can carry energy in excess of the binding energy of the host galaxy. Needless to say, these features demand our attention as they are strong signatures of AGN feedback and will influence galaxy evolution. For the same reason, alternative models need to be discussed and refuted or confirmed. Gallo and Fabian proposed that some of these features could arise from resonance absorption of the reflected spectrum in a layer of ionized material located above and corotating with the accretion disc. Therefore, the absorbing medium would be subjected to similar blurring effects as seen in the disc. A priori, the existence of such plasma above the disc is as plausible as a fast wind. In this work, we highlight the ambiguity by demonstrating that the absorption model can describe the ˜7.6 keV absorption feature (and possibly other features) in the quasar PG 1211+143, an AGN that is often described as a classic example of a UFO. In this model, the 2-10 keV spectrum would be largely reflection dominated (as opposed to power law dominated in the wind models) and the resonance absorption would be originating in a layer between about 6 and 60 gravitational radii. The studies of such features constitute a cornerstone for future X-ray observatories like Astro-H and Athena+. Should our model prove correct, or at least important in some cases, then absorption will provide another diagnostic tool with which to probe the inner accretion flow with future missions.
A Three-phase Chemical Model of Hot Cores: The Formation of Glycine
NASA Astrophysics Data System (ADS)
Garrod, Robin T.
2013-03-01
A new chemical model is presented that simulates fully coupled gas-phase, grain-surface, and bulk-ice chemistry in hot cores. Glycine (NH2CH2COOH), the simplest amino acid, and related molecules such as glycinal, propionic acid, and propanal, are included in the chemical network. Glycine is found to form in moderate abundance within and upon dust-grain ices via three radical-addition mechanisms, with no single mechanism strongly dominant. Glycine production in the ice occurs over temperatures ~40-120 K. Peak gas-phase glycine fractional abundances lie in the range 8 × 10-11-8 × 10-9, occurring at ~200 K, the evaporation temperature of glycine. A gas-phase mechanism for glycine production is tested and found insignificant, even under optimal conditions. A new spectroscopic radiative-transfer model is used, allowing the translation and comparison of the chemical-model results with observations of specific sources. Comparison with the nearby hot-core source NGC 6334 IRS1 shows excellent agreement with integrated line intensities of observed species, including methyl formate. The results for glycine are consistent with the current lack of a detection of this molecule toward other sources; the high evaporation temperature of glycine renders the emission region extremely compact. Glycine detection with ALMA is predicted to be highly plausible, for bright, nearby sources with narrow emission lines. Photodissociation of water and subsequent hydrogen abstraction from organic molecules by OH, and NH2, are crucial to the buildup of complex organic species in the ice. The inclusion of alternative branches within the network of radical-addition reactions appears important to the abundances of hot-core molecules; less favorable branching ratios may remedy the anomalously high abundance of glycolaldehyde predicted by this and previous models.
NASA Astrophysics Data System (ADS)
Krewer, F.; Morgan, F.; Jones, E.; Glavin, M.; O'Halloran, M.
2014-05-01
Urinary incontinence is defined as the inability to stop the flow of urine from the bladder. In the US alone, the annual societal cost of incontinence-related care is estimated at 12.6 billion dollars. Clinicians agree that those suffering from urinary incontinence would greatly benefit from a wearable system that could continually monitor the bladder, providing continuous feedback to the patient. While existing ultrasound-based solutions are highly accurate, they are severely limited by form-factor, battery size, cost and ease of use. In this study the authors propose an alternative bladder-state sensing system, based on Ultra Wideband (UWB) Radar. As part of an initial proof-of-concept, the authors developed one of the first dielectrically and anatomically-representative Finite Difference Time Domain models of the pelvis. These models (one male and one female) are derived from Magnetic Resonance images provided by the IT'IS Foundation. These IT'IS models provide the foundation upon which an anatomically-plausible bladder growth model was constructed. The authors employed accurate multi-pole Debye models to simulate the dielectric properties of each of the pelvic tissues. Two-dimensional Finite Difference Time Domain (FDTD) simulations were completed for a range of bladder volumes. Relevant features were extracted from the FDTD-derived signals using Principal Component Analysis (PCA) and then classified using k-Nearest-Neighbour and Support Vector Machine algorithms (incorporating the Leave-one-out cross-validation approach). Additionally, the authors investigated the effects of signal fidelity, noise and antenna movement relative to the target as potential sources of error. The results of this initial study provide strong motivation for further research into this timely application, particularly in the context of an ageing population.
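The classification step described above can be sketched with scikit-learn as follows; the signal matrix, labels and number of principal components are synthetic assumptions standing in for the FDTD-derived radar signals.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.pipeline import make_pipeline
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.svm import SVC
    from sklearn.model_selection import LeaveOneOut, cross_val_score

    # Synthetic stand-in for FDTD-derived radar traces: 40 traces, 200 samples each,
    # labelled by a hypothetical bladder-state class.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(40, 200))
    y = np.repeat([0, 1], 20)
    X[y == 1, :20] += 0.8            # inject a weak class-dependent feature

    for name, clf in [("kNN", KNeighborsClassifier(n_neighbors=3)), ("SVM", SVC(kernel="rbf"))]:
        model = make_pipeline(PCA(n_components=5), clf)   # feature extraction + classifier
        acc = cross_val_score(model, X, y, cv=LeaveOneOut()).mean()
        print(f"{name}: leave-one-out accuracy = {acc:.2f}")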
Park, Jong Suk; Kang, Ung Gu
2016-02-01
Traditionally, delusions have been considered to be the products of misinterpretation and irrationality. However, some theorists have argued that delusions are normal or rational cognitive responses to abnormal experiences. That is, when a recently experienced peculiar event is more plausibly explained by an extraordinary hypothesis, confidence in the veracity of this extraordinary explanation is reinforced. As the number of such experiences, driven by the primary disease process in the perceptual domain, increases, this confidence builds and solidifies, forming a delusion. We tried to understand the formation of delusions using a simulation based on Bayesian inference. We found that (1) even if a delusional explanation is only marginally more plausible than a non-delusional one, the repetition of the same experience results in a firm belief in the delusion. (2) The same process explains the systematization of delusions. (3) If the perceived plausibility of the explanation is not consistent but varies over time, the development of a delusion is delayed. Additionally, this model may explain why delusions are not corrected by persuasion or rational explanation. This Bayesian inference perspective can be considered a way to understand delusions in terms of rational human heuristics. However, such experiences of "rationality" can lead to irrational conclusions, depending on the characteristics of the subject. Copyright © 2015 Elsevier Ltd. All rights reserved.
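A small illustration of the Bayesian updating argument, with invented probabilities: even when the extraordinary explanation is only marginally more likely to produce the experience, repeated experiences drive the posterior toward certainty.

    # Prior odds favour the ordinary explanation; the peculiar experience is only
    # marginally better explained by the extraordinary (delusional) hypothesis.
    # All numbers are illustrative.
    belief = 0.01                       # prior probability of the delusional explanation
    p_exp_given_delusion = 0.60         # plausibility of the experience under the delusion
    p_exp_given_normal = 0.50           # ...and under the ordinary explanation

    for repetition in range(1, 51):
        numerator = p_exp_given_delusion * belief
        belief = numerator / (numerator + p_exp_given_normal * (1.0 - belief))  # Bayes' rule
        if repetition in (1, 10, 25, 50):
            print(f"after {repetition:2d} experiences: P(delusion) = {belief:.3f}")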
Gupta, Sebanti; Bhattacharjya, Surajit
2014-11-01
The sterile alpha motif or SAM domain is one of the most frequently present protein interaction modules with diverse functional attributions. The SAM domain of the Ste11 protein of budding yeast plays important roles in mitogen-activated protein kinase cascades. In the current study, urea-induced structural and dynamical changes in the Ste11 SAM domain at subdenaturing urea concentrations have been investigated by nuclear magnetic resonance spectroscopy. Our study revealed that a number of residues from Helix 1 and Helix 5 of the Ste11 SAM domain display plausible alternate conformational states and the largest chemical shift perturbations at low urea concentrations. Amide proton (H/D) exchange experiments indicated that Helix 1, the loop, and Helix 5 become more susceptible to solvent exchange with increased concentrations of urea. Notably, Helix 1 and Helix 5 are directly involved in binding interactions of the Ste11 SAM domain. Our data further demonstrate the existence of alternate conformational states around the regions involved in dimeric interactions under native or near-native conditions. © 2014 Wiley Periodicals, Inc.
Real Option in Capital Budgeting for SMEs: Insight from Steel Company
NASA Astrophysics Data System (ADS)
Muharam, F. M.; Tarrazon, M. A.
2017-06-01
Complex components of investment projects can only be analysed accurately if flexibility and a comprehensive treatment of uncertainty are incorporated into valuation. Discounted cash flow (DCF) analysis fails to cope with strategic future alternatives that affect the true value of investment projects. Real option valuation (ROV) is the right tool for this purpose, since it enables calculation of the expanded or strategic Net Present Value (ENPV). This study provides insight into the use of ROV in the capital budgeting and investment decision-making processes of SMEs. Focusing on first-stage processing in the steel industry, alternatives to cancel, expand, defer or abandon the project are analysed. Complemented with an analysis of multiple-option interactions and a sensitivity analysis, our findings show that the application of ROV is beneficial for complex investment projects independently of company size and particularly suitable in scenarios with scarce resources. Real option valuation is a plausible and beneficial addition to the strategic decision-making process of SMEs.
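For readers unfamiliar with ROV mechanics, the sketch below values a simple option to defer an investment on a binomial lattice; the project value, cost, volatility and horizon are hypothetical figures, not the steel-company case, and the option is exercised only at the end of the deferral window for simplicity.

    import math

    # Illustrative binomial-lattice valuation of an option to defer an investment.
    V0, invest_cost = 100.0, 95.0    # present value of project cash flows, investment outlay
    sigma, r, T, n = 0.35, 0.05, 2.0, 24

    dt = T / n
    u = math.exp(sigma * math.sqrt(dt))
    d = 1.0 / u
    p = (math.exp(r * dt) - d) / (u - d)          # risk-neutral probability
    disc = math.exp(-r * dt)

    # Terminal payoffs of the deferral option: invest only if worthwhile.
    values = [max(V0 * u**j * d**(n - j) - invest_cost, 0.0) for j in range(n + 1)]

    # Backward induction through the lattice.
    for step in range(n - 1, -1, -1):
        values = [disc * (p * values[j + 1] + (1 - p) * values[j]) for j in range(step + 1)]

    static_npv = V0 - invest_cost
    print(f"static NPV = {static_npv:.2f}")
    print(f"expanded (strategic) NPV with deferral flexibility = {values[0]:.2f}")
    print(f"value added by flexibility = {values[0] - max(static_npv, 0.0):.2f}")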
NASA Technical Reports Server (NTRS)
Niki, Hiromi
1990-01-01
Tropospheric chemical transformations of alternative hydrofluorocarbons (HFC's) and hydrochlorofluorocarbons (HCFC's) are governed by hydroxyl radical initiated oxidation processes, which are likely to be analogous to those known for alkanes and chloroalkanes. A schematic diagram is used to illustrate plausible reaction mechanisms for their atmospheric degradation, where R, R', and R'' denote the F- and/or Cl-substituted alkyl groups derived from HFC's and HCFC's subsequent to the initial H atom abstraction by HO radicals. At present, virtually no kinetic data exist for the majority of these reactions, particularly for those involving RO. Potential degradation intermediates and final products include a large variety of fluorine- and/or chlorine-containing carbonyls, acids, peroxy acids, alcohols, hydrogen peroxides, nitrates and peroxy nitrates, as summarized in the attached table. Probable atmospheric lifetimes of these compounds were also estimated. For some carbonyl and nitrate products shown in this table, there seem to be no significant gas-phase removal mechanisms. Further chemical kinetics and photochemical data are needed to quantitatively assess the atmospheric fate of HFC's and HCFC's, and of the degradation products postulated in this report.
Quantifying facial expression recognition across viewing conditions.
Goren, Deborah; Wilson, Hugh R
2006-04-01
Facial expressions are key to social interactions and to assessment of potential danger in various situations. Therefore, our brains must be able to recognize facial expressions when they are transformed in biologically plausible ways. We used synthetic happy, sad, angry and fearful faces to determine the amount of geometric change required to recognize these emotions during brief presentations. Five-alternative forced choice conditions involving central viewing, peripheral viewing and inversion were used to study recognition among the four emotions. Two-alternative forced choice was used to study affect discrimination when spatial frequency information in the stimulus was modified. The results show an emotion and task-dependent pattern of detection. Facial expressions presented with low peak frequencies are much harder to discriminate from neutral than faces defined by either mid or high peak frequencies. Peripheral presentation of faces also makes recognition much more difficult, except for happy faces. Differences between fearful detection and recognition tasks are probably due to common confusions with sadness when recognizing fear from among other emotions. These findings further support the idea that these emotions are processed separately from each other.
NASA Astrophysics Data System (ADS)
Byrd, K. B.; Kreitler, J.; Labiosa, W.
2010-12-01
A scenario represents an account of a plausible future given logical assumptions about how conditions change over discrete bounds of space and time. Development of multiple scenarios provides a means to identify alternative directions of urban growth that account for a range of uncertainty in human behavior. Interactions between human and natural processes may be studied by coupling urban growth scenario outputs with biophysical change models; if growth scenarios encompass a sufficient range of alternative futures, scenario assumptions serve to constrain the uncertainty of biophysical models. Spatially explicit urban growth models (map-based) produce output such as distributions and densities of residential or commercial development in a GIS format that can serve as input to other models. Successful fusion of growth model outputs with other model inputs requires that both models strategically address questions of interest, incorporate ecological feedbacks, and minimize error. The U.S. Geological Survey (USGS) Puget Sound Ecosystem Portfolio Model (PSEPM) is a decision-support tool that supports land use and restoration planning in Puget Sound, Washington, a 35,500 sq. km region. The PSEPM couples future scenarios of urban growth with statistical, process-based and rule-based models of nearshore biophysical changes and ecosystem services. By using a multi-criteria approach, the PSEPM identifies cross-system and cumulative threats to the nearshore environment plus opportunities for conservation and restoration. Sub-models that predict changes in nearshore biophysical condition were developed and existing models were integrated to evaluate three growth scenarios: 1) Status Quo, 2) Managed Growth, and 3) Unconstrained Growth. These decadal scenarios were developed and projected out to 2060 at Oregon State University using the GIS-based ENVISION model. Given land management decisions and policies under each growth scenario, the sub-models predicted changes in 1) fecal coliform in shellfish growing areas, 2) sediment supply to beaches, 3) State beach recreational visits, 4) eelgrass habitat suitability, 5) forage fish habitat suitability, and 6) nutrient loadings. In some cases thousands of shoreline units were evaluated with multiple predictive models, creating a need for streamlined and consistent database development and data processing. Model development over multiple disciplines demonstrated the challenge of merging data types from multiple sources that were inconsistent in spatial and temporal resolution, classification schemes, and topology. Misalignment of data in space and time created potential for error and misinterpretation of results. This effort revealed that the fusion of growth scenarios and biophysical models requires an up-front iterative adjustment of both scenarios and models so that growth model outputs provide the needed input data in the correct format. Successful design of data flow across models that includes feedbacks between human and ecological systems was found to enhance the use of the final data product for decision making.
A quantitative dynamic systems model of health-related quality of life among older adults
Roppolo, Mattia; Kunnen, E Saskia; van Geert, Paul L; Mulasso, Anna; Rabaglietti, Emanuela
2015-01-01
Health-related quality of life (HRQOL) is a person-centered concept. The analysis of HRQOL is highly relevant in the aged population, which is generally suffering from health decline. Starting from a conceptual dynamic systems model that describes the development of HRQOL in individuals over time, this study aims to develop and test a quantitative dynamic systems model, in order to reveal the possible dynamic trends of HRQOL among older adults. The model is tested in different ways: first, with a calibration procedure to test whether the model produces theoretically plausible results, and second, with a preliminary validation procedure using empirical data of 194 older adults. This first validation tested the prediction that given a particular starting point (first empirical data point), the model will generate dynamic trajectories that lead to the observed endpoint (second empirical data point). The analyses reveal that the quantitative model produces theoretically plausible trajectories, thus providing support for the calibration procedure. Furthermore, the analyses of validation show a good fit between empirical and simulated data. In fact, no differences were found in the comparison between empirical and simulated final data for the same subgroup of participants, whereas the comparison between different subgroups of people resulted in significant differences. These data provide an initial basis of evidence for the dynamic nature of HRQOL during the aging process. Therefore, these data may give new theoretical and applied insights into the study of HRQOL and its development with time in the aging population. PMID:26604722
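A toy version of the kind of dynamic systems simulation described above, assuming a simple decline-plus-support difference equation with random shocks; the equation, parameters and "observed" start and end points are invented for illustration, not the authors' model.

    import random

    def simulate_hrqol(start, steps=10, decline=0.015, support=0.4, noise=0.02, seed=None):
        """Toy dynamic-systems trajectory: slow age-related decline, partly offset by
        a support/resource term, with small random shocks at each step."""
        rng = random.Random(seed)
        x = start                      # HRQOL scaled to [0, 1]
        traj = [x]
        for _ in range(steps):
            dx = -decline * x + support * decline * (1.0 - x) + rng.gauss(0.0, noise)
            x = min(max(x + dx, 0.0), 1.0)
            traj.append(x)
        return traj

    # Calibration-style check: do trajectories started at an observed first data point
    # end near the observed second data point? (Values here are made up.)
    observed_start, observed_end = 0.72, 0.66
    finals = [simulate_hrqol(observed_start, seed=s)[-1] for s in range(200)]
    mean_final = sum(finals) / len(finals)
    print(f"mean simulated endpoint = {mean_final:.2f} (observed = {observed_end})")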
Effects of plausibility on structural priming.
Christianson, Kiel; Luke, Steven G; Ferreira, Fernanda
2010-03-01
We report a replication and extension of Ferreira (2003), in which it was observed that native adult English speakers misinterpret passive sentences that relate implausible but not impossible semantic relationships (e.g., The angler was caught by the fish) significantly more often than they do plausible passives or plausible or implausible active sentences. In the experiment reported here, participants listened to the same plausible and implausible passive and active sentences as in Ferreira (2003), answered comprehension questions, and then orally described line drawings of simple transitive actions. The descriptions were analyzed as a measure of structural priming (Bock, 1986). Question accuracy data replicated Ferreira (2003). Production data yielded an interaction: Passive descriptions were produced more often after plausible passives and implausible actives. We interpret these results as indicative of a language processor that proceeds along differentiated morphosyntactic and semantic routes. The processor may end up adjudicating between conflicting outputs from these routes by settling on a "good enough" representation that is not completely faithful to the input.
Mavritsaki, Eirini; Heinke, Dietmar; Humphreys, Glyn W; Deco, Gustavo
2006-01-01
In the real world, visual information is selected over time as well as space, when we prioritise new stimuli for attention. Watson and Humphreys [Watson, D., Humphreys, G.W., 1997. Visual marking: prioritizing selection for new objects by top-down attentional inhibition of old objects. Psychological Review 104, 90-122] presented evidence that new information in search tasks is prioritised by (amongst other processes) active ignoring of old items - a process they termed visual marking. In this paper we present, for the first time, an explicit computational model of visual marking using biologically plausible activation functions. The "spiking search over time and space" model (sSoTS) incorporates different synaptic components (NMDA, AMPA, GABA) and a frequency adaptation mechanism based on [Ca(2+)] sensitive K(+) current. This frequency adaptation current can act as a mechanism that suppresses the previously attended items. We show that, when coupled with a process of active inhibition applied to old items, frequency adaptation leads to old items being de-prioritised (and new items prioritised) across time in search. Furthermore, the time course of these processes mimics the time course of the preview effect in human search. The results indicate that the sSoTS model can provide a biologically plausible account of human search over time as well as space.
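The frequency adaptation mechanism can be illustrated with a single leaky integrate-and-fire neuron carrying a [Ca(2+)]-dependent K(+) current; the parameters below are generic textbook values, not those of the sSoTS model.

    # Leaky integrate-and-fire neuron with a calcium-dependent after-hyperpolarisation
    # current, illustrating spike-frequency adaptation.
    dt, T = 0.1, 1000.0              # time step and duration (ms)
    tau_m, tau_ca = 20.0, 300.0      # membrane and calcium decay time constants (ms)
    v_rest, v_th, v_reset, e_k = -70.0, -50.0, -65.0, -90.0
    g_ahp, ca_per_spike, drive = 0.06, 1.0, 25.0

    v, ca, spike_times, t = v_rest, 0.0, [], 0.0
    while t < T:
        i_ahp = g_ahp * ca * (v - e_k)                    # adaptation current toward E_K
        v += dt * (-(v - v_rest) - i_ahp + drive) / tau_m
        ca += dt * (-ca / tau_ca)                         # calcium decays between spikes
        if v >= v_th:                                     # spike: reset and load calcium
            spike_times.append(t)
            v, ca = v_reset, ca + ca_per_spike
        t += dt

    early = sum(1 for s in spike_times if s < T / 2)
    late = len(spike_times) - early
    print(f"spikes in first half: {early}, second half: {late} (adaptation slows firing)")

The steadily accumulating adaptation current suppresses responses to sustained input, which is the single-neuron analogue of the de-prioritisation of old items described above.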
Modelling Trial-by-Trial Changes in the Mismatch Negativity
Lieder, Falk; Daunizeau, Jean; Garrido, Marta I.; Friston, Karl J.; Stephan, Klaas E.
2013-01-01
The mismatch negativity (MMN) is a differential brain response to violations of learned regularities. It has been used to demonstrate that the brain learns the statistical structure of its environment and predicts future sensory inputs. However, the algorithmic nature of these computations and the underlying neurobiological implementation remain controversial. This article introduces a mathematical framework with which competing ideas about the computational quantities indexed by MMN responses can be formalized and tested against single-trial EEG data. This framework was applied to five major theories of the MMN, comparing their ability to explain trial-by-trial changes in MMN amplitude. Three of these theories (predictive coding, model adjustment, and novelty detection) were formalized by linking the MMN to different manifestations of the same computational mechanism: approximate Bayesian inference according to the free-energy principle. We thereby propose a unifying view on three distinct theories of the MMN. The relative plausibility of each theory was assessed against empirical single-trial MMN amplitudes acquired from eight healthy volunteers in a roving oddball experiment. Models based on the free-energy principle provided more plausible explanations of trial-by-trial changes in MMN amplitude than models representing the two more traditional theories (change detection and adaptation). Our results suggest that the MMN reflects approximate Bayesian learning of sensory regularities, and that the MMN-generating process adjusts a probabilistic model of the environment according to prediction errors. PMID:23436989
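As a toy formalization in the spirit of this framework (not one of the paper's five models), the sketch below scores each tone in a roving-oddball-like sequence by its surprisal under a simple count-based predictive model, treating that surprisal as a stand-in for single-trial MMN amplitude; the sequence and prior pseudo-counts are assumptions.

    import math

    sequence = [1]*8 + [2]*6 + [1]*7 + [3]*9 + [2]*5       # hypothetical standards/deviants

    counts = {}                                            # Dirichlet-style tone counts
    alpha0 = 1.0                                           # symmetric prior pseudo-count
    for trial, tone in enumerate(sequence, start=1):
        total = alpha0 * 3 + sum(counts.values())
        p_tone = (alpha0 + counts.get(tone, 0)) / total    # predictive probability
        surprise = -math.log(p_tone)                       # surprisal = modelled MMN size
        counts[tone] = counts.get(tone, 0) + 1             # update the internal model
        marker = "deviant" if surprise > 1.2 else "standard"
        print(f"trial {trial:2d}: tone {tone}, surprise {surprise:.2f} ({marker})")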
An object-oriented software for fate and exposure assessments.
Scheil, S; Baumgarten, G; Reiter, B; Schwartz, S; Wagner, J O; Trapp, S; Matthies, M
1995-07-01
The model system CemoS(1) (Chemical Exposure Model System) was developed for the exposure prediction of hazardous chemicals released to the environment. Eight different models were implemented involving chemical fate simulation in air, water, soil and plants after continuous or single emissions from point and diffuse sources. Scenario studies are supported by a substance and an environmental data base. All input data are checked for plausibility. Substance and environmental process estimation functions facilitate generic model calculations. CemoS is implemented in a modular structure using object-oriented programming.
The Big Bang and Cosmic Inflation
NASA Astrophysics Data System (ADS)
Guth, Alan H.
2014-03-01
A summary is given of the key developments of cosmology in the 20th century, from the work of Albert Einstein to the emergence of the generally accepted hot big bang model. The successes of this model are reviewed, but emphasis is placed on the questions that the model leaves unanswered. The remainder of the paper describes the inflationary universe model, which provides plausible answers to a number of these questions. It also offers a possible explanation for the origin of essentially all the matter and energy in the observed universe.
The Plausibility of a String Quartet Performance in Virtual Reality.
Bergstrom, Ilias; Azevedo, Sergio; Papiotis, Panos; Saldanha, Nuno; Slater, Mel
2017-04-01
We describe an experiment that explores the contribution of auditory and other features to the illusion of plausibility in a virtual environment that depicts the performance of a string quartet. 'Plausibility' refers to the component of presence that is the illusion that the perceived events in the virtual environment are really happening. The features studied were: Gaze (the musicians ignored the participant, the musicians sometimes looked towards and followed the participant's movements), Sound Spatialization (Mono, Stereo, Spatial), Auralization (no sound reflections, reflections corresponding to a room larger than the one perceived, reflections that exactly matched the virtual room), and Environment (no sound from outside of the room, birdsong and wind corresponding to the outside scene). We adopted the methodology based on color matching theory, where 20 participants were first able to assess their feeling of plausibility in the environment with each of the four features at their highest setting. Then five times participants started from a low setting on all features and were able to make transitions from one system configuration to another until they matched their original feeling of plausibility. From these transitions a Markov transition matrix was constructed, and also probabilities of a match conditional on feature configuration. The results show that Environment and Gaze were individually the most important factors influencing the level of plausibility. The highest probability transitions were to improve Environment and Gaze, and then Auralization and Spatialization. We present this work as both a contribution to the methodology of assessing presence without questionnaires, and showing how various aspects of a musical performance can influence plausibility.
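The transition-matrix construction can be sketched as follows; the configuration labels and the list of observed transitions are invented for illustration, not the experiment's actual data.

    import numpy as np

    # Build a Markov transition matrix from observed participant transitions between
    # system configurations (combinations of Gaze, Spatialization, Auralization and
    # Environment settings in the study; simplified labels here).
    states = ["low_all", "env_high", "env_gaze_high", "all_high"]
    observed_transitions = [
        ("low_all", "env_high"), ("env_high", "env_gaze_high"),
        ("env_gaze_high", "all_high"), ("low_all", "env_gaze_high"),
        ("env_high", "env_gaze_high"), ("env_gaze_high", "all_high"),
    ]

    idx = {s: i for i, s in enumerate(states)}
    counts = np.zeros((len(states), len(states)))
    for src, dst in observed_transitions:
        counts[idx[src], idx[dst]] += 1

    row_sums = counts.sum(axis=1, keepdims=True)
    transition_matrix = np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)
    print(np.round(transition_matrix, 2))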
Schmid, Annina B; Coppieters, Michel W
2011-12-01
A high prevalence of dual nerve disorders is frequently reported. How a secondary nerve disorder may develop following a primary nerve disorder remains largely unknown. Although still frequently cited, most explanatory theories were formulated many years ago. Considering recent advances in neuroscience, it is uncertain whether these theories still reflect current expert opinion. A Delphi study was conducted to update views on potential mechanisms underlying dual nerve disorders. In three rounds, seventeen international experts in the field of peripheral nerve disorders were asked to list possible mechanisms and rate their plausibility. Mechanisms with a median plausibility rating of ≥7 out of 10 were considered highly plausible. The experts identified fourteen mechanisms associated with a first nerve disorder that may predispose to the development of another nerve disorder. Of these fourteen mechanisms, nine have not previously been linked to double crush. Four mechanisms were considered highly plausible (impaired axonal transport, ion channel up or downregulation, inflammation in the dorsal root ganglia and neuroma-in-continuity). Eight additional mechanisms were listed which are not triggered by a primary nerve disorder, but may render the nervous system more vulnerable to multiple nerve disorders, such as systemic diseases and neurotoxic medication. Even though many mechanisms were classified as plausible or highly plausible, overall plausibility ratings varied widely. Experts indicated that a wide range of mechanisms has to be considered to better understand dual nerve disorders. Previously listed theories cannot be discarded, but may be insufficient to explain the high prevalence of dual nerve disorders. Copyright © 2011 Elsevier Ltd. All rights reserved.
Computational approaches to cognition: the bottom-up view.
Koch, C
1993-04-01
How can higher level aspects of cognition, such as figure-ground segregation, object recognition, selective focal attention and ultimately even awareness, be implemented at the level of synapses and neurons? A number of theoretical studies emerging out of the connectionist and the computational neuroscience communities are starting to address these issues using neurally plausible models.
Memory colours affect colour appearance.
Witzel, Christoph; Olkkonen, Maria; Gegenfurtner, Karl R
2016-01-01
Memory colour effects show that colour perception is affected by memory and prior knowledge and hence by cognition. None of Firestone & Scholl's (F&S's) potential pitfalls apply to our work on memory colours. We present a Bayesian model of colour appearance to illustrate that an interaction between perception and memory is plausible from the perspective of vision science.
Topic Models in Information Retrieval
2007-08-01
Some Additional Lessons from the Wechsler Scales: A Rejoinder to Kaufman and Keith.
ERIC Educational Resources Information Center
Macmann, Gregg M.; Barnett, David W.
1994-01-01
Reacts to previous arguments regarding verbal and performance constructs of Wechsler Scales. Contends that general factor model is more plausible representation of data for these scales. Suggests issue is moot when considered in regards to practical applications. Supports analysis of needed skills and instructional environments in educational…
Enrollment Simulation and Planning. Strategies & Solutions Series, No. 3.
ERIC Educational Resources Information Center
McIntyre, Chuck
Enrollment simulation and planning (ESP) is centered on the use of statistical models to describe how and why college enrollments fluctuate. College planners may use this approach with confidence to simulate any number of plausible future scenarios. Planners can then set a variety of possible college actions against these scenarios, and examine…
Diagnosis of Cognitive Errors by Statistical Pattern Recognition Methods.
ERIC Educational Resources Information Center
Tatsuoka, Kikumi K.; Tatsuoka, Maurice M.
The rule space model permits measurement of cognitive skill acquisition, diagnosis of cognitive errors, and detection of the strengths and weaknesses of knowledge possessed by individuals. Two ways to classify an individual into his or her most plausible latent state of knowledge include: (1) hypothesis testing--Bayes' decision rules for minimum…
'Where's the flux' star: Exocomets, or Giant Impact?
NASA Astrophysics Data System (ADS)
Meng, Huan; Boyajian, Tabetha; Kennedy, Grant; Lisse, Carey; Marengo, Massimo; Wright, Jason; Wyatt, Mark
2015-12-01
The discovery of an unusual stellar light curve in the Kepler data of KIC 8462852 has sparked a media frenzy about 'alien megastructures' orbiting that star. Behind the public's excitement about 'aliens,' there is however a true science story: KIC 8462852 offers us a unique window to observe, in real time, the rare cataclysmic events happening in a mature extrasolar planetary system. After analysis of the existing constraints of the system, two possible models stand out as the plausible explanations for the light curve anomaly: immediate aftermath of a large planetary or planetesimal impact, or apparitions of a family of comets or comet fragments. The two plausible models predict very different IR evolution over the years following the transit events, providing a good diagnostic to distinguish them. With shallow mapping of the Kepler field in January 2015, Spitzer/IRAC has found KIC 8462852 with a marginal excess at 4.5 micron. Here, we propose to monitor KIC 8462852 on a regular basis to identify and track its IR excess evolution with deeper images and more accurate photometry.
Controls on the Archean climate system investigated with a global climate model.
Wolf, E T; Toon, O B
2014-03-01
The most obvious means of resolving the faint young Sun paradox is to invoke large quantities of greenhouse gases, namely, CO2 and CH4. However, numerous changes to the Archean climate system have been suggested that may have yielded additional warming, thus easing the required greenhouse gas burden. Here, we use a three-dimensional climate model to examine some of the factors that controlled Archean climate. We examine changes to Earth's rotation rate, surface albedo, cloud properties, and total atmospheric pressure following proposals from the recent literature. While the effects of increased planetary rotation rate on surface temperature are insignificant, plausible changes to the surface albedo, cloud droplet number concentrations, and atmospheric nitrogen inventory may each impart global mean warming of 3-7 K. While none of these changes present a singular solution to the faint young Sun paradox, a combination can have a large impact on climate. Global mean surface temperatures at or above 288 K could easily have been maintained throughout the entirety of the Archean if plausible changes to clouds, surface albedo, and nitrogen content occurred.
NASA Astrophysics Data System (ADS)
Groves, David G.; Yates, David; Tebaldi, Claudia
2008-12-01
Climate change may impact water resources management conditions in difficult-to-predict ways. A key challenge for water managers is how to incorporate highly uncertain information about potential climate change from global models into local- and regional-scale water management models and tools to support local planning. This paper presents a new method for developing large ensembles of local daily weather that reflect a wide range of plausible future climate change scenarios while preserving many statistical properties of local historical weather patterns. This method is demonstrated by evaluating the possible impact of climate change on the Inland Empire Utilities Agency service area in southern California. The analysis shows that climate change could impact the region, increasing outdoor water demand by up to 10% by 2040, decreasing local water supply by up to 40% by 2040, and decreasing sustainable groundwater yields by up to 15% by 2040. The range of plausible climate projections suggests the need for the region to augment its long-range water management plans to reduce its vulnerability to climate change.
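One simple way to build such an ensemble is a "delta change" perturbation of a historical daily series, which reuses observed day-to-day variability while spanning assumed ranges of warming and precipitation change; the historical series and scenario ranges below are synthetic assumptions, not the method or data of the study.

    import numpy as np

    rng = np.random.default_rng(0)
    hist_temp = 15 + 8 * np.sin(np.linspace(0, 2 * np.pi, 365)) + rng.normal(0, 2, 365)
    hist_precip = rng.gamma(shape=0.7, scale=4.0, size=365)

    def make_member(delta_t, precip_factor):
        """One ensemble member: additive temperature shift, multiplicative precip scaling."""
        return hist_temp + delta_t, hist_precip * precip_factor

    ensemble = [make_member(dt, pf)
                for dt in np.linspace(0.5, 3.5, 7)        # assumed warming range by 2040
                for pf in np.linspace(0.6, 1.1, 6)]       # assumed precipitation change

    print(f"{len(ensemble)} members; example annual precip totals:",
          [round(float(p.sum())) for _, p in ensemble[:3]])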
Lescroart, Mark D.; Stansbury, Dustin E.; Gallant, Jack L.
2015-01-01
Perception of natural visual scenes activates several functional areas in the human brain, including the Parahippocampal Place Area (PPA), Retrosplenial Complex (RSC), and the Occipital Place Area (OPA). It is currently unclear what specific scene-related features are represented in these areas. Previous studies have suggested that PPA, RSC, and/or OPA might represent at least three qualitatively different classes of features: (1) 2D features related to Fourier power; (2) 3D spatial features such as the distance to objects in a scene; or (3) abstract features such as the categories of objects in a scene. To determine which of these hypotheses best describes the visual representation in scene-selective areas, we applied voxel-wise modeling (VM) to BOLD fMRI responses elicited by a set of 1386 images of natural scenes. VM provides an efficient method for testing competing hypotheses by comparing predictions of brain activity based on encoding models that instantiate each hypothesis. Here we evaluated three different encoding models that instantiate each of the three hypotheses listed above. We used linear regression to fit each encoding model to the fMRI data recorded from each voxel, and we evaluated each fit model by estimating the amount of variance it predicted in a withheld portion of the data set. We found that voxel-wise models based on Fourier power or the subjective distance to objects in each scene predicted much of the variance predicted by a model based on object categories. Furthermore, the response variance explained by these three models is largely shared, and the individual models explain little unique variance in responses. Based on an evaluation of previous studies and the data we present here, we conclude that there is currently no good basis to favor any one of the three alternative hypotheses about visual representation in scene-selective areas. We offer suggestions for further studies that may help resolve this issue. PMID:26594164
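The voxel-wise modelling recipe (fit a linear encoding model per voxel, score it by held-out variance explained) can be sketched with scikit-learn; the feature and response matrices below are random stand-ins for Fourier-power, distance, or object-category features, and the ridge penalty is an assumed choice of regularised linear regression.

    import numpy as np
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(0)
    n_train, n_test, n_features, n_voxels = 1000, 386, 50, 20
    X_train = rng.normal(size=(n_train, n_features))
    X_test = rng.normal(size=(n_test, n_features))
    true_w = rng.normal(size=(n_features, n_voxels))
    Y_train = X_train @ true_w + rng.normal(scale=3.0, size=(n_train, n_voxels))
    Y_test = X_test @ true_w + rng.normal(scale=3.0, size=(n_test, n_voxels))

    model = Ridge(alpha=10.0).fit(X_train, Y_train)       # one linear model per voxel
    pred = model.predict(X_test)
    ss_res = ((Y_test - pred) ** 2).sum(axis=0)
    ss_tot = ((Y_test - Y_test.mean(axis=0)) ** 2).sum(axis=0)
    r2_per_voxel = 1 - ss_res / ss_tot                    # held-out variance explained
    print("median held-out R^2 across voxels:", round(float(np.median(r2_per_voxel)), 2))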
Models, Data, and War: a Critique of the Foundation for Defense Analyses.
1980-03-12
...inextricably tied to those judgments. Different analysts, with apparently identical knowledge of a real world problem, may develop plausible formulations ... The formulation of a computer model--conceiving a mathematical representation of the real world ... is a concrete theoretical statement.
Projections of Rapidly Rising Temperatures over Africa Under Low Mitigation
NASA Technical Reports Server (NTRS)
Engelbrecht, Francois; Adegoke, Jimmy; Bopape, Mary-Jane; Naidoo, Mogesh; Garland, Rebecca; Thatcher, Marcus; McGregor, John; Katzfe, Jack; Werner, Micha; Ichoku, Charles;
2015-01-01
An analysis of observed trends in African annual-average near-surface temperatures over the last five decades reveals drastic increases, particularly over parts of the subtropics and central tropical Africa. Over these regions, temperatures have been rising at more than twice the global rate of temperature increase. An ensemble of high-resolution downscalings, obtained using a single regional climate model forced with the sea-surface temperatures and sea-ice fields of an ensemble of global circulation model (GCM) simulations, is shown to realistically represent the relatively strong temperature increases observed in subtropical southern and northern Africa. The amplitudes of warming are generally underestimated, however. Further warming is projected to occur during the 21st century, with plausible increases of 4-6 C over the subtropics and 3-5 C over the tropics by the end of the century relative to present-day climate under the A2 (a low mitigation) scenario of the Special Report on Emission Scenarios. High impact climate events such as heat-wave days and high fire-danger days are consistently projected to increase drastically in their frequency of occurrence. General decreases in soil-moisture availability are projected, even for regions where increases in rainfall are plausible, due to enhanced levels of evaporation. The regional downscalings presented here, and recent GCM projections obtained for Africa, indicate that African annual-averaged temperatures may plausibly rise at about 1.5 times the global rate of temperature increase in the subtropics, and at a somewhat lower rate in the tropics. These projected increases although drastic, may be conservative given the model underestimations of observed temperature trends. The relatively strong rate of warming over Africa, in combination with the associated increases in extreme temperature events, may be key factors to consider when interpreting the suitability of global mitigation targets in terms of African climate change and climate change adaptation in Africa.
NASA Astrophysics Data System (ADS)
Loheide, S. P.; Booth, E. G.; Kucharik, C. J.; Carpenter, S. R.; Gries, C.; Katt-Reinders, E.; Rissman, A. R.; Turner, M. G.
2011-12-01
Dynamic hydrological processes play a critical role in the structure and functioning of agricultural watersheds undergoing urbanization. Developing a predictive understanding of the complex interaction between agricultural productivity, ecosystem health, water quality, urban development, and public policy requires an interdisciplinary effort that investigates the important biophysical and social processes of the system. Our research group has initiated such a framework that includes a coordinated program of integrated scenarios, model experiments to assess the effects of changing drivers on a broad set of ecosystem services, evaluations of governance and leverage points, outreach and public engagement, and information management. Our geographic focus is the Yahara River watershed in south-central Wisconsin, which is an exemplar of water-related issues in the Upper Midwest. This research addresses three specific questions. 1) How do different patterns of land use, land cover, land management, and water resources engineering practices affect the resilience and sensitivity of ecosystem services under a changing climate? 2) How can regional governance systems for water and land use be made more resilient and adaptive to meet diverse human needs? 3) In what ways are regional human-environment systems resilient and in what ways are they vulnerable to potential changes in climate and water resources? A comprehensive program of model experiments and biophysical measurements will be utilized to evaluate changes in five freshwater ecosystem services (flood regulation, groundwater recharge, surface water quality, groundwater quality, and lake recreation) and five related ecosystem services (food crop yields, bioenergy crop yields, carbon storage in soil, albedo, and terrestrial recreation). Novel additions to existing biophysical models will allow us to simulate all components of the hydrological cycle as well as agricultural productivity, nitrogen and phosphorus transport, and lake water quality. The integrated model will be validated using a comprehensive observational database that includes soil moisture, evapotranspiration, stomatal conductance, streamflow, stream and lake water quality, and crop yields and productivity. Integrated scenarios will be developed to synthesize decision-maker perspectives, alternative approaches to resource governance, plausible trends in demographic and economic drivers, and model projections under alternate climate and land use regimes to understand future conditions of the watershed and its ecosystem services. The quantitative data and integrated scenarios will then be linked to evaluate governance of water and land use.
Unsupervised Learning in an Ensemble of Spiking Neural Networks Mediated by ITDP.
Shim, Yoonsik; Philippides, Andrew; Staras, Kevin; Husbands, Phil
2016-10-01
We propose a biologically plausible architecture for unsupervised ensemble learning in a population of spiking neural network classifiers. A mixture of experts type organisation is shown to be effective, with the individual classifier outputs combined via a gating network whose operation is driven by input timing dependent plasticity (ITDP). The ITDP gating mechanism is based on recent experimental findings. An abstract, analytically tractable model of the ITDP driven ensemble architecture is derived from a logical model based on the probabilities of neural firing events. A detailed analysis of this model provides insights that allow it to be extended into a full, biologically plausible, computational implementation of the architecture which is demonstrated on a visual classification task. The extended model makes use of a style of spiking network, first introduced as a model of cortical microcircuits, that is capable of Bayesian inference, effectively performing expectation maximization. The unsupervised ensemble learning mechanism, based around such spiking expectation maximization (SEM) networks whose combined outputs are mediated by ITDP, is shown to perform the visual classification task well and to generalize to unseen data. The combined ensemble performance is significantly better than that of the individual classifiers, validating the ensemble architecture and learning mechanisms. The properties of the full model are analysed in the light of extensive experiments with the classification task, including an investigation into the influence of different input feature selection schemes and a comparison with a hierarchical STDP based ensemble architecture.
The neural optimal control hierarchy for motor control
NASA Astrophysics Data System (ADS)
DeWolf, T.; Eliasmith, C.
2011-10-01
Our empirical, neuroscientific understanding of biological motor systems has been rapidly growing in recent years. However, this understanding has not been systematically mapped to a quantitative characterization of motor control based in control theory. Here, we attempt to bridge this gap by describing the neural optimal control hierarchy (NOCH), which can serve as a foundation for biologically plausible models of neural motor control. The NOCH has been constructed by taking recent control theoretic models of motor control, analyzing the required processes, generating neurally plausible equivalent calculations and mapping them on to the neural structures that have been empirically identified to form the anatomical basis of motor control. We demonstrate the utility of the NOCH by constructing a simple model based on the identified principles and testing it in two ways. First, we perturb specific anatomical elements of the model and compare the resulting motor behavior with clinical data in which the corresponding area of the brain has been damaged. We show that damaging the assigned functions of the basal ganglia and cerebellum can cause the movement deficiencies seen in patients with Huntington's disease and cerebellar lesions. Second, we demonstrate that single spiking neuron data from our model's motor cortical areas explain major features of single-cell responses recorded from the same primate areas. We suggest that together these results show how NOCH-based models can be used to unify a broad range of data relevant to biological motor control in a quantitative, control theoretic framework.
Constrained inversion as a hypothesis testing tool, what can we learn about the lithosphere?
NASA Astrophysics Data System (ADS)
Moorkamp, Max; Stewart, Fishwick; Jones, Alan G.
2017-04-01
Inversion of geophysical data constrained by a reference model is typically used to guide the inversion of low resolution data towards a geologically plausible solution. For example, a migrated seismic section can provide the location of lithological boundaries for potential field inversions. Here we consider the inversion of long-period magnetotelluric data constrained by models generated through surface wave inversion. In this case, we do not consider the surface wave model inherently better in any sense and want to guide the magnetotelluric inversion towards this model, but we want to test the hypothesis that both datasets can be explained by models with similar structure. If the hypothesis test is successful, i.e. we can fit the observations with a conductivity model with structural similarity to the seismic model, we have found an alternative explanation compared to the individual inversion and can use the differences to learn about the resolution of the magnetotelluric data and can improve our interpretation. Conversely, if the test refutes our hypothesis of coincident structure, we have found features in the models that are sensed fundamentally different by both methods which is potentially instructive on the nature of the anomalies. We use a MT dataset acquired in central Botswana over the Okwa terrane and the adjacent Kaapvaal and Zimbabwe Cratons together with a tomographic model for the region to illustrate and test this approach. Here, various conductive structures have been identified that bridge the Moho. Furthermore, the thickness of the lithosphere inferred from the different methods differs. In both cases the question is in how far this is a result of the ill-posed nature of inversion and in how far these differences can be reconciled. Thus this dataset is an ideal test case for our hypothesis testing approach. Finally, we will demonstrate how we can use the results of the constrained inversion to extract conductivity-velocity relationships in the region and gain further insight into the composition and thermal structure of the lithosphere.
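A minimal linear analogue of reference-model-constrained inversion: the trade-off parameter controls how strongly the recovered model is pulled toward the reference, and sweeping it mimics the hypothesis test of whether the data can still be fit by a model sharing the reference structure. The forward operator, models and noise level are all synthetic; real magnetotelluric inversion is nonlinear and typically uses structural (e.g. cross-gradient) rather than value-based coupling.

    import numpy as np

    # Toy 1-D linear inversion: minimise ||G m - d||^2 + lam * ||m - m_ref||^2,
    # where m_ref plays the role of structure taken from another method.
    rng = np.random.default_rng(1)
    n_data, n_model = 15, 40
    G = rng.normal(size=(n_data, n_model))                         # made-up forward operator
    m_true = np.concatenate([np.full(20, 1.0), np.full(20, 3.0)])
    d = G @ m_true + rng.normal(scale=0.1, size=n_data)
    m_ref = np.concatenate([np.full(20, 1.2), np.full(20, 2.8)])   # "seismic" reference

    def constrained_inversion(lam):
        A = G.T @ G + lam * np.eye(n_model)
        b = G.T @ d + lam * m_ref
        return np.linalg.solve(A, b)

    for lam in (0.01, 1.0, 100.0):
        m_est = constrained_inversion(lam)
        misfit = np.linalg.norm(G @ m_est - d)
        closeness = np.linalg.norm(m_est - m_ref)
        print(f"lam={lam:6.2f}: data misfit {misfit:.2f}, distance to reference {closeness:.2f}")

If the data misfit stays acceptable even at strong coupling, the hypothesis of shared structure survives; if it degrades sharply, the two datasets demand genuinely different models.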
NASA Astrophysics Data System (ADS)
Haverd, V.; Smith, B.; Nieradzik, L. P.; Briggs, P. R.
2014-08-01
Poorly constrained rates of biomass turnover are a key limitation of Earth system models (ESMs). In light of this, we recently proposed a new approach encoded in a model called Populations-Order-Physiology (POP), for the simulation of woody ecosystem stand dynamics, demography and disturbance-mediated heterogeneity. POP is suitable for continental to global applications and designed for coupling to the terrestrial ecosystem component of any ESM. POP bridges the gap between first-generation dynamic vegetation models (DVMs) with simple large-area parameterisations of woody biomass (typically used in current ESMs) and complex second-generation DVMs that explicitly simulate demographic processes and landscape heterogeneity of forests. The key simplification in the POP approach, compared with second-generation DVMs, is to compute physiological processes such as assimilation at grid-scale (with CABLE (Community Atmosphere Biosphere Land Exchange) or a similar land surface model), but to partition the grid-scale biomass increment among age classes defined at sub-grid-scale, each subject to its own dynamics. POP was successfully demonstrated along a savanna transect in northern Australia, replicating the effects of strong rainfall and fire disturbance gradients on observed stand productivity and structure. Here, we extend the application of POP to wide-ranging temperate and boreal forests, employing paired observations of stem biomass and density from forest inventory data to calibrate model parameters governing stand demography and biomass evolution. The calibrated POP model is then coupled to the CABLE land surface model, and the combined model (CABLE-POP) is evaluated against leaf-stem allometry observations from forest stands ranging in age from 3 to 200 years. Results indicate that simulated biomass pools conform well with observed allometry. We conclude that POP represents an ecologically plausible and efficient alternative to large-area parameterisations of woody biomass turnover, typically used in current ESMs.
Neural network modeling of associative memory: Beyond the Hopfield model
NASA Astrophysics Data System (ADS)
Dasgupta, Chandan
1992-07-01
A number of neural network models, in which fixed-point and limit-cycle attractors of the underlying dynamics are used to store and associatively recall information, are described. In the first class of models, a hierarchical structure is used to store an exponentially large number of strongly correlated memories. The second class of models uses limit cycles to store and retrieve individual memories. A neurobiologically plausible network that generates low-amplitude periodic variations of activity, similar to the oscillations observed in electroencephalographic recordings, is also described. Results obtained from analytic and numerical studies of the properties of these networks are discussed.
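For reference, the baseline these models go "beyond" is the standard Hopfield network, in which memories are fixed-point attractors stored by a Hebbian rule. The sketch below is a minimal, generic implementation with invented pattern sizes, not the hierarchical or limit-cycle models of the article.

```python
# A minimal Hopfield associative memory: Hebbian storage of binary patterns
# and asynchronous recall from a corrupted cue.
import numpy as np

rng = np.random.default_rng(1)
n, n_patterns = 100, 5
patterns = rng.choice([-1, 1], size=(n_patterns, n))

W = patterns.T @ patterns / n          # Hebbian weight matrix
np.fill_diagonal(W, 0.0)               # no self-connections

def recall(cue, steps=5):
    state = cue.copy()
    for _ in range(steps):
        for i in rng.permutation(n):   # asynchronous updates
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

noisy = patterns[0].copy()
flip = rng.choice(n, size=10, replace=False)
noisy[flip] *= -1                      # corrupt 10% of the bits
print(np.mean(recall(noisy) == patterns[0]))  # overlap with the stored memory
```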
McCollum, Eric D.; Nambiar, Bejoy; Deula, Rashid; Zadutsa, Beatiwel; Bondo, Austin; King, Carina; Beard, James; Liyaya, Harry; Mankhambo, Limangeni; Lazzerini, Marzia; Makwenda, Charles; Masache, Gibson; Bar-Zeev, Naor; Kazembe, Peter N.; Mwansambo, Charles; Lufesi, Norman; Costello, Anthony; Armstrong, Ben
2017-01-01
Background The pneumococcal conjugate vaccine's (PCV) impact on childhood pneumonia during programmatic conditions in Africa is poorly understood. Following PCV13 introduction in Malawi in November 2011, we evaluated the case burden and rates of childhood pneumonia. Methods and Findings Between January 1, 2012 and June 30, 2014, we conducted active pneumonia surveillance in children <5 years at seven hospitals, 18 health centres, and with 38 community health workers in two districts of central Malawi. Eligible children had clinical pneumonia per Malawi guidelines, defined as fast breathing only, chest indrawing +/- fast breathing, or ≥1 clinical danger sign. Since pulse oximetry was not in the Malawi guidelines, oxygenation <90% defined hypoxemic pneumonia, a distinct category from clinical pneumonia. We quantified the pneumonia case burden and rates in two ways. We compared the period immediately following vaccine introduction (early) to the period with >75% three-dose PCV13 coverage (post). We also used multivariable time-series regression, adjusting for autocorrelation and exploring seasonal variation and alternative model specifications in sensitivity analyses. The early versus post analysis showed an increase in cases and rates of total, fast breathing, and indrawing pneumonia and a decrease in danger sign and hypoxemic pneumonia, and pneumonia mortality. At 76% three-dose PCV13 coverage, versus 0%, the time-series model showed a non-significant increase in total cases (+47%, 95% CI: -13%, +149%, p = 0.154); fast breathing cases increased 135% (+39%, +297%, p = 0.001); however, hypoxemia fell 47% (-5%, -70%, p = 0.031) and hospital deaths decreased 36% (-1%, -58%, p = 0.047) in children <5 years. We observed a shift towards disease without danger signs, as the proportion of cases with danger signs decreased by 65% (-46%, -77%, p<0.0001). These results were generally robust to plausible alternative model specifications. Conclusions Thirty months after PCV13 introduction in Malawi, the health system burden and rates of the severest forms of childhood pneumonia, including hypoxemia and death, have markedly decreased. PMID:28052071
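The core of the time-series analysis described above is a count regression of monthly cases on vaccine coverage. The sketch below shows that idea with a Poisson GLM, seasonal harmonics, and simulated data; the covariate names and values are hypothetical, and the study's autocorrelation adjustment and sensitivity analyses are not reproduced.

```python
# Sketch of a count time-series regression: monthly pneumonia cases modelled
# as Poisson with a PCV13-coverage term and seasonal harmonics.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
months = np.arange(30)
coverage = np.clip(months / 20, 0, 0.76)                 # three-dose coverage
season = 0.3 * np.sin(2 * np.pi * months / 12)
cases = rng.poisson(np.exp(np.log(200) + 0.4 * coverage + season))

X = sm.add_constant(np.column_stack([
    coverage,
    np.sin(2 * np.pi * months / 12),
    np.cos(2 * np.pi * months / 12),
]))
fit = sm.GLM(cases, X, family=sm.families.Poisson()).fit()
# Percentage change in cases at 76% coverage versus 0%:
print(100 * (np.exp(fit.params[1] * 0.76) - 1))
```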
Stirrup, Oliver T; Babiker, Abdel G; Carpenter, James R; Copas, Andrew J
2016-04-30
Longitudinal data are widely analysed using linear mixed models, with 'random slopes' models particularly common. However, when modelling, for example, longitudinal pre-treatment CD4 cell counts in HIV-positive patients, the incorporation of non-stationary stochastic processes such as Brownian motion has been shown to lead to a more biologically plausible model and a substantial improvement in model fit. In this article, we propose two further extensions. Firstly, we propose the addition of a fractional Brownian motion component, and secondly, we generalise the model to follow a multivariate-t distribution. These extensions are biologically plausible, and each demonstrated substantially improved fit on application to example data from the Concerted Action on SeroConversion to AIDS and Death in Europe study. We also propose novel procedures for residual diagnostic plots that allow such models to be assessed. Cohorts of patients were simulated from the previously reported and newly developed models in order to evaluate differences in predictions made for the timing of treatment initiation under different clinical management strategies. A further simulation study was performed to demonstrate the substantial biases in parameter estimates of the mean slope of CD4 decline with time that can occur when random slopes models are applied in the presence of censoring because of treatment initiation, with the degree of bias found to depend strongly on the treatment initiation rule applied. Our findings indicate that researchers should consider more complex and flexible models for the analysis of longitudinal biomarker data, particularly when there are substantial missing data, and that the parameter estimates from random slopes models must be interpreted with caution. © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
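To make the model classes compared above concrete, the sketch below simulates one patient's trajectory (on a square-root CD4 scale) under a plain random-slopes model and with an added fractional Brownian motion component. All parameter values are invented, and the multivariate-t extension is not shown.

```python
# Illustrative simulation: 'random slopes' versus random slopes plus
# fractional Brownian motion, as discussed in the abstract above.
import numpy as np

def fbm(times, hurst, rng):
    """Fractional Brownian motion via Cholesky of its covariance matrix."""
    t = np.asarray(times, dtype=float)
    cov = 0.5 * (t[:, None] ** (2 * hurst) + t[None, :] ** (2 * hurst)
                 - np.abs(t[:, None] - t[None, :]) ** (2 * hurst))
    cov += 1e-10 * np.eye(len(t))            # numerical jitter
    return np.linalg.cholesky(cov) @ rng.standard_normal(len(t))

rng = np.random.default_rng(3)
times = np.linspace(0.1, 5, 20)               # years since seroconversion
intercept = 25.0 + rng.normal(0, 2)           # subject-specific intercept
slope = -1.5 + rng.normal(0, 0.5)             # subject-specific slope
random_slopes = intercept + slope * times + rng.normal(0, 0.8, times.size)
with_fbm = random_slopes + 1.5 * fbm(times, hurst=0.7, rng=rng)
print(np.round(with_fbm, 1))                  # sqrt(CD4)-scale trajectory
```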
Mohammadhassanzadeh, Hossein; Van Woensel, William; Abidi, Samina Raza; Abidi, Syed Sibte Raza
2017-01-01
Capturing complete medical knowledge is challenging, often due to incomplete patient Electronic Health Records (EHR), but also because of valuable, tacit medical knowledge hidden away in physicians' experiences. To extend the coverage of incomplete medical knowledge-based systems beyond their deductive closure, and thus enhance their decision-support capabilities, we argue that innovative, multi-strategy reasoning approaches should be applied. In particular, plausible reasoning mechanisms apply patterns from human thought processes, such as generalization, similarity and interpolation, based on attributional, hierarchical, and relational knowledge. Plausible reasoning mechanisms include inductive reasoning, which generalizes the commonalities among the data to induce new rules, and analogical reasoning, which is guided by data similarities to infer new facts. By further leveraging rich, biomedical Semantic Web ontologies to represent medical knowledge, both known and tentative, we increase the accuracy and expressivity of plausible reasoning, and cope with issues such as data heterogeneity, inconsistency and interoperability. In this paper, we present a Semantic Web-based, multi-strategy reasoning approach, which integrates deductive and plausible reasoning and exploits Semantic Web technology to solve complex clinical decision support queries. We evaluated our system using a real-world medical dataset of patients with hepatitis, from which we randomly removed different percentages of data (5%, 10%, 15%, and 20%) to reflect scenarios with increasing amounts of incomplete medical knowledge. To increase the reliability of the results, we generated 5 independent datasets for each percentage of missing values, which resulted in 20 experimental datasets (in addition to the original dataset). The results show that plausibly inferred knowledge extends the coverage of the knowledge base by, on average, 2%, 7%, 12%, and 16% for datasets with, respectively, 5%, 10%, 15%, and 20% of missing values. This expansion in knowledge base coverage allowed solving complex disease diagnostic queries that were previously unresolvable, without losing the correctness of the answers. However, compared to deductive reasoning, data-intensive plausible reasoning mechanisms yield a significant performance overhead. First, we observed that plausible reasoning approaches, by generating tentative inferences and leveraging domain knowledge of experts, allow us to extend the coverage of medical knowledge bases, resulting in improved clinical decision support. Second, by leveraging OWL ontological knowledge, we are able to increase the expressivity and accuracy of plausible reasoning methods. Third, our approach is applicable to clinical decision support systems for a range of chronic diseases.
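The following toy example illustrates analogical (similarity-guided) plausible inference, one of the mechanisms named above: a missing attribute in a patient record is tentatively filled from the most similar complete record. It is not the authors' ontology-based system; the field names and data are invented.

```python
# Toy analogical inference: fill a missing attribute from the most similar
# complete records. The inference is plausible, not deductively certain.
shared = ["age_band", "jaundice", "fatigue"]
known = [
    {"age_band": "40s", "jaundice": 1, "fatigue": 1, "fibrosis_stage": "F2"},
    {"age_band": "50s", "jaundice": 1, "fatigue": 0, "fibrosis_stage": "F3"},
    {"age_band": "20s", "jaundice": 0, "fatigue": 0, "fibrosis_stage": "F0"},
]
incomplete = {"age_band": "40s", "jaundice": 1, "fatigue": 0}  # stage missing

def similarity(a, b):
    return sum(a[f] == b[f] for f in shared)

best = max(known, key=lambda rec: similarity(rec, incomplete))
print("plausible fibrosis_stage:", best["fibrosis_stage"])
```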
A safety rule approach to surveillance and eradication of biological invasions
Haight, Robert G.; Koch, Frank H.; Venette, Robert; Studens, Kala; Fournier, Ronald E.; Swystun, Tom; Turgeon, Jean J.
2017-01-01
Uncertainty about future spread of invasive organisms hinders planning of effective response measures. We present a two-stage scenario optimization model that accounts for uncertainty about the spread of an invader, and determines survey and eradication strategies that minimize the expected program cost subject to a safety rule for eradication success. The safety rule includes a risk standard for the desired probability of eradication in each invasion scenario. Because the risk standard may not be attainable in every scenario, the safety rule defines a minimum proportion of scenarios with successful eradication. We apply the model to the problem of allocating resources to survey and eradicate the Asian longhorned beetle (ALB, Anoplophora glabripennis) after its discovery in the Greater Toronto Area, Ontario, Canada. We use historical data on ALB spread to generate a set of plausible invasion scenarios that characterizes the uncertainty of the beetle’s extent. We use these scenarios in the model to find survey and tree removal strategies that minimize the expected program cost while satisfying the safety rule. We also identify strategies that reduce the risk of very high program costs. Our results reveal two alternative strategies: (i) delimiting surveys and subsequent tree removal based on the surveys' outcomes, or (ii) preventive host tree removal without referring to delimiting surveys. The second strategy is more likely to meet the stated objectives when the capacity to detect an invader is low or the aspirations to eradicate it are high. Our results provide practical guidelines to identify the best management strategy given aspirational targets for eradication and spending. PMID:28759584
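The sketch below illustrates the safety-rule logic described above in miniature: the cheapest strategy is selected among those whose eradication probability meets the risk standard in at least a minimum (probability-weighted) proportion of scenarios. Strategies, costs, and probabilities are hypothetical, and simple enumeration stands in for the authors' two-stage scenario optimization.

```python
# Minimal safety-rule selection over invasion scenarios (illustrative only).
strategies = {
    "survey_then_remove": {"cost": [2.0, 3.5, 6.0], "p_erad": [0.95, 0.90, 0.70]},
    "preventive_removal": {"cost": [5.0, 5.0, 5.0], "p_erad": [0.97, 0.95, 0.92]},
}
scenario_prob = [0.5, 0.3, 0.2]   # probability of each invasion scenario
risk_standard = 0.9               # desired eradication probability per scenario
min_fraction = 0.8                # safety rule: share of scenarios meeting it

def expected_cost(s):
    return sum(p * c for p, c in zip(scenario_prob, s["cost"]))

def satisfies_safety_rule(s):
    covered = sum(p for p, pe in zip(scenario_prob, s["p_erad"])
                  if pe >= risk_standard)
    return covered >= min_fraction

feasible = {k: v for k, v in strategies.items() if satisfies_safety_rule(v)}
best = min(feasible, key=lambda k: expected_cost(feasible[k]))
print(best, expected_cost(feasible[best]))
```

Tightening the risk standard or the minimum fraction shrinks the feasible set, which is how more ambitious eradication aspirations push the solution towards preventive removal in the abstract's findings.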
Golkhou, V; Parnianpour, M; Lucas, C
2004-01-01
In this study, we consider the role of multisensor data fusion in neuromuscular control using an actor-critic reinforcement learning method. The model we use is a single-link system actuated by a pair of muscles that are excited with alpha and gamma signals. Information from various physiological sensors, such as proprioception, muscle spindles, and Golgi tendon organs, has been integrated to achieve an oscillatory movement with variable amplitude and frequency, while achieving a stable movement with minimum metabolic cost and coactivation. The system is highly nonlinear in all its physical and physiological attributes. Transmission delays are included in the afferent and efferent neural paths to account for a more accurate representation of the reflex loops. This paper proposes a reinforcement learning method with an Actor-Critic architecture in place of the middle and low levels of the central nervous system (CNS). The Actor in this structure is a two-layer feedforward neural network and the Critic is a model of the cerebellum. The Critic is trained by the State-Action-Reward-State-Action (SARSA) method. The Critic then trains the Actor by supervisory learning based on previous experiences. The reinforcement signal in SARSA is evaluated based on the available alternatives, following the concept of multisensor data fusion. The effectiveness and the biological plausibility of the present model are demonstrated by several simulations. The system showed excellent tracking capability when we integrated the available sensor information. Addition of a penalty for activation of muscles resulted in much lower muscle coactivation while keeping the movement stable.
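For readers unfamiliar with the learning rule named above, the sketch below shows a plain tabular SARSA update on a toy task. It is a generic illustration of the on-policy temporal-difference rule, not the paper's musculoskeletal model or its cerebellar Critic; the environment and parameters are invented.

```python
# Minimal tabular SARSA: Q[s,a] += alpha * (r + gamma * Q[s',a'] - Q[s,a]).
import numpy as np

n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.95, 0.1
rng = np.random.default_rng(4)

def policy(s):
    if rng.random() < epsilon:                 # epsilon-greedy exploration
        return int(rng.integers(n_actions))
    return int(np.argmax(Q[s]))

def env_step(s, a):
    """Hypothetical chain environment: action 1 moves right, reward at the end."""
    s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
    reward = 1.0 if s_next == n_states - 1 else -0.01
    return s_next, reward

for episode in range(200):
    s, a = 0, policy(0)
    for _ in range(20):
        s_next, r = env_step(s, a)
        a_next = policy(s_next)
        # SARSA: on-policy temporal-difference update
        Q[s, a] += alpha * (r + gamma * Q[s_next, a_next] - Q[s, a])
        s, a = s_next, a_next
print(np.argmax(Q, axis=1))                    # learned greedy actions
```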
Constructing three emotion knowledge tests from the invariant measurement approach
Prieto, Gerardo; Burin, Debora I.
2017-01-01
Background Psychological constructionist models like the Conceptual Act Theory (CAT) postulate that complex states such as emotions are composed of basic psychological ingredients that are more clearly respected by the brain than basic emotions. The objective of this study was the construction and initial validation of Emotion Knowledge (EK) measures within the CAT framework by means of an invariant measurement approach, the Rasch Model (RM). Psychological distance theory was used to inform item generation. Methods Three EK tests (emotion vocabulary, EV; close emotional situations, CES; and far emotional situations, FES) were constructed and tested with the RM in a community sample of 100 females and 100 males (age range: 18-65), both separately and conjointly. Results It was corroborated that data-RM fit was sufficient. Then, the effect of type of test and emotion on Rasch-modelled item difficulty was tested. Significant effects of emotion on EK item difficulty were found, but the only statistically significant difference was that between "happiness" and the remaining emotions; neither type of test nor interaction effects on EK item difficulty were statistically significant. The testing of gender differences was carried out after corroborating that differential item functioning (DIF) would not be a plausible alternative hypothesis for the results. No statistically significant sex-related differences were found in EV, CES, FES, or total EK. However, the sign of d indicates that female participants were consistently better than male ones, a result that will be of interest for future meta-analyses. Discussion The three EK tests are ready to be used as components of a higher-level measurement process. PMID:28929013
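The Rasch Model used above states that the probability of endorsing an item depends only on the difference between person ability and item difficulty. The sketch below shows the item response function and a simple maximum-likelihood ability estimate for one respondent; the item difficulties and responses are invented, and the study's joint calibration and fit testing are not reproduced.

```python
# Rasch model sketch: P(correct) = 1 / (1 + exp(-(theta - b))).
import numpy as np
from scipy.optimize import minimize_scalar

def p_correct(theta, b):
    return 1.0 / (1.0 + np.exp(-(theta - b)))

item_difficulty = np.array([-1.5, -0.5, 0.0, 0.8, 1.6])   # hypothetical EK items
responses = np.array([1, 1, 1, 0, 0])                      # one respondent

def neg_log_lik(theta):
    p = p_correct(theta, item_difficulty)
    return -np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))

theta_hat = minimize_scalar(neg_log_lik, bounds=(-4, 4), method="bounded").x
print(round(theta_hat, 2))   # estimated emotion-knowledge ability
```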
Integrative oncology: an overview.
Deng, Gary; Cassileth, Barrie
2014-01-01
Integrative oncology, the diagnosis-specific field of integrative medicine, addresses symptom control with nonpharmacologic therapies. Known commonly as "complementary therapies," these are evidence-based adjuncts to mainstream care that effectively control physical and emotional symptoms, enhance physical and emotional strength, and provide patients with skills enabling them to help themselves throughout and following mainstream cancer treatment. Integrative or complementary therapies are rational and noninvasive. They have been subjected to study to determine their value, to document the problems they ameliorate, and to define the circumstances under which such therapies are beneficial. Conversely, "alternative" therapies typically are promoted literally as such, that is, as actual antitumor treatments. They lack biologic plausibility and scientific evidence of safety and efficacy. Many are outright fraudulent. Conflating these two very different categories by use of the convenient acronym "CAM," for "complementary and alternative therapies," confuses the issue and does a substantial disservice to patients and medical professionals. Complementary and integrative modalities have demonstrated safety, value, and benefits. If the same were true for "alternatives," they would not be "alternatives." Rather, they would become part of mainstream cancer care. This manuscript explores the medical and sociocultural context of interest in integrative oncology as well as in "alternative" therapies, reviews commonly asked patient questions, summarizes research results in both categories, and offers recommendations to help guide patients and family members through what is often a difficult maze. Combining complementary therapies with mainstream oncology care to address patients' physical, psychologic and spiritual needs constitutes the practice of integrative oncology. By recommending nonpharmacologic modalities that reduce symptom burden and improve quality of life, physicians also enable patients to play a role in their care. Critical for most patients, this also improves the physician-patient relationship, the quality of cancer care, and the well-being of patients and their families.
Plausible rice yield losses under future climate warming.
Zhao, Chuang; Piao, Shilong; Wang, Xuhui; Huang, Yao; Ciais, Philippe; Elliott, Joshua; Huang, Mengtian; Janssens, Ivan A; Li, Tao; Lian, Xu; Liu, Yongwen; Müller, Christoph; Peng, Shushi; Wang, Tao; Zeng, Zhenzhong; Peñuelas, Josep
2016-12-19
Rice is the staple food for more than 50% of the world's population [1-3]. Reliable prediction of changes in rice yield is thus central for maintaining global food security. This is an extraordinary challenge. Here, we compare the sensitivity of rice yield to temperature increase derived from field warming experiments and three modelling approaches: statistical models, local crop models and global gridded crop models. Field warming experiments produce a substantial rice yield loss under warming, with an average temperature sensitivity of -5.2 ± 1.4% K⁻¹. Local crop models give a similar sensitivity (-6.3 ± 0.4% K⁻¹), but statistical and global gridded crop models both suggest less negative impacts of warming on yields (-0.8 ± 0.3% and -2.4 ± 3.7% K⁻¹, respectively). Using data from field warming experiments, we further propose a conditional probability approach to constrain the large range of global gridded crop model results for the future yield changes in response to warming by the end of the century (from -1.3% to -9.3% K⁻¹). The constraint implies a more negative response to warming (-8.3 ± 1.4% K⁻¹) and reduces the spread of the model ensemble by 33%. This yield reduction exceeds that estimated by the International Food Policy Research Institute assessment (-4.2 to -6.4% K⁻¹) (ref. 4). Our study suggests that without CO₂ fertilization, effective adaptation and genetic improvement, severe rice yield losses are plausible under intensive climate warming scenarios.
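The general idea of constraining a model ensemble with an observational estimate can be illustrated with a simple Gaussian weighting, sketched below. This is not the paper's exact conditional-probability procedure, and the ensemble sensitivities used here are invented; only the field-experiment mean and spread quoted above are taken from the abstract.

```python
# Illustrative observational constraint on an ensemble of temperature
# sensitivities: weight each model by its likelihood under the
# field-warming estimate (-5.2 +/- 1.4 % per K), then recompute the mean.
import numpy as np

ggcm_sensitivity = np.array([-1.3, -2.0, -4.5, -6.0, -7.5, -9.3])  # % per K (hypothetical)
obs_mean, obs_sd = -5.2, 1.4

weights = np.exp(-0.5 * ((ggcm_sensitivity - obs_mean) / obs_sd) ** 2)
weights /= weights.sum()

constrained_mean = np.sum(weights * ggcm_sensitivity)
constrained_sd = np.sqrt(np.sum(weights * (ggcm_sensitivity - constrained_mean) ** 2))
print(round(constrained_mean, 1), round(constrained_sd, 1))
```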
Application of plausible reasoning to AI-based control systems
NASA Technical Reports Server (NTRS)
Berenji, Hamid; Lum, Henry, Jr.
1987-01-01
Some current approaches to plausible reasoning in artificial intelligence are reviewed and discussed. Some of the most significant recent advances in plausible and approximate reasoning are examined. A synergism among the techniques of uncertainty management is advocated, and brief discussions on the certainty factor approach, probabilistic approach, Dempster-Shafer theory of evidence, possibility theory, linguistic variables, and fuzzy control are presented. Some extensions to these methods are described, and the applications of the methods are considered.
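Of the uncertainty-management techniques listed above, the Dempster-Shafer theory of evidence lends itself to a compact worked example: two mass functions over a small frame of discernment are combined with Dempster's rule. The frame and mass values below are invented for illustration.

```python
# Dempster's rule of combination over the frame {fault, ok}.
from itertools import product

frame = frozenset({"fault", "ok"})
m1 = {frozenset({"fault"}): 0.6, frame: 0.4}                  # evidence source 1
m2 = {frozenset({"fault"}): 0.5, frozenset({"ok"}): 0.2, frame: 0.3}

combined, conflict = {}, 0.0
for (a, wa), (b, wb) in product(m1.items(), m2.items()):
    inter = a & b
    if inter:
        combined[inter] = combined.get(inter, 0.0) + wa * wb
    else:
        conflict += wa * wb                                   # mass on the empty set
combined = {k: v / (1.0 - conflict) for k, v in combined.items()}

for focal, mass in combined.items():
    print(set(focal), round(mass, 3))
```

Here the normalization by 1 minus the conflict mass is what distinguishes Dempster's rule from a simple product of beliefs.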
The influence of spatiotemporal structure of noisy stimuli in decision making.
Insabato, Andrea; Dempere-Marco, Laura; Pannunzi, Mario; Deco, Gustavo; Romo, Ranulfo
2014-04-01
Decision making is a process of utmost importance in our daily lives, the study of which has been receiving notable attention for decades. Nevertheless, the neural mechanisms underlying decision making are still not fully understood. Computational modeling has revealed itself as a valuable asset to address some of the fundamental questions. Biophysically plausible models, in particular, are useful in bridging the different levels of description that experimental studies provide, from the neural spiking activity recorded at the cellular level to the performance reported at the behavioral level. In this article, we have reviewed some of the recent progress made in the understanding of the neural mechanisms that underlie decision making. We have performed a critical evaluation of the available results and address, from a computational perspective, aspects of both experimentation and modeling that so far have eluded comprehension. To guide the discussion, we have selected a central theme which revolves around the following question: how does the spatiotemporal structure of sensory stimuli affect the perceptual decision-making process? This question is a timely one as several issues that still remain unresolved stem from this central theme. These include: (i) the role of spatiotemporal input fluctuations in perceptual decision making, (ii) how to extend the current results and models derived from two-alternative choice studies to scenarios with multiple competing evidences, and (iii) to establish whether different types of spatiotemporal input fluctuations affect decision-making outcomes in distinctive ways. And although we have restricted our discussion mostly to visual decisions, our main conclusions are arguably generalizable; hence, their possible extension to other sensory modalities is one of the points in our discussion.
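A schematic (not biophysically detailed) version of the competition models discussed above is sketched below: two pools of rate units receive noisy evidence for two alternatives, inhibit each other, and the first to cross a threshold determines the choice and reaction time. The gain function, parameters, and noise level are illustrative assumptions.

```python
# Two-population competition model of perceptual choice (illustrative sketch).
import numpy as np

def decide(evidence_1, evidence_2, noise_sd=0.15, seed=0):
    rng = np.random.default_rng(seed)
    r1 = r2 = 0.1
    dt, tau, w_inh, threshold = 1e-3, 0.02, 1.2, 0.8
    gain = lambda x: 1.0 / (1.0 + np.exp(-8.0 * (x - 0.3)))   # sigmoid f-I curve
    for step in range(5000):
        noise1, noise2 = rng.normal(0, noise_sd, 2)
        r1 += dt / tau * (-r1 + gain(evidence_1 - w_inh * r2 + noise1))
        r2 += dt / tau * (-r2 + gain(evidence_2 - w_inh * r1 + noise2))
        if r1 > threshold or r2 > threshold:
            return (1 if r1 > r2 else 2), step * dt           # choice, time (s)
    return None, None          # no decision within the trial

print(decide(0.55, 0.45))      # weak evidence favouring alternative 1
```

Varying the noise term (its amplitude or its temporal structure) in such a model is one way to probe, computationally, how spatiotemporal input fluctuations shape choices and reaction times.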
NASA Astrophysics Data System (ADS)
Kim, Bongjae; Khmelevskyi, Sergii; Mazin, Igor I.; Agterberg, Daniel F.; Franchini, Cesare
2017-07-01
Sr2RuO4 is the best candidate for spin-triplet superconductivity, an unusual and elusive superconducting state of fundamental importance. In the last three decades, Sr2RuO4 has been very carefully studied and, despite its apparent simplicity when compared with strongly correlated high-Tc cuprates, for which the pairing symmetry is understood, there is no scenario that can explain all the major experimental observations, a conundrum that has generated tremendous interest. Here, we present a density-functional-based analysis of magnetic interactions in Sr2RuO4 and discuss the role of magnetic anisotropy in its unconventional superconductivity. Our goal is twofold. First, we assess the possibility of the superconducting order parameter rotating in an external magnetic field of 200 Oe, and conclude that the spin-orbit interaction in this material is several orders of magnitude too strong to be consistent with this hypothesis. Thus, the observed invariance of the Knight shift across Tc has no plausible explanation, and casts doubt on using the Knight shift as an ultimate litmus test for the pairing symmetry. Second, we propose a quantitative double-exchange-like model for combining itinerant fermions with an anisotropic Heisenberg magnetic Hamiltonian. This model is complementary to the Hubbard-model-based calculations published so far, and forms an alternative framework for exploring superconducting symmetry in Sr2RuO4. As an example, we use this model to analyze the degeneracy between various p-triplet states in the simplest mean-field approximation, and show that it splits into a singlet and two doublets, with the ground state defined by the competition between the "Ising" and "compass" anisotropic terms.
NASA Astrophysics Data System (ADS)
Fullea, J.; Fernàndez, M.; Zeyen, H.; Vergés, J.
2007-02-01
We present a method based on the combination of elevation and geoid anomaly data together with the thermal field to map crustal and lithospheric thickness. The main assumptions are local isostasy and a four-layered model composed of crust, lithospheric mantle, sea water and the asthenosphere. We consider a linear density gradient for the crust and a temperature-dependent density for the lithospheric mantle. We perform sensitivity tests to evaluate the effect of the variation of the model parameters and the influence of the RMS error of the elevation and geoid anomaly databases. The application of this method to the Gibraltar Arc System, Atlas Mountains and adjacent zones reveals the presence of a SW-NE oriented lithospheric thinning zone. This zone affects the High and Middle Atlas and extends from the Canary Islands to the eastern Alboran Basin, and is probably linked with a similarly trending zone of thick lithosphere constituting the western Betics, eastern Rif, Rharb Basin, and Gulf of Cadiz. A number of different, even mutually opposite, geodynamic models have been proposed to explain the origin and evolution of the study area. Our results suggest that a plausible slab-retreating model should incorporate tear and asymmetric roll-back of the subducting slab to fit the present-day observed lithosphere geometry. In this context, the lithospheric thinning would be caused by lateral asthenospheric flow. An alternative mechanism responsible for lithospheric thinning is the presence of a hot magmatic reservoir derived from a deep ancient plume centred on the Canary Islands and extending as far as Central Europe.
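The local-isostasy assumption stated above can be illustrated with a schematic column calculation: elevation follows from the average density of a lithospheric column (crust with a linear density gradient, mantle with temperature-dependent density) floating on the asthenosphere. The calibration constant, densities, temperatures, and thermal expansion coefficient below are illustrative assumptions, not the parameter values of the paper.

```python
# Schematic local-isostasy elevation from a layered lithospheric column.
RHO_AST = 3200.0        # asthenosphere density, kg/m3 (assumed)
ALPHA = 3.5e-5          # mantle thermal expansion coefficient, 1/K (assumed)
E0 = -2400.0            # calibration constant of the buoyancy formula, m (assumed)

def elevation(zc, zl, rho_c_top=2670.0, rho_c_bot=2900.0,
              rho_m_ref=3245.0, t_moho=600.0, t_lab=1300.0):
    """Isostatic elevation (m) for crustal thickness zc and lithospheric
    thickness zl (both in m), assuming a linear crustal density gradient and
    a linear mantle geotherm between the Moho and the lithosphere base."""
    rho_crust = 0.5 * (rho_c_top + rho_c_bot)                 # mean of linear gradient
    t_mean = 0.5 * (t_moho + t_lab)
    rho_mantle = rho_m_ref * (1.0 - ALPHA * (t_mean - t_lab)) # hotter = lighter
    rho_column = (rho_crust * zc + rho_mantle * (zl - zc)) / zl
    return zl * (RHO_AST - rho_column) / RHO_AST + E0

print(round(elevation(zc=38e3, zl=120e3)))    # moderate crust and lithosphere
print(round(elevation(zc=55e3, zl=220e3)))    # thick crust, thick lithosphere
```

In the inverse problem described in the abstract, relations of this kind for elevation (and analogous ones for the geoid anomaly) are solved jointly for crustal and lithospheric thickness at each point.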