What Can Causal Networks Tell Us about Metabolic Pathways?
Blair, Rachael Hageman; Kliebenstein, Daniel J.; Churchill, Gary A.
2012-01-01
Graphical models describe the linear correlation structure of data and have been used to establish causal relationships among phenotypes in genetic mapping populations. Data are typically collected at a single point in time. Biological processes, on the other hand, are often non-linear and display time-varying dynamics. The extent to which graphical models can recapitulate the architecture of an underlying biological process is not well understood. We consider metabolic networks with known stoichiometry to address the fundamental question: “What can causal networks tell us about metabolic pathways?”. Using data from an Arabidopsis Bay × Sha population and simulated data from dynamic models of pathway motifs, we assess our ability to reconstruct metabolic pathways using graphical models. Our results highlight the necessity of non-genetic residual biological variation for reliable inference. Recovery of the ordering within a pathway is possible, but should not be expected. Causal inference is sensitive to subtle patterns in the correlation structure that may be driven by a variety of factors, which may not emphasize the substrate-product relationship. We illustrate the effects of metabolic pathway architecture, epistasis and stochastic variation on correlation structure and graphical model-derived networks. We conclude that graphical models should be interpreted cautiously, especially if the implied causal relationships are to be used in the design of intervention strategies. PMID:22496633
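To make the setting concrete, the following minimal sketch (illustrative only, not the authors' code; the pathway and coefficients are invented) simulates a linear pathway A → B → C with residual biological variation and inspects the partial-correlation structure a Gaussian graphical model would rely on. It also hints at why residual variation matters: without the noise terms the covariance matrix becomes singular and the partial correlations are undefined.

```python
import numpy as np

# Hypothetical sketch: a linear pathway A -> B -> C with residual variation.
rng = np.random.default_rng(0)
n = 5000
A = rng.normal(size=n)                  # upstream metabolite (e.g., QTL-driven)
B = 0.8 * A + 0.5 * rng.normal(size=n)  # substrate-product step with noise
C = 0.8 * B + 0.5 * rng.normal(size=n)  # downstream step with noise

X = np.column_stack([A, B, C])
prec = np.linalg.inv(np.cov(X, rowvar=False))  # precision matrix
d = np.sqrt(np.diag(prec))
partial_corr = -prec / np.outer(d, d)          # partial correlations
np.fill_diagonal(partial_corr, 1.0)

# A and C are marginally correlated, but nearly conditionally independent
# given B, so the A-C edge of the graphical model should be (close to) absent.
print(np.round(partial_corr, 3))
```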
Beyond Markov: Accounting for independence violations in causal reasoning.
Rehder, Bob
2018-06-01
Although many theories of causal cognition are based on causal graphical models, a key property of such models, the independence relations stipulated by the Markov condition, is routinely violated by human reasoners. This article presents three new accounts of those independence violations, accounts that share the assumption that people's understanding of the correlational structure of data generated from a causal graph differs from that stipulated by the causal graphical model framework. To distinguish these models, experiments assessed how people reason with causal graphs that are larger than those tested in previous studies. A traditional common cause network (Y1 ← X → Y2) was extended so that the effects themselves had effects (Z1 ← Y1 ← X → Y2 → Z2). A traditional common effect network (Y1 → X ← Y2) was extended so that the causes themselves had causes (Z1 → Y1 → X ← Y2 ← Z2). Subjects' inferences were most consistent with the beta-Q model, in which consistent states of the world (those in which variables are either mostly all present or mostly all absent) are viewed as more probable than stipulated by the causal graphical model framework. Substantial variability in subjects' inferences was also observed, with the result that substantial minorities of subjects were best fit by one of the other models (the dual prototype or leaky gate models). The discrepancy between normative and human causal cognition stipulated by these models is foundational in the sense that these models locate the error not in people's causal reasoning but rather in their causal representations. As a result, they are applicable to any cognitive theory grounded in causal graphical models, including theories of analogy, learning, explanation, categorization, decision-making, and counterfactual reasoning. Preliminary evidence that independence violations indeed generalize to other judgment types is presented. Copyright © 2018 Elsevier Inc. All rights reserved.
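For readers who want the Markov condition stated concretely, the short calculation below (parameters invented, not Rehder's) enumerates the joint distribution of a binary causal chain X → Y → Z; under the causal graphical model framework P(Z | X, Y) = P(Z | Y), which is exactly the kind of independence relation that human reasoners are reported to violate.

```python
# Hypothetical illustration of the Markov condition on a binary causal chain
# X -> Y -> Z (parameters are made up for illustration).
p_x = 0.5                        # P(X=1)
p_y_given = {0: 0.2, 1: 0.8}     # P(Y=1 | X)
p_z_given = {0: 0.2, 1: 0.8}     # P(Z=1 | Y)

joint = {}
for x in (0, 1):
    for y in (0, 1):
        for z in (0, 1):
            px = p_x if x else 1 - p_x
            py = p_y_given[x] if y else 1 - p_y_given[x]
            pz = p_z_given[y] if z else 1 - p_z_given[y]
            joint[(x, y, z)] = px * py * pz

def p_z1(y, x=None):
    """P(Z=1 | Y=y) if x is None, else P(Z=1 | Y=y, X=x)."""
    num = sum(p for (xx, yy, zz), p in joint.items()
              if yy == y and zz == 1 and (x is None or xx == x))
    den = sum(p for (xx, yy, zz), p in joint.items()
              if yy == y and (x is None or xx == x))
    return num / den

# Markov condition: once Y is known, X carries no extra information about Z.
print(p_z1(1), p_z1(1, x=0), p_z1(1, x=1))   # all three are equal (0.8)
```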
ERIC Educational Resources Information Center
Rehder, Bob
2017-01-01
This article assesses how people reason with categories whose features are related in causal cycles. Whereas models based on causal graphical models (CGMs) have enjoyed success modeling category-based judgments as well as a number of other cognitive phenomena, CGMs are only able to represent causal structures that are acyclic. A number of new…
The Specification of Causal Models with Tetrad IV: A Review
ERIC Educational Resources Information Center
Landsheer, J. A.
2010-01-01
Tetrad IV is a program designed for the specification of causal models. It is specifically designed to search for causal relations, but also offers the possibility to estimate the parameters of a structural equation model. It offers a remarkable graphical user interface, which facilitates building, evaluating, and searching for causal models. The…
Structure and Strength in Causal Induction
ERIC Educational Resources Information Center
Griffiths, Thomas L.; Tenenbaum, Joshua B.
2005-01-01
We present a framework for the rational analysis of elemental causal induction--learning about the existence of a relationship between a single cause and effect--based upon causal graphical models. This framework makes precise the distinction between causal structure and causal strength: the difference between asking whether a causal relationship…
Manu, Patrick A; Ankrah, Nii A; Proverbs, David G; Suresh, Subashini
2012-09-01
Construction project features (CPFs) are organisational, physical and operational attributes that characterise construction projects. Although previous studies have examined the accident causal influence of CPFs, the multi-causal attribute of this causal phenomenon still remains elusive and thus requires further investigation. Aiming to shed light on this facet of the accident causal phenomenon of CPFs, this study examines relevant literature and crystallises the attained insight of the multi-causal attribute by a graphical model, which is subsequently operationalised by a derived mathematical risk expression that offers a systematic approach for evaluating the potential of CPFs to cause harm and consequently their health and safety (H&S) risk implications. The graphical model and the risk expression put forth by the study thus advance current understanding of the accident causal phenomenon of CPFs and they present an opportunity for project participants to manage the H&S risk associated with CPFs from the early stages of project procurement. Copyright © 2011 Elsevier Ltd. All rights reserved.
Application of dynamic uncertain causality graph in spacecraft fault diagnosis: Logic cycle
NASA Astrophysics Data System (ADS)
Yao, Quanying; Zhang, Qin; Liu, Peng; Yang, Ping; Zhu, Ma; Wang, Xiaochen
2017-04-01
Intelligent diagnosis systems are applied to fault diagnosis in spacecraft. The Dynamic Uncertain Causality Graph (DUCG) is a new probabilistic graphical model with many advantages. In the knowledge representation of spacecraft fault diagnosis, feedback among variables is frequently encountered, which may cause directed cyclic graphs (DCGs). Probabilistic graphical models (PGMs) such as the Bayesian network (BN) have been widely applied in uncertain causality representation and probabilistic reasoning, but BN does not allow DCGs. In this paper, DUCG is applied to fault diagnosis in spacecraft, introducing the inference algorithm for the DUCG to deal with feedback. DUCG has now been tested on 16 typical faults with 100% diagnosis accuracy.
A complete graphical criterion for the adjustment formula in mediation analysis.
Shpitser, Ilya; VanderWeele, Tyler J
2011-03-04
Various assumptions have been used in the literature to identify natural direct and indirect effects in mediation analysis. These effects are of interest because they allow for effect decomposition of a total effect into a direct and indirect effect even in the presence of interactions or non-linear models. In this paper, we consider the relation and interpretation of various identification assumptions in terms of causal diagrams interpreted as a set of non-parametric structural equations. We show that for such causal diagrams, two sets of assumptions for identification that have been described in the literature are in fact equivalent in the sense that if either set of assumptions holds for all models inducing a particular causal diagram, then the other set of assumptions will also hold for all models inducing that diagram. We moreover build on prior work concerning a complete graphical identification criterion for covariate adjustment for total effects to provide a complete graphical criterion for using covariate adjustment to identify natural direct and indirect effects. Finally, we show that this criterion is equivalent to the two sets of independence assumptions used previously for mediation analysis.
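For orientation, the natural direct and indirect effects discussed here are usually expressed through the mediation adjustment formula; the display below uses standard textbook notation (treatment A with reference level a*, mediator M, outcome Y, baseline covariates C) and is not copied from the paper.

```latex
% Standard mediation adjustment formulas (textbook notation, illustrative).
\begin{align*}
\mathrm{NDE} &= \sum_{c,m}\bigl[\mathbb{E}(Y \mid a, m, c) - \mathbb{E}(Y \mid a^{*}, m, c)\bigr]\,P(m \mid a^{*}, c)\,P(c),\\
\mathrm{NIE} &= \sum_{c,m}\mathbb{E}(Y \mid a, m, c)\,\bigl[P(m \mid a, c) - P(m \mid a^{*}, c)\bigr]\,P(c).
\end{align*}
```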
Causal tapestries for psychology and physics.
Sulis, William H
2012-04-01
Archetypal dynamics is a formal approach to the modeling of information flow in complex systems used to study emergence. It is grounded in the Fundamental Triad of realisation (system), interpretation (archetype) and representation (formal model). Tapestries play a fundamental role in the framework of archetypal dynamics as a formal representational system. They represent information flow by means of multi-layered, recursive, interlinked graphical structures that express both geometry (form or sign) and logic (semantics). This paper presents a detailed mathematical description of a specific tapestry model, the causal tapestry, selected for use in describing behaving systems such as appear in psychology and physics from the standpoint of Process Theory. Causal tapestries express an explicit Lorentz invariant transient now generated by means of a reality game. Observables are represented by tapestry informons while subjective or hidden components (for example intellectual and emotional processes) are incorporated into the reality game that determines the tapestry dynamics. As a specific example, we formulate a random graphical dynamical system using causal tapestries.
ERIC Educational Resources Information Center
Sobel, David M.; Kirkham, Natasha Z.
2007-01-01
A fundamental assumption of the causal graphical model framework is the Markov assumption, which posits that learners can discriminate between two events that are dependent because of a direct causal relation between them and two events that are independent conditional on the value of another event(s). Sobel and Kirkham (2006) demonstrated that…
[Causal analysis approaches in epidemiology].
Dumas, O; Siroux, V; Le Moual, N; Varraso, R
2014-02-01
Epidemiological research is mostly based on observational studies. Whether such studies can provide evidence of causation remains debated. Several causal analysis methods have been developed in epidemiology. This paper aims at presenting an overview of these methods: graphical models, path analysis and its extensions, and models based on the counterfactual approach, with a special emphasis on marginal structural models. Graphical approaches have been developed to allow synthetic representations of supposed causal relationships in a given problem. They serve as qualitative support in the study of causal relationships. The sufficient-component cause model has been developed to deal with the issue of multicausality raised by the emergence of chronic multifactorial diseases. Directed acyclic graphs are mostly used as a visual tool to identify possible confounding sources in a study. Structural equations models, the main extension of path analysis, combine a system of equations and a path diagram, representing a set of possible causal relationships. They allow quantifying direct and indirect effects in a general model in which several relationships can be tested simultaneously. Dynamic path analysis further takes into account the role of time. The counterfactual approach defines causality by comparing the observed event and the counterfactual event (the event that would have been observed if, contrary to the fact, the subject had received a different exposure than the one he actually received). This theoretical approach has shown the limits of traditional methods to address some causality questions. In particular, in longitudinal studies, when there is time-varying confounding, classical methods (regressions) may be biased. Marginal structural models have been developed to address this issue. In conclusion, "causal models", though they were developed partly independently, are based on equivalent logical foundations. A crucial step in the application of these models is the formulation of causal hypotheses, which will be a basis for all methodological choices. Beyond this step, statistical analysis tools recently developed offer new possibilities to delineate complex relationships, in particular in life course epidemiology. Copyright © 2013 Elsevier Masson SAS. All rights reserved.
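As a pointer for readers unfamiliar with marginal structural models, the sketch below illustrates inverse-probability weighting, the estimation idea behind them, for a single binary exposure; the data and coefficients are simulated, and the time-varying-confounding case discussed in the abstract is more involved than this point-treatment toy.

```python
import numpy as np

# Minimal, hypothetical sketch of inverse-probability weighting (IPW) for a
# single binary exposure A confounded by L; made-up data, illustrative only.
rng = np.random.default_rng(1)
n = 20000
L = rng.normal(size=n)                              # confounder
p_a = 1 / (1 + np.exp(-(0.8 * L)))                  # P(A=1 | L)
A = rng.binomial(1, p_a)
Y = 1.0 * A + 1.5 * L + rng.normal(size=n)          # true causal effect = 1.0

# Naive comparison is confounded by L.
naive = Y[A == 1].mean() - Y[A == 0].mean()

# Stabilized weights: P(A=a) / P(A=a | L). The true propensity score is used
# here; in practice it would be estimated, e.g., by logistic regression.
ps = np.where(A == 1, p_a, 1 - p_a)
marg = np.where(A == 1, A.mean(), 1 - A.mean())
w = marg / ps
ipw = (np.average(Y[A == 1], weights=w[A == 1])
       - np.average(Y[A == 0], weights=w[A == 0]))

print(round(naive, 2), round(ipw, 2))   # the IPW estimate should be close to 1.0
```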
Graphical Models for Quasi-Experimental Designs
ERIC Educational Resources Information Center
Steiner, Peter M.; Kim, Yongnam; Hall, Courtney E.; Su, Dan
2017-01-01
Randomized controlled trials (RCTs) and quasi-experimental designs like regression discontinuity (RD) designs, instrumental variable (IV) designs, and matching and propensity score (PS) designs are frequently used for inferring causal effects. It is well known that the features of these designs facilitate the identification of a causal estimand…
Graphical Means for Inspecting Qualitative Models of System Behaviour
ERIC Educational Resources Information Center
Bouwer, Anders; Bredeweg, Bert
2010-01-01
This article presents the design and evaluation of a tool for inspecting conceptual models of system behaviour. The basis for this research is the Garp framework for qualitative simulation. This framework includes modelling primitives, such as entities, quantities and causal dependencies, which are combined into model fragments and scenarios.…
Estimating Causal Effects with Ancestral Graph Markov Models
Malinsky, Daniel; Spirtes, Peter
2017-01-01
We present an algorithm for estimating bounds on causal effects from observational data which combines graphical model search with simple linear regression. We assume that the underlying system can be represented by a linear structural equation model with no feedback, and we allow for the possibility of latent variables. Under assumptions standard in the causal search literature, we use conditional independence constraints to search for an equivalence class of ancestral graphs. Then, for each model in the equivalence class, we perform the appropriate regression (using causal structure information to determine which covariates to include in the regression) to estimate a set of possible causal effects. Our approach is based on the “IDA” procedure of Maathuis et al. (2009), which assumes that all relevant variables have been measured (i.e., no unmeasured confounders). We generalize their work by relaxing this assumption, which is often violated in applied contexts. We validate the performance of our algorithm on simulated data and demonstrate improved precision over IDA when latent variables are present. PMID:28217244
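The regression step borrowed from IDA can be pictured with a toy example: for each adjustment set compatible with the (estimated) equivalence class, regress the outcome on the treatment plus that set and record the treatment coefficient as one possible causal effect. The sketch below is illustrative only; variable names, data, and the candidate sets are invented rather than produced by an ancestral-graph search.

```python
import numpy as np
from itertools import combinations

# Hypothetical illustration of the IDA-style regression step.
rng = np.random.default_rng(2)
n = 10000
Z1 = rng.normal(size=n)
Z2 = rng.normal(size=n)
X = 0.7 * Z1 + rng.normal(size=n)
Y = 0.5 * X + 0.9 * Z1 + rng.normal(size=n)   # true effect of X on Y is 0.5
covs = {"Z1": Z1, "Z2": Z2}

def coef_of_x(adjust):
    cols = [X] + [covs[name] for name in adjust] + [np.ones(n)]
    beta, *_ = np.linalg.lstsq(np.column_stack(cols), Y, rcond=None)
    return beta[0]

# In a real run the candidate adjustment sets come from the estimated
# equivalence class; here we simply enumerate subsets of {Z1, Z2}.
effects = {adj: round(coef_of_x(adj), 2)
           for k in range(3) for adj in combinations(covs, k)}
print(effects)   # adjusting for Z1 recovers ~0.5; the empty set is confounded
```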
Causality, mediation and time: a dynamic viewpoint
Aalen, Odd O; Røysland, Kjetil; Gran, Jon Michael; Ledergerber, Bruno
2012-01-01
Summary. Time dynamics are often ignored in causal modelling. Clearly, causality must operate in time and we show how this corresponds to a mechanistic, or system, understanding of causality. The established counterfactual definitions of direct and indirect effects depend on an ability to manipulate the mediator which may not hold in practice, and we argue that a mechanistic view may be better. Graphical representations based on local independence graphs and dynamic path analysis are used to facilitate communication as well as providing an overview of the dynamic relations ‘at a glance’. The relationship between causality as understood in a mechanistic and in an interventionist sense is discussed. An example using data from the Swiss HIV Cohort Study is presented. PMID:23193356
Causal discovery in the geosciences-Using synthetic data to learn how to interpret results
NASA Astrophysics Data System (ADS)
Ebert-Uphoff, Imme; Deng, Yi
2017-02-01
Causal discovery algorithms based on probabilistic graphical models have recently emerged in geoscience applications for the identification and visualization of dynamical processes. The key idea is to learn the structure of a graphical model from observed spatio-temporal data, thus finding pathways of interactions in the observed physical system. Studying those pathways allows geoscientists to learn subtle details about the underlying dynamical mechanisms governing our planet. Initial studies using this approach on real-world atmospheric data have shown great potential for scientific discovery. However, in these initial studies no ground truth was available, so that the resulting graphs have been evaluated only by whether a domain expert thinks they seemed physically plausible. The lack of ground truth is a typical problem when using causal discovery in the geosciences. Furthermore, while most of the connections found by this method match domain knowledge, we encountered one type of connection for which no explanation was found. To address both of these issues we developed a simulation framework that generates synthetic data of typical atmospheric processes (advection and diffusion). Applying the causal discovery algorithm to the synthetic data allowed us (1) to develop a better understanding of how these physical processes appear in the resulting connectivity graphs, and thus how to better interpret such connectivity graphs when obtained from real-world data; (2) to solve the mystery of the previously unexplained connections.
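A stripped-down version of the same idea, with invented numbers, is to synthesize an advective link (the signal at one location is a delayed, noisy copy of the signal at another) and check at which lag a dependence-based method places the connection; the paper's framework of course handles full advection and diffusion fields rather than two scalar series.

```python
import numpy as np

# Toy synthetic "advection": B is a delayed, noisy copy of A (true lag = 5).
rng = np.random.default_rng(3)
n, true_lag = 2000, 5
A = np.zeros(n)
for t in range(1, n):                    # autocorrelated driver signal
    A[t] = 0.6 * A[t - 1] + rng.normal()
B = np.empty(n)
B[:true_lag] = rng.normal(size=true_lag)
B[true_lag:] = 0.9 * A[:-true_lag] + 0.3 * rng.normal(size=n - true_lag)

lags = range(1, 11)
corrs = [np.corrcoef(A[:-k], B[k:])[0, 1] for k in lags]
best_lag, best_corr = max(zip(lags, corrs), key=lambda t: abs(t[1]))
print(best_lag, round(best_corr, 2))     # dependence peaks at the transport lag
```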
Interactions of information transfer along separable causal paths
NASA Astrophysics Data System (ADS)
Jiang, Peishi; Kumar, Praveen
2018-04-01
Complex systems arise as a result of interdependences between multiple variables, whose causal interactions can be visualized in a time-series graph. Transfer entropy and information partitioning approaches have been used to characterize such dependences. However, these approaches capture net information transfer occurring through a multitude of pathways involved in the interaction and as a result mask our ability to discern the causal interaction within a subgraph of interest through specific pathways. We build on recent developments of momentary information transfer along causal paths proposed by Runge [Phys. Rev. E 92, 062829 (2015), 10.1103/PhysRevE.92.062829] to develop a framework for quantifying information partitioning along separable causal paths. Momentary information transfer along causal paths captures the amount of information transfer between any two variables lagged at two specific points in time. Our approach expands this concept to characterize the causal interaction in terms of synergistic, unique, and redundant information transfer through separable causal paths. Through a graphical model, we analyze the impact of the separable and nonseparable causal paths and the causality structure embedded in the graph as well as the noise effect on information partitioning by using synthetic data generated from two coupled logistic equation models. Our approach can provide a valuable reference for an autonomous information partitioning along separable causal paths which form a causal subgraph influencing a target.
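As a concrete (and much simplified) reference point, the sketch below estimates a lag-1 transfer entropy, i.e. the conditional mutual information I(Y_t; X_{t-1} | Y_{t-1}), by plug-in counting on binarized data from two coupled logistic maps; the coupling scheme and parameters are invented, and the momentary-information and partitioning machinery of the paper goes well beyond this.

```python
import numpy as np
from collections import Counter

# Hypothetical plug-in estimate of lag-1 transfer entropy on binarized data
# from two coupled logistic maps (coupling X -> Y only; parameters made up).
rng = np.random.default_rng(4)
n = 50000
x = np.empty(n); y = np.empty(n)
x[0], y[0] = 0.4, 0.6
for t in range(1, n):
    x[t] = 3.8 * x[t - 1] * (1 - x[t - 1])
    y[t] = 0.7 * 3.8 * y[t - 1] * (1 - y[t - 1]) + 0.3 * x[t - 1]
xb = (x > np.median(x)).astype(int)
yb = (y > np.median(y)).astype(int)

def transfer_entropy(src, dst):
    """TE(src -> dst) = I(dst_t ; src_{t-1} | dst_{t-1}), plug-in, in bits."""
    triples = Counter(zip(dst[1:], src[:-1], dst[:-1]))
    total = sum(triples.values())
    p_xyz = {k: v / total for k, v in triples.items()}
    def marg(idx):
        out = Counter()
        for k, p in p_xyz.items():
            out[tuple(k[i] for i in idx)] += p
        return out
    p_yz, p_xz, p_z = marg((0, 2)), marg((1, 2)), marg((2,))
    te = 0.0
    for (yt, xt1, yt1), p in p_xyz.items():
        te += p * np.log2(p * p_z[(yt1,)] / (p_yz[(yt, yt1)] * p_xz[(xt1, yt1)]))
    return te

print(round(transfer_entropy(xb, yb), 3), round(transfer_entropy(yb, xb), 3))
# TE(X -> Y) should exceed TE(Y -> X), reflecting the one-way coupling.
```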
Towards graphical causal structures
NASA Astrophysics Data System (ADS)
Paulsson, K. Johan
2012-12-01
Folowing recent work by R. Spekkens, M. Leifer and B. Coecke we investigate causal settings in a limited categorical version of the conditional density operator formalism. We particularly show how the compact structure for causal and acausal settings apply on the measurements of stabiliser theory.
Causal inference, probability theory, and graphical insights.
Baker, Stuart G
2013-11-10
Causal inference from observational studies is a fundamental topic in biostatistics. The causal graph literature typically views probability theory as insufficient to express causal concepts in observational studies. In contrast, the view here is that probability theory is a desirable and sufficient basis for many topics in causal inference for the following two reasons. First, probability theory is generally more flexible than causal graphs: Besides explaining such causal graph topics as M-bias (adjusting for a collider) and bias amplification and attenuation (when adjusting for an instrumental variable), probability theory is also the foundation of the paired availability design for historical controls, which does not fit into a causal graph framework. Second, probability theory is the basis for insightful graphical displays including the BK-Plot for understanding Simpson's paradox with a binary confounder, the BK2-Plot for understanding bias amplification and attenuation in the presence of an unobserved binary confounder, and the PAD-Plot for understanding the principal stratification component of the paired availability design. Published 2013. This article is a US Government work and is in the public domain in the USA.
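The BK-Plot's target, Simpson's paradox with a binary confounder, can be reproduced with a few invented counts: within each confounder stratum the treated group does better, yet the aggregate comparison reverses because treatment is unevenly distributed across strata.

```python
# Hypothetical counts illustrating Simpson's paradox with a binary confounder.
# Each entry is (events, total) per confounder stratum.
treated   = {"C=0": (8, 10),  "C=1": (20, 90)}
untreated = {"C=0": (28, 40), "C=1": (2, 10)}

def rate(events, total):
    return events / total

for c in ("C=0", "C=1"):
    print(c, rate(*treated[c]), "vs", rate(*untreated[c]))
# Within each stratum the treated do better (0.8 vs 0.7 and 0.22 vs 0.2) ...

agg_t = rate(*map(sum, zip(*treated.values())))
agg_u = rate(*map(sum, zip(*untreated.values())))
print("aggregate", round(agg_t, 2), "vs", round(agg_u, 2))
# ... yet the aggregate comparison reverses (0.28 vs 0.6), because treatment
# assignment is associated with the confounder C.
```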
Causal inference in biology networks with integrated belief propagation.
Chang, Rui; Karr, Jonathan R; Schadt, Eric E
2015-01-01
Inferring causal relationships among molecular and higher order phenotypes is a critical step in elucidating the complexity of living systems. Here we propose a novel method for inferring causality that is no longer constrained by the conditional dependency arguments that limit the ability of statistical causal inference methods to resolve causal relationships within sets of graphical models that are Markov equivalent. Our method utilizes Bayesian belief propagation to infer the responses of perturbation events on molecular traits given a hypothesized graph structure. A distance measure between the inferred response distribution and the observed data is defined to assess the 'fitness' of the hypothesized causal relationships. To test our algorithm, we infer causal relationships within equivalence classes of gene networks in which the possible functional interactions are assumed to be nonlinear, given synthetic microarray and RNA sequencing data. We also apply our method to infer causality in a real metabolic network with a v-structure and a feedback loop. We show that our method can recapitulate the causal structure and recover the feedback loop from steady-state data alone, which conventional methods cannot.
Graphical Models for Quasi-Experimental Designs
ERIC Educational Resources Information Center
Kim, Yongnam; Steiner, Peter M.; Hall, Courtney E.; Su, Dan
2016-01-01
Experimental and quasi-experimental designs play a central role in estimating cause-effect relationships in education, psychology, and many other fields of the social and behavioral sciences. This paper presents and discusses the causal graphs of experimental and quasi-experimental designs. For quasi-experimental designs the authors demonstrate…
A review of causal inference for biomedical informatics
Kleinberg, Samantha; Hripcsak, George
2011-01-01
Causality is an important concept throughout the health sciences and is particularly vital for informatics work such as finding adverse drug events or risk factors for disease using electronic health records. While philosophers and scientists working for centuries on formalizing what makes something a cause have not reached a consensus, new methods for inference show that we can make progress in this area in many practical cases. This article reviews core concepts in understanding and identifying causality and then reviews current computational methods for inference and explanation, focusing on inference from large-scale observational data. While the problem is not fully solved, we show that graphical models and Granger causality provide useful frameworks for inference and that a more recent approach based on temporal logic addresses some of the limitations of these methods. PMID:21782035
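Of the frameworks reviewed, Granger causality is the easiest to demonstrate in a few lines. The sketch below uses simulated series (not health-record data) and the grangercausalitytests helper from statsmodels; treat it as a minimal illustration of the idea, not of the temporal-logic approach the review ultimately highlights.

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

# Minimal Granger-causality sketch on simulated data: x drives y with lag 1.
rng = np.random.default_rng(5)
n = 500
x = rng.normal(size=n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.5 * rng.normal()

# grangercausalitytests expects a 2-column array and tests whether the second
# column Granger-causes the first.
data = np.column_stack([y, x])
res = grangercausalitytests(data, maxlag=2)
pval = res[1][0]["ssr_ftest"][1]       # p-value of the lag-1 F-test
print(f"p-value for 'x Granger-causes y' at lag 1: {pval:.3g}")
```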
Counterfactual Graphical Models for Longitudinal Mediation Analysis with Unobserved Confounding
ERIC Educational Resources Information Center
Shpitser, Ilya
2013-01-01
Questions concerning mediated causal effects are of great interest in psychology, cognitive science, medicine, social science, public health, and many other disciplines. For instance, about 60% of recent papers published in leading journals in social psychology contain at least one mediation test (Rucker, Preacher, Tormala, & Petty, 2011).…
The Gaussian Graphical Model in Cross-Sectional and Time-Series Data.
Epskamp, Sacha; Waldorp, Lourens J; Mõttus, René; Borsboom, Denny
2018-04-16
We discuss the Gaussian graphical model (GGM; an undirected network of partial correlation coefficients) and detail its utility as an exploratory data analysis tool. The GGM shows which variables predict one another, allows for sparse modeling of covariance structures, and may highlight potential causal relationships between observed variables. We describe the utility in three kinds of psychological data sets: data sets in which consecutive cases are assumed independent (e.g., cross-sectional data), temporally ordered data sets (e.g., n = 1 time series), and a mixture of the 2 (e.g., n > 1 time series). In time-series analysis, the GGM can be used to model the residual structure of a vector-autoregression analysis (VAR), also termed graphical VAR. Two network models can then be obtained: a temporal network and a contemporaneous network. When analyzing data from multiple subjects, a GGM can also be formed on the covariance structure of stationary means: the between-subjects network. We discuss the interpretation of these models and propose estimation methods to obtain these networks, which we implement in the R packages graphicalVAR and mlVAR. The methods are showcased in two empirical examples, and simulation studies on these methods are included in the supplementary materials.
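A bare-bones version of the graphical-VAR recipe described here (fit a lag-1 VAR, then take partial correlations of the residuals) can be written directly with numpy; the sketch below uses simulated data and is only meant to show where the temporal and contemporaneous networks come from, not to replace the graphicalVAR or mlVAR estimators.

```python
import numpy as np

# Hypothetical graphical-VAR sketch: lag-1 VAR by least squares, then a GGM
# (partial correlations) on the residuals. Simulated data, invented parameters.
rng = np.random.default_rng(6)
n, p = 2000, 3
B = np.array([[0.4, 0.0, 0.2],
              [0.3, 0.3, 0.0],
              [0.0, 0.0, 0.5]])                    # true temporal effects
cov_e = 0.25 * np.array([[1.0, 0.4, 0.0],
                         [0.4, 1.0, 0.0],
                         [0.0, 0.0, 1.0]])         # contemporaneous structure
E = rng.multivariate_normal(np.zeros(p), cov_e, size=n)
Y = np.zeros((n, p))
for t in range(1, n):
    Y[t] = Y[t - 1] @ B.T + E[t]

X_lag, X_now = Y[:-1], Y[1:]
design = np.column_stack([X_lag, np.ones(n - 1)])
coef, *_ = np.linalg.lstsq(design, X_now, rcond=None)
temporal = coef[:p].T                              # estimated lag-1 network

resid = X_now - design @ coef
prec = np.linalg.inv(np.cov(resid, rowvar=False))
d = np.sqrt(np.diag(prec))
contemporaneous = -prec / np.outer(d, d)           # residual partial correlations
np.fill_diagonal(contemporaneous, 1.0)

print(np.round(temporal, 2))          # recovers B (temporal network)
print(np.round(contemporaneous, 2))   # recovers the 1-2 contemporaneous edge
```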
Causal mapping of emotion networks in the human brain: Framework and initial findings.
Dubois, Julien; Oya, Hiroyuki; Tyszka, J Michael; Howard, Matthew; Eberhardt, Frederick; Adolphs, Ralph
2017-11-13
Emotions involve many cortical and subcortical regions, prominently including the amygdala. It remains unknown how these multiple network components interact, and it remains unknown how they cause the behavioral, autonomic, and experiential effects of emotions. Here we describe a framework for combining a novel technique, concurrent electrical stimulation with fMRI (es-fMRI), together with a novel analysis, inferring causal structure from fMRI data (causal discovery). We outline a research program for investigating human emotion with these new tools, and provide initial findings from two large resting-state datasets as well as case studies in neurosurgical patients with electrical stimulation of the amygdala. The overarching goal is to use causal discovery methods on fMRI data to infer causal graphical models of how brain regions interact, and then to further constrain these models with direct stimulation of specific brain regions and concurrent fMRI. We conclude by discussing limitations and future extensions. The approach could yield anatomical hypotheses about brain connectivity, motivate rational strategies for treating mood disorders with deep brain stimulation, and could be extended to animal studies that use combined optogenetic fMRI. Copyright © 2017 Elsevier Ltd. All rights reserved.
Is Age Just a Number? Cognitive Reserve as a Predictor of Divergent Thinking in Late Adulthood
ERIC Educational Resources Information Center
Meléndez, Juan C.; Alfonso-Benlliure, Vicente; Mayordomo, Teresa; Sales, Alicia
2016-01-01
The purpose of the study was to test a model of causal relationships among cognitive reserve (CR), personality variables such as Neuroticism and Openness to experience, and divergent thinking (DT), independently evaluating performance in different domains (verbal and graphic). It was hypothesized that CR, Openness, and Neuroticism would each…
CAUSAL INFERENCE WITH A GRAPHICAL HIERARCHY OF INTERVENTIONS
Shpitser, Ilya; Tchetgen, Eric Tchetgen
2017-01-01
Identifying causal parameters from observational data is fraught with subtleties due to the issues of selection bias and confounding. In addition, more complex questions of interest, such as effects of treatment on the treated and mediated effects may not always be identified even in data where treatment assignment is known and under investigator control, or may be identified under one causal model but not another. Increasingly complex effects of interest, coupled with a diversity of causal models in use resulted in a fragmented view of identification. This fragmentation makes it unnecessarily difficult to determine if a given parameter is identified (and in what model), and what assumptions must hold for this to be the case. This, in turn, complicates the development of estimation theory and sensitivity analysis procedures. In this paper, we give a unifying view of a large class of causal effects of interest, including novel effects not previously considered, in terms of a hierarchy of interventions, and show that identification theory for this large class reduces to an identification theory of random variables under interventions from this hierarchy. Moreover, we show that one type of intervention in the hierarchy is naturally associated with queries identified under the Finest Fully Randomized Causally Interpretable Structure Tree Graph (FFRCISTG) model of Robins (via the extended g-formula), and another is naturally associated with queries identified under the Non-Parametric Structural Equation Model with Independent Errors (NPSEM-IE) of Pearl, via a more general functional we call the edge g-formula. Our results motivate the study of estimation theory for the edge g-formula, since we show it arises both in mediation analysis, and in settings where treatment assignment has unobserved causes, such as models associated with Pearl’s front-door criterion. PMID:28919652
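For context, the two textbook identification results that the g-formula machinery here generalizes can be stated compactly; the display below uses standard do-notation and is an orientation aid, not the paper's edge g-formula.

```latex
% Back-door (g-formula) and front-door adjustment in standard do-notation.
\begin{align*}
\text{back-door:}  \quad & p(y \mid \mathrm{do}(a)) = \sum_{c} p(y \mid a, c)\, p(c),\\
\text{front-door:} \quad & p(y \mid \mathrm{do}(a)) = \sum_{m} p(m \mid a) \sum_{a'} p(y \mid a', m)\, p(a').
\end{align*}
```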
Intelligent diagnosis of jaundice with dynamic uncertain causality graph model.
Hao, Shao-Rui; Geng, Shi-Chao; Fan, Lin-Xiao; Chen, Jia-Jia; Zhang, Qin; Li, Lan-Juan
2017-05-01
Jaundice is a common and complex clinical symptom potentially occurring in hepatology, general surgery, pediatrics, infectious diseases, gynecology, and obstetrics, and it is fairly difficult to distinguish the cause of jaundice in clinical practice, especially for general practitioners in less developed regions. With collaboration between physicians and artificial intelligence engineers, a comprehensive knowledge base relevant to jaundice was created based on demographic information, symptoms, physical signs, laboratory tests, imaging diagnosis, medical histories, and risk factors. Then a diagnostic modeling and reasoning system using the dynamic uncertain causality graph was proposed. A modularized modeling scheme was presented to reduce the complexity of model construction, providing multiple perspectives and arbitrary granularity for disease causality representations. A "chaining" inference algorithm and weighted logic operation mechanism were employed to guarantee the exactness and efficiency of diagnostic reasoning under situations of incomplete and uncertain information. Moreover, the causal interactions among diseases and symptoms intuitively demonstrated the reasoning process in a graphical manner. Verification was performed using 203 randomly pooled clinical cases, and the accuracy was 99.01% and 84.73%, respectively, with or without laboratory tests in the model. The solutions were more explicable and convincing than common methods such as Bayesian Networks, further increasing the objectivity of clinical decision-making. The promising results indicated that our model could be potentially used in intelligent diagnosis and help decrease public health expenditure.
Data-driven confounder selection via Markov and Bayesian networks.
Häggström, Jenny
2018-06-01
To unbiasedly estimate a causal effect on an outcome, unconfoundedness is often assumed. If there is sufficient knowledge on the underlying causal structure then existing confounder selection criteria can be used to select subsets of the observed pretreatment covariates, X, sufficient for unconfoundedness, if such subsets exist. Here, estimation of these target subsets is considered when the underlying causal structure is unknown. The proposed method is to model the causal structure by a probabilistic graphical model, for example, a Markov or Bayesian network, estimate this graph from observed data and select the target subsets given the estimated graph. The approach is evaluated by simulation both in a high-dimensional setting where unconfoundedness holds given X and in a setting where unconfoundedness only holds given subsets of X. Several common target subsets are investigated and the selected subsets are compared with respect to accuracy in estimating the average causal effect. The proposed method is implemented with existing software that can easily handle high-dimensional data, in terms of large samples and large number of covariates. The results from the simulation study show that, if unconfoundedness holds given X, this approach is very successful in selecting the target subsets, outperforming alternative approaches based on random forests and LASSO, and that the estimated subset targeting all causes of the outcome yields the smallest MSE in the average causal effect estimation. © 2017, The International Biometric Society.
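The practical upshot of the subset comparison, namely that adjusting for causes of the outcome tends to beat adjusting for causes of treatment only, can be seen in a small simulation; the sketch below uses plain regression adjustment on invented data and is not the paper's Markov/Bayesian-network selection procedure itself.

```python
import numpy as np

# Hypothetical simulation: compare adjustment sets for a binary treatment T.
rng = np.random.default_rng(7)

def one_estimate(adjust_outcome_cause, adjust_treatment_cause, n=2000):
    X1 = rng.normal(size=n)                       # causes the outcome only
    X2 = rng.normal(size=n)                       # causes treatment only
    T = rng.binomial(1, 1 / (1 + np.exp(-1.0 * X2)))
    Y = 1.0 * T + 1.5 * X1 + rng.normal(size=n)   # true effect = 1.0
    cols = [T]
    if adjust_outcome_cause:
        cols.append(X1)
    if adjust_treatment_cause:
        cols.append(X2)
    cols.append(np.ones(n))
    beta, *_ = np.linalg.lstsq(np.column_stack(cols), Y, rcond=None)
    return beta[0]

for label, args in [("adjust X1 (outcome cause)", (True, False)),
                    ("adjust X2 (treatment cause)", (False, True))]:
    est = [one_estimate(*args) for _ in range(200)]
    print(label, "mean", round(np.mean(est), 2), "sd", round(np.std(est), 3))
# Both are unbiased here, but adjusting for the outcome cause gives a smaller
# standard deviation, i.e., a lower-MSE estimate of the average causal effect.
```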
Fantini, Bernardino
2006-01-01
From its first proposal, the Central Dogma had a graphical form, complete with arrows of different types, and this form quickly became its standard presentation. In different scientific contexts, arrows have different meanings and in this particular case the arrows indicated the flow of information among different macromolecules. A deeper analysis illustrates that the arrows also imply a causal statement, directly connected to the causal role of genetic information. The author suggests a distinction between two different kinds of causal links, defined as 'physical causality' and 'biological determination', both implied in the production of biological specificity.
THE CAUSAL ANALYSIS / DIAGNOSIS DECISION ...
CADDIS is an on-line decision support system that helps investigators in the regions, states and tribes find, access, organize, use and share information to produce causal evaluations in aquatic systems. It is based on the US EPA's Stressor Identification process which is a formal method for identifying causes of impairments in aquatic systems. CADDIS 2007 increases access to relevant information useful for causal analysis and provides methods and tools that practitioners can use to analyze their own data. The new Candidate Cause section provides overviews of commonly encountered causes of impairments to aquatic systems: metals, sediments, nutrients, flow alteration, temperature, ionic strength, and low dissolved oxygen. CADDIS includes new Conceptual Models that illustrate the relationships from sources to stressors to biological effects. An Interactive Conceptual Model for phosphorus links the diagram with supporting literature citations. The new Analyzing Data section helps practitioners analyze their data sets and interpret and use those results as evidence within the USEPA causal assessment process. Downloadable tools include a graphical user interface statistical package (CADStat), and programs for use with the freeware R statistical package, and a Microsoft Excel template. These tools can be used to quantify associations between causes and biological impairments using innovative methods such as species-sensitivity distributions, biological inference…
A causal examination of the effects of confounding factors on multimetric indices
Schoolmaster, Donald R.; Grace, James B.; Schweiger, E. William; Mitchell, Brian R.; Guntenspergen, Glenn R.
2013-01-01
The development of multimetric indices (MMIs) as a means of providing integrative measures of ecosystem condition is becoming widespread. An increasingly recognized problem for the interpretability of MMIs is controlling for the potentially confounding influences of environmental covariates. Most common approaches to handling covariates are based on simple notions of statistical control, leaving the causal implications of covariates and their adjustment unstated. In this paper, we use graphical models to examine some of the potential impacts of environmental covariates on the observed signals between human disturbance and potential response metrics. Using simulations based on various causal networks, we show how environmental covariates can both obscure and exaggerate the effects of human disturbance on individual metrics. We then examine from a causal interpretation standpoint the common practice of adjusting ecological metrics for environmental influences using only the set of sites deemed to be in reference condition. We present and examine the performance of an alternative approach to metric adjustment that uses the whole set of sites and models both environmental and human disturbance effects simultaneously. The findings from our analyses indicate that failing to model and adjust metrics can result in a systematic bias towards those metrics in which environmental covariates function to artificially strengthen the metric–disturbance relationship resulting in MMIs that do not accurately measure impacts of human disturbance. We also find that a “whole-set modeling approach” requires fewer assumptions and is more efficient with the given information than the more commonly applied “reference-set” approach.
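The obscuring/exaggerating behaviour described above is easy to reproduce in a toy simulation (all numbers invented): an environmental covariate E drives both disturbance D and a metric M, and depending on the sign of its effect on M the marginal D-M slope is inflated or attenuated, while a model that includes D and E together (the whole-set idea) recovers the direct effect.

```python
import numpy as np

# Toy simulation: covariate E affects both disturbance D and metric M.
rng = np.random.default_rng(8)
n = 5000
E = rng.normal(size=n)                       # e.g., elevation or watershed area

def simulate(e_effect_on_M):
    D = 0.7 * E + rng.normal(size=n)         # disturbance co-varies with E
    M = -1.0 * D + e_effect_on_M * E + rng.normal(size=n)
    return D, M

for e_effect in (-1.0, +1.0):                # -1 exaggerates, +1 obscures
    D, M = simulate(e_effect)
    marginal = np.polyfit(D, M, 1)[0]        # metric ~ disturbance only
    both, *_ = np.linalg.lstsq(np.column_stack([D, E, np.ones(n)]), M, rcond=None)
    print(f"E effect {e_effect:+.0f}: marginal slope {marginal:+.2f}, "
          f"joint-model slope {both[0]:+.2f} (true -1.0)")
```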
Graphical Models for Recovering Probabilistic and Causal Queries from Missing Data
2014-11-01
Only a fragment of the abstract is available: the results are applied to problems of attrition, in which missingness is a severe obstacle to sound inferences (e.g., due to a collider path between Y and Ry); related work, including deletion-based methods such as listwise deletion, is discussed.
Hip fracture in the elderly: a re-analysis of the EPIDOS study with causal Bayesian networks.
Caillet, Pascal; Klemm, Sarah; Ducher, Michel; Aussem, Alexandre; Schott, Anne-Marie
2015-01-01
Hip fractures commonly result in permanent disability, institutionalization or death in the elderly. Existing hip-fracture prediction tools are underused in clinical practice, partly due to their lack of intuitive interpretation. By use of a graphical layer, Bayesian network models could increase the attractiveness of fracture prediction tools. Our aim was to study the potential contribution of a causal Bayesian network in this clinical setting. A logistic regression was performed as a standard control approach to check the robustness of the causal Bayesian network approach. EPIDOS is a multicenter study, conducted in an ambulatory care setting in five French cities between 1992 and 1996 and updated in 2010. The study included 7598 women aged 75 years or older, in whom fractures were assessed quarterly during 4 years. A causal Bayesian network and a logistic regression were performed on EPIDOS data to describe the major variables involved in hip fracture occurrence. Both models had similar association estimations and predictive performances. They detected gait speed and mineral bone density as the variables most involved in the fracture process. The causal Bayesian network showed that gait speed and bone mineral density were directly connected to fracture and seem to mediate the influence of all the other variables included in our model. The logistic regression approach detected multiple interactions involving psychotropic drug use, age and bone mineral density. Both approaches retrieved similar variables as predictors of hip fractures. However, the Bayesian network highlighted the whole web of relations between the variables involved in the analysis, suggesting a possible mechanism leading to hip fracture. According to the latter results, intervention focusing concomitantly on gait speed and bone mineral density may be necessary for an optimal prevention of hip fracture occurrence in elderly people.
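As a reference for the "standard control approach" mentioned above, here is a minimal logistic-regression sketch on simulated data; the variable names echo the abstract (gait speed, bone mineral density), but every number is invented and nothing here uses the EPIDOS data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical logistic-regression sketch with simulated risk factors.
rng = np.random.default_rng(9)
n = 7598
gait_speed = rng.normal(1.0, 0.25, size=n)            # m/s
bmd = rng.normal(0.75, 0.12, size=n)                  # g/cm^2
logit = -2.0 - 2.5 * (gait_speed - 1.0) - 6.0 * (bmd - 0.75)
fracture = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = np.column_stack([gait_speed, bmd])
model = LogisticRegression().fit(X, fracture)
print("coefficients (gait speed, BMD):", np.round(model.coef_[0], 2))
print("odds ratio per 0.1 m/s slower gait:",
      round(float(np.exp(-0.1 * model.coef_[0][0])), 2))
```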
Three essays on price dynamics and causations among energy markets and macroeconomic information
NASA Astrophysics Data System (ADS)
Hong, Sung Wook
This dissertation examines three important issues in energy markets: price dynamics, information flow, and structural change. We discuss each issue in detail, building empirical time series models, analyzing the results, and interpreting the findings. First, we examine the contemporaneous interdependencies and information flows among crude oil, natural gas, and electricity prices in the United States (US) through the multivariate generalized autoregressive conditional heteroscedasticity (MGARCH) model, Directed Acyclic Graph (DAG) for contemporaneous causal structures and Bernanke factorization for price dynamic processes. Test results show that the DAG from residuals of out-of-sample-forecast is consistent with the DAG from residuals of within-sample-fit. The result supports innovation accounting analysis based on DAGs using residuals of out-of-sample-forecast. Second, we look at the effects of the federal fund rate and/or WTI crude oil price shock on US macroeconomic and financial indicators by using a Factor Augmented Vector Autoregression (FAVAR) model and a graphical model without any deductive assumption. The results show that, in contemporaneous time, the federal fund rate shock is exogenous as the identifying assumption in the Vector Autoregression (VAR) framework of the monetary shock transmission mechanism, whereas the WTI crude oil price return is not exogenous. Third, we examine price dynamics and contemporaneous causality among the price returns of WTI crude oil, gasoline, corn, and the S&P 500. We look for structural break points and then build an econometric model to find the consistent sub-periods having stable parameters in a given VAR framework and to explain recent movements and interdependency among returns. We found strong evidence of two structural breaks and contemporaneous causal relationships among the residuals, but also significant differences between contemporaneous causal structures for each sub-period.
Spatial-temporal causal modeling: a data centric approach to climate change attribution (Invited)
NASA Astrophysics Data System (ADS)
Lozano, A. C.
2010-12-01
Attribution of climate change has been predominantly based on simulations using physical climate models. These approaches rely heavily on the employed models and are thus subject to their shortcomings. Given the physical models’ limitations in describing the complex system of climate, we propose an alternative approach to climate change attribution that is data centric in the sense that it relies on actual measurements of climate variables and human and natural forcing factors. We present a novel class of methods to infer causality from spatial-temporal data, as well as a procedure to incorporate extreme value modeling into our methodology in order to address the attribution of extreme climate events. We develop a collection of causal modeling methods using spatio-temporal data that combine graphical modeling techniques with the notion of Granger causality. “Granger causality” is an operational definition of causality from econometrics, which is based on the premise that if a variable causally affects another, then the past values of the former should be helpful in predicting the future values of the latter. In its basic version, our methodology makes use of the spatial relationship between the various data points, but treats each location as being identically distributed and builds a unique causal graph that is common to all locations. A more flexible framework is then proposed that is less restrictive than having a single causal graph common to all locations, while avoiding the brittleness due to data scarcity that might arise if one were to independently learn a different graph for each location. The solution we propose can be viewed as finding a middle ground by partitioning the locations into subsets that share the same causal structures and pooling the observations from all the time series belonging to the same subset in order to learn more robust causal graphs. More precisely, we make use of relationships between locations (e.g. neighboring relationship) by defining a relational graph in which related locations are connected (note that this relational graph, which represents relationships among the different locations, is distinct from the causal graph, which represents causal relationships among the individual variables - e.g. temperature, pressure- within a multivariate time series). We then define a hidden Markov Random Field (hMRF), assigning a hidden state to each node (location), with the state assignment guided by the prior information encoded in the relational graph. Nodes that share the same state in the hMRF model will have the same causal graph. State assignment can thus shed light on unknown relations among locations (e.g. teleconnection). While the model has been described in terms of hard location partitioning to facilitate its exposition, in fact a soft partitioning is maintained throughout learning. This leads to a form of transfer learning, which makes our model applicable even in situations where partitioning the locations might not seem appropriate. We first validate the effectiveness of our methodology on synthetic datasets, and then apply it to actual climate measurement data. The experimental results show that our approach offers a useful alternative to the simulation-based approach for climate modeling and attribution, and has the capability to provide valuable scientific insights from a new perspective.
Global drivers of the stratospheric polar vortex via nonlinear causal discovery
NASA Astrophysics Data System (ADS)
Kretschmer, M.; Runge, J.; Coumou, D.
2016-12-01
The stratospheric polar vortex plays a major role in the Northern Hemisphere midlatitudes, especially in driving extreme weather conditions. Many different global drivers, from Arctic sea ice to tropical climate patterns, are hypothesized to influence its stability, including linear and nonlinear mechanisms. Here a novel causal discovery approach is demonstrated, extending previous work [1] and adapted to the particular challenges posed by such a high-dimensional dataset comprising multiple, possibly nonlinearly coupled time series. While links in the reconstructed network can be called causal only with respect to the set of analyzed variables, the absence of causal links allows one to assess where physical mechanisms are unlikely. The present work confirms recent results obtained with a similar, but linear, approach [2] regarding the impact of Barents and Kara sea ice concentrations, and extends the analysis to tropical drivers to cover more proposed mechanisms. [1] Jakob Runge, Vladimir Petoukhov, and Jürgen Kurths, 2014: Quantifying the Strength and Delay of Climatic Interactions: The Ambiguities of Cross Correlation and a Novel Measure Based on Graphical Models. J. Climate 27, 720-739, doi: 10.1175/JCLI-D-13-00159.1. [2] Marlene Kretschmer, Dim Coumou, Jonathan F. Donges, and Jakob Runge, 2016: Using Causal Effect Networks to Analyze Different Arctic Drivers of Midlatitude Winter Circulation. J. Climate 29, 4069-4081, doi: 10.1175/JCLI-D-15-0654.1.
NASA Technical Reports Server (NTRS)
1972-01-01
The detailed abort sequence trees for the reference zirconium hydride (ZrH) reactor power module that have been generated for each phase of the reference Space Base program mission are presented. The trees are graphical representations of causal sequences. Each tree begins with the phase identification and the dichotomy between success and failure. The success branch shows the mission phase objective as being achieved. The failure branch is subdivided, as conditions require, into various primary initiating abort conditions.
Zhang, Qin
2015-07-01
Probabilistic graphical models (PGMs) such as the Bayesian network (BN) have been widely applied in uncertain causality representation and probabilistic reasoning. The dynamic uncertain causality graph (DUCG) is a newly presented type of PGM, which can be applied to fault diagnosis of large and complex industrial systems, disease diagnosis, and so on. The basic methodology of DUCG has been previously presented, in which only the directed acyclic graph (DAG) was addressed. However, the mathematical meaning of DUCG was not discussed. In this paper, the DUCG with directed cyclic graphs (DCGs) is addressed. In contrast, BN does not allow DCGs, as otherwise conditional independence would not be satisfied. The inference algorithm for the DUCG with DCGs is presented, which not only extends the capabilities of DUCG from DAGs to DCGs but also enables users to decompose a large and complex DUCG into a set of small, simple sub-DUCGs, so that a large and complex knowledge base can be easily constructed, understood, and maintained. The basic mathematical definition of a complete DUCG, with or without DCGs, is proved to be a joint probability distribution (JPD) over a set of random variables. An incomplete DUCG, as a part of a complete DUCG, may represent a part of the JPD. Examples are provided to illustrate the methodology.
Ren, J; Jenkinson, I; Wang, J; Xu, D L; Yang, J B
2008-01-01
Focusing on people and organizations, this paper aims to contribute to offshore safety assessment by proposing a methodology to model causal relationships. The methodology is proposed in a general sense, so that it can accommodate the modeling of multiple risk factors considered in offshore operations and can deal with different types of data that may come from different sources. Reason's "Swiss cheese" model is used to form a generic offshore safety assessment framework, and Bayesian Network (BN) is tailored to fit into the framework to construct a causal relationship model. The proposed framework uses a five-level-structure model to address latent failures within the causal sequence of events. The five levels include Root causes level, Trigger events level, Incidents level, Accidents level, and Consequences level. To analyze and model the safety of a specified offshore installation, a BN model was established following the guideline of the proposed five-level framework. A range of events was specified, and the related prior and conditional probabilities regarding the BN model were assigned based on the inherent characteristics of each event. This paper shows that Reason's "Swiss cheese" model and BN can be jointly used in offshore safety assessment. On the one hand, the five-level conceptual model is enhanced by BNs, which are capable of providing a graphical demonstration of inter-relationships as well as calculating numerical values of occurrence likelihood for each failure event. The Bayesian inference mechanism also makes it possible to monitor how a safety situation changes when information flows travel forwards and backwards within the networks. On the other hand, BN modeling relies heavily on experts' personal experiences and is therefore highly domain specific. The "Swiss cheese" model is a theoretical framework based on solid behavioral theory and can therefore be used to provide industry with a roadmap for BN modeling and implications. A case study of the collision risk between a Floating Production, Storage and Offloading (FPSO) unit and authorized vessels caused by human and organizational factors (HOFs) during operations is used to illustrate an industrial application of the proposed methodology.
Verdugo, Ricardo A; Zeller, Tanja; Rotival, Maxime; Wild, Philipp S; Münzel, Thomas; Lackner, Karl J; Weidmann, Henri; Ninio, Ewa; Trégouët, David-Alexandre; Cambien, François; Blankenberg, Stefan; Tiret, Laurence
2013-01-01
Smoking is a risk factor for atherosclerosis with reported widespread effects on gene expression in circulating blood cells. We hypothesized that a molecular signature mediating the relation between smoking and atherosclerosis may be found in the transcriptome of circulating monocytes. Genome-wide expression profiles and counts of atherosclerotic plaques in carotid arteries were collected in 248 smokers and 688 non-smokers from the general population. Patterns of co-expressed genes were identified by Independent Component Analysis (ICA) and network structure of the pattern-specific gene modules was inferred by the PC-algorithm. A likelihood-based causality test was implemented to select patterns that fit models containing a path "smoking→gene expression→plaques". Robustness of the causal inference was assessed by bootstrapping. At an FDR ≤ 0.10, 3,368 genes were associated to smoking or plaques, of which 93% were associated to smoking only. SASH1 showed the strongest association to smoking and PPARG the strongest association to plaques. Twenty-nine gene patterns were identified by ICA. Modules containing SASH1 and PPARG did not show evidence for the "smoking→gene expression→plaques" causality model. Conversely, three modules had good support for causal effects and exhibited a network topology consistent with gene expression mediating the relation between smoking and plaques. The network with the strongest support for causal effects was connected to plaques through SLC39A8, a gene with known association to HDL-cholesterol and cellular uptake of cadmium from tobacco, while smoking was directly connected to GAS6, a gene reported to have anti-inflammatory effects in atherosclerosis and to be up-regulated in the placenta of women smoking during pregnancy. Our analysis of the transcriptome of monocytes recovered genes relevant for association to smoking and atherosclerosis, and connected genes that were previously studied only in separate contexts. Inspection of correlation structure revealed candidates that would be missed by expression-phenotype association analysis alone.
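The first analysis step described above (decomposing the expression matrix into independent components and screening their activations against the exposure) can be sketched with scikit-learn's FastICA on simulated data; the network-inference and causality-testing steps of the paper are not reproduced here, and all names and numbers below are invented.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Hypothetical sketch: ICA decomposition of a simulated expression matrix,
# followed by screening component activations for association with smoking.
rng = np.random.default_rng(10)
n_subjects, n_genes = 300, 200
smoking = rng.binomial(1, 0.3, size=n_subjects)
pattern = rng.normal(size=n_genes)                 # one smoking-driven module
expr = (rng.normal(size=(n_subjects, n_genes))
        + np.outer(2.0 * smoking + rng.normal(size=n_subjects), pattern))

ica = FastICA(n_components=10, random_state=0)
activations = ica.fit_transform(expr)              # subjects x components

# Rank components by correlation of their activation with smoking status.
cors = [abs(np.corrcoef(activations[:, k], smoking)[0, 1]) for k in range(10)]
print("most smoking-associated component:", int(np.argmax(cors)),
      "corr =", round(max(cors), 2))
```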
Verdugo, Ricardo A.; Zeller, Tanja; Rotival, Maxime; Wild, Philipp S.; Münzel, Thomas; Lackner, Karl J.; Weidmann, Henri; Ninio, Ewa; Trégouët, David-Alexandre; Cambien, François; Blankenberg, Stefan; Tiret, Laurence
2013-01-01
Smoking is a risk factor for atherosclerosis with reported widespread effects on gene expression in circulating blood cells. We hypothesized that a molecular signature mediating the relation between smoking and atherosclerosis may be found in the transcriptome of circulating monocytes. Genome-wide expression profiles and counts of atherosclerotic plaques in carotid arteries were collected in 248 smokers and 688 non-smokers from the general population. Patterns of co-expressed genes were identified by Independent Component Analysis (ICA) and network structure of the pattern-specific gene modules was inferred by the PC-algorithm. A likelihood-based causality test was implemented to select patterns that fit models containing a path “smoking→gene expression→plaques”. Robustness of the causal inference was assessed by bootstrapping. At a FDR ≤0.10, 3,368 genes were associated to smoking or plaques, of which 93% were associated to smoking only. SASH1 showed the strongest association to smoking and PPARG the strongest association to plaques. Twenty-nine gene patterns were identified by ICA. Modules containing SASH1 and PPARG did not show evidence for the “smoking→gene expression→plaques” causality model. Conversely, three modules had good support for causal effects and exhibited a network topology consistent with gene expression mediating the relation between smoking and plaques. The network with the strongest support for causal effects was connected to plaques through SLC39A8, a gene with known association to HDL-cholesterol and cellular uptake of cadmium from tobacco, while smoking was directly connected to GAS6, a gene reported to have anti-inflammatory effects in atherosclerosis and to be up-regulated in the placenta of women smoking during pregnancy. Our analysis of the transcriptome of monocytes recovered genes relevant for association to smoking and atherosclerosis, and connected genes that before, were only studied in separate contexts. Inspection of correlation structure revealed candidates that would be missed by expression-phenotype association analysis alone. PMID:23372645
A Correlational Study of Graphic Organizers and Science Achievement of English Language Learners
NASA Astrophysics Data System (ADS)
Clarke, William Gordon
English language learners (ELLs) demonstrate lower academic performance and have lower graduation and higher dropout rates than their non-ELL peers. The primary purpose of this correlational quantitative study was to investigate the relationship between the use of graphic organizer-infused science instruction and science learning of high school ELLs. Another objective was to determine if the method of instruction, socioeconomic status (SES), gender, and English language proficiency (ELP) were predictors of academic achievement of high school ELLs. Data were gathered from a New York City (NYC) high school's fall 2012-2013 archival records of 145 ninth-grade ELLs who had received biology instruction in freestanding English as a second language (ESL) classes, followed by a test of their learning of the material. Fifty-four (37.2%) of these records were of students who had learned science by the conventional textbook method, and 91 (62.8%) by using graphic organizers. Data analysis employed the Statistical Package for the Social Sciences (SPSS) software for multiple regression analysis, which found graphic organizer use to be a significant predictor of New York State Regents Living Environment (NYSRLE) test scores (p < .01). One significant regression model was returned whereby, when combined, the four predictor variables (method of instruction, SES, gender, and ELP) explained 36% of the variance of the NYSRLE score. Implications of the study findings noted graphic organizer use as advantageous for ELL science achievement. Recommendations made for practice were for (a) the adoption of graphic organizer-infused instruction, (b) establishment of a protocol for the implementation of graphic organizer-infused instruction, and (c) increased length of graphic organizer instructional time. Recommendations made for future research were (a) a replication quantitative correlational study in two or more high schools, (b) a quasi-experimental quantitative study to determine the influence of graphic organizer instructional intervention on ELL science achievement, (c) a quantitative quasi-experimental study to determine the effect of teacher-based factors on graphic organizer-infused instruction, and (d) a causal comparative study to determine the efficacy of graphic organizer use in testing modifications for high school ELL science.
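As an illustration of the kind of multiple regression reported above, the sketch below fits an OLS model on simulated data with hypothetical predictors (graphic-organizer instruction, SES, gender, ELP); the coefficients and data are invented and bear no relation to the study's archival records.

```python
# Illustrative only: an OLS multiple regression of the kind described, run on
# simulated data with hypothetical variable names; coefficients are invented.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 145
df = pd.DataFrame({
    "graphic_organizer": rng.integers(0, 2, n),  # 1 = graphic organizer-infused instruction
    "ses": rng.integers(0, 2, n),                # 1 = low-SES indicator (assumed coding)
    "gender": rng.integers(0, 2, n),
    "elp": rng.normal(0.0, 1.0, n),              # standardized English language proficiency
})
df["score"] = (55 + 8 * df["graphic_organizer"] + 5 * df["elp"]
               + rng.normal(0.0, 10.0, n))       # simulated Regents-style score

fit = smf.ols("score ~ graphic_organizer + ses + gender + elp", data=df).fit()
print(fit.summary())   # coefficients, p-values, and R-squared (the study's analogue explained 36%)
```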
Discovering graphical Granger causality using the truncating lasso penalty
Shojaie, Ali; Michailidis, George
2010-01-01
Motivation: Components of biological systems interact with each other in order to carry out vital cell functions. Such information can be used to improve estimation and inference, and to obtain better insights into the underlying cellular mechanisms. Discovering regulatory interactions among genes is therefore an important problem in systems biology. Whole-genome expression data over time provides an opportunity to determine how the expression levels of genes are affected by changes in transcription levels of other genes, and can therefore be used to discover regulatory interactions among genes. Results: In this article, we propose a novel penalization method, called truncating lasso, for estimation of causal relationships from time-course gene expression data. The proposed penalty can correctly determine the order of the underlying time series, and improves the performance of the lasso-type estimators. Moreover, the resulting estimate provides information on the time lag between activation of transcription factors and their effects on regulated genes. We provide an efficient algorithm for estimation of model parameters, and show that the proposed method can consistently discover causal relationships in the large p, small n setting. The performance of the proposed model is evaluated favorably in simulated, as well as real, data examples. Availability: The proposed truncating lasso method is implemented in the R-package ‘grangerTlasso’ and is freely available at http://www.stat.lsa.umich.edu/∼shojaie/ Contact: shojaie@umich.edu PMID:20823316
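The sketch below illustrates the general lasso-Granger idea on simulated series: regress each series on lagged values of all series and treat nonzero coefficients as candidate causal edges. It uses a plain lasso from scikit-learn, not the truncating penalty or the grangerTlasso package described above.

```python
# A lasso-Granger sketch on simulated series (plain scikit-learn lasso, not the
# truncating penalty): nonzero lagged coefficients are read as candidate edges.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
T, p, lag = 200, 5, 2
X = rng.normal(size=(T, p))
for t in range(1, T):                  # plant one true effect: series 0 drives series 1
    X[t, 1] += 0.8 * X[t - 1, 0]

Y = X[lag:]                                                   # responses at time t
Z = np.hstack([X[lag - k:T - k] for k in range(1, lag + 1)])  # predictors at t-1, ..., t-lag

edges = []
for j in range(p):
    coef = Lasso(alpha=0.1).fit(Z, Y[:, j]).coef_.reshape(lag, p)
    for k in range(lag):
        for i in range(p):
            if abs(coef[k, i]) > 1e-6:
                edges.append((i, j, k + 1))   # series i -> series j at lag k+1
print(edges)                                  # should recover (0, 1, 1)
```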
Guidelines for a graph-theoretic implementation of structural equation modeling
Grace, James B.; Schoolmaster, Donald R.; Guntenspergen, Glenn R.; Little, Amanda M.; Mitchell, Brian R.; Miller, Kathryn M.; Schweiger, E. William
2012-01-01
Structural equation modeling (SEM) is increasingly being chosen by researchers as a framework for gaining scientific insights from the quantitative analyses of data. New ideas and methods emerging from the study of causality, influences from the field of graphical modeling, and advances in statistics are expanding the rigor, capability, and even purpose of SEM. Guidelines for implementing the expanded capabilities of SEM are currently lacking. In this paper we describe new developments in SEM that we believe constitute a third-generation of the methodology. Most characteristic of this new approach is the generalization of the structural equation model as a causal graph. In this generalization, analyses are based on graph theoretic principles rather than analyses of matrices. Also, new devices such as metamodels and causal diagrams, as well as an increased emphasis on queries and probabilistic reasoning, are now included. Estimation under a graph theory framework permits the use of Bayesian or likelihood methods. The guidelines presented start from a declaration of the goals of the analysis. We then discuss how theory frames the modeling process, requirements for causal interpretation, model specification choices, selection of estimation method, model evaluation options, and use of queries, both to summarize retrospective results and for prospective analyses. The illustrative example presented involves monitoring data from wetlands on Mount Desert Island, home of Acadia National Park. Our presentation walks through the decision process involved in developing and evaluating models, as well as drawing inferences from the resulting prediction equations. In addition to evaluating hypotheses about the connections between human activities and biotic responses, we illustrate how the structural equation (SE) model can be queried to understand how interventions might take advantage of an environmental threshold to limit Typha invasions. The guidelines presented provide for an updated definition of the SEM process that subsumes the historical matrix approach under a graph-theory implementation. The implementation is also designed to permit complex specifications and to be compatible with various estimation methods. Finally, they are meant to foster the use of probabilistic reasoning in both retrospective and prospective considerations of the quantitative implications of the results.
Carriger, John F; Dyson, Brian E; Benson, William H
2018-01-15
This article develops and explores a methodology for using qualitative influence diagrams in environmental policy and management to support decision making efforts that minimize risk and increase resiliency. Influence diagrams are representations of the conditional aspects of a problem domain. Their graphical properties are useful for structuring causal knowledge relevant to policy interventions and can be used to enhance inference and inclusivity of multiple viewpoints. Qualitative components of influence diagrams are beneficial tools for identifying and examining the interactions among the critical variables in complex policy development and implementation. Policy interventions on social-environmental systems can be intuitively diagrammed for representing knowledge of critical relationships among economic, environmental, and social attributes. Examples relevant to coastal resiliency issues in the U.S. Gulf Coast region are developed to illustrate model structures for developing qualitative influence diagrams useful for clarifying important policy intervention issues and enhancing transparency in decision making. This article is protected by copyright. All rights reserved.
Bayesian network modelling of upper gastrointestinal bleeding
NASA Astrophysics Data System (ADS)
Aisha, Nazziwa; Shohaimi, Shamarina; Adam, Mohd Bakri
2013-09-01
Bayesian networks are graphical probabilistic models that represent causal and other relationships between domain variables. In the context of medical decision making, these models have been explored to help in medical diagnosis and prognosis. In this paper, we discuss the Bayesian network formalism in building medical support systems and we learn a tree-augmented naive Bayes network (TAN) from gastrointestinal bleeding (GIB) data. The accuracy of the TAN in classifying the source of gastrointestinal bleeding as upper or lower is obtained. The TAN achieves a high classification accuracy of 86% and an area under the curve of 92%. A sensitivity analysis of the model shows relatively high levels of entropy reduction for color of the stool, history of gastrointestinal bleeding, consistency, and the ratio of blood urea nitrogen to creatinine. The TAN facilitates the identification of the source of GIB and requires further validation.
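As a simplified stand-in for the TAN classifier described above (TAN is not available in scikit-learn), the sketch below trains a categorical naive Bayes model on simulated symptom indicators and reports accuracy and AUC; feature names and data-generating probabilities are invented.

```python
# Simplified stand-in for a TAN classifier: categorical naive Bayes on
# simulated, invented symptom indicators (1 = upper gastrointestinal source).
import numpy as np
from sklearn.naive_bayes import CategoricalNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, roc_auc_score

rng = np.random.default_rng(2)
n = 500
upper = rng.integers(0, 2, n)                                    # true bleeding source
melena = (rng.random(n) < 0.2 + 0.6 * upper).astype(int)         # black stool, more likely for upper source
history = (rng.random(n) < 0.3 + 0.2 * upper).astype(int)        # prior GI bleeding
high_bun_cr = (rng.random(n) < 0.25 + 0.5 * upper).astype(int)   # elevated BUN/creatinine ratio
X = np.column_stack([melena, history, high_bun_cr])

X_tr, X_te, y_tr, y_te = train_test_split(X, upper, test_size=0.3, random_state=0)
clf = CategoricalNB().fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```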
Augmenting the Refutation Text Effect with Analogies and Graphics
ERIC Educational Resources Information Center
Danielson, Robert W.; Sinatra, Gale M.; Kendeou, Panayiota
2016-01-01
Refutation texts have been shown to be effective at promoting knowledge revision. It has been suggested that refutation texts are most effective when the misconception and the correct information are co-activated and integrated with causal networks that support the correct information. We explored two augmentations to a refutation text that might…
Bounded-Degree Approximations of Stochastic Networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Quinn, Christopher J.; Pinar, Ali; Kiyavash, Negar
2017-06-01
We propose algorithms to approximate directed information graphs. Directed information graphs are probabilistic graphical models that depict causal dependencies between stochastic processes in a network. The proposed algorithms identify optimal and near-optimal approximations in terms of Kullback-Leibler divergence. The user-chosen sparsity trades off the quality of the approximation against visual conciseness and computational tractability. One class of approximations contains graphs with specified in-degrees. Another class additionally requires that the graph is connected. For both classes, we propose algorithms to identify the optimal approximations and also near-optimal approximations, using a novel relaxation of submodularity. We also propose algorithms to identify the r-best approximations among these classes, enabling robust decision making.
Causal judgments about empirical information in an interrupted time series design.
White, Peter A
2016-07-19
Empirical information available for causal judgment in everyday life tends to take the form of quasi-experimental designs, lacking control groups, more than the form of contingency information that is usually presented in experiments. Stimuli were presented in which values of an outcome variable for a single individual were recorded over six time periods, and an intervention was introduced between the fifth and sixth time periods. Participants judged whether and how much the intervention affected the outcome. With numerical stimulus information, judgments were higher for a pre-intervention profile in which all values were the same than for pre-intervention profiles with any other kind of trend. With graphical stimulus information, judgments were more sensitive to trends, tending to be higher when an increase after the intervention was preceded by a decreasing series than when it was preceded by an increasing series ending on the same value at the fifth time period. It is suggested that a feature-analytic model, in which the salience of different features of information varies between presentation formats, may provide the best prospect of explaining the results.
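For readers unfamiliar with how such quasi-experimental data are usually analysed statistically, the sketch below runs a segmented (interrupted time series) regression with a level-change term on six invented observations mirroring the five pre-intervention periods and one post-intervention period; it is not the judgment model studied in the paper.

```python
# Segmented (interrupted time series) regression on six invented observations:
# five flat pre-intervention values and one post-intervention value.
import numpy as np
import statsmodels.api as sm

y = np.array([10.0, 10.0, 10.0, 10.0, 10.0, 16.0])
time = np.arange(1, 7, dtype=float)
post = (time >= 6).astype(float)          # indicator for the post-intervention period

X = sm.add_constant(np.column_stack([time, post]))
fit = sm.OLS(y, X).fit()
print(fit.params)   # [baseline level, underlying trend, estimated intervention effect]
```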
Assessing Sensitivity to Unmeasured Confounding Using a Simulated Potential Confounder
ERIC Educational Resources Information Center
Carnegie, Nicole Bohme; Harada, Masataka; Hill, Jennifer L.
2016-01-01
A major obstacle to developing evidenced-based policy is the difficulty of implementing randomized experiments to answer all causal questions of interest. When using a nonexperimental study, it is critical to assess how much the results could be affected by unmeasured confounding. We present a set of graphical and numeric tools to explore the…
Development and Testing of Data Mining Algorithms for Earth Observation
NASA Technical Reports Server (NTRS)
Glymour, Clark
2005-01-01
The new algorithms developed under this project included a principled procedure for classification of objects, events or circumstances according to a target variable when a very large number of potential predictor variables is available but the number of cases that can be used for training a classifier is relatively small. These "high dimensional" problems require finding a minimal set of variables, called the Markov Blanket, sufficient for predicting the value of the target variable. An algorithm, the Markov Blanket Fan Search, was developed, implemented and tested on both simulated and real data in conjunction with a graphical model classifier, which was also implemented. Another algorithm developed and implemented in TETRAD IV for time series elaborated on work by C. Granger and N. Swanson, which in turn exploited some of our earlier work. The algorithms in question learn a linear time series model from data. Given such a time series, the simultaneous residual covariances, after factoring out time dependencies, may provide information about causal processes that occur more rapidly than the time series representation allows, so-called simultaneous or contemporaneous causal processes. Working with A. Monetta, a graduate student from Italy, we produced the correct statistics for estimating the contemporaneous causal structure from time series data using the TETRAD IV suite of algorithms. Two economists, David Bessler and Kevin Hoover, have independently published applications using TETRAD-style algorithms for the same purpose. These implementations and algorithmic developments were separately used in two kinds of studies of climate data: short time series of geographically proximate climate variables predicting agricultural effects in California, and longer-duration climate measurements of temperature teleconnections.
On Crowd-verification of Biological Networks
Ansari, Sam; Binder, Jean; Boue, Stephanie; Di Fabio, Anselmo; Hayes, William; Hoeng, Julia; Iskandar, Anita; Kleiman, Robin; Norel, Raquel; O’Neel, Bruce; Peitsch, Manuel C.; Poussin, Carine; Pratt, Dexter; Rhrissorrakrai, Kahn; Schlage, Walter K.; Stolovitzky, Gustavo; Talikka, Marja
2013-01-01
Biological networks with a structured syntax are a powerful way of representing biological information generated from high density data; however, they can become unwieldy to manage as their size and complexity increase. This article presents a crowd-verification approach for the visualization and expansion of biological networks. Web-based graphical interfaces allow visualization of causal and correlative biological relationships represented using Biological Expression Language (BEL). Crowdsourcing principles enable participants to communally annotate these relationships based on literature evidence. Gamification principles are incorporated to further engage domain experts throughout biology to gather robust peer-reviewed information from which relationships can be identified and verified. The resulting network models will represent the current status of biological knowledge within the defined boundaries, here processes related to human lung disease. These models are amenable to computational analysis. For some period following the conclusion of the challenge, the published models will remain available for continuous use and expansion by the scientific community. PMID:24151423
EEG and MEG Data Analysis in SPM8
Litvak, Vladimir; Mattout, Jérémie; Kiebel, Stefan; Phillips, Christophe; Henson, Richard; Kilner, James; Barnes, Gareth; Oostenveld, Robert; Daunizeau, Jean; Flandin, Guillaume; Penny, Will; Friston, Karl
2011-01-01
SPM is free and open-source software written in MATLAB (The MathWorks, Inc.). In addition to standard M/EEG preprocessing, we presently offer three main analysis tools: (i) statistical analysis of scalp-maps, time-frequency images, and volumetric 3D source reconstruction images based on the general linear model, with correction for multiple comparisons using random field theory; (ii) Bayesian M/EEG source reconstruction, including support for group studies, simultaneous EEG and MEG, and fMRI priors; (iii) dynamic causal modelling (DCM), an approach combining neural modelling with data analysis for which there are several variants dealing with evoked responses, steady state responses (power spectra and cross-spectra), induced responses, and phase coupling. SPM8 is integrated with the FieldTrip toolbox, making it possible for users to combine a variety of standard analysis methods with new schemes implemented in SPM and build custom analysis tools using powerful graphical user interface (GUI) and batching tools. PMID:21437221
Data-driven Modeling of Metal-oxide Sensors with Dynamic Bayesian Networks
NASA Astrophysics Data System (ADS)
Gosangi, Rakesh; Gutierrez-Osuna, Ricardo
2011-09-01
We present a data-driven probabilistic framework to model the transient response of MOX sensors modulated with a sequence of voltage steps. Analytical models of MOX sensors are usually built based on the physico-chemical properties of the sensing materials. Although building these models provides an insight into the sensor behavior, they also require a thorough understanding of the underlying operating principles. Here we propose a data-driven approach to characterize the dynamical relationship between sensor inputs and outputs. Namely, we use dynamic Bayesian networks (DBNs), probabilistic models that represent temporal relations between a set of random variables. We identify a set of control variables that influence the sensor responses, create a graphical representation that captures the causal relations between these variables, and finally train the model with experimental data. We validated the approach on experimental data in terms of predictive accuracy and classification performance. Our results show that DBNs can accurately predict the dynamic response of MOX sensors, as well as capture the discriminatory information present in the sensor transients.
Computer-Aided Experiment Planning toward Causal Discovery in Neuroscience.
Matiasz, Nicholas J; Wood, Justin; Wang, Wei; Silva, Alcino J; Hsu, William
2017-01-01
Computers help neuroscientists to analyze experimental results by automating the application of statistics; however, computer-aided experiment planning is far less common, due to a lack of similar quantitative formalisms for systematically assessing evidence and uncertainty. While ontologies and other Semantic Web resources help neuroscientists to assimilate required domain knowledge, experiment planning requires not only ontological but also epistemological (e.g., methodological) information regarding how knowledge was obtained. Here, we outline how epistemological principles and graphical representations of causality can be used to formalize experiment planning toward causal discovery. We outline two complementary approaches to experiment planning: one that quantifies evidence per the principles of convergence and consistency, and another that quantifies uncertainty using logical representations of constraints on causal structure. These approaches operationalize experiment planning as the search for an experiment that either maximizes evidence or minimizes uncertainty. Despite work in laboratory automation, humans must still plan experiments and will likely continue to do so for some time. There is thus a great need for experiment-planning frameworks that are not only amenable to machine computation but also useful as aids in human reasoning.
Graphical fault tree analysis for fatal falls in the construction industry.
Chi, Chia-Fen; Lin, Syuan-Zih; Dewi, Ratna Sari
2014-11-01
The current study applied a fault tree analysis to represent the causal relationships among events and causes that contributed to fatal falls in the construction industry. Four hundred and eleven work-related fatalities in the Taiwanese construction industry were analyzed in terms of age, gender, experience, falling site, falling height, company size, and the causes for each fatality. Given that most fatal accidents involve multiple events, the current study coded up to a maximum of three causes for each fall fatality. After the Boolean algebra and minimal cut set analyses, accident causes associated with each falling site can be presented as a fault tree to provide an overview of the basic causes, which could trigger fall fatalities in the construction industry. Graphical icons were designed for each falling site along with the associated accident causes to illustrate the fault tree in a graphical manner. A graphical fault tree can improve inter-disciplinary discussion of risk management and the communication of accident causation to first line supervisors. Copyright © 2014 Elsevier Ltd. All rights reserved.
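The sketch below illustrates, with invented events and probabilities, how basic-event probabilities combine through AND/OR gates into a top-event probability and how minimal cut sets summarize a toy fall-fatality tree; it is not the 411-case analysis reported above.

```python
# Toy fault tree with invented events and probabilities: basic events combine
# through AND/OR gates into a top event, summarized by minimal cut sets.
p = {"no_guardrail": 0.10, "unhooked_harness": 0.20, "fragile_surface": 0.05}

def p_and(*events):            # AND gate, assuming independent basic events
    prob = 1.0
    for e in events:
        prob *= p[e]
    return prob

def p_or(*probs):              # OR gate via the complement rule
    q = 1.0
    for pr in probs:
        q *= 1.0 - pr
    return 1.0 - q

# Top event: fall from a roof edge (no guardrail AND unhooked harness)
#            OR fall through a fragile surface.
top = p_or(p_and("no_guardrail", "unhooked_harness"), p["fragile_surface"])
print(f"P(fatal fall) = {top:.4f}")

# Minimal cut sets of this toy tree (smallest combinations that trigger the top event):
print([{"no_guardrail", "unhooked_harness"}, {"fragile_surface"}])
```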
Training Systems Modelers through the Development of a Multi-scale Chagas Disease Risk Model
NASA Astrophysics Data System (ADS)
Hanley, J.; Stevens-Goodnight, S.; Kulkarni, S.; Bustamante, D.; Fytilis, N.; Goff, P.; Monroy, C.; Morrissey, L. A.; Orantes, L.; Stevens, L.; Dorn, P.; Lucero, D.; Rios, J.; Rizzo, D. M.
2012-12-01
The goal of our NSF-sponsored Division of Behavioral and Cognitive Sciences grant is to create a multidisciplinary approach to develop spatially explicit models of vector-borne disease risk using Chagas disease as our model. Chagas disease is a parasitic disease endemic to Latin America that afflicts an estimated 10 million people. The causative agent (Trypanosoma cruzi) is most commonly transmitted to humans by blood feeding triatomine insect vectors. Our objectives are: (1) advance knowledge on the multiple interacting factors affecting the transmission of Chagas disease, and (2) provide next generation genomic and spatial analysis tools applicable to the study of other vector-borne diseases worldwide. This funding is a collaborative effort between the RSENR (UVM), the School of Engineering (UVM), the Department of Biology (UVM), the Department of Biological Sciences (Loyola (New Orleans)) and the Laboratory of Applied Entomology and Parasitology (Universidad de San Carlos). Throughout this five-year study, multi-educational groups (i.e., high school, undergraduate, graduate, and postdoctoral) will be trained in systems modeling. This systems approach challenges students to incorporate environmental, social, and economic as well as technical aspects and enables modelers to simulate and visualize topics that would either be too expensive, complex or difficult to study directly (Yasar and Landau 2003). We launch this research by developing a set of multi-scale, epidemiological models of Chagas disease risk using STELLA® software v.9.1.3 (isee systems, inc., Lebanon, NH). We use this particular system dynamics software as a starting point because of its simple graphical user interface (e.g., behavior-over-time graphs, stock/flow diagrams, and causal loops). To date, high school and undergraduate students have created a set of multi-scale (i.e., homestead, village, and regional) disease models. Modeling the system at multiple spatial scales forces recognition that the system's structure generates its behavior; and STELLA®'s graphical interface allows researchers at multiple educational levels to observe patterns and trends as the system changes over time. Graduate students and postdoctoral researchers will utilize these initial models to more efficiently communicate and transfer knowledge across disciplines prior to generating more novel and complex disease risk models. The hope is that these models will improve causal viewpoints, understanding of the system patterns, and how to best mitigate disease risk across multiple spatial scales. Yasar O, Landau RH (2003) Elements of computational science and engineering education. Siam Review 45(4): 787-805.
Carriger, John F; Dyson, Brian E; Benson, William H
2018-05-01
This article develops and explores a methodology for using qualitative influence diagrams in environmental policy and management to support decision-making efforts that minimize risk and increase resiliency. Influence diagrams are representations of the conditional aspects of a problem domain. Their graphical properties are useful for structuring causal knowledge relevant to policy interventions and can be used to enhance inference and inclusivity of multiple viewpoints. Qualitative components of influence diagrams are beneficial tools for identifying and examining the interactions among the critical variables in complex policy development and implementation. Policy interventions on social-environmental systems can be intuitively diagrammed for representing knowledge of critical relationships among economic, environmental, and social attributes. Examples relevant to coastal resiliency issues in the US Gulf Coast region are developed to illustrate model structures for developing qualitative influence diagrams useful for clarifying important policy intervention issues and enhancing transparency in decision making. Integr Environ Assess Manag 2018;14:381-394. Published 2018. This article is a US Government work and is in the public domain in the USA.
McNally, Richard J.; Heeren, Alexandre; Robinaugh, Donald J.
2017-01-01
Background: The network approach to mental disorders offers a novel framework for conceptualizing posttraumatic stress disorder (PTSD) as a causal system of interacting symptoms. Objective: In this study, we extended this work by estimating the structure of relations among PTSD symptoms in adults reporting personal histories of childhood sexual abuse (CSA; N = 179). Method: We employed two complementary methods. First, using the graphical LASSO, we computed a sparse, regularized partial correlation network revealing associations (edges) between pairs of PTSD symptoms (nodes). Next, using a Bayesian approach, we computed a directed acyclic graph (DAG) to estimate a directed, potentially causal model of the relations among symptoms. Results: For the first network, we found that physiological reactivity to reminders of trauma, dreams about the trauma, and loss of interest in previously enjoyed activities were highly central nodes. However, stability analyses suggest that these findings were unstable across subsets of our sample. The DAG suggests that becoming physiologically reactive and upset in response to reminders of the trauma may be key drivers of other symptoms in adult survivors of CSA. Conclusions: Our study illustrates the strengths and limitations of these network analytic approaches to PTSD. PMID:29038690
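The first analytic step described above (a regularized partial correlation network) can be sketched as follows with scikit-learn's graphical lasso on simulated data; the number of "symptoms" and the data-generating process are invented, and the DAG step is omitted.

```python
# Graphical lasso on simulated data: estimate a sparse precision matrix and
# convert it to partial correlations (the edge weights of a symptom network).
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(3)
n, k = 179, 5                                           # 179 respondents, 5 toy "symptoms"
latent = rng.normal(size=(n, 1))
X = 0.7 * latent + rng.normal(scale=0.5, size=(n, k))   # correlated symptom scores

prec = GraphicalLasso(alpha=0.05).fit(X).precision_
d = np.sqrt(np.diag(prec))
partial_corr = -prec / np.outer(d, d)                   # off-diagonal partial correlations
np.fill_diagonal(partial_corr, 1.0)
print(np.round(partial_corr, 2))
```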
Causal Video Object Segmentation From Persistence of Occlusions
2015-05-01
Strategic Options Development and Analysis
NASA Astrophysics Data System (ADS)
Ackermann, Fran; Eden, Colin
Strategic Options Development and Analysis (SODA) enables a group or individual to construct a graphical representation of a problematic situation, and thus explore options and their ramifications with respect to a complex system of goals or objectives. In addition, the method aims to help groups arrive at a negotiated agreement about how to act to resolve the situation. It is based upon the use of causal mapping - a formally constructed means-ends network - as the representation form. Because the picture has been constructed using the natural language of the problem owners, it becomes a model of the situation that is 'owned' by those who define the problem. The use of formalities for the construction of the model makes it amenable to a range of analyses as well as encouraging reflection and a deeper understanding. These analyses can be used in a 'rough and ready' manner by visual inspection or through the use of specialist causal mapping software (Decision Explorer). Each of the analyses helps a group or individual discover important features of the problem situation, and these features facilitate agreeing a good solution. The SODA process is aimed at helping a group learn about the situation they face before they reach agreements. Most significantly, the exploration through the causal map leads to a higher probability of more creative solutions and promotes solutions that are more likely to be implemented because the problem construction process is wider and more likely to include richer social dimensions about the blockages to action and organizational change. The basic theories that inform SODA derive from cognitive psychology and social negotiation, where the model acts as a continuously changing representation of the problematic situation - changing as the views of a person or group shift through learning and exploration. This chapter, jointly written by two leading practitioner academics and the original developers of SODA, Colin Eden and Fran Ackermann, describes the SODA techniques as they are applied in practice.
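The kind of analysis a causal map supports can be sketched with a generic graph library; the toy means-ends statements below are invented and the code is not Decision Explorer, but it shows how centrality and reachability queries expose important features of a map.

```python
# A toy causal (means-ends) map as a directed graph, using networkx rather
# than Decision Explorer; statements are invented.
import networkx as nx

G = nx.DiGraph()
G.add_edges_from([
    ("train staff", "fewer errors"),
    ("invest in IT", "fewer errors"),
    ("fewer errors", "customer satisfaction"),
    ("customer satisfaction", "retain market share"),   # goal
])

centrality = nx.betweenness_centrality(G)
print("most central concepts:", sorted(centrality, key=centrality.get, reverse=True)[:2])
print("consequences of 'train staff':", list(nx.descendants(G, "train staff")))
```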
Emergent 1D Ising Behavior in an Elementary Cellular Automaton Model
NASA Astrophysics Data System (ADS)
Kassebaum, Paul G.; Iannacchione, Germano S.
The fundamental nature of an evolving one-dimensional (1D) Ising model is investigated with an elementary cellular automaton (CA) simulation. The emergent CA simulation employs an ensemble of cells in one spatial dimension, each cell capable of two microstates, interacting with simple nearest-neighbor rules and incorporating an external field. The behavior of the CA model provides insight into the dynamics of coupled two-state systems not expressible by exact analytical solutions. For instance, state progression graphs show the causal dynamics of a system through time in relation to the system's entropy. Unique graphical analysis techniques are introduced through difference patterns, diffusion patterns, and state progression graphs of the 1D ensemble visualizing the evolution. All analyses are consistent with the known behavior of the 1D Ising system. The CA simulation and new pattern recognition techniques are scalable (in dimension, complexity, and size) and have many potential applications such as complex design of materials, control of agent systems, and evolutionary mechanism design.
Learning to learn causal models.
Kemp, Charles; Goodman, Noah D; Tenenbaum, Joshua B
2010-09-01
Learning to understand a single causal system can be an achievement, but humans must learn about multiple causal systems over the course of a lifetime. We present a hierarchical Bayesian framework that helps to explain how learning about several causal systems can accelerate learning about systems that are subsequently encountered. Given experience with a set of objects, our framework learns a causal model for each object and a causal schema that captures commonalities among these causal models. The schema organizes the objects into categories and specifies the causal powers and characteristic features of these categories and the characteristic causal interactions between categories. A schema of this kind allows causal models for subsequent objects to be rapidly learned, and we explore this accelerated learning in four experiments. Our results confirm that humans learn rapidly about the causal powers of novel objects, and we show that our framework accounts better for our data than alternative models of causal learning. Copyright © 2010 Cognitive Science Society, Inc.
Wolff, Phillip; Barbey, Aron K.
2015-01-01
Causal composition allows people to generate new causal relations by combining existing causal knowledge. We introduce a new computational model of such reasoning, the force theory, which holds that people compose causal relations by simulating the processes that join forces in the world, and compare this theory with the mental model theory (Khemlani et al., 2014) and the causal model theory (Sloman et al., 2009), which explain causal composition on the basis of mental models and structural equations, respectively. In one experiment, the force theory was uniquely able to account for people's ability to compose causal relationships from complex animations of real-world events. In three additional experiments, the force theory did as well as or better than the other two theories in explaining the causal compositions people generated from linguistically presented causal relations. Implications for causal learning and the hierarchical structure of causal knowledge are discussed. PMID:25653611
Causal discovery and inference: concepts and recent methodological advances.
Spirtes, Peter; Zhang, Kun
This paper aims to give a broad coverage of central concepts and principles involved in automated causal inference and emerging approaches to causal discovery from i.i.d. data and from time series. After reviewing concepts including manipulations, causal models, sample predictive modeling, causal predictive modeling, and structural equation models, we present the constraint-based approach to causal discovery, which relies on the conditional independence relationships in the data, and discuss the assumptions underlying its validity. We then focus on causal discovery based on structural equation models, in which a key issue is the identifiability of the causal structure implied by appropriately defined structural equation models: in the two-variable case, under what conditions (and why) is the causal direction between the two variables identifiable? We show that the independence between the error term and causes, together with appropriate structural constraints on the structural equation, makes it possible. Next, we report some recent advances in causal discovery from time series. Assuming that the causal relations are linear with non-Gaussian noise, we mention two problems which are traditionally difficult to solve, namely causal discovery from subsampled data and that in the presence of confounding time series. Finally, we list a number of open questions in the field of causal discovery and inference.
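The two-variable identifiability argument sketched above can be illustrated numerically: with a linear relation and non-Gaussian (here uniform) noise, regressing in the causal direction leaves residuals independent of the regressor, whereas the reverse regression does not. The crude dependence check below (correlation of squared values) is an assumption of this toy sketch, not a method from the paper.

```python
# Toy illustration of two-variable causal direction identification under
# linear, non-Gaussian (uniform) noise: only the true direction leaves the
# residual independent of the regressor. Independence is checked crudely via
# the correlation of squared values (an assumption of this sketch).
import numpy as np

rng = np.random.default_rng(4)
n = 5000
x = rng.uniform(-1, 1, n)                 # non-Gaussian cause
y = 1.5 * x + 0.5 * rng.uniform(-1, 1, n)

def dependence_score(cause, effect):
    b = np.cov(cause, effect)[0, 1] / np.var(cause)
    resid = effect - b * cause
    return abs(np.corrcoef(cause ** 2, resid ** 2)[0, 1])

print("x -> y score:", dependence_score(x, y))   # close to 0 (true direction)
print("y -> x score:", dependence_score(y, x))   # clearly larger (wrong direction)
```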
A quantum causal discovery algorithm
NASA Astrophysics Data System (ADS)
Giarmatzi, Christina; Costa, Fabio
2018-03-01
Finding a causal model for a set of classical variables is now a well-established task—but what about the quantum equivalent? Even the notion of a quantum causal model is controversial. Here, we present a causal discovery algorithm for quantum systems. The input to the algorithm is a process matrix describing correlations between quantum events. Its output consists of different levels of information about the underlying causal model. Our algorithm determines whether the process is causally ordered by grouping the events into causally ordered non-signaling sets. It detects if all relevant common causes are included in the process, which we label Markovian, or alternatively if some causal relations are mediated through some external memory. For a Markovian process, it outputs a causal model, namely the causal relations and the corresponding mechanisms, represented as quantum states and channels. Our algorithm opens the route to more general quantum causal discovery methods.
Which causal structures might support a quantum-classical gap?
NASA Astrophysics Data System (ADS)
Pienaar, Jacques
2017-04-01
A causal scenario is a graph that describes the cause and effect relationships between all relevant variables in an experiment. A scenario is deemed 'not interesting' if there is no device-independent way to distinguish the predictions of classical physics from any generalised probabilistic theory (including quantum mechanics). Conversely, an interesting scenario is one in which there exists a gap between the predictions of different operational probabilistic theories, as occurs for example in Bell-type experiments. Henson, Lal and Pusey (HLP) recently proposed a sufficient condition for a causal scenario to not be interesting. In this paper we supplement their analysis with some new techniques and results. We first show that existing graphical techniques due to Evans can be used to confirm by inspection that many graphs are interesting without having to explicitly search for inequality violations. For three exceptional cases (the graphs numbered #15, #16 and #20 in HLP), we show that there exist non-Shannon-type entropic inequalities that imply these graphs are interesting. In doing so, we find that existing methods of entropic inequalities can be greatly enhanced by conditioning on the specific values of certain variables.
Staplin, Natalie; Herrington, William G; Judge, Parminder K; Reith, Christina A; Haynes, Richard; Landray, Martin J; Baigent, Colin; Emberson, Jonathan
2017-03-07
Observational studies often seek to estimate the causal relevance of an exposure to an outcome of interest. However, many possible biases can arise when estimating such relationships, in particular bias because of confounding. To control for confounding properly, careful consideration of the nature of the assumed relationships between the exposure, the outcome, and other characteristics is required. Causal diagrams provide a simple graphic means of displaying such relationships, describing the assumptions made, and allowing for the identification of a set of characteristics that should be taken into account (i.e., adjusted for) in any analysis. Furthermore, causal diagrams can be used to identify other possible sources of bias (such as selection bias), which, if understood from the outset, can inform the planning of appropriate analyses. In this article, we review the basic theory of causal diagrams and describe some of the methods available to identify which characteristics need to be taken into account when estimating the total effect of an exposure on an outcome. In doing so, we review the concept of collider bias and show how it is inappropriate to adjust for characteristics that may be influenced, directly or indirectly, by both the exposure and the outcome of interest. A motivating example is taken from the Study of Heart and Renal Protection, in which the relevance of smoking to progression to ESRD is considered. Copyright © 2017 by the American Society of Nephrology.
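Collider bias, as discussed above, can be demonstrated with a few lines of simulation: two independent variables become associated once we condition on (select on) their common effect. Variable names and parameters are generic.

```python
# Collider bias in a few lines: x and y are generated independently, yet
# selecting on their common effect c induces a spurious association.
import numpy as np

rng = np.random.default_rng(5)
n = 100_000
x = rng.normal(size=n)
y = rng.normal(size=n)                        # independent of x by construction
c = x + y + rng.normal(scale=0.5, size=n)     # collider: caused by both x and y

print("corr(x, y), everyone:     ", round(np.corrcoef(x, y)[0, 1], 3))
sel = c > 1.0                                 # conditioning/selecting on the collider
print("corr(x, y), given c > 1:  ", round(np.corrcoef(x[sel], y[sel])[0, 1], 3))
```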
Existing agricultural ecosystem in China leads to environmental pollution: an econometric approach.
Hongdou, Lei; Shiping, Li; Hao, Li
2018-06-17
Sustainable agriculture ensures food security and prevents starvation. However, the need to meet the increasing food demands of the growing population has led to poor and unsustainable agricultural practices, which promote environmental degradation. Given the contributions of agricultural ecosystems to environmental pollution, we investigated the impact of the agricultural ecosystem on environmental pollution in China using time series data from 1960 to 2014. We employed several methods for econometric analysis, including the unit root test, the Johansen test of cointegration, the Granger causality test, and a vector error correction model. Evidence based on the long-run elasticity indicates that a 1% increase in the carbon dioxide (CO2)-equivalent emissions of nitrous oxide from synthetic fertilizers will increase CO2 emissions by 1.52% in the long run. Similarly, a 1% increase in the area of harvested rice paddy, cereal production, biomass of burned crop residues, and agricultural GDP will increase carbon dioxide emissions by 0.85, 0.63, 0.37, and 0.22%, respectively. The estimated results indicate that there are long-term equilibrium relationships among the selected variables considered for the agricultural ecosystem and carbon dioxide emissions. In particular, we identified bidirectional causal associations between CO2 emissions, biomass of burned crop residues, and cereal production.
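A hedged sketch of the econometric toolkit mentioned above, run on simulated annual series rather than the paper's data: an augmented Dickey-Fuller unit-root test and a pairwise Granger causality test from statsmodels. The data-generating process and lag choice are invented.

```python
# Simulated annual series standing in for the paper's data: an ADF unit-root
# test on the level and a pairwise Granger causality test on first differences.
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller, grangercausalitytests

rng = np.random.default_rng(6)
T = 55                                                # roughly the 1960-2014 annual span
fert_n2o = np.cumsum(rng.normal(size=T))              # nonstationary driver series
co2 = np.zeros(T)
for t in range(1, T):
    co2[t] = co2[t - 1] + 0.5 * fert_n2o[t - 1] + rng.normal()

print("ADF p-value, CO2 level:", round(adfuller(co2)[1], 3))   # high -> nonstationary

data = pd.DataFrame({"co2": co2, "fert_n2o": fert_n2o}).diff().dropna()
# Null hypothesis: the second column does not Granger-cause the first column.
res = grangercausalitytests(data[["co2", "fert_n2o"]], maxlag=2)
print("Granger F-test p-value at lag 1:", round(res[1][0]["ssr_ftest"][1], 3))
```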
Designing Better Scaffolding in Teaching Complex Systems with Graphical Simulations
NASA Astrophysics Data System (ADS)
Li, Na
Complex systems are an important topic in science education today, but they are usually difficult for secondary-level students to learn. Although graphic simulations have many advantages in teaching complex systems, scaffolding is a critical factor for effective learning. This dissertation study was conducted around two complementary research questions on scaffolding: (1) How can we chunk and sequence learning activities in teaching complex systems? (2) How can we help students make connections among system levels across learning activities (level bridging)? With a sample of 123 seventh-graders, this study employed a 3x2 experimental design that factored sequencing methods (independent variable 1; three levels) with level-bridging scaffolding (independent variable 2; two levels) and compared the effectiveness of each combination. The study measured two dependent variables: (1) knowledge integration (i.e., integrating and connecting content-specific normative concepts and providing coherent scientific explanations); (2) understanding of the deep causal structure (i.e., being able to grasp and transfer the causal knowledge of a complex system). The study used a computer-based simulation environment as the research platform to teach the ideal gas law as a system. The ideal gas law is an emergent chemical system that has three levels: (1) experiential macro level (EM) (e.g., an aerosol can explodes when it is thrown into the fire); (2) abstract macro level (AM) (i.e., the relationships among temperature, pressure and volume); (3) micro level (Mi) (i.e., molecular activity). The sequencing methods of these levels were manipulated by changing the order in which they were delivered with three possibilities: (1) EM-AM-Mi; (2) Mi-AM-EM; (3) AM-Mi-EM. The level-bridging scaffolding variable was manipulated on two aspects: (1) inserting inter-level questions among learning activities; (2) two simulations dynamically linked in the final learning activity. Addressing the first research question, the Experiential macro-Abstract macro-Micro (EM-AM-Mi) sequencing method, following the "concrete to abstract" principle, produced better knowledge integration while the Micro-Abstract macro-Experiential macro (Mi-AM-EM) sequencing method, congruent with the causal direction of the emergent system, produced better understanding of the deep causal structure only when level-bridging scaffolding was provided. The Abstract macro-Micro-Experiential macro (AM-Mi-EM) sequencing method produced worse performance in general, because it did not follow the "concrete to abstract" principle, nor did it align with the causal structure of the emergent system. As to the second research question, the results showed that level-bridging scaffolding was important for both knowledge integration and understanding of the causal structure in learning the ideal gas law system.
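As a minimal numerical rendering of the abstract-macro level the simulation targets, the snippet below evaluates the ideal gas law PV = nRT for an assumed amount of gas and container volume while the temperature rises (as when an aerosol can is heated); the values are illustrative only.

```python
# Ideal gas law at the abstract-macro level: pressure rises with temperature
# at fixed volume and amount of gas. The numbers are illustrative assumptions.
R = 8.314        # J/(mol*K)
n_mol = 0.05     # assumed amount of gas in the can, mol
V = 5.0e-4       # assumed can volume, m^3

for T in (298.0, 498.0, 698.0):           # heating the can
    P = n_mol * R * T / V                 # PV = nRT
    print(f"T = {T:5.1f} K  ->  P = {P / 1e5:5.2f} bar")
```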
DYNAMO-HIA: a Dynamic Modeling tool for generic Health Impact Assessments.
Lhachimi, Stefan K; Nusselder, Wilma J; Smit, Henriette A; van Baal, Pieter; Baili, Paolo; Bennett, Kathleen; Fernández, Esteve; Kulik, Margarete C; Lobstein, Tim; Pomerleau, Joceline; Mackenbach, Johan P; Boshuizen, Hendriek C
2012-01-01
Currently, no standard tool is publicly available that allows researchers or policy-makers to quantify the impact of policies using epidemiological evidence within the causal framework of Health Impact Assessment (HIA). A standard tool should comply with three technical criteria (real-life population, dynamic projection, explicit risk-factor states) and three usability criteria (modest data requirements, rich model output, generally accessible) to be useful in the applied setting of HIA. With DYNAMO-HIA (Dynamic Modeling for Health Impact Assessment), we introduce such a generic software tool specifically designed to facilitate quantification in the assessment of the health impacts of policies. DYNAMO-HIA quantifies the impact of user-specified risk-factor changes on multiple diseases and in turn on overall population health, comparing one reference scenario with one or more intervention scenarios. The Markov-based modeling approach allows for explicit risk-factor states and simulation of a real-life population. A built-in parameter estimation module ensures that only standard population-level epidemiological evidence is required, i.e. data on incidence, prevalence, relative risks, and mortality. DYNAMO-HIA provides a rich output of summary measures (e.g. life expectancy and disease-free life expectancy) and detailed data (e.g. prevalences and mortality/survival rates) by age, sex, and risk-factor status over time. DYNAMO-HIA is controlled via a graphical user interface and is publicly available from the internet, ensuring general accessibility. We illustrate the use of DYNAMO-HIA with two example applications: a policy causing an overall increase in alcohol consumption and quantifying the disease-burden of smoking. By combining modest data needs with general accessibility and user friendliness within the causal framework of HIA, DYNAMO-HIA is a potential standard tool for health impact assessment based on epidemiologic evidence.
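The Markov-based projection that such tools perform can be sketched as follows with invented transition probabilities: a cohort moves annually between risk-factor and disease states, and an intervention scenario is compared against a reference scenario. This is a toy illustration, not DYNAMO-HIA's actual state space or parameter estimation.

```python
# Toy Markov projection with invented numbers: a cohort moves yearly between
# states, and an intervention scenario is compared against a reference one.
import numpy as np

# States: healthy smoker, healthy non-smoker, diseased, dead (rows sum to 1).
P_reference = np.array([
    [0.94, 0.02, 0.03, 0.01],
    [0.00, 0.97, 0.02, 0.01],
    [0.00, 0.00, 0.93, 0.07],
    [0.00, 0.00, 0.00, 1.00],
])
P_intervention = P_reference.copy()
P_intervention[0, 1] = 0.10          # higher quit rate under the assumed policy
P_intervention[0, 0] = 0.86

def project(P, start, years=20):
    state = start.copy()
    for _ in range(years):
        state = state @ P
    return state

start = np.array([0.25, 0.70, 0.05, 0.00])   # initial population distribution
print("reference after 20y:   ", np.round(project(P_reference, start), 3))
print("intervention after 20y:", np.round(project(P_intervention, start), 3))
```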
A network perspective on comorbid depression in adolescents with obsessive-compulsive disorder.
Jones, Payton J; Mair, Patrick; Riemann, Bradley C; Mugno, Beth L; McNally, Richard J
2018-01-01
People with obsessive-compulsive disorder [OCD] frequently suffer from depression, a comorbidity associated with greater symptom severity and suicide risk. We examined the associations between OCD and depression symptoms in 87 adolescents with primary OCD. We computed an association network, a graphical LASSO, and a directed acyclic graph (DAG) to model symptom interactions. Models showed OCD and depression as separate syndromes linked by bridge symptoms. Bridges between the two disorders emerged between obsessional problems in the OCD syndrome, and guilt, concentration problems, and sadness in the depression syndrome. A directed network indicated that OCD symptoms directionally precede depression symptoms. Concentration impairment emerged as a highly central node that may be distinctive to adolescents. We conclude that the network approach to mental disorders provides a new way to understand the etiology and maintenance of comorbid OCD-depression. Network analysis can improve research and treatment of mental disorder comorbidities by generating hypotheses concerning potential causal symptom structures and by identifying symptoms that may bridge disorders. Copyright © 2017 Elsevier Ltd. All rights reserved.
Friston, Karl J.; Bastos, André M.; Oswal, Ashwini; van Wijk, Bernadette; Richter, Craig; Litvak, Vladimir
2014-01-01
This technical paper offers a critical re-evaluation of (spectral) Granger causality measures in the analysis of biological timeseries. Using realistic (neural mass) models of coupled neuronal dynamics, we evaluate the robustness of parametric and nonparametric Granger causality. Starting from a broad class of generative (state-space) models of neuronal dynamics, we show how their Volterra kernels prescribe the second-order statistics of their response to random fluctuations; characterised in terms of cross-spectral density, cross-covariance, autoregressive coefficients and directed transfer functions. These quantities in turn specify Granger causality — providing a direct (analytic) link between the parameters of a generative model and the expected Granger causality. We use this link to show that Granger causality measures based upon autoregressive models can become unreliable when the underlying dynamics is dominated by slow (unstable) modes — as quantified by the principal Lyapunov exponent. However, nonparametric measures based on causal spectral factors are robust to dynamical instability. We then demonstrate how both parametric and nonparametric spectral causality measures can become unreliable in the presence of measurement noise. Finally, we show that this problem can be finessed by deriving spectral causality measures from Volterra kernels, estimated using dynamic causal modelling. PMID:25003817
Optimal causal inference: estimating stored information and approximating causal architecture.
Still, Susanne; Crutchfield, James P; Ellison, Christopher J
2010-09-01
We introduce an approach to inferring the causal architecture of stochastic dynamical systems that extends rate-distortion theory to use causal shielding--a natural principle of learning. We study two distinct cases of causal inference: optimal causal filtering and optimal causal estimation. Filtering corresponds to the ideal case in which the probability distribution of measurement sequences is known, giving a principled method to approximate a system's causal structure at a desired level of representation. We show that in the limit in which a model-complexity constraint is relaxed, filtering finds the exact causal architecture of a stochastic dynamical system, known as the causal-state partition. From this, one can estimate the amount of historical information the process stores. More generally, causal filtering finds a graded model-complexity hierarchy of approximations to the causal architecture. Abrupt changes in the hierarchy, as a function of approximation, capture distinct scales of structural organization. For nonideal cases with finite data, we show how the correct number of the underlying causal states can be found by optimal causal estimation. A previously derived model-complexity control term allows us to correct for the effect of statistical fluctuations in probability estimates and thereby avoid overfitting.
The causal pie model: an epidemiological method applied to evolutionary biology and ecology
Wensink, Maarten; Westendorp, Rudi G J; Baudisch, Annette
2014-01-01
A general concept for thinking about causality facilitates swift comprehension of results, and the vocabulary that belongs to the concept is instrumental in cross-disciplinary communication. The causal pie model has fulfilled this role in epidemiology and could be of similar value in evolutionary biology and ecology. In the causal pie model, outcomes result from sufficient causes. Each sufficient cause is made up of a “causal pie” of “component causes”. Several different causal pies may exist for the same outcome. If and only if all component causes of a sufficient cause are present, that is, a causal pie is complete, does the outcome occur. The effect of a component cause hence depends on the presence of the other component causes that constitute some causal pie. Because all component causes are equally and fully causative for the outcome, the sum of causes for some outcome exceeds 100%. The causal pie model provides a way of thinking that maps into a number of recurrent themes in evolutionary biology and ecology: It charts when component causes have an effect and are subject to natural selection, and how component causes affect selection on other component causes; which partitions of outcomes with respect to causes are feasible and useful; and how to view the composition of a(n apparently homogeneous) population. The diversity of specific results that is directly understood from the causal pie model is a test for both the validity and the applicability of the model. The causal pie model provides a common language in which results across disciplines can be communicated and serves as a template along which future causal analyses can be made. PMID:24963386
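The bookkeeping of the causal pie model can be made concrete with a small simulation using invented component causes and probabilities: the outcome occurs whenever any sufficient cause ("pie") is complete, and the cause-specific attributable fractions overlap, so they sum to more than 100%.

```python
# Causal pie arithmetic with invented component causes: the outcome occurs
# whenever a sufficient cause ("pie") is complete, and cause-specific
# attributable fractions overlap, so they sum to more than 100%.
import numpy as np

rng = np.random.default_rng(7)
n = 100_000
A = rng.random(n) < 0.30     # component cause A
B = rng.random(n) < 0.50     # component cause B
C = rng.random(n) < 0.20     # component cause C

outcome = (A & B) | (A & C)  # two sufficient causes (pies): {A, B} and {A, C}
print("P(outcome):", round(outcome.mean(), 3))

for name, cause in (("A", A), ("B", B), ("C", C)):
    frac = (cause & outcome).mean() / outcome.mean()
    print(f"fraction of cases involving {name}: {frac:.2f}")   # fractions sum to > 1
```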
Chibanda, Dixon; Verhey, Ruth; Munetsi, Epiphany; Cowan, Frances M; Lund, Crick
2016-01-01
There is a paucity of data on how to deliver complex interventions that seek to reduce the treatment gap for mental disorders, particularly in sub-Saharan Africa. Well-documented protocols that clearly describe the development and scale-up of programs and interventions are necessary if such interventions are to be replicated elsewhere. This article describes the use of a theory of change (ToC) model to develop a brief psychological intervention for common mental disorders and its evaluation through a cluster randomized controlled trial in Zimbabwe. A total of eight ToC workshops were held with a range of stakeholders over a 6-month period, with a focus on four key components of the program: formative work, piloting, evaluation and scale-up. A ToC map was developed as part of the process, with defined causal pathways leading to the desired impact. Interventions, indicators, assumptions and the rationale for each point along the causal pathway were considered. Political buy-in from stakeholders, together with key resources including human, facility/infrastructure, communication and supervision resources, was identified as a critical need using the ToC approach. Ten key interventions with specific indicators, assumptions and rationale formed part of the final ToC map, which graphically illustrated the causal pathway leading to the development of a psychological intervention and the successful implementation of a cluster randomized controlled trial. ToC workshops can enhance stakeholder engagement through an iterative process, leading to a shared vision that can improve outcomes of complex mental health interventions, particularly where scaling up of the intervention is desired.
Evaluating Social Causality and Responsibility Models: An Initial Report
2005-01-01
ICT Technical Report ICT-TR-03-2005. In this report, we present a general computational model of social causality and responsibility for social intelligent agents, together with empirical results from an initial evaluation.
Effective connectivity: Influence, causality and biophysical modeling
Valdes-Sosa, Pedro A.; Roebroeck, Alard; Daunizeau, Jean; Friston, Karl
2011-01-01
This is the final paper in a Comments and Controversies series dedicated to “The identification of interacting networks in the brain using fMRI: Model selection, causality and deconvolution”. We argue that discovering effective connectivity depends critically on state-space models with biophysically informed observation and state equations. These models have to be endowed with priors on unknown parameters and afford checks for model Identifiability. We consider the similarities and differences among Dynamic Causal Modeling, Granger Causal Modeling and other approaches. We establish links between past and current statistical causal modeling, in terms of Bayesian dependency graphs and Wiener–Akaike–Granger–Schweder influence measures. We show that some of the challenges faced in this field have promising solutions and speculate on future developments. PMID:21477655
Boerebach, Benjamin C. M.; Lombarts, Kiki M. J. M. H.; Scherpbier, Albert J. J.; Arah, Onyebuchi A.
2013-01-01
Background In fledgling areas of research, evidence supporting causal assumptions is often scarce due to the small number of empirical studies conducted. In many studies it remains unclear what impact explicit and implicit causal assumptions have on the research findings; only the primary assumptions of the researchers are often presented. This is particularly true for research on the effect of faculty’s teaching performance on their role modeling. Therefore, there is a need for robust frameworks and methods for transparent formal presentation of the underlying causal assumptions used in assessing the causal effects of teaching performance on role modeling. This study explores the effects of different (plausible) causal assumptions on research outcomes. Methods This study revisits a previously published study about the influence of faculty’s teaching performance on their role modeling (as teacher-supervisor, physician and person). We drew eight directed acyclic graphs (DAGs) to visually represent different plausible causal relationships between the variables under study. These DAGs were subsequently translated into corresponding statistical models, and regression analyses were performed to estimate the associations between teaching performance and role modeling. Results The different causal models were compatible with major differences in the magnitude of the relationship between faculty’s teaching performance and their role modeling. Odds ratios for the associations between teaching performance and the three role model types ranged from 31.1 to 73.6 for the teacher-supervisor role, from 3.7 to 15.5 for the physician role, and from 2.8 to 13.8 for the person role. Conclusions Different sets of assumptions about causal relationships in role modeling research can be visually depicted using DAGs, which are then used to guide both statistical analysis and interpretation of results. Since study conclusions can be sensitive to different causal assumptions, results should be interpreted in the light of causal assumptions made in each study. PMID:23936020
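The abstract's central point, that estimates can change substantially depending on which causal assumptions (and hence which adjustment sets) a DAG encodes, can be illustrated with a toy simulation. The sketch below is not the authors' analysis: the variable names (teaching, role_model, confounder), the linear data-generating process, and the effect sizes are all hypothetical, chosen only to show how the conditioning choices implied by different DAGs change a regression coefficient.

```python
# Toy illustration (hypothetical variables): how DAG-implied adjustment sets
# change the estimated association between an exposure and an outcome.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

confounder = rng.normal(size=n)                    # e.g. an unmeasured department effect
teaching = 0.8 * confounder + rng.normal(size=n)   # exposure
role_model = 1.0 * teaching + 1.5 * confounder + rng.normal(size=n)  # outcome (true effect = 1.0)

def exposure_coef(y, covariates):
    """OLS coefficient of the first covariate (the exposure)."""
    X = np.column_stack([np.ones(len(y))] + covariates)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

# DAG 1: confounder omitted -> biased estimate of the teaching effect (~1.7 here)
print("unadjusted:", round(exposure_coef(role_model, [teaching]), 2))

# DAG 2: confounder adjusted for -> recovers the simulated effect of 1.0
print("adjusted:  ", round(exposure_coef(role_model, [teaching, confounder]), 2))
```

The study reports odds ratios from logistic models; the same point applies there, and a linear model is used here only to keep the sketch short.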
Causal Analysis After Haavelmo
Heckman, James; Pinto, Rodrigo
2014-01-01
Haavelmo's seminal 1943 and 1944 papers are the first rigorous treatment of causality. In them, he distinguished the definition of causal parameters from their identification. He showed that causal parameters are defined using hypothetical models that assign variation to some of the inputs determining outcomes while holding all other inputs fixed. He thus formalized and made operational Marshall's (1890) ceteris paribus analysis. We embed Haavelmo's framework into the recursive framework of Directed Acyclic Graphs (DAGs) used in one influential recent approach to causality (Pearl, 2000) and in the related literature on Bayesian nets (Lauritzen, 1996). We compare the simplicity of an analysis of causality based on Haavelmo's methodology with the complex and nonintuitive approach used in the causal literature of DAGs—the “do-calculus” of Pearl (2009). We discuss the severe limitations of DAGs and in particular of the do-calculus of Pearl in securing identification of economic models. We extend our framework to consider models for simultaneous causality, a central contribution of Haavelmo. In general cases, DAGs cannot be used to analyze models for simultaneous causality, but Haavelmo's approach naturally generalizes to cover them. PMID:25729123
Valente, Bruno D.; Morota, Gota; Peñagaricano, Francisco; Gianola, Daniel; Weigel, Kent; Rosa, Guilherme J. M.
2015-01-01
The term “effect” in additive genetic effect suggests a causal meaning. However, inferences of such quantities for selection purposes are typically viewed and conducted as a prediction task. Predictive ability as tested by cross-validation is currently the most acceptable criterion for comparing models and evaluating new methodologies. Nevertheless, it does not directly indicate if predictors reflect causal effects. Such evaluations would require causal inference methods that are not typical in genomic prediction for selection. This suggests that the usual approach to infer genetic effects contradicts the label of the quantity inferred. Here we investigate if genomic predictors for selection should be treated as standard predictors or if they must reflect a causal effect to be useful, requiring causal inference methods. Conducting the analysis as a prediction or as a causal inference task affects, for example, how covariates of the regression model are chosen, which may heavily affect the magnitude of genomic predictors and therefore selection decisions. We demonstrate that selection requires learning causal genetic effects. However, genomic predictors from some models might capture noncausal signal, providing good predictive ability but poorly representing true genetic effects. Simulated examples are used to show that aiming for predictive ability may lead to poor modeling decisions, while causal inference approaches may guide the construction of regression models that better infer the target genetic effect even when they underperform in cross-validation tests. In conclusion, genomic selection models should be constructed to aim primarily for identifiability of causal genetic effects, not for predictive ability. PMID:25908318
A Causal Model of Faculty Research Productivity.
ERIC Educational Resources Information Center
Bean, John P.
A causal model of faculty research productivity was developed through a survey of the literature. Models of organizational behavior, organizational effectiveness, and motivation were synthesized into a causal model of productivity. Two general types of variables were assumed to affect individual research productivity: institutional variables and…
A Kernel Embedding-Based Approach for Nonstationary Causal Model Inference.
Hu, Shoubo; Chen, Zhitang; Chan, Laiwan
2018-05-01
Although nonstationary data are more common in the real world, most existing causal discovery methods do not take nonstationarity into consideration. In this letter, we propose a kernel embedding-based approach, ENCI, for nonstationary causal model inference where data are collected from multiple domains with varying distributions. In ENCI, we transform the complicated relation of a cause-effect pair into a linear model of variables whose observations correspond to the kernel embeddings of the cause and effect distributions in different domains. In this way, we are able to estimate the causal direction by exploiting the causal asymmetry of the transformed linear model. Furthermore, we extend ENCI to causal graph discovery for multiple variables by transforming the relations among them into a linear non-Gaussian acyclic model. We show that by exploiting the nonstationarity of distributions, both cause-effect pairs and two kinds of causal graphs are identifiable under mild conditions. Experiments on synthetic and real-world data are conducted to justify the efficacy of ENCI over major existing methods.
Yavorska, Olena O; Burgess, Stephen
2017-12-01
MendelianRandomization is a software package for the R open-source software environment that performs Mendelian randomization analyses using summarized data. The core functionality is to implement the inverse-variance weighted, MR-Egger and weighted median methods for multiple genetic variants. Several options are available to the user, such as the use of robust regression, fixed- or random-effects models and the penalization of weights for genetic variants with heterogeneous causal estimates. Extensions to these methods, such as allowing for variants to be correlated, can be chosen if appropriate. Graphical commands allow summarized data to be displayed in an interactive graph, or the plotting of causal estimates from multiple methods, for comparison. Although the main method of data entry is directly by the user, there is also an option for allowing summarized data to be incorporated from the PhenoScanner database of genotype-phenotype associations. We hope to develop this feature in future versions of the package. The R software environment is available for download from [https://www.r-project.org/]. The MendelianRandomization package can be downloaded from the Comprehensive R Archive Network (CRAN) within R, or directly from [https://cran.r-project.org/web/packages/MendelianRandomization/]. Both R and the MendelianRandomization package are released under GNU General Public Licenses (GPL-2|GPL-3). © The Author 2017. Published by Oxford University Press on behalf of the International Epidemiological Association.
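For orientation, the core inverse-variance weighted (IVW) estimate can be computed directly from summarized per-variant associations. The snippet below is a minimal NumPy sketch of the standard IVW formula (ratio estimates weighted by the inverse variance of the outcome associations), not the package itself, and the input arrays are hypothetical.

```python
# Minimal sketch of the inverse-variance weighted (IVW) Mendelian randomization
# estimate from summarized data (all input values are hypothetical).
import numpy as np

beta_x = np.array([0.12, 0.08, 0.15, 0.05])      # variant-exposure associations
beta_y = np.array([0.030, 0.020, 0.050, 0.010])  # variant-outcome associations
se_y   = np.array([0.010, 0.012, 0.015, 0.010])  # standard errors of beta_y

ratios = beta_y / beta_x                 # per-variant causal estimates
weights = beta_x**2 / se_y**2            # inverse variance of each ratio estimate

ivw = np.sum(weights * ratios) / np.sum(weights)
ivw_se = 1.0 / np.sqrt(np.sum(weights))  # fixed-effect standard error

print(f"IVW causal estimate: {ivw:.3f} (SE {ivw_se:.3f})")
```

Equivalently, this is a weighted regression of the outcome associations on the exposure associations through the origin; the package's MR-Egger and weighted median options relax the assumptions behind this simple estimator.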
Dang, Shilpa; Chaudhury, Santanu; Lall, Brejesh; Roy, Prasun K
2018-05-01
Effective connectivity (EC) is the methodology for determining functional integration among the functionally active, segregated regions of the brain. By definition, EC is "the causal influence exerted by one neuronal group on another", which is constrained by anatomical connectivity (AC) (axonal connections). AC is necessary for EC but does not fully determine it, because synaptic communication occurs dynamically in a context-dependent fashion. Although there is vast emerging evidence of structure-function relationships from multimodal imaging studies, to date only a few studies have jointly modeled the two modalities: functional MRI (fMRI) and diffusion tensor imaging (DTI). We aim to propose a unified probabilistic framework that combines information from both sources to learn EC using dynamic Bayesian networks (DBNs). DBNs are probabilistic graphical temporal models that learn EC in an exploratory fashion. Specifically, we propose a novel anatomically informed (AI) score that evaluates the fitness of a given connectivity structure to both DTI and fMRI data simultaneously. The AI score is employed in structure learning of the DBN given the data. Experiments with synthetic data demonstrate the face validity of structure learning with our AI score over an anatomically uninformed counterpart. Moreover, real-data results are cross-validated by performing classification experiments. EC inferred on real fMRI-DTI datasets is found to be consistent with previous literature and shows promising results in light of the AC present, as compared to other classically used techniques such as Granger causality. Multimodal analyses provide a more reliable basis for differentiating the brain under abnormal/diseased conditions than single-modality analysis.
Curating blood: how students' and researchers' drawings bring potential phenomena to light
NASA Astrophysics Data System (ADS)
Hay, D. B.; Pitchford, S.
2016-11-01
This paper explores students' and researchers' drawings of white blood cell recruitment. The data combine interviews, an exhibit of review-type academic images, and analyses of student model-drawings. The analysis focuses on the material aspects of bio-scientific data-making, and we use the literature of concrete bioscience modelling to differentiate the qualities of students' model-making choices: novelty versus reproduction; completeness versus simplicity; and the achievement of similarity towards selected model targets. We show that, while drawing on already published images, some third-year undergraduates are able to curate novel and yet plausible causal channels in their graphic representations, implicating new phenomenal potentials as lead researchers do in their review-type academic publications. Our work links the virtues of drawing to learn to the disclosure of potential epistemic things, involving close attention to the contours of non-linguistic stuff and corresponding sensory perception of substance, space, time, shape and size, position, and force. The paper documents the authority and power students may achieve through making knowledge rather than repeating it. We show the ways in which drawing on the images elicited by others helps to develop physical, sensory, and sometimes affective relations towards the real and concrete world of scientific practice.
Granger causality for state-space models
NASA Astrophysics Data System (ADS)
Barnett, Lionel; Seth, Anil K.
2015-04-01
Granger causality has long been a prominent method for inferring causal interactions between stochastic variables for a broad range of complex physical systems. However, it has been recognized that a moving average (MA) component in the data presents a serious confound to Granger causal analysis, as routinely performed via autoregressive (AR) modeling. We solve this problem by demonstrating that Granger causality may be calculated simply and efficiently from the parameters of a state-space (SS) model. Since SS models are equivalent to autoregressive moving average models, Granger causality estimated in this fashion is not degraded by the presence of a MA component. This is of particular significance when the data has been filtered, downsampled, observed with noise, or is a subprocess of a higher dimensional process, since all of these operations—commonplace in application domains as diverse as climate science, econometrics, and the neurosciences—induce a MA component. We show how Granger causality, conditional and unconditional, in both time and frequency domains, may be calculated directly from SS model parameters via solution of a discrete algebraic Riccati equation. Numerical simulations demonstrate that Granger causality estimators thus derived have greater statistical power and smaller bias than AR estimators. We also discuss how the SS approach facilitates relaxation of the assumptions of linearity, stationarity, and homoscedasticity underlying current AR methods, thus opening up potentially significant new areas of research in Granger causal analysis.
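For reference, the conventional AR-based (unconditional, time-domain) Granger causality that the authors improve upon is the log ratio of residual variances between a "reduced" autoregression of the target on its own past and a "full" regression that also includes the past of the source. The sketch below implements that conventional estimator with plain least squares on simulated data (lag order and coefficients are hypothetical); it does not implement the paper's state-space computation via the discrete algebraic Riccati equation.

```python
# Conventional AR-based (unconditional) Granger causality, shown for reference.
import numpy as np

def lagged(x, p):
    """Columns of lags 1..p of x, aligned with x[p:]."""
    return np.column_stack([x[p - k:-k] for k in range(1, p + 1)])

def resid_var(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.var(y - X @ beta)

def granger_ar(source, target, p=2):
    """GC(source -> target) = ln(var_reduced / var_full)."""
    y = target[p:]
    X_reduced = np.column_stack([np.ones(len(y)), lagged(target, p)])
    X_full = np.column_stack([X_reduced, lagged(source, p)])
    return np.log(resid_var(X_reduced, y) / resid_var(X_full, y))

# Hypothetical example: x drives y with a one-sample delay.
rng = np.random.default_rng(1)
x = rng.normal(size=5000)
y = np.zeros_like(x)
for t in range(1, len(x)):
    y[t] = 0.5 * y[t - 1] + 0.4 * x[t - 1] + 0.1 * rng.normal()

print("GC x -> y:", round(granger_ar(x, y), 3))   # clearly positive
print("GC y -> x:", round(granger_ar(y, x), 3))   # near zero
```

The paper's point is that when the data contain a moving-average component (e.g. after filtering or downsampling), this AR-based estimate degrades, whereas the state-space computation does not.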
Causal modelling applied to the risk assessment of a wastewater discharge.
Paul, Warren L; Rokahr, Pat A; Webb, Jeff M; Rees, Gavin N; Clune, Tim S
2016-03-01
Bayesian networks (BNs), or causal Bayesian networks, have become quite popular in ecological risk assessment and natural resource management because of their utility as a communication and decision-support tool. Since their development in the field of artificial intelligence in the 1980s, however, Bayesian networks have evolved and merged with structural equation modelling (SEM). Unlike BNs, which are constrained to encode causal knowledge in conditional probability tables, SEMs encode this knowledge in structural equations, which is thought to be a more natural language for expressing causal information. This merger has clarified the causal content of SEMs and generalised the method such that it can now be performed using standard statistical techniques. As it was with BNs, the utility of this new generation of SEM in ecological risk assessment will need to be demonstrated with examples to foster an understanding and acceptance of the method. Here, we applied SEM to the risk assessment of a wastewater discharge to a stream, with a particular focus on the process of translating a causal diagram (conceptual model) into a statistical model which might then be used in the decision-making and evaluation stages of the risk assessment. The process of building and testing a spatial causal model is demonstrated using data from a spatial sampling design, and the implications of the resulting model are discussed in terms of the risk assessment. It is argued that a spatiotemporal causal model would have greater external validity than the spatial model, enabling broader generalisations to be made regarding the impact of a discharge, and greater value as a tool for evaluating the effects of potential treatment plant upgrades. Suggestions are made on how the causal model could be augmented to include temporal as well as spatial information, including suggestions for appropriate statistical models and analyses.
Detectability of Granger causality for subsampled continuous-time neurophysiological processes.
Barnett, Lionel; Seth, Anil K
2017-01-01
Granger causality is well established within the neurosciences for inference of directed functional connectivity from neurophysiological data. These data usually consist of time series which subsample a continuous-time biophysiological process. While it is well known that subsampling can lead to imputation of spurious causal connections where none exist, less is known about the effects of subsampling on the ability to reliably detect causal connections which do exist. We present a theoretical analysis of the effects of subsampling on Granger-causal inference. Neurophysiological processes typically feature signal propagation delays on multiple time scales; accordingly, we base our analysis on a distributed-lag, continuous-time stochastic model, and consider Granger causality in continuous time at finite prediction horizons. Via exact analytical solutions, we identify relationships among sampling frequency, underlying causal time scales and detectability of causalities. We reveal complex interactions between the time scale(s) of neural signal propagation and sampling frequency. We demonstrate that detectability decays exponentially as the sample time interval increases beyond causal delay times, identify detectability "black spots" and "sweet spots", and show that downsampling may potentially improve detectability. We also demonstrate that the invariance of Granger causality under causal, invertible filtering fails at finite prediction horizons, with particular implications for inference of Granger causality from fMRI data. Our analysis emphasises that sampling rates for causal analysis of neurophysiological time series should be informed by domain-specific time scales, and that state-space modelling should be preferred to purely autoregressive modelling. On the basis of a very general model that captures the structure of neurophysiological processes, we are able to help identify confounds, and offer practical insights, for successful detection of causal connectivity from neurophysiological recordings. Copyright © 2016 Elsevier B.V. All rights reserved.
Bollen, Kenneth A
2007-06-01
R. D. Howell, E. Breivik, and J. B. Wilcox (2007) have argued that causal (formative) indicators are inherently subject to interpretational confounding. That is, they have argued that using causal (formative) indicators leads the empirical meaning of a latent variable to be other than that assigned to it by a researcher. Their critique of causal (formative) indicators rests on several claims: (a) A latent variable exists apart from the model when there are effect (reflective) indicators but not when there are causal (formative) indicators, (b) causal (formative) indicators need not have the same consequences, (c) causal (formative) indicators are inherently subject to interpretational confounding, and (d) a researcher cannot detect interpretational confounding when using causal (formative) indicators. This article shows that each claim is false. Rather, interpretational confounding is more a problem of structural misspecification of a model combined with an underidentified model that leaves these misspecifications undetected. Interpretational confounding does not occur if the model is correctly specified whether a researcher has causal (formative) or effect (reflective) indicators. It is the validity of a model not the type of indicator that determines the potential for interpretational confounding. Copyright 2007 APA, all rights reserved.
The Model Analyst’s Toolkit: Scientific Model Development, Analysis, and Validation
2015-02-20
New analysis capabilities are being integrated within MAT, including Granger causality. Granger causality tests whether one data series helps when predicting future values of another (Granger, Econometrica, 424-438; Granger, 1980). The report describes improvements to the Granger causality user interface and the various metrics of causality that are supported.
The Model Analyst’s Toolkit: Scientific Model Development, Analysis, and Validation
2013-11-20
The report covers Granger causality F-test validation, dynamic time warping (DTW) for uneven temporal relationships (many causal relationships are imperfectly aligned in time), and mapping for dynamic feedback models. Granger causality and DTW can identify causal relationships and consider complex temporal factors. A variant of the tf-idf algorithm (Manning, Raghavan, Schutze et al., 2008), typically used in search engines, is used to "score" features.
Classical Causal Models for Bell and Kochen-Specker Inequality Violations Require Fine-Tuning
NASA Astrophysics Data System (ADS)
Cavalcanti, Eric G.
2018-04-01
Nonlocality and contextuality are at the root of conceptual puzzles in quantum mechanics, and they are key resources for quantum advantage in information-processing tasks. Bell nonlocality is best understood as the incompatibility between quantum correlations and the classical theory of causality, applied to relativistic causal structure. Contextuality, on the other hand, is on a more controversial foundation. In this work, I provide a common conceptual ground between nonlocality and contextuality as violations of classical causality. First, I show that Bell inequalities can be derived solely from the assumptions of no signaling and no fine-tuning of the causal model. This removes two extra assumptions from a recent result from Wood and Spekkens and, remarkably, does not require any assumption related to independence of measurement settings—unlike all other derivations of Bell inequalities. I then introduce a formalism to represent contextuality scenarios within causal models and show that all classical causal models for violations of a Kochen-Specker inequality require fine-tuning. Thus, the quantum violation of classical causality goes beyond the case of spacelike-separated systems and already manifests in scenarios involving single systems.
Systems and methods for modeling and analyzing networks
Hill, Colin C; Church, Bruce W; McDonagh, Paul D; Khalil, Iya G; Neyarapally, Thomas A; Pitluk, Zachary W
2013-10-29
The systems and methods described herein utilize a probabilistic modeling framework for reverse engineering an ensemble of causal models from data and then forward simulating the ensemble of models to analyze and predict the behavior of the network. In certain embodiments, the systems and methods described herein include data-driven techniques for developing causal models for biological networks. Causal network models include computational representations of the causal relationships between independent variables, such as a compound of interest, and dependent variables, such as measured DNA alterations and changes in mRNA, protein, and metabolites, through to phenotypic readouts of efficacy and toxicity.
Connections between Graphical Gaussian Models and Factor Analysis
ERIC Educational Resources Information Center
Salgueiro, M. Fatima; Smith, Peter W. F.; McDonald, John W.
2010-01-01
Connections between graphical Gaussian models and classical single-factor models are obtained by parameterizing the single-factor model as a graphical Gaussian model. Models are represented by independence graphs, and associations between each manifest variable and the latent factor are measured by factor partial correlations. Power calculations…
Modeling the Perception of Audiovisual Distance: Bayesian Causal Inference and Other Models
2016-01-01
Studies of audiovisual perception of distance are rare. Here, visual and auditory cue interactions in distance are tested against several multisensory models, including a modified causal inference model. This causal inference model includes predictions of estimate distributions. In our study, the audiovisual perception of distance was overall better explained by Bayesian causal inference than by other traditional models, such as sensory dominance, mandatory integration, and no interaction. Causal inference resolved with probability matching yielded the best fit to the data. Finally, we propose that sensory weights can also be estimated from causal inference. The analysis of the sensory weights allows us to obtain windows within which there is an interaction between the audiovisual stimuli. We find that the visual stimulus always contributes more than 80% to the perception of visual distance. The visual stimulus also contributes more than 50% to the perception of auditory distance, but only within a mobile window of interaction, which ranges from 1 to 4 m. PMID:27959919
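The causal inference model referred to here asks, in essence, whether the visual and auditory cues share a common cause. A minimal sketch of the generic (unmodified) model, assuming Gaussian likelihoods and a Gaussian prior over distance with hypothetical parameter values, is shown below; it computes the posterior probability of a common cause by numerically marginalizing over the latent distance(s), and is not the fitted model from the study.

```python
# Minimal Bayesian causal inference sketch (common cause vs. two causes),
# with Gaussian likelihoods and numerical marginalization over distance.
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

sigma_v, sigma_a = 0.3, 1.0     # visual / auditory noise (hypothetical, in metres)
mu_p, sigma_p = 2.0, 1.5        # prior over source distance (hypothetical)
p_common = 0.5                  # prior probability of a common cause

def lik_common(x_v, x_a):
    """p(x_v, x_a | C=1): one latent distance generates both cues."""
    f = lambda s: (norm.pdf(x_v, s, sigma_v) * norm.pdf(x_a, s, sigma_a)
                   * norm.pdf(s, mu_p, sigma_p))
    return quad(f, mu_p - 8 * sigma_p, mu_p + 8 * sigma_p)[0]

def lik_separate(x_v, x_a):
    """p(x_v, x_a | C=2): independent latent distances (closed form)."""
    return (norm.pdf(x_v, mu_p, np.sqrt(sigma_v**2 + sigma_p**2))
            * norm.pdf(x_a, mu_p, np.sqrt(sigma_a**2 + sigma_p**2)))

def posterior_common(x_v, x_a):
    l1, l2 = lik_common(x_v, x_a), lik_separate(x_v, x_a)
    return l1 * p_common / (l1 * p_common + l2 * (1 - p_common))

print(posterior_common(2.0, 2.3))   # nearby cues -> high probability of one cause
print(posterior_common(1.0, 4.0))   # discrepant cues -> low probability
```

The study's variant additionally resolves the causal inference with probability matching; the sketch stops at the posterior itself.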
Theodoratou, Evropi; Farrington, Susan M.; Tenesa, Albert; Dunlop, Malcolm G.; McKeigue, Paul; Campbell, Harry
2013-01-01
Introduction Vitamin D deficiency has been associated with increased risk of colorectal cancer (CRC), but a causal relationship has not yet been confirmed. We investigate the direction of causation between vitamin D and CRC by extending the conventional approaches to allow pleiotropic relationships and by explicitly modelling unmeasured confounders. Methods Plasma 25-hydroxyvitamin D (25-OHD), genetic variants associated with 25-OHD and CRC, and other relevant information were available for 2645 individuals (1057 CRC cases and 1588 controls) and included in the model. We investigate whether 25-OHD is likely to be causally associated with CRC, or vice versa, by selecting the best modelling hypothesis according to Bayesian predictive scores. We examine consistency for a range of prior assumptions. Results Model comparison showed preference for the causal association between low 25-OHD and CRC over the reverse causal hypothesis. This was confirmed by posterior mean deviances obtained for both models (11.5 natural log units in favour of the causal model), and also by deviance information criteria (DIC) computed for a range of prior distributions. Overall, models ignoring hidden confounding or pleiotropy had significantly poorer DIC scores. Conclusion Results suggest a causal association between 25-OHD and colorectal cancer, and support the need for randomised clinical trials for further confirmation. PMID:23717431
Identifying Seizure Onset Zone From the Causal Connectivity Inferred Using Directed Information
NASA Astrophysics Data System (ADS)
Malladi, Rakesh; Kalamangalam, Giridhar; Tandon, Nitin; Aazhang, Behnaam
2016-10-01
In this paper, we developed a model-based and a data-driven estimator for directed information (DI) to infer the causal connectivity graph between electrocorticographic (ECoG) signals recorded from the brain and to identify the seizure onset zone (SOZ) in epileptic patients. Directed information, an information-theoretic quantity, is a general metric to infer causal connectivity between time series and is not restricted to a particular class of models, unlike the popular metrics based on Granger causality or transfer entropy. The proposed estimators are shown to be almost surely convergent. Causal connectivity between ECoG electrodes in five epileptic patients is inferred using the proposed DI estimators, after validating their performance on simulated data. We then propose a model-based and a data-driven SOZ identification algorithm to identify the SOZ from the causal connectivity inferred using the model-based and data-driven DI estimators, respectively. The data-driven SOZ identification outperforms the model-based SOZ identification algorithm when benchmarked against visual analysis by a neurologist, the current clinical gold standard. The causal connectivity analysis presented here is a first step towards developing novel non-surgical treatments for epilepsy.
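Directed information accumulates, over time, the conditional mutual information between the past of the putative cause and the present of the effect given the effect's own past. For binary sequences, a crude first-order plug-in surrogate of this quantity, I(X_{t-1}; Y_t | Y_{t-1}) estimated from empirical counts, can be written in a few lines, as sketched below. This only illustrates the kind of quantity being estimated; it is not the paper's model-based or data-driven estimators, and the data are simulated.

```python
# First-order plug-in surrogate of directed information for binary series:
# I(X_{t-1}; Y_t | Y_{t-1}) estimated from empirical joint frequencies.
import numpy as np

def first_order_di(x, y):
    xp, yp, yc = x[:-1], y[:-1], y[1:]        # past of X, past of Y, present of Y
    counts = np.zeros((2, 2, 2))
    for a, b, c in zip(xp, yp, yc):
        counts[a, b, c] += 1
    p = counts / counts.sum()                 # p(x_{t-1}, y_{t-1}, y_t)
    p_by = p.sum(axis=0)                      # p(y_{t-1}, y_t)
    p_xy = p.sum(axis=2)                      # p(x_{t-1}, y_{t-1})
    p_y = p.sum(axis=(0, 2))                  # p(y_{t-1})
    di = 0.0
    for a in (0, 1):
        for b in (0, 1):
            for c in (0, 1):
                if p[a, b, c] > 0:
                    di += p[a, b, c] * np.log2(
                        p[a, b, c] * p_y[b] / (p_xy[a, b] * p_by[b, c]))
    return di

# Simulated example: y tends to copy x with a one-step delay.
rng = np.random.default_rng(2)
x = rng.integers(0, 2, size=20000)
y = np.empty_like(x)
y[0] = 0
for t in range(1, len(x)):
    y[t] = x[t - 1] if rng.random() < 0.8 else rng.integers(0, 2)

print("DI surrogate x -> y:", round(first_order_di(x, y), 3))   # clearly positive
print("DI surrogate y -> x:", round(first_order_di(y, x), 3))   # near zero
```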
Probabilistic Graphical Model Representation in Phylogenetics
Höhna, Sebastian; Heath, Tracy A.; Boussau, Bastien; Landis, Michael J.; Ronquist, Fredrik; Huelsenbeck, John P.
2014-01-01
Recent years have seen a rapid expansion of the model space explored in statistical phylogenetics, emphasizing the need for new approaches to statistical model representation and software development. Clear communication and representation of the chosen model is crucial for: (i) reproducibility of an analysis, (ii) model development, and (iii) software design. Moreover, a unified, clear and understandable framework for model representation lowers the barrier for beginners and nonspecialists to grasp complex phylogenetic models, including their assumptions and parameter/variable dependencies. Graphical modeling is a unifying framework that has gained in popularity in the statistical literature in recent years. The core idea is to break complex models into conditionally independent distributions. The strength lies in the comprehensibility, flexibility, and adaptability of this formalism, and the large body of computational work based on it. Graphical models are well-suited to teach statistical models, to facilitate communication among phylogeneticists and in the development of generic software for simulation and statistical inference. Here, we provide an introduction to graphical models for phylogeneticists and extend the standard graphical model representation to the realm of phylogenetics. We introduce a new graphical model component, tree plates, to capture the changing structure of the subgraph corresponding to a phylogenetic tree. We describe a range of phylogenetic models using the graphical model framework and introduce modules to simplify the representation of standard components in large and complex models. Phylogenetic model graphs can be readily used in simulation, maximum likelihood inference, and Bayesian inference using, for example, Metropolis–Hastings or Gibbs sampling of the posterior distribution. [Computation; graphical models; inference; modularization; statistical phylogenetics; tree plate.] PMID:24951559
A Bayesian Theory of Sequential Causal Learning and Abstract Transfer.
Lu, Hongjing; Rojas, Randall R; Beckers, Tom; Yuille, Alan L
2016-03-01
Two key research issues in the field of causal learning are how people acquire causal knowledge when observing data that are presented sequentially, and the level of abstraction at which learning takes place. Does sequential causal learning solely involve the acquisition of specific cause-effect links, or do learners also acquire knowledge about abstract causal constraints? Recent empirical studies have revealed that experience with one set of causal cues can dramatically alter subsequent learning and performance with entirely different cues, suggesting that learning involves abstract transfer, and such transfer effects involve sequential presentation of distinct sets of causal cues. It has been demonstrated that pre-training (or even post-training) can modulate classic causal learning phenomena such as forward and backward blocking. To account for these effects, we propose a Bayesian theory of sequential causal learning. The theory assumes that humans are able to consider and use several alternative causal generative models, each instantiating a different causal integration rule. Model selection is used to decide which integration rule to use in a given learning environment in order to infer causal knowledge from sequential data. Detailed computer simulations demonstrate that humans rely on the abstract characteristics of outcome variables (e.g., binary vs. continuous) to select a causal integration rule, which in turn alters causal learning in a variety of blocking and overshadowing paradigms. When the nature of the outcome variable is ambiguous, humans select the model that yields the best fit with the recent environment, and then apply it to subsequent learning tasks. Based on sequential patterns of cue-outcome co-occurrence, the theory can account for a range of phenomena in sequential causal learning, including various blocking effects, primacy effects in some experimental conditions, and apparently abstract transfer of causal knowledge. Copyright © 2015 Cognitive Science Society, Inc.
High-throughput Bayesian Network Learning using Heterogeneous Multicore Computers
Linderman, Michael D.; Athalye, Vivek; Meng, Teresa H.; Asadi, Narges Bani; Bruggner, Robert; Nolan, Garry P.
2017-01-01
Aberrant intracellular signaling plays an important role in many diseases. The causal structure of signal transduction networks can be modeled as Bayesian networks (BNs) and computationally learned from experimental data. However, learning the structure of BNs is an NP-hard problem that, even with fast heuristics, is too time consuming for large, clinically important networks (20–50 nodes). In this paper, we present a novel graphics processing unit (GPU)-accelerated implementation of a Markov chain Monte Carlo-based algorithm for learning BNs that is up to 7.5-fold faster than current general-purpose processor (GPP)-based implementations. The GPU-based implementation is just one of several implementations within the larger application, each optimized for a different input or machine configuration. We describe the methodology we use to build an extensible application, assembled from these variants, that can target a broad range of heterogeneous systems, e.g., GPUs and multicore GPPs. Specifically, we show how we use the Merge programming model to efficiently integrate, test and intelligently select among the different potential implementations. PMID:28819655
Clinical process analysis and activity-based costing at a heart center.
Ridderstolpe, Lisa; Johansson, Andreas; Skau, Tommy; Rutberg, Hans; Ahlfeldt, Hans
2002-08-01
Cost studies, productivity, efficiency and quality-of-care measures, and the links between resources and patient outcomes are fundamental issues for hospital management today. This paper describes the implementation of a model for process analysis and activity-based costing (ABC)/management at a Heart Center in Sweden as a tool for administrative cost information, strategic decision-making, quality improvement, and cost reduction. A commercial software package (QPR) containing two interrelated parts, "ProcessGuide and CostControl", was used. All processes at the Heart Center were mapped and graphically outlined. Processes and activities such as health care procedures, research, and education were identified together with their causal relationships to costs and products/services. The construction of the ABC model in CostControl was time-consuming. However, after the ABC/management system was created, it opened the way for new possibilities, including process and activity analysis, simulation, and price calculations. Cost analysis showed large variations in the costs obtained for individual patients undergoing coronary artery bypass grafting (CABG) surgery. We conclude that a process-based costing system is applicable and has the potential to be useful in hospital management.
The display of molecular models with the Ames Interactive Modeling System (AIMS)
NASA Technical Reports Server (NTRS)
Egan, J. T.; Hart, J.; Burt, S. K.; Macelroy, R. D.
1982-01-01
A visualization of molecular models can lead to a clearer understanding of the models. Sophisticated graphics devices supported by minicomputers make it possible for the chemist to interact with the display of a very large model, altering its structure. In addition to user interaction, the need arises also for other ways of displaying information. These include the production of viewgraphs, film presentation, as well as publication quality prints of various models. To satisfy these needs, the display capability of the Ames Interactive Modeling System (AIMS) has been enhanced to provide a wide range of graphics and plotting capabilities. Attention is given to an overview of the AIMS system, graphics hardware used by the AIMS display subsystem, a comparison of graphics hardware, the representation of molecular models, graphics software used by the AIMS display subsystem, the display of a model obtained from data stored in molecule data base, a graphics feature for obtaining single frame permanent copy displays, and a feature for producing multiple frame displays.
Experimental test of nonlocal causality.
Ringbauer, Martin; Giarmatzi, Christina; Chaves, Rafael; Costa, Fabio; White, Andrew G.; Fedrizzi, Alessandro
2016-08-01
Explaining observations in terms of causes and effects is central to empirical science. However, correlations between entangled quantum particles seem to defy such an explanation. This implies that some of the fundamental assumptions of causal explanations have to give way. We consider a relaxation of one of these assumptions, Bell's local causality, by allowing outcome dependence: a direct causal influence between the outcomes of measurements of remote parties. We use interventional data from a photonic experiment to bound the strength of this causal influence in a two-party Bell scenario, and observational data from a Bell-type inequality test for the considered models. Our results demonstrate the incompatibility of quantum mechanics with a broad class of nonlocal causal models, which includes Bell-local models as a special case. Recovering a classical causal picture of quantum correlations thus requires an even more radical modification of our classical notion of cause and effect. PMID:27532045
Sensory Impairments and Autism: A Re-Examination of Causal Modelling
ERIC Educational Resources Information Center
Gerrard, Sue; Rugg, Gordon
2009-01-01
Sensory impairments are widely reported in autism, but remain largely unexplained by existing models. This article examines Kanner's causal reasoning and identifies unsupported assumptions implicit in later empirical work. Our analysis supports a heterogeneous causal model for autistic characteristics. We propose that the development of a…
Teaching "Instant Experience" with Graphical Model Validation Techniques
ERIC Educational Resources Information Center
Ekstrøm, Claus Thorn
2014-01-01
Graphical model validation techniques for linear normal models are often used to check the assumptions underlying a statistical model. We describe an approach to provide "instant experience" in looking at a graphical model validation plot, so it becomes easier to validate if any of the underlying assumptions are violated.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Faculjak, D.A.
1988-03-01
Graphics Manager (GFXMGR) is menu-driven, user-friendly software designed to interactively create, edit, and delete graphics displays on the Advanced Electronics Design (AED) graphics controller, Model 767. The software runs on the VAX family of computers and has been used successfully in security applications to create and change site layouts (maps) of specific facilities. GFXMGR greatly benefits graphics development by minimizing display-development time, reducing tedium on the part of the user, and improving system performance. It is anticipated that GFXMGR can be used to create graphics displays for many types of applications. 8 figs., 2 tabs.
NASA Technical Reports Server (NTRS)
1974-01-01
The present form of this cardiovascular model simulates both 1-g and zero-g LBNP (lower body negative pressure) experiments and tilt experiments. In addition, the model simulates LBNP experiments at any body angle. The model is currently accessible on the Univac 1110 Time-Shared System in an interactive operational mode. Model output may be in tabular form and/or graphic form. The graphic capabilities are programmed for the Tektronix 4010 graphics terminal and the Univac 1110.
Michalareas, George; Schoffelen, Jan-Mathijs; Paterson, Gavin; Gross, Joachim
2013-01-01
In this work, we investigate the feasibility of estimating causal interactions between brain regions based on multivariate autoregressive models (MAR models) fitted to magnetoencephalographic (MEG) sensor measurements. We first demonstrate the theoretical feasibility of estimating source-level causal interactions after projection of the sensor-level model coefficients onto the locations of the neural sources. Next, we show with simulated MEG data that causality, as measured by partial directed coherence (PDC), can be correctly reconstructed if the locations of the interacting brain areas are known. We further demonstrate that, if a very large number of brain voxels is considered as potential activation sources, PDC as a measure to reconstruct causal interactions is less accurate. In such cases the MAR model coefficients alone contain meaningful causality information. The proposed method overcomes the problems of model non-robustness and long computation times encountered during causality analysis by existing methods. These methods first project MEG sensor time series onto a large number of brain locations, after which the MAR model is built on this large number of source-level time series. Instead, through this work, we demonstrate that by building the MAR model on the sensor level and then projecting only the MAR coefficients into source space, the true causal pathways are recovered even when a very large number of locations are considered as sources. The main contribution of this work is that by this methodology entire brain causality maps can be efficiently derived without any a priori selection of regions of interest. Hum Brain Mapp, 2013. © 2012 Wiley Periodicals, Inc. PMID:22328419
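The core manipulation described, fitting the multivariate autoregressive model on the sensors and then projecting only the coefficients into source space, can be sketched schematically. Assuming sensor signals generated as x_t = L s_t for a known leadfield L and a first-order MAR model, one plausible projection is A_source = pinv(L) A_sensor L. The code below is only a schematic of that idea on simulated data; it is not the authors' pipeline and does not compute partial directed coherence.

```python
# Schematic of sensor-level MAR fitting followed by projection of the
# coefficients into source space (first-order model, known leadfield L).
import numpy as np

rng = np.random.default_rng(3)
n_sources, n_sensors, T = 3, 10, 20000

# Source-level ground truth: source 0 drives source 1.
A_true = np.array([[0.5, 0.0, 0.0],
                   [0.4, 0.5, 0.0],
                   [0.0, 0.0, 0.5]])
s = np.zeros((T, n_sources))
for t in range(1, T):
    s[t] = A_true @ s[t - 1] + 0.1 * rng.normal(size=n_sources)

L = rng.normal(size=(n_sensors, n_sources))   # hypothetical leadfield
x = s @ L.T                                   # sensor measurements x_t = L s_t

# Fit a first-order MAR model on the sensors by least squares: x_t ~ A x_{t-1}.
X_past, X_now = x[:-1], x[1:]
B, *_ = np.linalg.lstsq(X_past, X_now, rcond=None)
A_sensor = B.T

# Project only the coefficients into source space.
A_source = np.linalg.pinv(L) @ A_sensor @ L

print(np.round(A_source, 2))   # approximately recovers A_true
```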
Evaluating a Computational Model of Social Causality and Responsibility
2006-01-01
Wenji Mao, University of Southern California, Institute for Creative Technologies. We empirically evaluate a computational model of social causality and responsibility against human social judgments, and report results from our experimental study. We developed a general computational model of social causality and responsibility [10, 11] that formalizes the factors people use in reasoning about social causality and responsibility.
ERIC Educational Resources Information Center
Sadoski, Mark; And Others
1993-01-01
Presents and tests a theoretically derived causal model of the recall of sentences. Notes that the causal model identifies familiarity and concreteness as causes of comprehensibility; familiarity, concreteness, and comprehensibility as causes of interestingness; and all the identified variables as causes of both immediate and delayed recall.…
Reconstructing Constructivism: Causal Models, Bayesian Learning Mechanisms, and the Theory Theory
ERIC Educational Resources Information Center
Gopnik, Alison; Wellman, Henry M.
2012-01-01
We propose a new version of the "theory theory" grounded in the computational framework of probabilistic causal models and Bayesian learning. Probabilistic models allow a constructivist but rigorous and detailed approach to cognitive development. They also explain the learning of both more specific causal hypotheses and more abstract framework…
Manifest Variable Granger Causality Models for Developmental Research: A Taxonomy
ERIC Educational Resources Information Center
von Eye, Alexander; Wiedermann, Wolfgang
2015-01-01
Granger models are popular when it comes to testing hypotheses that relate series of measures causally to each other. In this article, we propose a taxonomy of Granger causality models. The taxonomy results from crossing the four variables Order of Lag, Type of (Contemporaneous) Effect, Direction of Effect, and Segment of Dependent Series…
Baumrind, D
1983-12-01
The claims based on causal models employing either statistical or experimental controls are examined and found to be excessive when applied to social or behavioral science data. An exemplary case, in which strong causal claims are made on the basis of a weak version of the regularity model of cause, is critiqued. O'Donnell and Clayton claim that in order to establish that marijuana use is a cause of heroin use (their "reformulated stepping-stone" hypothesis), it is necessary and sufficient to demonstrate that marijuana use precedes heroin use and that the statistically significant association between the two does not vanish when the effects of other variables deemed to be prior to both of them are removed. I argue that O'Donnell and Clayton's version of the regularity model is not sufficient to establish cause and that the planning of social interventions both presumes and requires a generative rather than a regularity causal model. Causal modeling using statistical controls is of value when it compels the investigator to make explicit and to justify a causal explanation but not when it is offered as a substitute for a generative analysis of causal connection.
A Program of Continuing Research on Representing, Manipulating, and Reasoning about Physical Objects
1991-09-30
Techniques were applied to computer graphics with the goal of automatically converting complex graphics models into forms more appropriate for radiosity computation.
Building Regression Models: The Importance of Graphics.
ERIC Educational Resources Information Center
Dunn, Richard
1989-01-01
Points out reasons for using graphical methods to teach simple and multiple regression analysis. Argues that a graphically oriented approach has considerable pedagogic advantages in the exposition of simple and multiple regression. Shows that graphical methods may play a central role in the process of building regression models. (Author/LS)
Engineering Graphics Educational Outcomes for the Global Engineer: An Update
ERIC Educational Resources Information Center
Barr, R. E.
2012-01-01
This paper discusses the formulation of educational outcomes for engineering graphics that span the global enterprise. Results of two repeated faculty surveys indicate that new computer graphics tools and techniques are now the preferred mode of engineering graphical communication. Specifically, 3-D computer modeling, assembly modeling, and model…
Causality analysis in business performance measurement system using system dynamics methodology
NASA Astrophysics Data System (ADS)
Yusof, Zainuridah; Yusoff, Wan Fadzilah Wan; Maarof, Faridah
2014-07-01
One of the main components of the Balanced Scorecard (BSC) that differentiates it from any other performance measurement system (PMS) is the Strategy Map with its unidirectional causality feature. Despite its apparent popularity, criticisms of the causality assumption have been rigorously discussed by earlier researchers. In seeking empirical evidence of causality, propositions based on the service profit chain theory were developed and tested using econometric analysis, namely the Granger causality test, on 45 data points. However, well-established causality models were found to be insufficient, as only 40% of the causal linkages were supported by the data. Expert knowledge was suggested for use in situations where historical data are insufficient. The Delphi method was selected and conducted to obtain consensus on the existence of causality among 15 selected experts through three rounds of questionnaires. The study revealed that only 20% of the propositions were not supported. The existence of bidirectional causality, which demonstrates significant dynamic environmental complexity through interaction among measures, was obtained from both methods. With that, a computer modeling and simulation using the System Dynamics (SD) methodology was developed as an experimental platform to identify how policies impact business performance in such environments. Reproduction, sensitivity and extreme-condition tests were conducted on the developed SD model to ensure its capability to mimic reality, and its robustness and validity as a causality analysis platform. This study applied a theoretical service management model within the BSC domain to a practical situation using the SD methodology, where very limited work has been done.
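For readers who want to reproduce the econometric step, pairwise Granger causality tests of the kind applied to the 45 data points can be run with standard libraries. The sketch below uses statsmodels' grangercausalitytests on simulated series; the measure names, lag order and data-generating process are hypothetical and are not the study's BSC measures.

```python
# Pairwise Granger causality test sketch using statsmodels (simulated data,
# hypothetical measure names; not the study's Balanced Scorecard data).
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(4)
n = 45                                    # e.g. 45 monthly observations
employee_sat = np.zeros(n)
customer_sat = np.zeros(n)
for t in range(1, n):
    employee_sat[t] = 0.5 * employee_sat[t - 1] + rng.normal()
    # customer satisfaction responds to last month's employee satisfaction
    customer_sat[t] = (0.6 * customer_sat[t - 1]
                       + 0.5 * employee_sat[t - 1]
                       + 0.5 * rng.normal())

df = pd.DataFrame({"customer_sat": customer_sat, "employee_sat": employee_sat})

# Tests whether the second column Granger-causes the first, for lags 1..3.
results = grangercausalitytests(df[["customer_sat", "employee_sat"]], maxlag=3)
```

The returned dictionary holds, for each lag, F- and chi-square statistics whose p-values would be used to accept or reject each causal proposition.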
The Role of Ambient Ozone in Epidemiologic Studies of Heat-Related Mortality
Snowden, Jonathan M.; Kontgis, Caitlin; Tager, Ira B.
2012-01-01
Background: A large and growing literature investigating the role of extreme heat on mortality has conceptualized the role of ambient ozone in various ways, sometimes treating it as a confounder, sometimes as an effect modifier, and sometimes as a co-exposure. Thus, there is a lack of consensus about the roles that temperature and ozone together play in causing mortality. Objectives: We applied directed acyclic graphs (DAGs) to the topic of heat-related mortality to graphically represent the subject matter behind the research questions and to provide insight on the analytical options available. Discussion: On the basis of the subject matter encoded in the graphs, we assert that the role of ozone in studies of temperature and mortality is a causal intermediate that is affected by temperature and that can also affect mortality, rather than a confounder. Conclusions: We discuss possible questions of interest implied by this causal structure and propose areas of future work to further clarify the role of air pollutants in epidemiologic studies of extreme temperature. PMID:22899622
DYNAMO-HIA–A Dynamic Modeling Tool for Generic Health Impact Assessments
Lhachimi, Stefan K.; Nusselder, Wilma J.; Smit, Henriette A.; van Baal, Pieter; Baili, Paolo; Bennett, Kathleen; Fernández, Esteve; Kulik, Margarete C.; Lobstein, Tim; Pomerleau, Joceline; Mackenbach, Johan P.; Boshuizen, Hendriek C.
2012-01-01
Background Currently, no standard tool is publicly available that allows researchers or policy-makers to quantify the impact of policies using epidemiological evidence within the causal framework of Health Impact Assessment (HIA). A standard tool should comply with three technical criteria (real-life population, dynamic projection, explicit risk-factor states) and three usability criteria (modest data requirements, rich model output, generally accessible) to be useful in the applied setting of HIA. With DYNAMO-HIA (Dynamic Modeling for Health Impact Assessment), we introduce such a generic software tool specifically designed to facilitate quantification in the assessment of the health impacts of policies. Methods and Results DYNAMO-HIA quantifies the impact of user-specified risk-factor changes on multiple diseases and in turn on overall population health, comparing one reference scenario with one or more intervention scenarios. The Markov-based modeling approach allows for explicit risk-factor states and simulation of a real-life population. A built-in parameter estimation module ensures that only standard population-level epidemiological evidence is required, i.e. data on incidence, prevalence, relative risks, and mortality. DYNAMO-HIA provides a rich output of summary measures – e.g. life expectancy and disease-free life expectancy – and detailed data – e.g. prevalences and mortality/survival rates – by age, sex, and risk-factor status over time. DYNAMO-HIA is controlled via a graphical user interface and is publicly available from the internet, ensuring general accessibility. We illustrate the use of DYNAMO-HIA with two example applications: a policy causing an overall increase in alcohol consumption and quantifying the disease-burden of smoking. Conclusion By combining modest data needs with general accessibility and user friendliness within the causal framework of HIA, DYNAMO-HIA is a potential standard tool for health impact assessment based on epidemiologic evidence. PMID:22590491
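To make the Markov-based projection style concrete, the toy sketch below propagates a closed cohort through healthy/diseased/dead states under a reference and an intervention scenario, where the intervention scales disease incidence by a relative risk. All rates and the three-state structure are hypothetical and far simpler than DYNAMO-HIA's explicit risk-factor states and real-life population; the sketch only illustrates the style of calculation.

```python
# Toy Markov projection: healthy / diseased / dead, reference vs. intervention.
import numpy as np

def project(incidence, case_fatality=0.03, background_mort=0.01, years=20):
    """Project state occupancy of a closed cohort over time (hypothetical rates)."""
    state = np.array([1.0, 0.0, 0.0])   # start: everyone healthy
    P = np.array([
        [1 - incidence - background_mort, incidence, background_mort],              # from healthy
        [0.0, 1 - case_fatality - background_mort, case_fatality + background_mort],  # from diseased
        [0.0, 0.0, 1.0],                                                             # dead is absorbing
    ])
    history = [state]
    for _ in range(years):
        state = state @ P
        history.append(state)
    return np.array(history)

reference = project(incidence=0.02)
intervention = project(incidence=0.02 * 0.8)   # e.g. a policy with relative risk 0.8

print("prevalence after 20 years, reference:   ", round(reference[-1, 1], 3))
print("prevalence after 20 years, intervention:", round(intervention[-1, 1], 3))
print("alive after 20 years, reference:   ", round(1 - reference[-1, 2], 3))
print("alive after 20 years, intervention:", round(1 - intervention[-1, 2], 3))
```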
Dai, James Y.; Chan, Kwun Chuen Gary; Hsu, Li
2014-01-01
Instrumental variable regression is one way to overcome unmeasured confounding and estimate causal effects in observational studies. Building on structural mean models, there has been considerable recent work on consistent estimation of the causal relative risk and causal odds ratio. Such models can sometimes suffer from identification issues for weak instruments, which has hampered the applicability of Mendelian randomization analysis in genetic epidemiology. When there are multiple genetic variants available as instrumental variables, and the causal effect is defined in a generalized linear model in the presence of unmeasured confounders, we propose to test concordance between instrumental variable effects on the intermediate exposure and instrumental variable effects on the disease outcome, as a means of testing the causal effect. We show that a class of generalized least squares estimators provides valid and consistent tests of causality. For the causal effect of a continuous exposure on a dichotomous outcome in logistic models, the proposed estimators are shown to be asymptotically conservative. When the disease outcome is rare, such estimators are consistent due to the log-linear approximation of the logistic function. Optimality of such estimators relative to the well-known two-stage least squares estimator and the double-logistic structural mean model is further discussed. PMID:24863158
Lu, Fengbin; Qiao, Han; Wang, Shouyang; Lai, Kin Keung; Li, Yuze
2017-01-01
This paper proposes a new time-varying coefficient vector autoregressions (VAR) model, in which the coefficient is a linear function of dynamic lagged correlation. The proposed model allows for flexibility in choices of dynamic correlation models (e.g. dynamic conditional correlation generalized autoregressive conditional heteroskedasticity (GARCH) models, Markov-switching GARCH models and multivariate stochastic volatility models), which indicates that it can describe many types of time-varying causal effects. Time-varying causal relations between West Texas Intermediate (WTI) crude oil and the US Standard and Poor's 500 (S&P 500) stock markets are examined by the proposed model. The empirical results show that their causal relations evolve with time and display complex characters. Both positive and negative causal effects of the WTI on the S&P 500 in the subperiods have been found and confirmed by the traditional VAR models. Similar results have been obtained in the causal effects of S&P 500 on WTI. In addition, the proposed model outperforms the traditional VAR model. Copyright © 2016 Elsevier Ltd. All rights reserved.
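The following sketch illustrates the general idea of a VAR coefficient that is a linear function of dynamic lagged correlation, using a simple rolling correlation as a stand-in for a DCC-GARCH or stochastic-volatility correlation estimate. The simulated series, window length, and lag structure are arbitrary; this is not the authors' estimation procedure.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
T = 1500

# Two toy "return" series; y is partially driven by lagged x.
x = rng.normal(size=T)
y = 0.3 * np.roll(x, 1) + rng.normal(size=T)
df = pd.DataFrame({"x": x, "y": y})

# Dynamic lagged correlation (a 60-step rolling correlation as a stand-in
# for a DCC-GARCH or stochastic-volatility correlation estimate).
rho = df["x"].rolling(60).corr(df["y"]).shift(1)

# Time-varying coefficient VAR equation for y:
#   y_t = c + a*y_{t-1} + (b0 + b1*rho_{t-1})*x_{t-1} + e_t
data = pd.DataFrame({
    "y": df["y"],
    "y_lag": df["y"].shift(1),
    "x_lag": df["x"].shift(1),
    "rho_x_lag": rho * df["x"].shift(1),
}).dropna()

X = np.column_stack([np.ones(len(data)), data[["y_lag", "x_lag", "rho_x_lag"]]])
coef, *_ = np.linalg.lstsq(X, data["y"].to_numpy(), rcond=None)
print(dict(zip(["const", "y_lag", "x_lag (b0)", "rho*x_lag (b1)"], coef.round(3))))
```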
Clinical knee findings in floor layers with focus on meniscal status.
Rytter, Søren; Jensen, Lilli Kirkeskov; Bonde, Jens Peter
2008-10-22
The aim of this study was to examine the prevalence of self-reported and clinical knee morbidity among floor layers compared to a group of graphic designers, with special attention to meniscal status. We obtained information about knee complaints by questionnaire and conducted a bilateral clinical and radiographic knee examination in 134 male floor layers and 120 male graphic designers. After the exclusion of subjects with reports of earlier knee injuries, the odds ratios (OR) with 95% confidence intervals (CI) of knee complaints and clinical findings were computed among floor layers compared to graphic designers, using logistic regression. Estimates were adjusted for effects of body mass index, age and knee-straining sports. Using radiographic evaluations, we conducted side-specific sensitivity analyses regarding clinical signs of meniscal lesions after the exclusion of participants with tibiofemoral (TF) osteoarthritis (OA). Reports of knee pain (OR = 2.7, 95% CI = 1.5-4.6), pain during stair walking (OR = 2.2, 95% CI = 1.3-3.9) and symptoms of catching of the knee joint (OR = 2.9, 95% CI = 1.4-5.7) were more prevalent among floor layers compared to graphic designers. Additionally, significantly more floor layers than graphic designers had clinical signs suggesting possible meniscal lesions: a positive McMurray test (OR = 2.4, 95% CI = 1.1-5.0) and TF joint line tenderness (OR = 5.4, 95% CI = 2.4-12.0). Excluding floor layers (n = 22) and graphic designers (n = 15) with radiographic TF OA did not alter this trend between the two study groups: a positive McMurray test (OR = 2.2, 95% CI = 1.0-4.9), TF joint line tenderness (OR = 5.0, 95% CI = 2.0-12.5). Results indicate that floor layers have a high prevalence of both self-reported and clinical knee morbidity. Clinical knee findings suggesting possible meniscal lesions were significantly more prevalent among floor layers compared to a group of low-level exposed graphic designers, and an association with occupational kneeling is possible. However, causality cannot be confirmed due to the cross-sectional study design.
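For readers unfamiliar with the adjusted odds ratios reported above, the sketch below fits a logistic regression on simulated data and exponentiates the exposure coefficient to obtain an OR with a 95% CI. The sample size, covariates, and effect sizes are hypothetical and are not the study's data.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 254                                   # roughly the size of the study sample

# Hypothetical data: exposure = floor layer (1) vs graphic designer (0),
# outcome = knee pain, plus adjustment covariates (age, BMI, knee-straining sport).
floor_layer = rng.binomial(1, 134 / 254, size=n)
age = rng.normal(45, 10, size=n)
bmi = rng.normal(26, 4, size=n)
sport = rng.binomial(1, 0.3, size=n)
logit_p = -2.0 + 1.0 * floor_layer + 0.02 * (age - 45) + 0.05 * (bmi - 26) + 0.3 * sport
knee_pain = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

X = sm.add_constant(np.column_stack([floor_layer, age, bmi, sport]))
fit = sm.Logit(knee_pain, X).fit(disp=0)

or_exposure = np.exp(fit.params[1])                 # adjusted odds ratio for exposure
ci_low, ci_high = np.exp(fit.conf_int()[1])         # 95% confidence interval
print(f"Adjusted OR = {or_exposure:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
```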
Situation models and memory: the effects of temporal and causal information on recall sequence.
Brownstein, Aaron L; Read, Stephen J
2007-10-01
Participants watched an episode of the television show Cheers on video and then reported free recall. Recall sequence followed the sequence of events in the story; if one concept was observed immediately after another, it was recalled immediately after it. We also made a causal network of the show's story and found that recall sequence followed causal links; effects were recalled immediately after their causes. Recall sequence was more likely to follow causal links than temporal sequence, and most likely to follow causal links that were temporally sequential. Results were similar at 10-minute and 1-week delayed recall. This is the most direct and detailed evidence reported on sequential effects in recall. The causal network also predicted probability of recall; concepts with more links and concepts on the main causal chain were most likely to be recalled. This extends the causal network model to more complex materials than previous research.
Formalizing the Role of Agent-Based Modeling in Causal Inference and Epidemiology
Marshall, Brandon D. L.; Galea, Sandro
2015-01-01
Calls for the adoption of complex systems approaches, including agent-based modeling, in the field of epidemiology have largely centered on the potential for such methods to examine complex disease etiologies, which are characterized by feedback behavior, interference, threshold dynamics, and multiple interacting causal effects. However, considerable theoretical and practical issues impede the capacity of agent-based methods to examine and evaluate causal effects and thus illuminate new areas for intervention. We build on this work by describing how agent-based models can be used to simulate counterfactual outcomes in the presence of complexity. We show that these models are of particular utility when the hypothesized causal mechanisms exhibit a high degree of interdependence between multiple causal effects and when interference (i.e., one person's exposure affects the outcome of others) is present and of intrinsic scientific interest. Although not without challenges, agent-based modeling (and complex systems methods broadly) represent a promising novel approach to identify and evaluate complex causal effects, and they are thus well suited to complement other modern epidemiologic methods of etiologic inquiry. PMID:25480821
Lo, Graciete; Tu, Ming; Wu, Olivia; Anglin, Deidre; Saw, Anne; Chen, Fang-pei
2016-01-01
Encounters with Western psychiatric treatment and acculturation may influence causal beliefs of psychiatric illness endorsed by Chinese immigrant relatives, thus affecting help-seeking. We examined causal beliefs held by forty-six Chinese immigrant relatives and found that greater acculturation was associated with an increased number of causal beliefs. Further, as Western psychiatric treatment and acculturation increased, causal models expanded to incorporate biological/physical causes. However, frequency of Chinese immigrant relatives' endorsing spiritual beliefs did not appear to change with acculturation. Clinicians might thus account for spiritual beliefs in treatment even after acculturation increases and biological causal models proliferate. PMID:27127454
Causal learning and inference as a rational process: the new synthesis.
Holyoak, Keith J; Cheng, Patricia W
2011-01-01
Over the past decade, an active line of research within the field of human causal learning and inference has converged on a general representational framework: causal models integrated with bayesian probabilistic inference. We describe this new synthesis, which views causal learning and inference as a fundamentally rational process, and review a sample of the empirical findings that support the causal framework over associative alternatives. Causal events, like all events in the distal world as opposed to our proximal perceptual input, are inherently unobservable. A central assumption of the causal approach is that humans (and potentially nonhuman animals) have been designed in such a way as to infer the most invariant causal relations for achieving their goals based on observed events. In contrast, the associative approach assumes that learners only acquire associations among important observed events, omitting the representation of the distal relations. By incorporating bayesian inference over distributions of causal strength and causal structures, along with noisy-logical (i.e., causal) functions for integrating the influences of multiple causes on a single effect, human judgments about causal strength and structure can be predicted accurately for relatively simple causal structures. Dynamic models of learning based on the causal framework can explain patterns of acquisition observed with serial presentation of contingency data and are consistent with available neuroimaging data. The approach has been extended to a diverse range of inductive tasks, including category-based and analogical inferences.
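A minimal sketch of the noisy-logical (noisy-OR) likelihood referred to above, in which each present cause independently fails to produce the effect with probability one minus its causal strength; the strengths and background rate below are illustrative values, not estimates from any study.

```python
def noisy_or(causes_present, strengths, background):
    """P(effect | causes) under a noisy-OR parameterization:
    each present cause i independently fails to produce the effect with
    probability (1 - w_i); the background cause is always present."""
    p_fail = 1.0 - background
    for present, w in zip(causes_present, strengths):
        if present:
            p_fail *= (1.0 - w)
    return 1.0 - p_fail

# One candidate cause with strength 0.6 and a background rate of 0.2:
print(noisy_or([1], [0.6], background=0.2))   # 0.68
print(noisy_or([0], [0.6], background=0.2))   # 0.20
```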
Causal reasoning with mental models
Khemlani, Sangeet S.; Barbey, Aron K.; Johnson-Laird, Philip N.
2014-01-01
This paper outlines the model-based theory of causal reasoning. It postulates that the core meanings of causal assertions are deterministic and refer to temporally-ordered sets of possibilities: A causes B to occur means that given A, B occurs, whereas A enables B to occur means that given A, it is possible for B to occur. The paper shows how mental models represent such assertions, and how these models underlie deductive, inductive, and abductive reasoning yielding explanations. It reviews evidence both to corroborate the theory and to account for phenomena sometimes taken to be incompatible with it. Finally, it reviews neuroscience evidence indicating that mental models for causal inference are implemented within lateral prefrontal cortex. PMID:25389398
Inference in the brain: Statistics flowing in redundant population codes
Pitkow, Xaq; Angelaki, Dora E
2017-01-01
It is widely believed that the brain performs approximate probabilistic inference to estimate causal variables in the world from ambiguous sensory data. To understand these computations, we need to analyze how information is represented and transformed by the actions of nonlinear recurrent neural networks. We propose that these probabilistic computations function by a message-passing algorithm operating at the level of redundant neural populations. To explain this framework, we review its underlying concepts, including graphical models, sufficient statistics, and message-passing, and then describe how these concepts could be implemented by recurrently connected probabilistic population codes. The relevant information flow in these networks will be most interpretable at the population level, particularly for redundant neural codes. We therefore outline a general approach to identify the essential features of a neural message-passing algorithm. Finally, we argue that to reveal the most important aspects of these neural computations, we must study large-scale activity patterns during moderately complex, naturalistic behaviors. PMID:28595050
NASA Astrophysics Data System (ADS)
Hargrave, C.; Moores, M.; Deegan, T.; Gibbs, A.; Poulsen, M.; Harden, F.; Mengersen, K.
2014-03-01
A decision-making framework for image-guided radiotherapy (IGRT) is being developed using a Bayesian Network (BN) to graphically describe, and probabilistically quantify, the many interacting factors that are involved in this complex clinical process. Outputs of the BN will provide decision-support for radiation therapists to assist them to make correct inferences relating to the likelihood of treatment delivery accuracy for a given image-guided set-up correction. The framework is being developed as a dynamic object-oriented BN, allowing for complex modelling with specific subregions, as well as representation of the sequential decision-making and belief updating associated with IGRT. A prototype graphic structure for the BN was developed by analysing IGRT practices at a local radiotherapy department and incorporating results obtained from a literature review. Clinical stakeholders reviewed the BN to validate its structure. The BN consists of a sub-network for evaluating the accuracy of IGRT practices and technology. The directed acyclic graph (DAG) contains nodes and directional arcs representing the causal relationship between the many interacting factors such as tumour site and its associated critical organs, technology and technique, and inter-user variability. The BN was extended to support on-line and off-line decision-making with respect to treatment plan compliance. Following conceptualisation of the framework, the BN will be quantified. It is anticipated that the finalised decision-making framework will provide a foundation to develop better decision-support strategies and automated correction algorithms for IGRT.
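A toy fragment of this kind of Bayesian network, showing belief updating by enumeration; the node names, structure, and conditional probabilities are invented for illustration and are not taken from the clinical model described above.

```python
from itertools import product

# Invented fragment in the spirit of the IGRT network:
#   SetupError -> ImageMatchGood,  SetupError -> DeliveryAccurate
p_error = {True: 0.2, False: 0.8}
p_match_good = {True: 0.3, False: 0.9}        # P(good match | setup error?)
p_accurate = {True: 0.6, False: 0.97}         # P(accurate delivery | setup error?)

def joint(error, match, accurate):
    pm = p_match_good[error] if match else 1 - p_match_good[error]
    pa = p_accurate[error] if accurate else 1 - p_accurate[error]
    return p_error[error] * pm * pa

# Belief updating: observing a good image match raises the probability of accurate delivery.
num = sum(joint(e, True, True) for e in (True, False))
den = sum(joint(e, True, a) for e, a in product((True, False), repeat=2))
print("P(accurate delivery | good image match) =", round(num / den, 3))   # ~0.94
```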
2010-09-01
achieved; the causal reasoning involved in understanding diseases such as AIDS, yellow fever, and cholera, and the causal reasoning in understanding a...and malaria, we could start to implement prevention strategies. Once we determined that contaminated water led to cholera, we could impose...sanitation measures to prevent further outbreaks. However, when dealing with indeterminate, multi-causal situations, the picture is not so easy. We may
Ma, Xiang; Schonfeld, Dan; Khokhar, Ashfaq A
2009-06-01
In this paper, we propose a novel solution to an arbitrary noncausal, multidimensional hidden Markov model (HMM) for image and video classification. First, we show that the noncausal model can be solved by splitting it into multiple causal HMMs and simultaneously solving each causal HMM using a fully synchronous distributed computing framework, therefore referred to as distributed HMMs. Next we present an approximate solution to the multiple causal HMMs that is based on an alternating updating scheme and assumes a realistic sequential computing framework. The parameters of the distributed causal HMMs are estimated by extending the classical 1-D training and classification algorithms to multiple dimensions. The proposed extension to arbitrary causal, multidimensional HMMs allows state transitions that are dependent on all causal neighbors. We, thus, extend three fundamental algorithms to multidimensional causal systems, i.e., 1) expectation-maximization (EM), 2) general forward-backward (GFB), and 3) Viterbi algorithms. In the simulations, we choose to limit ourselves to a noncausal 2-D model whose noncausality is along a single dimension, in order to significantly reduce the computational complexity. Simulation results demonstrate the superior performance, higher accuracy rate, and applicability of the proposed noncausal HMM framework to image and video classification.
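For context, the classical 1-D algorithms that the paper extends to multiple dimensions can be sketched in a few lines; the transition, emission, and initial probabilities below are arbitrary toy values.

```python
import numpy as np

# Minimal 1-D HMM illustrating the classical recursions the paper generalizes.
A = np.array([[0.7, 0.3],      # state transition matrix
              [0.4, 0.6]])
B = np.array([[0.9, 0.1],      # emission matrix: P(observation | state)
              [0.2, 0.8]])
pi = np.array([0.5, 0.5])      # initial state distribution
obs = [0, 1, 1, 0]             # observed symbol indices

def forward(obs, A, B, pi):
    """Sum-product (forward) recursion: returns P(observations)."""
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    return alpha.sum()

def viterbi(obs, A, B, pi):
    """Max-product recursion: returns the most likely state sequence."""
    delta = np.log(pi) + np.log(B[:, obs[0]])
    back = []
    for o in obs[1:]:
        scores = delta[:, None] + np.log(A)   # scores[i, j]: best path ending in j via i
        back.append(scores.argmax(axis=0))
        delta = scores.max(axis=0) + np.log(B[:, o])
    path = [int(delta.argmax())]
    for bp in reversed(back):
        path.append(int(bp[path[-1]]))
    return path[::-1]

print("P(obs) =", forward(obs, A, B, pi))
print("Most likely states:", viterbi(obs, A, B, pi))
```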
Teaching Real Business Cycles to Undergraduates
ERIC Educational Resources Information Center
Brevik, Frode; Gartner, Manfred
2007-01-01
The authors review the graphical approach to teaching the real business cycle model introduced in Barro. They then look at where this approach cuts corners and suggest refinements. Finally, they compare graphical and exact models by means of impulse-response functions. The graphical models yield reliable qualitative results. Sizable quantitative…
Graphics modelling of non-contact thickness measuring robotics work cell
NASA Technical Reports Server (NTRS)
Warren, Charles W.
1990-01-01
A system was developed for measuring, in real time, the thickness of a sprayable insulation during its application. The system was graphically modelled, off-line, using a state-of-the-art graphics workstation and associated software. This model was to contain a 3D color model of a workcell containing a robot and an air-bearing turntable. A communication link was established between the graphics workstation and the robot's controller. Sequences of robot motion generated by the computer simulation are transmitted to the robot for execution.
A General Approach to Causal Mediation Analysis
ERIC Educational Resources Information Center
Imai, Kosuke; Keele, Luke; Tingley, Dustin
2010-01-01
Traditionally in the social sciences, causal mediation analysis has been formulated, understood, and implemented within the framework of linear structural equation models. We argue and demonstrate that this is problematic for 3 reasons: the lack of a general definition of causal mediation effects independent of a particular statistical model, the…
Causal Agency Theory: Reconceptualizing a Functional Model of Self-Determination
ERIC Educational Resources Information Center
Shogren, Karrie A.; Wehmeyer, Michael L.; Palmer, Susan B.; Forber-Pratt, Anjali J.; Little, Todd J.; Lopez, Shane
2015-01-01
This paper introduces Causal Agency Theory, an extension of the functional model of self-determination. Causal Agency Theory addresses the need for interventions and assessments pertaining to self-determination for all students and incorporates the significant advances in understanding of disability and in the field of positive psychology since the…
Inductive reasoning about causally transmitted properties.
Shafto, Patrick; Kemp, Charles; Bonawitz, Elizabeth Baraff; Coley, John D; Tenenbaum, Joshua B
2008-11-01
Different intuitive theories constrain and guide inferences in different contexts. Formalizing simple intuitive theories as probabilistic processes operating over structured representations, we present a new computational model of category-based induction about causally transmitted properties. A first experiment demonstrates undergraduates' context-sensitive use of taxonomic and food web knowledge to guide reasoning about causal transmission and shows good qualitative agreement between model predictions and human inferences. A second experiment demonstrates strong quantitative and qualitative fits to inferences about a more complex artificial food web. A third experiment investigates human reasoning about complex novel food webs where species have known taxonomic relations. Results demonstrate a double-dissociation between the predictions of our causal model and a related taxonomic model [Kemp, C., & Tenenbaum, J. B. (2003). Learning domain structures. In Proceedings of the 25th annual conference of the cognitive science society]: the causal model predicts human inferences about diseases but not genes, while the taxonomic model predicts human inferences about genes but not diseases. We contrast our framework with previous models of category-based induction and previous formal instantiations of intuitive theories, and outline challenges in developing a complete model of context-sensitive reasoning.
Nonlinear parametric model for Granger causality of time series
NASA Astrophysics Data System (ADS)
Marinazzo, Daniele; Pellicoro, Mario; Stramaglia, Sebastiano
2006-06-01
The notion of Granger causality between two time series examines if the prediction of one series could be improved by incorporating information of the other. In particular, if the prediction error of the first time series is reduced by including measurements from the second time series, then the second time series is said to have a causal influence on the first one. We propose a radial basis function approach to nonlinear Granger causality. The proposed model is not constrained to be additive in variables from the two time series and can approximate any function of these variables, still being suitable to evaluate causality. Usefulness of this measure of causality is shown in two applications. In the first application, a physiological one, we consider time series of heart rate and blood pressure in congestive heart failure patients and patients affected by sepsis: we find that sepsis patients, unlike congestive heart failure patients, show symmetric causal relationships between the two time series. In the second application, we consider the feedback loop in a model of excitatory and inhibitory neurons: we find that in this system causality measures the combined influence of couplings and membrane time constants.
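A rough sketch of the predictability-improvement idea, using kernel ridge regression with an RBF kernel as a stand-in for the authors' radial basis function model; the simulated series, kernel parameters, and train/test split are arbitrary.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(3)
T = 2000

# Toy pair of series: y depends nonlinearly on lagged x, but not vice versa.
x = rng.normal(size=T)
y = np.tanh(np.roll(x, 1)) + 0.5 * rng.normal(size=T)

def prediction_error(target, predictors):
    """Mean squared one-step prediction error of an RBF regression."""
    model = KernelRidge(kernel="rbf", alpha=1.0, gamma=0.5)
    split = len(target) // 2
    model.fit(predictors[:split], target[:split])
    resid = target[split:] - model.predict(predictors[split:])
    return np.mean(resid ** 2)

y_t   = y[1:]
y_lag = y[:-1, None]
x_lag = x[:-1, None]

err_restricted = prediction_error(y_t, y_lag)                         # past of y only
err_full = prediction_error(y_t, np.hstack([y_lag, x_lag]))           # add past of x
print("x Granger-causes y?", err_full < err_restricted)
```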
Porta, Alberto; Bassani, Tito; Bari, Vlasta; Pinna, Gian D; Maestri, Roberto; Guzzetti, Stefano
2012-03-01
This study was designed to demonstrate the need of accounting for respiration (R) when causality between heart period (HP) and systolic arterial pressure (SAP) is under scrutiny. Simulations generated according to a bivariate autoregressive closed-loop model were utilized to assess how causality changes as a function of the model parameters. An exogenous (X) signal was added to the bivariate autoregressive closed-loop model to evaluate the bias on causality induced when the X source was disregarded. Causality was assessed in the time domain according to a predictability improvement approach (i.e., Granger causality). HP and SAP variability series were recorded with R in 19 healthy subjects during spontaneous and controlled breathing at 10, 15, and 20 breaths/min. Simulations proved the importance of accounting for X signals. During spontaneous breathing, assessing causality without taking into consideration R leads to a significantly larger percentage of closed-loop interactions and a smaller fraction of unidirectional causality from HP to SAP. This finding was confirmed during paced breathing and it was independent of the breathing rate. These results suggest that the role of baroreflex cannot be correctly assessed without accounting for R.
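The point about exogenous signals can be illustrated with a toy example in which a common respiratory-like driver creates a spurious SAP-to-HP link that largely disappears once the driver is included in the regression. The simulated series and lag structure are invented and far simpler than the paper's bivariate closed-loop model.

```python
import numpy as np

rng = np.random.default_rng(6)
T = 3000

# Toy series: a slow respiratory-like driver R feeds both HP and SAP,
# with no direct SAP -> HP link.
R = np.sin(np.arange(T) / 5.0) + 0.3 * rng.normal(size=T)
SAP = 0.8 * np.roll(R, 1) + 0.5 * rng.normal(size=T)
HP = 0.8 * np.roll(R, 1) + 0.5 * rng.normal(size=T)

def residual_variance(target, lagged_blocks):
    """Residual variance of an OLS prediction of target from the given lagged regressors."""
    X = np.column_stack([np.ones(len(target))] + lagged_blocks)
    beta, *_ = np.linalg.lstsq(X, target, rcond=None)
    return np.var(target - X @ beta)

hp, hp1, sap1, r1 = HP[1:], HP[:-1], SAP[:-1], R[:-1]

# Bivariate test (R ignored): SAP appears to "cause" HP.
gc_biv = np.log(residual_variance(hp, [hp1]) / residual_variance(hp, [hp1, sap1]))
# Trivariate test (R included): the spurious link largely vanishes.
gc_tri = np.log(residual_variance(hp, [hp1, r1]) / residual_variance(hp, [hp1, sap1, r1]))
print(f"GC SAP->HP without R: {gc_biv:.3f}, with R: {gc_tri:.3f}")
```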
Inam, Azhar; Adamowski, Jan; Halbe, Johannes; Prasher, Shiv
2015-04-01
Over the course of the last twenty years, participatory modeling has increasingly been advocated as an integral component of integrated, adaptive, and collaborative water resources management. However, issues of high cost, time, and expertise are significant hurdles to the widespread adoption of participatory modeling in many developing countries. In this study, a step-wise method to initialize the involvement of key stakeholders in the development of qualitative system dynamics models (i.e. causal loop diagrams) is presented. The proposed approach is designed to overcome the challenges of low expertise, time and financial resources that have hampered previous participatory modeling efforts in developing countries. The methodological framework was applied in a case study of soil salinity management in the Rechna Doab region of Pakistan, with a focus on the application of qualitative modeling through stakeholder-built causal loop diagrams to address soil salinity problems in the basin. Individual causal loop diagrams were developed by key stakeholder groups, following which an overall group causal loop diagram of the entire system was built based on the individual causal loop diagrams to form a holistic qualitative model of the whole system. The case study demonstrates the usefulness of the proposed approach, based on using causal loop diagrams in initiating stakeholder involvement in the participatory model building process. In addition, the results point to social-economic aspects of soil salinity that have not been considered by other modeling studies to date. Copyright © 2015 Elsevier Ltd. All rights reserved.
Understanding of Relation Structures of Graphical Models by Lower Secondary Students
ERIC Educational Resources Information Center
van Buuren, Onne; Heck, André; Ellermeijer, Ton
2016-01-01
A learning path has been developed on system dynamical graphical modelling, integrated into the Dutch lower secondary physics curriculum. As part of the developmental research for this learning path, students' understanding of the relation structures shown in the diagrams of graphical system dynamics based models has been investigated. One of our…
Parametric inference for biological sequence analysis.
Pachter, Lior; Sturmfels, Bernd
2004-11-16
One of the major successes in computational biology has been the unification, by using the graphical model formalism, of a multitude of algorithms for annotating and comparing biological sequences. Graphical models that have been applied to these problems include hidden Markov models for annotation, tree models for phylogenetics, and pair hidden Markov models for alignment. A single algorithm, the sum-product algorithm, solves many of the inference problems that are associated with different statistical models. This article introduces the polytope propagation algorithm for computing the Newton polytope of an observation from a graphical model. This algorithm is a geometric version of the sum-product algorithm and is used to analyze the parametric behavior of maximum a posteriori inference calculations for graphical models.
Causal Discovery of Dynamic Systems
ERIC Educational Resources Information Center
Voortman, Mark
2010-01-01
Recently, several philosophical and computational approaches to causality have used an interventionist framework to clarify the concept of causality [Spirtes et al., 2000, Pearl, 2000, Woodward, 2005]. The characteristic feature of the interventionist approach is that causal models are potentially useful in predicting the effects of manipulations.…
Markland, D; Hardy, L
1997-03-01
The Intrinsic Motivation Inventory (IMI) has been gaining acceptance in the sport and exercise domain since the publication of research by McAuley, Duncan, and Tammen (1989) and McAuley, Wraith, and Duncan (1991), which reported confirmatory support for the factorial validity of a hierarchical model of intrinsic motivation. Authors of the present study argue that the results of these studies did not conclusively support the hierarchical model and that the model did not accurately reflect the tenets of cognitive evaluation theory (Deci & Ryan, 1985) from which the IMI is drawn. It is also argued that a measure of perceived locus of causality is required to model intrinsic motivation properly. The development of a perceived locus of causality for exercise scale is described, and alternative models, in which perceived competence and perceived locus of causality are held to have causal influences on intrinsic motivation, are compared with an oblique confirmatory factor analytic model in which the constructs are held at the same conceptual level. Structural equation modeling showed support for a causal model in which perceived locus of causality mediates the effects of perceived competence on pressure-tension, interest-enjoyment, and effort-importance. It is argued that conceptual and operational problems with the IMI, as currently used, should be addressed before it becomes established as the instrument of choice for assessing levels of intrinsic motivation.
Estimators for Clustered Education RCTs Using the Neyman Model for Causal Inference
ERIC Educational Resources Information Center
Schochet, Peter Z.
2013-01-01
This article examines the estimation of two-stage clustered designs for education randomized control trials (RCTs) using the nonparametric Neyman causal inference framework that underlies experiments. The key distinction between the considered causal models is whether potential treatment and control group outcomes are considered to be fixed for…
Compact Representations of Extended Causal Models
ERIC Educational Resources Information Center
Halpern, Joseph Y.; Hitchcock, Christopher
2013-01-01
Judea Pearl (2000) was the first to propose a definition of actual causation using causal models. A number of authors have suggested that an adequate account of actual causation must appeal not only to causal structure but also to considerations of "normality." In Halpern and Hitchcock (2011), we offer a definition of actual causation…
A Graphical Analysis of the Cournot-Nash and Stackelberg Models.
ERIC Educational Resources Information Center
Fulton, Murray
1997-01-01
Shows how the Cournot-Nash and Stackelberg equilibria can be represented in the familiar supply-demand graphical framework, allowing a direct comparison with the monopoly, competitive, and industrial organization models. This graphical analysis is represented throughout the article. (MJP)
Structural Equations and Causal Explanations: Some Challenges for Causal SEM
ERIC Educational Resources Information Center
Markus, Keith A.
2010-01-01
One common application of structural equation modeling (SEM) involves expressing and empirically investigating causal explanations. Nonetheless, several aspects of causal explanation that have an impact on behavioral science methodology remain poorly understood. It remains unclear whether applications of SEM should attempt to provide complete…
Graphical workstation capability for reliability modeling
NASA Technical Reports Server (NTRS)
Bavuso, Salvatore J.; Koppen, Sandra V.; Haley, Pamela J.
1992-01-01
In addition to computational capabilities, software tools for estimating the reliability of fault-tolerant digital computer systems must also provide a means of interfacing with the user. Described here is the new graphical interface capability of the hybrid automated reliability predictor (HARP), a software package that implements advanced reliability modeling techniques. The graphics-oriented (GO) module provides the user with a graphical language for modeling system failure modes through the selection of various fault-tree gates, including sequence-dependency gates, or by a Markov chain. By using this graphical input language, a fault tree becomes a convenient notation for describing a system. In accounting for any sequence dependencies, HARP converts the fault-tree notation to a complex stochastic process that is reduced to a Markov chain, which it can then solve for system reliability. The graphics capability is available for use on an IBM-compatible PC, a Sun, and a VAX workstation. The GO module is written in the C programming language and uses the Graphical Kernel System (GKS) standard for graphics implementation. The PC, VAX, and Sun versions of the HARP GO module are currently in beta-testing stages.
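As a minimal illustration of solving a Markov chain for system reliability (the final step HARP performs after converting the fault tree), the sketch below computes the transient state probabilities of a toy two-unit redundant system; the failure and repair rates are invented.

```python
import numpy as np
from scipy.linalg import expm

# States of a 1-out-of-2 redundant system: 0 = both units up, 1 = one up, 2 = failed.
lam = 1e-3          # per-hour failure rate of a single unit (illustrative)
mu = 1e-2           # per-hour repair rate (illustrative)
Q = np.array([
    [-2 * lam,     2 * lam,  0.0],
    [      mu, -(mu + lam),  lam],
    [     0.0,         0.0,  0.0],   # state 2 is absorbing (system failure)
])

p0 = np.array([1.0, 0.0, 0.0])       # start with both units working
t = 1000.0                            # mission time in hours
p_t = p0 @ expm(Q * t)               # transient solution of the Markov chain
print("Reliability at t =", t, "hours:", p_t[:2].sum())
```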
Paradoxical Behavior of Granger Causality
NASA Astrophysics Data System (ADS)
Witt, Annette; Battaglia, Demian; Gail, Alexander
2013-03-01
Granger causality is a standard tool for the description of directed interaction of network components and is popular in many scientific fields including econometrics, neuroscience and climate science. For time series that can be modeled as bivariate auto-regressive processes, we analytically derive an expression for spectrally decomposed Granger Causality (SDGC) and show that this quantity depends only on two out of four groups of model parameters. We then present examples of such processes whose SDGC exhibits paradoxical behavior, in the sense that causality is high for frequency ranges with low spectral power. To avoid misinterpretations of Granger causality analysis, we propose complementing it with partial spectral analysis. Our findings are illustrated by an example from brain electrophysiology. Finally, we draw implications for the conventional definition of Granger causality.
Health and Wealth of Elderly Couples: Causality Tests Using Dynamic Panel Data Models*
Michaud, Pierre-Carl; van Soest, Arthur
2010-01-01
A positive relationship between socio-economic status (SES) and health, the “health-wealth gradient”, is repeatedly found in many industrialized countries. This study analyzes competing explanations for this gradient: causal effects from health to wealth (health causation) and causal effects from wealth to health (wealth or social causation). Using six biennial waves of couples aged 51–61 in 1992 from the U.S. Health and Retirement Study, we test for causality in panel data models incorporating unobserved heterogeneity and a lag structure supported by specification tests. In contrast to tests relying on models with only first order lags or without unobserved heterogeneity, these tests provide no evidence of causal effects of wealth on health. On the other hand, we find strong evidence of causal effects of both spouses’ health on household wealth. We also find an effect of the husband’s health on the wife’s mental health, but no other effects from one spouse’s health to the health of the other spouse. PMID:18513809
Expectations and Interpretations during Causal Learning
ERIC Educational Resources Information Center
Luhmann, Christian C.; Ahn, Woo-kyoung
2011-01-01
In existing models of causal induction, 4 types of covariation information (i.e., presence/absence of an event followed by presence/absence of another event) always exert identical influences on causal strength judgments (e.g., joint presence of events always suggests a generative causal relationship). In contrast, we suggest that, due to…
Representing Personal Determinants in Causal Structures.
ERIC Educational Resources Information Center
Bandura, Albert
1984-01-01
Responds to Staddon's critique of the author's earlier article and addresses issues raised by Staddon's (1984) alternative models of causality. The author argues that it is not the formalizability of causal processes that is the issue but whether cognitive determinants of behavior are reducible to past stimulus inputs in causal structures.…
Structural equation modeling: building and evaluating causal models: Chapter 8
Grace, James B.; Scheiner, Samuel M.; Schoolmaster, Donald R.
2015-01-01
Scientists frequently wish to study hypotheses about causal relationships, rather than just statistical associations. This chapter addresses the question of how scientists might approach this ambitious task. Here we describe structural equation modeling (SEM), a general modeling framework for the study of causal hypotheses. Our goals are to (a) concisely describe the methodology, (b) illustrate its utility for investigating ecological systems, and (c) provide guidance for its application. Throughout our presentation, we rely on a study of the effects of human activities on wetland ecosystems to make our description of methodology more tangible. We begin by presenting the fundamental principles of SEM, including both its distinguishing characteristics and the requirements for modeling hypotheses about causal networks. We then illustrate SEM procedures and offer guidelines for conducting SEM analyses. Our focus in this presentation is on basic modeling objectives and core techniques. Pointers to additional modeling options are also given.
Propensity score method: a non-parametric technique to reduce model dependence
2017-01-01
Propensity score analysis (PSA) is a powerful technique that balances pretreatment covariates, making causal effect inference from observational data as reliable as possible. The use of PSA in the medical literature has increased exponentially in recent years, and the trend continues to rise. This article introduces the rationale behind PSA and then illustrates how to perform PSA in R with the MatchIt package. There are a variety of methods available for PS matching, such as nearest neighbors, full matching, exact matching and genetic matching. The task can be easily done by simply assigning a string value to the method argument in the matchit() function. The generic summary() and plot() functions can be applied to an object of class matchit to check covariate balance after matching. Furthermore, there is a useful package, PSAgraphics, that contains several graphical functions to check covariate balance between treatment groups across strata. If covariate balance is not achieved, one can modify the model specification or use other techniques such as random forest and recursive partitioning to better represent the underlying structure between pretreatment covariates and treatment assignment. The process can be repeated until the desired covariate balance is achieved. PMID:28164092
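The abstract describes the R MatchIt workflow; as a language-neutral illustration of the same idea, here is a rough Python sketch of propensity score estimation followed by 1-to-1 nearest-neighbour matching. The simulated data and the true treatment effect of 2 are hypothetical, and the sketch omits the balance checking the article emphasizes.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(4)
n = 1000

# Hypothetical observational data: two covariates drive both treatment and outcome.
X = rng.normal(size=(n, 2))
p_treat = 1 / (1 + np.exp(-(0.8 * X[:, 0] - 0.5 * X[:, 1])))
treated = rng.binomial(1, p_treat).astype(bool)
outcome = 2.0 * treated + X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n)  # true effect = 2

# Step 1: estimate propensity scores with a logistic model.
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# Step 2: 1-to-1 nearest-neighbour matching on the propensity score.
nn = NearestNeighbors(n_neighbors=1).fit(ps[~treated].reshape(-1, 1))
_, idx = nn.kneighbors(ps[treated].reshape(-1, 1))
matched_controls = outcome[~treated][idx.ravel()]

att = np.mean(outcome[treated] - matched_controls)
print("Estimated treatment effect on the treated:", round(att, 2))
```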
Data Analysis with Graphical Models: Software Tools
NASA Technical Reports Server (NTRS)
Buntine, Wray L.
1994-01-01
Probabilistic graphical models (directed and undirected Markov fields, and combined in chain graphs) are used widely in expert systems, image processing and other areas as a framework for representing and reasoning with probabilities. They come with corresponding algorithms for performing probabilistic inference. This paper discusses an extension to these models by Spiegelhalter and Gilks, plates, used to graphically model the notion of a sample. This offers a graphical specification language for representing data analysis problems. When combined with general methods for statistical inference, this also offers a unifying framework for prototyping and/or generating data analysis algorithms from graphical specifications. This paper outlines the framework and then presents some basic tools for the task: a graphical version of the Pitman-Koopman Theorem for the exponential family, problem decomposition, and the calculation of exact Bayes factors. Other tools already developed, such as automatic differentiation, Gibbs sampling, and use of the EM algorithm, make this a broad basis for the generation of data analysis software.
A graphical language for reliability model generation
NASA Technical Reports Server (NTRS)
Howell, Sandra V.; Bavuso, Salvatore J.; Haley, Pamela J.
1990-01-01
A graphical interface capability of the hybrid automated reliability predictor (HARP) is described. The graphics-oriented (GO) module provides the user with a graphical language for modeling system failure modes through the selection of various fault tree gates, including sequence dependency gates, or by a Markov chain. With this graphical input language, a fault tree becomes a convenient notation for describing a system. In accounting for any sequence dependencies, HARP converts the fault-tree notation to a complex stochastic process that is reduced to a Markov chain which it can then solve for system reliability. The graphics capability is available for use on an IBM-compatible PC, a Sun, and a VAX workstation. The GO module is written in the C programming language and uses the Graphical Kernel System (GKS) standard for graphics implementation. The PC, VAX, and Sun versions of the HARP GO module are currently in beta-testing.
Causal modeling in international migration research: a methodological prolegomenon.
Papademetriou, D G; Hopple, G W
1982-10-01
The authors examine the value of using models to study the migration process. In particular, they demonstrate the potential utility of a partial least squares modeling approach to the causal analysis of international migration.
Interactive graphic editing tools in bioluminescent imaging simulation
NASA Astrophysics Data System (ADS)
Li, Hui; Tian, Jie; Luo, Jie; Wang, Ge; Cong, Wenxiang
2005-04-01
It is a challenging task to accurately describe complicated biological tissues and bioluminescent sources in bioluminescent imaging simulation. Several graphic editing tools have been developed to efficiently model each part of the bioluminescent simulation environment and to interactively correct or improve the initial models of anatomical structures or bioluminescent sources. There are two major types of graphic editing tools: non-interactive tools and interactive tools. Geometric building blocks (i.e. regular geometric graphics and superquadrics) are applied as non-interactive tools. To a certain extent, complicated anatomical structures and bioluminescent sources can be approximately modeled by combining a sufficient large number of geometric building blocks with Boolean operators. However, those models are too simple to describe the local features and fine changes in 2D/3D irregular contours. Therefore, interactive graphic editing tools have been developed to facilitate the local modifications of any initial surface model. With initial models composed of geometric building blocks, interactive spline mode is applied to conveniently perform dragging and compressing operations on 2D/3D local surface of biological tissues and bioluminescent sources inside the region/volume of interest. Several applications of the interactive graphic editing tools will be presented in this article.
SIGMA--A Graphical Approach to Teaching Simulation.
ERIC Educational Resources Information Center
Schruben, Lee W.
1992-01-01
SIGMA (Simulation Graphical Modeling and Analysis) is a computer graphics environment for building, testing, and experimenting with discrete event simulation models on personal computers. It uses symbolic representations (computer animation) to depict the logic of large, complex discrete event systems for easier understanding and has proven itself…
White, Peter A
2009-06-01
Contingency information is information about empirical associations between possible causes and outcomes. In the present research, it is shown that, under some circumstances, there is a tendency for negative contingencies to lead to positive causal judgments and for positive contingencies to lead to negative causal judgments. If there is a high proportion of instances in which a candidate cause (CC) being judged is present, these tendencies are predicted by weighted averaging models of causal judgment. If the proportion of such instances is low, the predictions of weighted averaging models break down. It is argued that one of the main aims of causal judgment is to account for occurrences of the outcome. Thus, a CC is not given a high causal judgment if there are few or no occurrences of it, regardless of the objective contingency. This argument predicts that, if there is a low proportion of instances in which a CC is present, causal judgments are determined mainly by the number of Cell A instances (i.e., CC present, outcome occurs), and that this explains why weighted averaging models fail to predict judgmental tendencies under these circumstances. Experimental results support this argument.
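A small sketch of the quantities involved: the objective contingency ΔP computed from the four cell frequencies, and a generic weighted-averaging judgment rule with purely illustrative weights, showing how a negative contingency with many Cell A instances can still yield a positive judgment.

```python
# 2x2 contingency cells: A = cause present & effect occurs, B = cause present & no effect,
# C = cause absent & effect occurs, D = cause absent & no effect.
def delta_p(a, b, c, d):
    """Objective contingency: P(effect | cause) - P(effect | no cause)."""
    return a / (a + b) - c / (c + d)

def weighted_average_judgment(a, b, c, d, w=(1.0, -0.6, -0.4, 0.2)):
    """A generic weighted-averaging rule: cells A and D count as confirming,
    B and C as disconfirming; the weights here are purely illustrative."""
    wa, wb, wc, wd = w
    return (wa * a + wb * b + wc * c + wd * d) / (a + b + c + d)

# A negative contingency with many Cell A instances (cause almost always present):
a, b, c, d = 30, 10, 4, 0
print("delta P =", round(delta_p(a, b, c, d), 2))                                  # negative
print("weighted-average judgment =", round(weighted_average_judgment(a, b, c, d), 2))  # positive
```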
Formalizing the role of agent-based modeling in causal inference and epidemiology.
Marshall, Brandon D L; Galea, Sandro
2015-01-15
Calls for the adoption of complex systems approaches, including agent-based modeling, in the field of epidemiology have largely centered on the potential for such methods to examine complex disease etiologies, which are characterized by feedback behavior, interference, threshold dynamics, and multiple interacting causal effects. However, considerable theoretical and practical issues impede the capacity of agent-based methods to examine and evaluate causal effects and thus illuminate new areas for intervention. We build on this work by describing how agent-based models can be used to simulate counterfactual outcomes in the presence of complexity. We show that these models are of particular utility when the hypothesized causal mechanisms exhibit a high degree of interdependence between multiple causal effects and when interference (i.e., one person's exposure affects the outcome of others) is present and of intrinsic scientific interest. Although not without challenges, agent-based modeling (and complex systems methods broadly) represent a promising novel approach to identify and evaluate complex causal effects, and they are thus well suited to complement other modern epidemiologic methods of etiologic inquiry. © The Author 2014. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Dynamic Granger-Geweke causality modeling with application to interictal spike propagation
Lin, Fa-Hsuan; Hara, Keiko; Solo, Victor; Vangel, Mark; Belliveau, John W.; Stufflebeam, Steven M.; Hamalainen, Matti S.
2010-01-01
A persistent problem in developing plausible neurophysiological models of perception, cognition, and action is the difficulty of characterizing the interactions between different neural systems. Previous studies have approached this problem by estimating causal influences across brain areas activated during cognitive processing using Structural Equation Modeling and, more recently, with Granger-Geweke causality. While SEM is complicated by the need for a priori directional connectivity information, the temporal resolution of dynamic Granger-Geweke estimates is limited because the underlying autoregressive (AR) models assume stationarity over the period of analysis. We have developed a novel optimal method for obtaining data-driven directional causality estimates with high temporal resolution in both time and frequency domains. This is achieved by simultaneously optimizing the length of the analysis window and the chosen AR model order using the SURE criterion. Dynamic Granger-Geweke causality in time and frequency domains is subsequently calculated within a moving analysis window. We tested our algorithm by calculating the Granger-Geweke causality of epileptic spike propagation from the right frontal lobe to the left frontal lobe. The results quantitatively suggested the epileptic activity at the left frontal lobe was propagated from the right frontal lobe, in agreement with the clinical diagnosis. Our novel computational tool can be used to help elucidate complex directional interactions in the human brain. PMID:19378280
Normative and descriptive accounts of the influence of power and contingency on causal judgement.
Perales, José C; Shanks, David R
2003-08-01
The power PC theory (Cheng, 1997) is a normative account of causal inference, which predicts that causal judgements are based on the power p of a potential cause, where p is the cause-effect contingency normalized by the base rate of the effect. In three experiments we demonstrate that both cause-effect contingency and effect base-rate independently affect estimates in causal learning tasks. In Experiment 1, causal strength judgements were directly related to power p in a task in which the effect base-rate was manipulated across two positive and two negative contingency conditions. In Experiments 2 and 3 contingency manipulations affected causal estimates in several situations in which power p was held constant, contrary to the power PC theory's predictions. This latter effect cannot be explained by participants' conflation of reliability and causal strength, as Experiment 3 demonstrated independence of causal judgements and confidence. From a descriptive point of view, the data are compatible with Pearce's (1987) model, as well as with several other judgement rules, but not with the Rescorla-Wagner (Rescorla & Wagner, 1972) or power PC models.
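The power p referred to above can be written as ΔP normalized by the complement of the effect base rate (for generative causes). A two-line sketch with illustrative probabilities shows how the same contingency yields different power under different base rates:

```python
def causal_power(p_e_given_c, p_e_given_not_c):
    """Generative causal power from the power PC theory:
    p = (P(e|c) - P(e|not c)) / (1 - P(e|not c))."""
    delta_p = p_e_given_c - p_e_given_not_c
    return delta_p / (1.0 - p_e_given_not_c)

# Same contingency (delta P = 0.25) under two effect base rates:
print(causal_power(0.50, 0.25))   # base rate 0.25 -> power = 0.333...
print(causal_power(1.00, 0.75))   # base rate 0.75 -> power = 1.0
```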
A Technique for Showing Causal Arguments in Accident Reports
NASA Technical Reports Server (NTRS)
Holloway, C. M.; Johnson, C. W.
2005-01-01
In the prototypical accident report, specific findings, particularly those related to causes and contributing factors, are usually written out explicitly and clearly. Also, the evidence upon which these findings are based is typically explained in detail. Often lacking, however, is any explicit discussion, description, or depiction of the arguments that connect the findings and the evidence. That is, the reports do not make clear why the investigators believe that the specific evidence they found necessarily leads to the particular findings they enumerated. This paper shows how graphical techniques can be used to depict relevant arguments supporting alternate positions on the causes of a complex road-traffic accident.
Causal Indicators Can Help to Interpret Factors
ERIC Educational Resources Information Center
Bentler, Peter M.
2016-01-01
The latent factor in a causal indicator model is no more than the latent factor of the factor part of the model. However, if the causal indicator variables are well-understood and help to improve the prediction of individuals' factor scores, they can help to interpret the meaning of the latent factor. Aguirre-Urreta, Rönkkö, and Marakas (2016)…
A Causal Inference Analysis of the Effect of Wildland Fire ...
Wildfire smoke is a major contributor to ambient air pollution levels. In this talk, we develop a spatio-temporal model to estimate the contribution of fire smoke to overall air pollution in different regions of the country. We combine numerical model output with observational data within a causal inference framework. Our methods account for aggregation and potential bias of the numerical model simulation, and address uncertainty in the causal estimates. We apply the proposed method to estimation of ozone and fine particulate matter from wildland fires and the impact on health burden assessment. We develop a causal inference framework to assess contributions of fire to ambient PM in the presence of spatial interference.
A Guide to the Literature on Learning Graphical Models
NASA Technical Reports Server (NTRS)
Buntine, Wray L.; Friedland, Peter (Technical Monitor)
1994-01-01
This literature review discusses different methods under the general rubric of learning Bayesian networks from data, and more generally, learning probabilistic graphical models. Because many problems in artificial intelligence, statistics and neural networks can be represented as a probabilistic graphical model, this area provides a unifying perspective on learning. This paper organizes the research in this area along methodological lines of increasing complexity.
Robust Gaussian Graphical Modeling via l1 Penalization
Sun, Hokeun; Li, Hongzhe
2012-01-01
Gaussian graphical models have been widely used as an effective method for studying the conditional independency structure among genes and for constructing genetic networks. However, gene expression data typically have heavier tails or more outlying observations than the standard Gaussian distribution. Such outliers in gene expression data can lead to wrong inference on the dependency structure among the genes. We propose an l1 penalized estimation procedure for the sparse Gaussian graphical models that is robustified against possible outliers. The likelihood function is weighted according to how far each observation deviates, where the deviation is measured by the observation's own likelihood. An efficient computational algorithm based on the coordinate gradient descent method is developed to obtain the minimizer of the negative penalized robustified-likelihood, where nonzero elements of the concentration matrix represent the graphical links among the genes. After the graphical structure is obtained, we re-estimate the positive definite concentration matrix using an iterative proportional fitting algorithm. Through simulations, we demonstrate that the proposed robust method performs much better than the graphical Lasso for the Gaussian graphical models in terms of both graph structure selection and estimation when outliers are present. We apply the robust estimation procedure to an analysis of yeast gene expression data and show that the resulting graph has better biological interpretation than that obtained from the graphical Lasso. PMID:23020775
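For comparison with the robust procedure, the sketch below runs the standard (non-robust) graphical lasso on data simulated from a sparse precision matrix; the matrix, sample size, and regularization parameter are arbitrary, and the weighted robustified likelihood proposed in the paper is not implemented here.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(5)

# A sparse 4-gene precision (concentration) matrix: nonzero off-diagonals = graph edges.
precision = np.array([
    [1.0, 0.4, 0.0, 0.0],
    [0.4, 1.0, 0.3, 0.0],
    [0.0, 0.3, 1.0, 0.0],
    [0.0, 0.0, 0.0, 1.0],
])
cov = np.linalg.inv(precision)
X = rng.multivariate_normal(np.zeros(4), cov, size=500)

# Standard (non-robust) graphical lasso; the paper's point is that a weighted,
# robustified likelihood is needed when outliers are present.
model = GraphicalLasso(alpha=0.05).fit(X)
est_edges = (np.abs(model.precision_) > 0.05) & ~np.eye(4, dtype=bool)
print("Recovered edges:\n", est_edges.astype(int))
```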
Hu, Sanqing; Dai, Guojun; Worrell, Gregory A.; Dai, Qionghai; Liang, Hualou
2012-01-01
Granger causality (GC) is one of the most popular measures to reveal causality influence of time series and has been widely applied in economics and neuroscience. Especially, its counterpart in frequency domain, spectral GC, as well as other Granger-like causality measures have recently been applied to study causal interactions between brain areas in different frequency ranges during cognitive and perceptual tasks. In this paper, we show that: 1) GC in time domain cannot correctly determine how strongly one time series influences the other when there is directional causality between two time series, and 2) spectral GC and other Granger-like causality measures have inherent shortcomings and/or limitations because of the use of the transfer function (or its inverse matrix) and partial information of the linear regression model. On the other hand, we propose two novel causality measures (in time and frequency domains) for the linear regression model, called new causality and new spectral causality, respectively, which are more reasonable and understandable than GC or Granger-like measures. Especially, from one simple example, we point out that, in time domain, both new causality and GC adopt the concept of proportion, but they are defined on two different equations where one equation (for GC) is only part of the other (for new causality), thus the new causality is a natural extension of GC and has a sound conceptual/theoretical basis, and GC is not the desired causal influence at all. By several examples, we confirm that new causality measures have distinct advantages over GC or Granger-like measures. Finally, we conduct event-related potential causality analysis for a subject with intracranial depth electrodes undergoing evaluation for epilepsy surgery, and show that, in the frequency domain, all measures reveal significant directional event-related causality, but the result from new spectral causality is consistent with event-related time–frequency power spectrum activity. The spectral GC as well as other Granger-like measures are shown to generate misleading results. The proposed new causality measures may have wide potential applications in economics and neuroscience. PMID:21511564
Hybrid automated reliability predictor integrated work station (HiREL)
NASA Technical Reports Server (NTRS)
Bavuso, Salvatore J.
1991-01-01
The Hybrid Automated Reliability Predictor (HARP) integrated reliability (HiREL) workstation tool system marks another step toward the goal of producing a totally integrated computer aided design (CAD) workstation design capability. Since a reliability engineer must generally graphically represent a reliability model before he can solve it, the use of a graphical input description language increases productivity and decreases the incidence of error. The captured image displayed on a cathode ray tube (CRT) screen serves as a documented copy of the model and provides the data for automatic input to the HARP reliability model solver. The introduction of dependency gates to a fault tree notation allows the modeling of very large fault tolerant system models using a concise and visually recognizable and familiar graphical language. In addition to aiding in the validation of the reliability model, the concise graphical representation presents company management, regulatory agencies, and company customers a means of expressing a complex model that is readily understandable. The graphical postprocessor computer program HARPO (HARP Output) makes it possible for reliability engineers to quickly analyze huge amounts of reliability/availability data to observe trends due to exploratory design changes.
Omission of Causal Indicators: Consequences and Implications for Measurement
ERIC Educational Resources Information Center
Aguirre-Urreta, Miguel I.; Rönkkö, Mikko; Marakas, George M.
2016-01-01
One of the central assumptions of the causal-indicator literature is that all causal indicators must be included in the research model and that the exclusion of one or more relevant causal indicators would have severe negative consequences by altering the meaning of the latent variable. In this research we show that the omission of a relevant…
How to Be Causal: Time, Spacetime and Spectra
ERIC Educational Resources Information Center
Kinsler, Paul
2011-01-01
I explain a simple definition of causality in widespread use, and indicate how it links to the Kramers-Kronig relations. The specification of causality in terms of temporal differential equations then shows us the way to write down dynamical models so that their causal nature "in the sense used here" should be obvious to all. To extend existing…
2008-01-01
The causal feedback implied by urban neighborhood conditions that shape human health experiences, which in turn shape neighborhood conditions through a complex causal web, raises a challenge for traditional epidemiological causal analyses. This article introduces the loop analysis method and builds on a core loop model linking neighborhood property vacancy rate, resident depressive symptoms, rate of neighborhood death, and rate of neighborhood exit in a feedback network. I justify and apply loop analysis to the specific example of depressive symptoms and abandoned urban residential property to show how inquiries into the behavior of causal systems can answer different kinds of hypotheses, and thereby complement those of causal modeling using statistical models. Neighborhood physical conditions that are only indirectly influenced by depressive symptoms may nevertheless manifest in the mental health experiences of their residents; conversely, neighborhood physical conditions may be a significant mental health risk for the population of neighborhood residents. I find that participatory greenspace programs are likely to produce adaptive responses in depressive symptoms and different neighborhood conditions, responses that differ in character from those of non-participatory greenspace interventions. PMID:17706851
A Quantum Probability Model of Causal Reasoning
Trueblood, Jennifer S.; Busemeyer, Jerome R.
2012-01-01
People can often outperform statistical methods and machine learning algorithms in situations that involve making inferences about the relationship between causes and effects. While people are remarkably good at causal reasoning in many situations, there are several instances where they deviate from expected responses. This paper examines three situations where judgments related to causal inference problems produce unexpected results and describes a quantum inference model based on the axiomatic principles of quantum probability theory that can explain these effects. Two of the three phenomena arise from the comparison of predictive judgments (i.e., the conditional probability of an effect given a cause) with diagnostic judgments (i.e., the conditional probability of a cause given an effect). The third phenomenon is a new finding examining order effects in predictive causal judgments. The quantum inference model uses the notion of incompatibility among different causes to account for all three phenomena. Psychologically, the model assumes that individuals adopt different points of view when thinking about different causes. The model provides good fits to the data and offers a coherent account for all three causal reasoning effects, thus proving to be a viable new candidate for modeling human judgment. PMID:22593747
Reflections on Heckman and Pinto’s Causal Analysis After Haavelmo
2013-11-01
Judea Pearl, University of California, Los Angeles, Computer Science Department.
The development of causal reasoning.
Kuhn, Deanna
2012-05-01
How do inference rules for causal learning themselves change developmentally? A model of the development of causal reasoning must address this question, as well as specify the inference rules. Here, the evidence for developmental changes in processes of causal reasoning is reviewed, with the distinction made between diagnostic causal inference and causal prediction. Also addressed is the paradox of a causal reasoning literature that highlights the competencies of young children and the proneness to error among adults. WIREs Cogn Sci 2012, 3:327-335. doi: 10.1002/wcs.1160 For further resources related to this article, please visit the WIREs website. Copyright © 2012 John Wiley & Sons, Ltd.
Granger-causality maps of diffusion processes.
Wahl, Benjamin; Feudel, Ulrike; Hlinka, Jaroslav; Wächter, Matthias; Peinke, Joachim; Freund, Jan A
2016-02-01
Granger causality is a statistical concept devised to reconstruct and quantify predictive information flow between stochastic processes. Although the general concept can be formulated model-free, it is often considered in the framework of linear stochastic processes. Here we show how local linear model descriptions can be employed to extend Granger causality into the realm of nonlinear systems. This novel treatment results in maps that resolve Granger causality in regions of state space. Through examples we provide a proof of concept and illustrate the utility of these maps. Moreover, by integration we convert the local Granger causality into a global measure that yields a consistent picture for a global Ornstein-Uhlenbeck process. Finally, we recover invariance transformations known from the theory of autoregressive processes.
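As a rough illustration of the idea of resolving Granger causality in state space, the sketch below fits local linear next-step models in neighborhoods of chosen state-space points and compares restricted and full residual variances there. It is a crude stand-in for the authors' method, assuming simple nearest-neighbor local fits; the function names and the toy system are invented.

```python
import numpy as np

def local_granger_map(x, y, grid, k=100):
    """Crude map of local Granger causality x -> y over points of (x_t, y_t) state space.

    At each grid point, fit local linear next-step models for y using the k nearest
    samples, with and without past x as a predictor, and compare residual variances."""
    states = np.column_stack([x[:-1], y[:-1]])
    target = y[1:]
    gc = []
    for g in grid:
        idx = np.argsort(np.linalg.norm(states - g, axis=1))[:k]
        Xr = np.column_stack([np.ones(k), states[idx, 1]])                   # y past only
        Xf = np.column_stack([np.ones(k), states[idx, 1], states[idx, 0]])   # y and x past
        rss_r = np.sum((target[idx] - Xr @ np.linalg.lstsq(Xr, target[idx], rcond=None)[0]) ** 2)
        rss_f = np.sum((target[idx] - Xf @ np.linalg.lstsq(Xf, target[idx], rcond=None)[0]) ** 2)
        gc.append(np.log(rss_r / rss_f))
    return np.array(gc)

# toy coupled system: x drives y only when x is positive, so local GC varies over state space
rng = np.random.default_rng(6)
x = np.zeros(5000); y = np.zeros(5000)
for t in range(1, 5000):
    x[t] = 0.9 * x[t - 1] + rng.normal(scale=0.3)
    y[t] = 0.5 * y[t - 1] + 0.8 * max(x[t - 1], 0.0) + rng.normal(scale=0.3)
grid = [np.array([-1.0, 0.0]), np.array([1.0, 0.0])]
print(local_granger_map(x, y, grid).round(3))
```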
Tumor Secreted Autocrine Motility Factor (AMF): Causal Role in an Animal Model of Cachexia
2005-08-01
Award Number: DAMD17-02-1-0586. We proposed that autocrine motility factor (AMF) is released into the bloodstream from cancer sites and…
An interactive graphics system to facilitate finite element structural analysis
NASA Technical Reports Server (NTRS)
Burk, R. C.; Held, F. H.
1973-01-01
The characteristics of an interactive graphics systems to facilitate the finite element method of structural analysis are described. The finite element model analysis consists of three phases: (1) preprocessing (model generation), (2) problem solution, and (3) postprocessing (interpretation of results). The advantages of interactive graphics to finite element structural analysis are defined.
User-Extensible Graphics Using Abstract Structure,
1987-08-01
Contents include: the Algol68 model of the graphical abstract structure; the creation of a PictureDefinition; the making of a picture from a PictureDefinition. An abstract structure couples data together with the operations that can be performed on that data.
NASA Astrophysics Data System (ADS)
Vacik, Harald; Huber, Patrick; Hujala, Teppo; Kurtilla, Mikko; Wolfslehner, Bernhard
2015-04-01
It is an integral element of the European understanding of sustainable forest management to foster the design and marketing of forest products, non-wood forest products (NWFPs) and services that go beyond the production of timber. Despite the relevance of NWFPs in Europe, forest management and planning methods have traditionally been tailored towards wood and wood products, because most forest management models and silviculture techniques were developed to ensure a sustained production of timber. Although several approaches exist which explicitly consider NWFPs as management objectives in forest planning, specific models are needed to assess their production potential in different environmental contexts and under different management regimes. Empirical data supporting a comprehensive assessment of the potential of NWFPs are rare, making the development of statistical models particularly problematic. However, the complex causal relationships between the sustained production of NWFPs, the available ecological resources, and the organizational and market potential of forest management regimes are well suited for knowledge-based expert models. Bayesian belief networks (BBNs) are a kind of probabilistic graphical model that has become very popular among practitioners and scientists, mainly due to the powerful underlying probability theory, which makes BBNs suitable for dealing with a wide range of environmental problems. In this contribution we present the development of a Bayesian belief network to assess the potential of NWFPs for small-scale forest owners. A three-stage iterative process with stakeholder and expert participation was used to develop the Bayesian network within the frame of the StarTree Project. The group of participants varied across the stages of the modelling process. A core team, consisting of one technical expert and two domain experts, was responsible for the entire modelling process as well as for the first prototype of the network structure, including nodes and relationships. A top-level causal network was further decomposed into sub-level networks. Stakeholder participation, including a group of experts from different related subject areas, was used in model verification and validation. We demonstrate that BBNs can be used to transfer expert knowledge from science to practice and thus can contribute to an improved problem understanding among non-expert decision makers for a sustainable production of NWFPs.
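For readers unfamiliar with the formalism, a minimal Bayesian belief network can be specified and queried as below. This sketch assumes the pgmpy Python library (class names such as BayesianNetwork and TabularCPD, as in recent releases); the nodes and probabilities are invented for illustration and are not taken from the StarTree model.

```python
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# invented three-node network: site suitability and management effort -> NWFP yield potential
model = BayesianNetwork([("Suitability", "Yield"), ("Management", "Yield")])

cpd_suit = TabularCPD("Suitability", 2, [[0.6], [0.4]])   # P(low), P(high)
cpd_mgmt = TabularCPD("Management", 2, [[0.5], [0.5]])
cpd_yield = TabularCPD(
    "Yield", 2,
    # columns follow the evidence ordering: (Suitability, Management) = (0,0),(0,1),(1,0),(1,1)
    [[0.9, 0.7, 0.5, 0.2],    # P(Yield = low  | parents)
     [0.1, 0.3, 0.5, 0.8]],   # P(Yield = high | parents)
    evidence=["Suitability", "Management"], evidence_card=[2, 2],
)

model.add_cpds(cpd_suit, cpd_mgmt, cpd_yield)
assert model.check_model()

# query the yield potential given expert-elicited evidence
posterior = VariableElimination(model).query(["Yield"], evidence={"Suitability": 1})
print(posterior)
```

In an expert-elicited network of the kind described in the abstract, the conditional probability tables would come from domain experts and stakeholder workshops rather than from invented numbers.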
Writing a Scientific Paper II. Communication by Graphics
NASA Astrophysics Data System (ADS)
Sterken, C.
2011-07-01
This paper discusses facets of visual communication by way of images, graphs, diagrams and tabular material. Design types and elements of graphical images are presented, along with advice on how to create graphs, and on how to read graphical illustrations. This is done in astronomical context, using case studies and historical examples of good and bad graphics. Design types of graphs (scatter and vector plots, histograms, pie charts, ternary diagrams and three-dimensional surface graphs) are explicated, as well as the major components of graphical images (axes, legends, textual parts, etc.). The basic features of computer graphics (image resolution, vector images, bitmaps, graphical file formats and file conversions) are explained, as well as concepts of color models and of color spaces (with emphasis on aspects of readability of color graphics by viewers suffering from color-vision deficiencies). Special attention is given to the verity of graphical content, and to misrepresentations and errors in graphics and associated basic statistics. Dangers of dot joining and curve fitting are discussed, with emphasis on the perception of linearity, the issue of nonsense correlations, and the handling of outliers. Finally, the distinction between data, fits and models is illustrated.
Causal inference with missing exposure information: Methods and applications to an obstetric study.
Zhang, Zhiwei; Liu, Wei; Zhang, Bo; Tang, Li; Zhang, Jun
2016-10-01
Causal inference in observational studies is frequently challenged by the occurrence of missing data, in addition to confounding. Motivated by the Consortium on Safe Labor, a large observational study of obstetric labor practice and birth outcomes, this article focuses on the problem of missing exposure information in a causal analysis of observational data. This problem can be approached from different angles (i.e. missing covariates and causal inference), and useful methods can be obtained by drawing upon the available techniques and insights in both areas. In this article, we describe and compare a collection of methods based on different modeling assumptions, under standard assumptions for missing data (i.e. missing-at-random and positivity) and for causal inference with complete data (i.e. no unmeasured confounding and another positivity assumption). These methods involve three models: one for treatment assignment, one for the dependence of outcome on treatment and covariates, and one for the missing data mechanism. In general, consistent estimation of causal quantities requires correct specification of at least two of the three models, although there may be some flexibility as to which two models need to be correct. Such flexibility is afforded by doubly robust estimators adapted from the missing covariates literature and the literature on causal inference with complete data, and by a newly developed triply robust estimator that is consistent if any two of the three models are correct. The methods are applied to the Consortium on Safe Labor data and compared in a simulation study mimicking the Consortium on Safe Labor. © The Author(s) 2013.
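The doubly robust idea invoked here can be illustrated in the simpler complete-data setting with an augmented inverse-probability-weighting (AIPW) estimator, which remains consistent if either the propensity model or the outcome model is correctly specified. The sketch below is generic and does not reproduce the paper's triply robust estimator for missing exposure; the function name, variable names and toy data are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression

def aipw_ate(X, treat, y):
    """Doubly robust (AIPW) estimate of the average treatment effect.

    Consistent if either the propensity model or the outcome model is correct."""
    ps = LogisticRegression(max_iter=1000).fit(X, treat).predict_proba(X)[:, 1]
    mu1 = LinearRegression().fit(X[treat == 1], y[treat == 1]).predict(X)
    mu0 = LinearRegression().fit(X[treat == 0], y[treat == 0]).predict(X)
    dr1 = mu1 + treat * (y - mu1) / ps
    dr0 = mu0 + (1 - treat) * (y - mu0) / (1 - ps)
    return np.mean(dr1 - dr0)

# toy data with one confounder; the true treatment effect is 2
rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 1))
treat = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))
y = 2 * treat + X[:, 0] + rng.normal(size=2000)
print(aipw_ate(X, treat, y))
```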
Meng, Xiang-He; Shen, Hui; Chen, Xiang-Ding; Xiao, Hong-Mei; Deng, Hong-Wen
2018-03-01
Genome-wide association studies (GWAS) have successfully identified numerous genetic variants associated with diverse complex phenotypes and diseases, and have provided tremendous opportunities for further analyses using summary association statistics. Recently, Pickrell et al. developed a robust method for causal inference using independent putative causal SNPs. However, this method may fail to infer the causal relationship between two phenotypes when only a limited number of independent putative causal SNPs are identified. Here, we extended Pickrell's method to make it more applicable to general situations. We extended the causal inference method by replacing the putative causal SNPs with the lead SNPs (the set of the most significant SNPs in each independent locus) and tested the performance of our extended method using both simulated and empirical data. Simulations suggested that when the same number of genetic variants is used, our extended method has a distribution of the test statistic under the null model similar to that of the original method by Pickrell et al., as well as comparable power under the causal model. In practice, however, our extended method will generally be more powerful because the number of independent lead SNPs is often larger than the number of independent putative causal SNPs; including more SNPs, on the other hand, does not produce more false positives. By applying our extended method to summary statistics from GWAS for blood metabolites and femoral neck bone mineral density (FN-BMD), we identified ten blood metabolites that may causally influence FN-BMD. In summary, we extended a causal inference method for inferring putative causal relationships between two phenotypes using summary statistics from GWAS, and identified a number of potential causal metabolites for FN-BMD, which may provide novel insights into the pathophysiological mechanisms underlying osteoporosis.
Buchsbaum, Daphna; Seiver, Elizabeth; Bridgers, Sophie; Gopnik, Alison
2012-01-01
A major challenge children face is uncovering the causal structure of the world around them. Previous research on children's causal inference has demonstrated their ability to learn about causal relationships in the physical environment using probabilistic evidence. However, children must also learn about causal relationships in the social environment, including discovering the causes of other people's behavior, and understanding the causal relationships between others' goal-directed actions and the outcomes of those actions. In this chapter, we argue that social reasoning and causal reasoning are deeply linked, both in the real world and in children's minds. Children use both types of information together and in fact reason about both physical and social causation in fundamentally similar ways. We suggest that children jointly construct and update causal theories about their social and physical environment and that this process is best captured by probabilistic models of cognition. We first present studies showing that adults are able to jointly infer causal structure and human action structure from videos of unsegmented human motion. Next, we describe how children use social information to make inferences about physical causes. We show that the pedagogical nature of a demonstrator influences children's choices of which actions to imitate from within a causal sequence and that this social information interacts with statistical causal evidence. We then discuss how children combine evidence from an informant's testimony and expressed confidence with evidence from their own causal observations to infer the efficacy of different potential causes. We also discuss how children use these same causal observations to make inferences about the knowledge state of the social informant. Finally, we suggest that psychological causation and attribution are part of the same causal system as physical causation. We present evidence that just as children use covariation between physical causes and their effects to learn physical causal relationships, they also use covariation between people's actions and the environment to make inferences about the causes of human behavior.
Causal Model of Stress and Coping: Women in Management.
ERIC Educational Resources Information Center
Long, Bonita C.; And Others
1992-01-01
Tested model of managerial women's (n=249) stress. Model was developed from Lazarus's theoretical framework of stress/coping and incorporated causal antecedent constructs (demographics, sex role attitudes, agentic traits), mediating constructs (environment, appraisals, engagement coping, disengagement coping), and outcomes (work performance,…
Fertility and Female Employment: Problems of Causal Direction.
ERIC Educational Resources Information Center
Cramer, James C.
1980-01-01
Considers multicollinearity in nonrecursive models, misspecification of models, discrepancies between attitudes and behavior, and differences between static and dynamic models as explanations for contradictory information on the causal relationship between fertility and female employment. Finds that initially fertility affects employment but that,…
GPU-powered Shotgun Stochastic Search for Dirichlet process mixtures of Gaussian Graphical Models
Mukherjee, Chiranjit; Rodriguez, Abel
2016-01-01
Gaussian graphical models are popular for modeling high-dimensional multivariate data with sparse conditional dependencies. A mixture of Gaussian graphical models extends this model to the more realistic scenario where observations come from a heterogenous population composed of a small number of homogeneous sub-groups. In this paper we present a novel stochastic search algorithm for finding the posterior mode of high-dimensional Dirichlet process mixtures of decomposable Gaussian graphical models. Further, we investigate how to harness the massive thread-parallelization capabilities of graphical processing units to accelerate computation. The computational advantages of our algorithms are demonstrated with various simulated data examples in which we compare our stochastic search with a Markov chain Monte Carlo algorithm in moderate dimensional data examples. These experiments show that our stochastic search largely outperforms the Markov chain Monte Carlo algorithm in terms of computing-times and in terms of the quality of the posterior mode discovered. Finally, we analyze a gene expression dataset in which Markov chain Monte Carlo algorithms are too slow to be practically useful. PMID:28626348
Squeezing and its graphical representations in the anharmonic oscillator model
NASA Astrophysics Data System (ADS)
Tanaś, R.; Miranowicz, A.; Kielich, S.
1991-04-01
The problem of squeezing and its graphical representations in the anharmonic oscillator model is considered. Explicit formulas for squeezing, principal squeezing, and the quasiprobability distribution (QPD) function are given and illustrated graphically. Approximate analytical formulas for the variances, extremal variances, and QPD are obtained for the case of small nonlinearities and large numbers of photons. The possibility of almost perfect squeezing in the model is demonstrated and its graphical representations in the form of variance lemniscates and QPD contours are plotted. For large numbers of photons the crescent shape of the QPD contours is hardly visible and quite regular ellipses are obtained.
The Concurrent Engineering Design Paradigm Is Now Fully Functional for Graphics Education
ERIC Educational Resources Information Center
Krueger, Thomas J.; Barr, Ronald E.
2007-01-01
Engineering design graphics education has come a long way in the past two decades. The emergence of solid geometric modeling technology has become the focal point for the graphical development of engineering design ideas. The main attraction of this 3-D modeling approach is the downstream application of the data base to analysis and…
n-dimensional isotropic Finch-Skea stars
NASA Astrophysics Data System (ADS)
Chilambwe, Brian; Hansraj, Sudan
2015-02-01
We study the impact of dimension on the physical properties of the Finch-Skea astrophysical model. It is shown that a positive definite, monotonically decreasing pressure and density are evident. A decrease in stellar radius emerges as the order of the dimension increases. This is accompanied by a corresponding increase in energy density. The model continues to display the necessary qualitative features inherent in the 4-dimensional Finch-Skea star and the conformity to the Walecka theory is preserved under dimensional increase. The causality condition is always satisfied for all dimensions considered, resulting in the proposed models demonstrating a subluminal sound speed throughout the interior of the distribution. Moreover, the pressure and density decrease monotonically outwards from the centre and a pressure-free hypersurface exists demarcating the boundary of the perfect-fluid sphere. Since the study of the physical conditions is performed graphically, it is necessary to specify certain constants in the model. Reasonable values for such constants are arrived at by examining the behaviour of the model at the centre and demanding the satisfaction of all elementary conditions for physical plausibility. Finally, two constants of integration are fixed by matching our solutions with the appropriate Schwarzschild-Tangherlini exterior metrics. Furthermore, the solution admits a barotropic equation of state despite the higher dimension. The compactification parameter as well as the density variation parameter are also computed. The models satisfy the weak, strong and dominant energy conditions in the interior of the stellar configuration.
Confounding factors in determining causal soil moisture-precipitation feedback
NASA Astrophysics Data System (ADS)
Tuttle, Samuel E.; Salvucci, Guido D.
2017-07-01
Identification of causal links in the land-atmosphere system is important for construction and testing of land surface and general circulation models. However, the land and atmosphere are highly coupled and linked by a vast number of complex, interdependent processes. Statistical methods, such as Granger causality, can help to identify feedbacks from observational data, independent of the different parameterizations of physical processes and spatiotemporal resolution effects that influence feedbacks in models. However, statistical causal identification methods can easily be misapplied, leading to erroneous conclusions about feedback strength and sign. Here, we discuss three factors that must be accounted for in determination of causal soil moisture-precipitation feedback in observations and model output: seasonal and interannual variability, precipitation persistence, and endogeneity. The effect of neglecting these factors is demonstrated in simulated and observational data. The results show that long-timescale variability and precipitation persistence can have a substantial effect on detected soil moisture-precipitation feedback strength, while endogeneity has a smaller effect that is often masked by measurement error and thus is more likely to be an issue when analyzing model data or highly accurate observational data.
A general graphical user interface for automatic reliability modeling
NASA Technical Reports Server (NTRS)
Liceaga, Carlos A.; Siewiorek, Daniel P.
1991-01-01
Reported here is a general Graphical User Interface (GUI) for automatic reliability modeling of Processor Memory Switch (PMS) structures using a Markov model. This GUI is based on a hierarchy of windows. One window has graphical editing capabilities for specifying the system's communication structure, hierarchy, reconfiguration capabilities, and requirements. Other windows have field texts, popup menus, and buttons for specifying parameters and selecting actions. An example application of the GUI is given.
Lefèvre, Thomas; Lepresle, Aude; Chariot, Patrick
2015-09-01
The search for complex, nonlinear relationships and causality in data is hindered by the limited availability of suitable techniques in many domains, including forensic science. Linear multivariable techniques are useful but present some shortcomings. In the past decade, Bayesian approaches have been introduced in forensic science. To date, authors have mainly focused on providing an alternative to classical techniques for quantifying effects and dealing with uncertainty. Causal networks, including Bayesian networks, can help disentangle complex relationships in data. A Bayesian network estimates the joint probability distribution of data and graphically displays dependencies between variables and the circulation of information between these variables. In this study, we illustrate the value of using Bayesian networks for dealing with complex data through an application in clinical forensic science. Evaluating the functional impairment of assault survivors is a complex task for which few determinants are known. As routinely estimated in France, the duration of this impairment can be quantified by days of 'Total Incapacity to Work' ('Incapacité totale de travail,' ITT). In this study, we used a Bayesian network approach to identify the injury type, victim category and time to evaluation as the main determinants of the 'Total Incapacity to Work' (TIW). We computed the conditional probabilities associated with the TIW node and its parents. We compared this approach with a multivariable analysis, and the results of both techniques converged. Thus, Bayesian networks should be considered a reliable means to disentangle complex relationships in data.
Learning Layouts for Single-Page Graphic Designs.
O'Donovan, Peter; Agarwala, Aseem; Hertzmann, Aaron
2014-08-01
This paper presents an approach for automatically creating graphic design layouts using a new energy-based model derived from design principles. The model includes several new algorithms for analyzing graphic designs, including the prediction of perceived importance, alignment detection, and hierarchical segmentation. Given the model, we use optimization to synthesize new layouts for a variety of single-page graphic designs. Model parameters are learned with Nonlinear Inverse Optimization (NIO) from a small number of example layouts. To demonstrate our approach, we show results for applications including generating design layouts in various styles, retargeting designs to new sizes, and improving existing designs. We also compare our automatic results with designs created using crowdsourcing and show that our approach performs slightly better than novice designers.
Faes, Luca; Nollo, Giandomenico
2010-11-01
The Partial Directed Coherence (PDC) and its generalized formulation (gPDC) are popular tools for investigating, in the frequency domain, the concept of Granger causality among multivariate (MV) time series. PDC and gPDC are formalized in terms of the coefficients of an MV autoregressive (MVAR) model which describes only the lagged effects among the time series and forsakes instantaneous effects. However, instantaneous effects are known to affect linear parametric modeling, and are likely to occur in experimental time series. In this study, we investigate the impact on the assessment of frequency domain causality of excluding instantaneous effects from the model underlying PDC evaluation. Moreover, we propose the utilization of an extended MVAR model including both instantaneous and lagged effects. This model is used to assess PDC either in accordance with the definition of Granger causality when considering only lagged effects (iPDC), or with an extended form of causality, when we consider both instantaneous and lagged effects (ePDC). The approach is first evaluated on three theoretical examples of MVAR processes, which show that the presence of instantaneous correlations may produce misleading profiles of PDC and gPDC, while ePDC and iPDC derived from the extended model provide here a correct interpretation of extended and lagged causality. It is then applied to representative examples of cardiorespiratory and EEG MV time series. They suggest that ePDC and iPDC are better interpretable than PDC and gPDC in terms of the known cardiovascular and neural physiologies.
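PDC is computed from the coefficients of a fitted MVAR model. The sketch below shows the standard lagged-effects-only construction in NumPy (column-normalized |A_ij(f)|); it does not implement the extended model with instantaneous effects proposed in the paper, and the function names and toy data are illustrative.

```python
import numpy as np

def fit_mvar(data, p):
    """Least-squares fit of an order-p MVAR model; data is (n_samples, n_channels)."""
    n, m = data.shape
    Y = data[p:]
    X = np.hstack([data[p - k:n - k] for k in range(1, p + 1)])   # lagged predictors
    B, *_ = np.linalg.lstsq(X, Y, rcond=None)                     # shape (m*p, m)
    return B.T.reshape(m, p, m).transpose(1, 0, 2)                # A[k] is the lag-(k+1) coefficient matrix

def pdc(A, freqs, fs=1.0):
    """Partial directed coherence |A_ij(f)| / sqrt(sum_m |A_mj(f)|^2)."""
    p, m, _ = A.shape
    out = np.empty((len(freqs), m, m))
    for fi, f in enumerate(freqs):
        Af = np.eye(m, dtype=complex)
        for k in range(p):
            Af -= A[k] * np.exp(-2j * np.pi * f * (k + 1) / fs)
        out[fi] = np.abs(Af) / np.sqrt(np.sum(np.abs(Af) ** 2, axis=0, keepdims=True))
    return out

# toy bivariate example: channel 0 drives channel 1
rng = np.random.default_rng(2)
x = np.zeros((1000, 2))
for t in range(1, 1000):
    x[t, 0] = 0.5 * x[t - 1, 0] + rng.normal()
    x[t, 1] = 0.4 * x[t - 1, 1] + 0.6 * x[t - 1, 0] + rng.normal()
print(pdc(fit_mvar(x, p=1), freqs=np.linspace(0, 0.5, 5)).round(2))
```

In this toy case the entry for the 0-to-1 direction is large across frequencies while the reverse direction stays near zero, which is how PDC profiles are usually read.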
Bastos, João Luiz Dornelles; Gigante, Denise Petrucci; Peres, Karen Glazer; Nedel, Fúlvio Borges
2007-01-01
The epidemiological literature has been limited by the absence of a theoretical framework reflecting the complexity of causal mechanisms for the occurrence of health phenomena and disease conditions. In the field of oral epidemiology, such lack of theory also prevails, since dental caries, the leading topic in oral research, has often been studied through a biological and reductionist viewpoint. One of the most important consequences of dental caries is dental pain (odontalgia), which has received little attention in studies with sophisticated theoretical models and powerful designs to establish causal relationships. The purpose of this study is to review the scientific literature on the determinants of odontalgia and to discuss theories proposed for the explanation of the phenomenon. Conceptual models and emerging theories on the social determinants of oral health are reviewed, in an attempt to build up links with the bio-psychosocial pain model, proposing a more elaborate causal model for odontalgia. The framework suggests causal pathways between social structure and oral health through material, psychosocial and behavioral pathways. Aspects of the social structure are highlighted in order to relate them to odontalgia, stressing their importance in discussions of causal relationships in oral health research.
Depression and Distortion in the Attribution of Causality
ERIC Educational Resources Information Center
Rizley, Ross
1978-01-01
Two cognitive models of depression have attracted considerable attention recently: Seligman's (1975) learned helplessness model and Beck's (1967) cognitive schema approach. Describes each model and, in two studies, evaluates the assumption that depression is associated with systematic distortion in cognition regarding causal and controlling…
Investigating College and Graduate Students' Multivariable Reasoning in Computational Modeling
ERIC Educational Resources Information Center
Wu, Hsin-Kai; Wu, Pai-Hsing; Zhang, Wen-Xin; Hsu, Ying-Shao
2013-01-01
Drawing upon the literature in computational modeling, multivariable reasoning, and causal attribution, this study aims at characterizing multivariable reasoning practices in computational modeling and revealing the nature of understanding about multivariable causality. We recruited two freshmen, two sophomores, two juniors, two seniors, four…
Limited Contribution of DNA Methylation Variation to Expression Regulation in Arabidopsis thaliana.
Meng, Dazhe; Dubin, Manu; Zhang, Pei; Osborne, Edward J; Stegle, Oliver; Clark, Richard M; Nordborg, Magnus
2016-07-01
The extent to which epigenetic variation affects complex traits in natural populations is not known. We addressed this question using transcriptome and DNA methylation data from a sample of 135 sequenced A. thaliana accessions. Across individuals, expression was significantly associated with cis-methylation for hundreds of genes, and many of these associations remained significant after taking SNP effects into account. The pattern of correlations differed markedly between gene body methylation and transposable element methylation. The former was usually positively correlated with expression, and the latter usually negatively correlated, although exceptions were found in both cases. Finally, we developed graphical models of causality that adapt to a sample with heavy population structure, and used them to show that while methylation appears to affect gene expression more often than expression affects methylation, there is also strong support for both being independently controlled. In conclusion, although we find clear evidence for epigenetic regulation, both the number of loci affected and the magnitude of the effects appear to be small compared to the effect of SNPs.
Zhao, Yifan; Billings, Steve A; Wei, Hualiang; Sarrigiannis, Ptolemaios G
2012-11-01
This paper introduces an error reduction ratio-causality (ERR-causality) test that can be used to detect and track causal relationships between two signals. In comparison to the traditional Granger method, one significant advantage of the new ERR-causality test is that it can effectively detect the time-varying direction of linear or nonlinear causality between two signals without fitting a complete model. Another important advantage is that the ERR-causality test can detect both the direction of interactions and estimate the relative time shift between the two signals. Numerical examples are provided to illustrate the effectiveness of the new method together with the determination of the causality between electroencephalograph signals from different cortical sites for patients during an epileptic seizure.
Robot graphic simulation testbed
NASA Technical Reports Server (NTRS)
Cook, George E.; Sztipanovits, Janos; Biegl, Csaba; Karsai, Gabor; Springfield, James F.
1991-01-01
The objective of this research was twofold. First, the basic capabilities of ROBOSIM (graphical simulation system) were improved and extended by taking advantage of advanced graphic workstation technology and artificial intelligence programming techniques. Second, the scope of the graphic simulation testbed was extended to include general problems of Space Station automation. Hardware support for 3-D graphics and high processing performance make high resolution solid modeling, collision detection, and simulation of structural dynamics computationally feasible. The Space Station is a complex system with many interacting subsystems. Design and testing of automation concepts demand modeling of the affected processes, their interactions, and that of the proposed control systems. The automation testbed was designed to facilitate studies in Space Station automation concepts.
Souza, W.R.
1999-01-01
This report documents a graphical display post-processor (SutraPlot) for the U.S. Geological Survey Saturated-Unsaturated flow and solute or energy TRAnsport simulation model SUTRA, Version 2D3D.1. This version of SutraPlot is an upgrade to SutraPlot for the 2D-only SUTRA model (Souza, 1987). It has been modified to add 3D functionality, a graphical user interface (GUI), and enhanced graphic output options. Graphical options for 2D SUTRA (2-dimension) simulations include: drawing the 2D finite-element mesh, mesh boundary, and velocity vectors; plots of contours for pressure, saturation, concentration, and temperature within the model region; 2D finite-element based gridding and interpolation; and 2D gridded data export files. Graphical options for 3D SUTRA (3-dimension) simulations include: drawing the 3D finite-element mesh; plots of contours for pressure, saturation, concentration, and temperature in 2D sections of the 3D model region; 3D finite-element based gridding and interpolation; drawing selected regions of velocity vectors (projected on principal coordinate planes); and 3D gridded data export files. Installation instructions and a description of all graphic options are presented. A sample SUTRA problem is described and three step-by-step SutraPlot applications are provided. In addition, the methodology and numerical algorithms for the 2D and 3D finite-element based gridding and interpolation, developed for SutraPlot, are described.
Quantification of causal couplings via dynamical effects: A unifying perspective
NASA Astrophysics Data System (ADS)
Smirnov, Dmitry A.
2014-12-01
Quantitative characterization of causal couplings from time series is crucial in studies of complex systems of different origin. Various statistical tools for that exist and new ones are still being developed with a tendency to creating a single, universal, model-free quantifier of coupling strength. However, a clear and generally applicable way of interpreting such universal characteristics is lacking. This work suggests a general conceptual framework for causal coupling quantification, which is based on state space models and extends the concepts of virtual interventions and dynamical causal effects. Namely, two basic kinds of interventions (state space and parametric) and effects (orbital or transient and stationary or limit) are introduced, giving four families of coupling characteristics. The framework provides a unifying view of apparently different well-established measures and allows us to introduce new characteristics, always with a definite "intervention-effect" interpretation. It is shown that diverse characteristics cannot be reduced to any single coupling strength quantifier and their interpretation is inevitably model based. The proposed set of dynamical causal effect measures quantifies different aspects of "how the coupling manifests itself in the dynamics," reformulating the very question about the "causal coupling strength."
Rehder, Bob; Waldmann, Michael R
2017-02-01
Causal Bayes nets capture many aspects of causal thinking that set them apart from purely associative reasoning. However, some central properties of this normative theory are routinely violated. In tasks requiring an understanding of explaining away and screening off, subjects often deviate from these principles and manifest the operation of an associative bias that we refer to as the rich-get-richer principle. This research focuses on these two failures, comparing tasks in which causal scenarios are merely described (via verbal statements of the causal relations) versus experienced (via samples of data that manifest the intervariable correlations implied by the causal relations). Our key finding is that deviations from normative predictions were stronger in the described conditions, which highlight the instructed causal model, than in those that presented data. This counterintuitive finding indicates that a theory of causal reasoning and learning needs to integrate normative principles with the biases people hold about causal relations.
Exploring Causal Models of Educational Achievement.
ERIC Educational Resources Information Center
Parkerson, Jo Ann; And Others
1984-01-01
This article evaluates five causal models of educational productivity applied to learning science in a sample of 882 fifth through eighth graders. Each model explores the relationship between achievement and a combination of eight constructs: home environment, peer group, media, ability, social environment, time on task, motivation, and…
ERIC Educational Resources Information Center
Crow, Wendell C.
This paper suggests ways in which manifest, physical attributes of graphic elements can be described and measured. It also proposes a preliminary conceptual model that accounts for the readily apparent, measurable variables in a visual message. The graphic elements that are described include format, typeface, and photographs/artwork. The…
Understanding Information Flow Interaction along Separable Causal Paths in Environmental Signals
NASA Astrophysics Data System (ADS)
Jiang, P.; Kumar, P.
2017-12-01
Multivariate environmental signals reflect the outcome of complex inter-dependencies, such as those in ecohydrologic systems. Transfer entropy and information partitioning approaches have been used to characterize such dependencies. However, these approaches capture the net information flow occurring through a multitude of pathways involved in the interaction and, as a result, mask our ability to discern the causal interaction within a subsystem of interest through specific pathways. We build on recent developments in momentary information transfer along causal paths proposed by Runge [2015] to develop a framework for quantifying information decomposition along separable causal paths. Momentary information transfer along causal paths captures the amount of information flow between any two variables lagged at two specific points in time. Our approach expands this concept to characterize the causal interaction in terms of synergistic, unique and redundant information flow through separable causal paths. Multivariate analysis using this novel approach reveals a more precise understanding of causality and feedback. We illustrate our approach with synthetic and observed time series data. We believe the proposed framework helps better delineate the internal structure of complex systems in geoscience, where huge amounts of observational data exist, and it will also help the modeling community by providing a new way to look at the complexity of real and modeled systems. Runge, Jakob. "Quantifying information transfer and mediation along causal pathways in complex systems." Physical Review E 92.6 (2015): 062829.
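Transfer entropy, mentioned above as a baseline, measures the information the past of a source adds about the present of a target beyond the target's own past. The sketch below is a plain binned estimator of pairwise transfer entropy; it does not implement momentary information transfer along causal paths or the proposed decomposition, and the function name, variable names and toy data are invented.

```python
import numpy as np

def transfer_entropy(source, target, bins=4, lag=1):
    """Binned estimate of transfer entropy T(source -> target) in bits.

    TE = I(target_t ; source_{t-lag} | target_{t-lag})."""
    s = np.digitize(source, np.histogram_bin_edges(source, bins)[1:-1])
    t = np.digitize(target, np.histogram_bin_edges(target, bins)[1:-1])
    xt, xp, yp = t[lag:], t[:-lag], s[:-lag]          # target now, target past, source past

    def entropy(*vars_):
        joint = np.stack(vars_, axis=1)
        _, counts = np.unique(joint, axis=0, return_counts=True)
        p = counts / counts.sum()
        return -np.sum(p * np.log2(p))

    # conditional mutual information: H(xt,xp) + H(xp,yp) - H(xp) - H(xt,xp,yp)
    return entropy(xt, xp) + entropy(xp, yp) - entropy(xp) - entropy(xt, xp, yp)

# toy example: x drives y, so TE(x -> y) should exceed TE(y -> x)
rng = np.random.default_rng(3)
x = rng.normal(size=5000)
y = 0.8 * np.roll(x, 1) + 0.2 * rng.normal(size=5000)
print(transfer_entropy(x, y), transfer_entropy(y, x))
```

Estimators of this kind only capture the net flow between a pair of variables; the path-resolved decomposition described in the abstract is precisely what such a pairwise measure cannot separate.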
Causal Inference in Retrospective Studies.
ERIC Educational Resources Information Center
Holland, Paul W.; Rubin, Donald B.
1988-01-01
The problem of drawing causal inferences from retrospective case-controlled studies is considered. A model for causal inference in prospective studies is applied to retrospective studies. Limitations of case-controlled studies are formulated concerning relevant parameters that can be estimated in such studies. A coffee-drinking/myocardial…
How causal analysis can reveal autonomy in models of biological systems
NASA Astrophysics Data System (ADS)
Marshall, William; Kim, Hyunju; Walker, Sara I.; Tononi, Giulio; Albantakis, Larissa
2017-11-01
Standard techniques for studying biological systems largely focus on their dynamical or, more recently, their informational properties, usually taking either a reductionist or holistic perspective. Yet, studying only individual system elements or the dynamics of the system as a whole disregards the organizational structure of the system-whether there are subsets of elements with joint causes or effects, and whether the system is strongly integrated or composed of several loosely interacting components. Integrated information theory offers a theoretical framework to (1) investigate the compositional cause-effect structure of a system and to (2) identify causal borders of highly integrated elements comprising local maxima of intrinsic cause-effect power. Here we apply this comprehensive causal analysis to a Boolean network model of the fission yeast (Schizosaccharomyces pombe) cell cycle. We demonstrate that this biological model features a non-trivial causal architecture, whose discovery may provide insights about the real cell cycle that could not be gained from holistic or reductionist approaches. We also show how some specific properties of this underlying causal architecture relate to the biological notion of autonomy. Ultimately, we suggest that analysing the causal organization of a system, including key features like intrinsic control and stable causal borders, should prove relevant for distinguishing life from non-life, and thus could also illuminate the origin of life problem. This article is part of the themed issue 'Reconceptualizing the origins of life'.
Transformation of Graphical ECA Policies into Executable PonderTalk Code
NASA Astrophysics Data System (ADS)
Romeikat, Raphael; Sinsel, Markus; Bauer, Bernhard
Rules are becoming more and more important in business modeling and systems engineering and are recognized as a high-level programming paradigm. For the effective development of rules it is desirable to start at a high level, e.g. with graphical rules, and to refine them later into code of a particular rule language for implementation purposes. A model-driven approach is presented in this paper to transform graphical rules into executable code in a fully automated way. The focus is on event-condition-action policies as a special rule type. These are modeled graphically and translated into the PonderTalk language. The approach may be extended to integrate other rule types and languages as well.
Causal relations and feature similarity in children's inductive reasoning.
Hayes, Brett K; Thompson, Susan P
2007-08-01
Four experiments examined the development of property induction on the basis of causal relations. In the first 2 studies, 5-year-olds, 8-year-olds, and adults were presented with triads in which a target instance was equally similar to 2 inductive bases but shared a causal antecedent feature with 1 of them. All 3 age groups used causal relations as a basis for property induction, although the proportion of causal inferences increased with age. Subsequent experiments pitted causal relations against featural similarity in induction. It was found that adults and 8-year-olds, but not 5-year-olds, preferred shared causal relations over strong featural similarity as a basis for induction. The implications for models of inductive reasoning and development are discussed.
What Is the Latent Variable in Causal Indicator Models?
ERIC Educational Resources Information Center
Howell, Roy D.
2014-01-01
Building on the work of Bollen (2007) and Bollen & Bauldry (2011), Bainter and Bollen (this issue) clarify several points of confusion in the literature regarding causal indicator models. This author would certainly agree that the effect indicator (reflective) measurement model is inappropriate for some indicators (such as the social…
Causal learning with local computations.
Fernbach, Philip M; Sloman, Steven A
2009-05-01
The authors proposed and tested a psychological theory of causal structure learning based on local computations. Local computations simplify complex learning problems via cues available on individual trials to update a single causal structure hypothesis. Structural inferences from local computations make minimal demands on memory, require relatively small amounts of data, and need not respect normative prescriptions as inferences that are principled locally may violate those principles when combined. Over a series of 3 experiments, the authors found (a) systematic inferences from small amounts of data; (b) systematic inference of extraneous causal links; (c) influence of data presentation order on inferences; and (d) error reduction through pretraining. Without pretraining, a model based on local computations fitted data better than a Bayesian structural inference model. The data suggest that local computations serve as a heuristic for learning causal structure. Copyright 2009 APA, all rights reserved.
Courellis, Hristos; Mullen, Tim; Poizner, Howard; Cauwenberghs, Gert; Iversen, John R.
2017-01-01
Quantification of dynamic causal interactions among brain regions constitutes an important component of conducting research and developing applications in experimental and translational neuroscience. Furthermore, cortical networks with dynamic causal connectivity in brain-computer interface (BCI) applications offer a more comprehensive view of brain states implicated in behavior than do individual brain regions. However, models of cortical network dynamics are difficult to generalize across subjects because current electroencephalography (EEG) signal analysis techniques are limited in their ability to reliably localize sources across subjects. We propose an algorithmic and computational framework for identifying cortical networks across subjects in which dynamic causal connectivity is modeled among user-selected cortical regions of interest (ROIs). We demonstrate the strength of the proposed framework using a “reach/saccade to spatial target” cognitive task performed by 10 right-handed individuals. Modeling of causal cortical interactions was accomplished through measurement of cortical activity using EEG, application of independent component clustering to identify cortical ROIs as network nodes, estimation of cortical current density using cortically constrained low resolution electromagnetic brain tomography (cLORETA), multivariate autoregressive (MVAR) modeling of representative cortical activity signals from each ROI, and quantification of the dynamic causal interaction among the identified ROIs using the Short-time direct Directed Transfer function (SdDTF). The resulting cortical network and the computed causal dynamics among its nodes exhibited physiologically plausible behavior, consistent with past results reported in the literature. This physiological plausibility of the results strengthens the framework's applicability in reliably capturing complex brain functionality, which is required by applications such as diagnostics and BCI. PMID:28566997
How multiple causes combine: independence constraints on causal inference.
Liljeholm, Mimi
2015-01-01
According to the causal power view, two core constraints (that causes occur independently, i.e., no confounding, and that they influence their effects independently) serve as boundary conditions for causal induction. This study investigated how violations of these constraints modulate uncertainty about the existence and strength of a causal relationship. Participants were presented with pairs of candidate causes that were either confounded or not, and that either interacted or exerted their influences independently. Consistent with the causal power view, uncertainty about the existence and strength of causal relationships was greater when causes were confounded or interacted than when unconfounded and acting independently. An elemental Bayesian causal model captured differences in uncertainty due to confounding but not those due to an interaction. Implications of distinct sources of uncertainty for the selection of contingency information and causal generalization are discussed.
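The causal power view referenced here is usually formalized as Cheng's power PC theory, in which the power of a generative cause is estimated from contingency data and independent causes combine by a noisy-OR. The sketch below follows that standard formalization rather than anything specific to this article; the numbers are invented.

```python
def causal_power(p_e_given_c, p_e_given_not_c):
    """Cheng's causal power of a generative cause: deltaP / (1 - P(e | not c))."""
    delta_p = p_e_given_c - p_e_given_not_c
    return delta_p / (1.0 - p_e_given_not_c)

def noisy_or(background, powers, present):
    """Probability of the effect when independent generative causes combine (noisy-OR)."""
    q = 1.0 - background
    for w, c in zip(powers, present):
        q *= (1.0 - w) if c else 1.0
    return 1.0 - q

# two unconfounded causes acting independently
w1 = causal_power(p_e_given_c=0.8, p_e_given_not_c=0.2)   # 0.75
w2 = causal_power(p_e_given_c=0.6, p_e_given_not_c=0.2)   # 0.5
print(w1, w2, noisy_or(background=0.2, powers=[w1, w2], present=[1, 1]))
```

Confounding breaks the first constraint (the conditional probabilities no longer isolate one cause), and an interaction breaks the noisy-OR combination rule, which is why both are treated as boundary conditions for this kind of estimate.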
Ryberg, Karen R.; Blomquist, Joel; Sprague, Lori A.; Sekellick, Andrew J.; Keisman, Jennifer
2018-01-01
Causal attribution of changes in water quality often consists of correlation, qualitative reasoning, listing references to the work of others, or speculation. To better support statements of attribution for water-quality trends, structural equation modeling was used to model the causal factors of total phosphorus loads in the Chesapeake Bay watershed. By transforming, scaling, and standardizing variables, grouping similar sites, grouping some causal factors into latent variable models, and using methods that correct for assumption violations, we developed a structural equation model to show how causal factors interact to produce total phosphorus loads. Climate (in the form of annual total precipitation and the Palmer Hydrologic Drought Index) and anthropogenic inputs are the major drivers of total phosphorus load in the Chesapeake Bay watershed. Increasing runoff due to natural climate variability is offsetting purposeful management actions that are otherwise decreasing phosphorus loading; consequently, management actions may need to be reexamined to achieve target reductions in the face of climate variability.
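Structural equation models of this kind are often estimated as a set of structural equations with direct and indirect (mediated) effects. The sketch below is a minimal recursive path analysis using ordinary least squares per equation; the variable names and simulated data are invented placeholders and do not reproduce the study's model of the Chesapeake Bay watershed.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 300

# invented standardized drivers: precipitation, drought index, anthropogenic inputs
precip = rng.normal(size=n)
drought = 0.6 * precip + rng.normal(scale=0.8, size=n)
inputs = rng.normal(size=n)

# runoff responds to climate; total phosphorus load responds to runoff and inputs
runoff = 0.7 * precip - 0.3 * drought + rng.normal(scale=0.5, size=n)
tp_load = 0.8 * runoff + 0.5 * inputs + rng.normal(scale=0.5, size=n)

# estimate each structural equation separately (simple recursive path analysis)
eq_runoff = sm.OLS(runoff, sm.add_constant(np.column_stack([precip, drought]))).fit()
eq_tp = sm.OLS(tp_load, sm.add_constant(np.column_stack([runoff, inputs]))).fit()

# indirect effect of precipitation on load via runoff = path(precip->runoff) * path(runoff->load)
indirect = eq_runoff.params[1] * eq_tp.params[1]
print(eq_runoff.params.round(2), eq_tp.params.round(2), round(indirect, 2))
```

Decomposing total effects into such direct and indirect paths is what lets a model of this kind show climate variability offsetting management actions, as described in the abstract.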
Correlation of causal factors that influence construction safety performance: A model.
Rodrigues, F; Coutinho, A; Cardoso, C
2015-01-01
The construction sector has shown positive development in recent years, with decreasing occupational accident rates. Nevertheless, the construction sector systematically stands out from other industries due to its high number of fatalities. The aim of this paper is to understand in depth, through a model, the causality of construction accidents from the early design phase onwards. This study reviewed several research papers presenting various analytical models that correlate the factors contributing to occupational accidents in this sector. This study also analysed different construction projects and conducted a survey of design and site supervision teams. This paper proposes a model, developed from the analysis of existing ones, which correlates the causal factors through all the construction phases. It was concluded that effective risk prevention can only be achieved by a global correlation of causal factors including not only production factors but also client requirements, financial climate, design team competence, project and risk management, financial capacity, health and safety policy and early planning. Accordingly, a model is proposed.
Interactive Planning under Uncertainty with Casual Modeling and Analysis
2006-01-01
The Causal Analysis Tool (CAT) is a system for creating and analyzing causal models similar to Bayesian networks. To use CAT as a tool for planning, users go through an iterative process in which they use CAT to create and analyze alternative plans. One of the biggest difficulties is that the number of possible…
Jackson, M E; Gnadt, J W
1999-03-01
The object-oriented graphical programming language LabView was used to implement the numerical solution to a computational model of saccade generation in primates. The computational model simulates the activity and connectivity of anatomical structures known to be involved in saccadic eye movements. The LabView program provides a graphical user interface to the model that makes it easy to observe and modify the behavior of each element of the model. Essential elements of the source code of the LabView program are presented and explained. A copy of the model is available for download from the internet.
Testing a Model of Diabetes Self-Care Management: A Causal Model Analysis with LISREL.
ERIC Educational Resources Information Center
Nowacek, George A.; And Others
1990-01-01
A diabetes-management model is presented, which includes an attitudinal element and depicts relationships among causal elements. LISREL-VI was used to analyze data from 115 Type-I and 105 Type-II patients. The data did not closely fit the model. Results support the importance of the personal meaning of diabetes. (TJH)
Interactive voxel graphics in virtual reality
NASA Astrophysics Data System (ADS)
Brody, Bill; Chappell, Glenn G.; Hartman, Chris
2002-06-01
Interactive voxel graphics in virtual reality poses significant research challenges in terms of interface, file I/O, and real-time algorithms. Voxel graphics is not so new, as it is the focus of a good deal of scientific visualization. Interactive voxel creation and manipulation is a more innovative concept. Scientists are understandably reluctant to manipulate data. They collect or model data. A scientific analogy to interactive graphics is the generation of initial conditions for some model. It is used as a method to test those models. We, however, are in the business of creating new data in the form of graphical imagery. In our endeavor, science is a tool and not an end. Nevertheless, there is a whole class of interactions and associated data generation scenarios that are natural to our way of working and that are also appropriate to scientific inquiry. Annotation by sketching or painting to point to and distinguish interesting and important information is very significant for science as well as art. Annotation in 3D is difficult without a good 3D interface. Interactive graphics in virtual reality is an appropriate approach to this problem.
NASA Technical Reports Server (NTRS)
Hirt, E. F.; Fox, G. L.
1982-01-01
Two specific NASTRAN preprocessors and postprocessors are examined. A postprocessor for dynamic analysis and a graphical interactive package for model generation and review of results are presented. A computer program that provides response spectrum analysis capability based on data from a NASTRAN finite element model is described, and the GIFTS system, a graphic processor that augments NASTRAN, is introduced.
2017-08-01
access to the GPU for general purpose processing. CUDA is designed to work easily with multiple programming languages, including Fortran. CUDA is a...The Performance Improvement of the Lagrangian Particle Dispersion Model (LPDM) Using Graphics Processing Unit (GPU) Computing, by Leelinda P Dawson. Approved for public release; distribution unlimited.
Geometric database maintenance using CCTV cameras and overlay graphics
NASA Astrophysics Data System (ADS)
Oxenberg, Sheldon C.; Landell, B. Patrick; Kan, Edwin
1988-01-01
An interactive graphics system using closed circuit television (CCTV) cameras for remote verification and maintenance of a geometric world model database has been demonstrated in GE's telerobotics testbed. The database provides geometric models and locations of objects viewed by CCTV cameras and manipulated by telerobots. To update the database, an operator uses the interactive graphics system to superimpose a wireframe line drawing of an object with known dimensions on a live video scene containing that object. The methodology used is multipoint positioning to easily superimpose a wireframe graphic on the CCTV image of an object in the work scene. An enhanced version of GE's interactive graphics system will provide the object designation function for the operator control station of the Jet Propulsion Laboratory's telerobot demonstration system.
Fourtune, Lisa; Prunier, Jérôme G; Paz-Vinas, Ivan; Loot, Géraldine; Veyssière, Charlotte; Blanchet, Simon
2018-04-01
Identifying landscape features that affect functional connectivity among populations is a major challenge in fundamental and applied sciences. Landscape genetics combines landscape and genetic data to address this issue, with the main objective of disentangling direct and indirect relationships among an intricate set of variables. Causal modeling has strong potential to address the complex nature of landscape genetic data sets. However, this statistical approach was not initially developed to address the pairwise distance matrices commonly used in landscape genetics. Here, we aimed to extend the applicability of two causal modeling methods-that is, maximum-likelihood path analysis and the directional separation test-by developing statistical approaches aimed at handling distance matrices and improving functional connectivity inference. Using simulations, we showed that these approaches greatly improved the robustness of the absolute (using a frequentist approach) and relative (using an information-theoretic approach) fits of the tested models. We used an empirical data set combining genetic information on a freshwater fish species (Gobio occitaniae) and detailed landscape descriptors to demonstrate the usefulness of causal modeling to identify functional connectivity in wild populations. Specifically, we demonstrated how direct and indirect relationships involving altitude, temperature, and oxygen concentration influenced within- and between-population genetic diversity of G. occitaniae.
Causal Superlearning Arising from Interactions Among Cues
Urushihara, Kouji; Miller, Ralph R.
2017-01-01
Superconditioning refers to supernormal responding to a conditioned stimulus (CS) that sometimes occurs in classical conditioning when the CS is paired with an unconditioned stimulus (US) in the presence of a conditioned inhibitor for that US. In the present research, we conducted four experiments to investigate causal superlearning, a phenomenon in human causal learning analogous to superconditioning. Experiment 1 demonstrated superlearning relative to appropriate control conditions. Experiment 2 showed that superlearning wanes when the number of cues used in an experiment is relatively large. Experiment 3 determined that even when relatively many cues are used, superlearning can be observed provided testing is conducted immediately after training, which is problematic for explanations by most contemporary learning theories. Experiment 4 found that ratings of a superlearning cue are weaker than those of the training excitor that provides the basis for the conditioned inhibitor-like causal preventer used during causal superlearning training. This is inconsistent with the predictions of propositional reasoning accounts of causal cue competition, but is readily explained by associative learning models. In sum, the current experiments revealed some weaknesses of both the associative and propositional reasoning models with respect to causal superlearning. PMID:28383940
NASA Astrophysics Data System (ADS)
Borjigin, Sumuya; Yang, Yating; Yang, Xiaoguang; Sun, Leilei
2018-03-01
Many researchers have realized that there is a strong correlation between stock prices and the macroeconomy. In order to make this relationship clear, a lot of studies have been done. However, the causal relationship between stock prices and the macroeconomy has still not been well explained. A key point is that most of the existing research adopts linear and stable models to investigate the correlation of stock prices and the macroeconomy, while the real causality may be nonlinear and dynamic. To fill this research gap, we investigate the nonlinear and dynamic causal relationships between stock prices and the macroeconomy. Based on the case of China's stock prices and macroeconomy measures from January 1992 to March 2017, we compare the linear Granger causality test models with nonlinear ones. Results demonstrate that the nonlinear dynamic Granger causality is much stronger than linear Granger causality. From the perspective of nonlinear dynamic Granger causality, China's stock prices can be viewed as a "national economic barometer". On the one hand, this study will encourage researchers to take nonlinearity and dynamics into account when they investigate the correlation of stock prices and the macroeconomy; on the other hand, our research can guide regulators and investors to make better decisions.
Causal Relations Drive Young Children's Induction, Naming, and Categorization
ERIC Educational Resources Information Center
Opfer, John E.; Bulloch, Megan J.
2007-01-01
A number of recent models and experiments have suggested that evidence of early category-based induction is an artifact of perceptual cues provided by experimenters. We tested these accounts against the prediction that different relations (causal versus non-causal) determine the types of perceptual similarity by which children generalize. Young…
The Development of Causal Structure without a Language Model
ERIC Educational Resources Information Center
Rissman, Lilia; Goldin-Meadow, Susan
2017-01-01
Across a diverse range of languages, children proceed through similar stages in their production of causal language: their initial verbs lack internal causal structure, followed by a period during which they produce causative overgeneralizations, indicating knowledge of a productive causative rule. We asked in this study whether a child not…
Contemporary Quantitative Methods and "Slow" Causal Inference: Response to Palinkas
ERIC Educational Resources Information Center
Stone, Susan
2014-01-01
This response considers together simultaneously occurring discussions about causal inference in social work and allied health and social science disciplines. It places emphasis on scholarship that integrates the potential outcomes model with directed acyclic graphing techniques to extract core steps in causal inference. Although this scholarship…
Causal Indicator Models: Unresolved Issues of Construction and Evaluation
ERIC Educational Resources Information Center
West, Stephen G.; Grimm, Kevin J.
2014-01-01
These authors agree with Bainter and Bollen that causal effects represents a useful measurement structure in some applications. The structure of the science of the measurement problem should determine the model; the measurement model should not determine the science. They also applaud Bainter and Bollen's important reminder that the full…
A Causal Model of Teacher Acceptance of Technology
ERIC Educational Resources Information Center
Chang, Jui-Ling; Lieu, Pang-Tien; Liang, Jung-Hui; Liu, Hsiang-Te; Wong, Seng-lee
2012-01-01
This study proposes a causal model for investigating teacher acceptance of technology. We received 258 effective replies from teachers at public and private universities in Taiwan. A questionnaire survey was utilized to test the proposed model. LISREL was applied to test the proposed hypotheses. The result shows that computer self-efficacy has…
NASA Astrophysics Data System (ADS)
Rabbitt, Matthew P.
2016-11-01
Social scientists are often interested in examining causal relationships where the outcome of interest is represented by an intangible concept, such as an individual's well-being or ability. Estimating causal relationships in this scenario is particularly challenging because the social scientist must rely on measurement models to measure individuals' properties or attributes and then address issues related to survey data, such as omitted variables. In this paper, the usefulness of the recently proposed behavioural Rasch selection model is explored using a series of Monte Carlo experiments. The behavioural Rasch selection model is particularly useful for these types of applications because it is capable of estimating the causal effect of a binary treatment on an outcome that is represented by an intangible concept using cross-sectional data. Other methodology typically relies on summary measures from measurement models that require additional assumptions, some of which make these approaches less efficient. Recommendations for application of the behavioural Rasch selection model are made based on results from the Monte Carlo experiments.
Process and representation in graphical displays
NASA Technical Reports Server (NTRS)
Gillan, Douglas J.; Lewis, Robert; Rudisill, Marianne
1993-01-01
Our initial model of graphic comprehension has focused on statistical graphs. Like other models of human-computer interaction, models of graphical comprehension can be used by human-computer interface designers and developers to create interfaces that present information in an efficient and usable manner. Our investigation of graph comprehension addresses two primary questions: how do people represent the information contained in a data graph?; and how do they process information from the graph? The topics of focus for graphic representation concern the features into which people decompose a graph and the representations of the graph in memory. The issue of processing can be further analyzed as two questions: what overall processing strategies do people use?; and what are the specific processing skills required?
Translating context to causality in cardiovascular disparities research.
Benn, Emma K T; Goldfeld, Keith S
2016-04-01
Moving from a descriptive focus to a comprehensive analysis grounded in causal inference can be particularly daunting for disparities researchers. However, even a simple model supported by the theoretical underpinnings of causality gives researchers a better chance to make correct inferences about possible interventions that can benefit our most vulnerable populations. This commentary provides a brief description of how race/ethnicity and context relate to questions of causality, and uses a hypothetical scenario to explore how different researchers might analyze the data to estimate causal effects of interest. Although perhaps not entirely free of bias, these causal estimates will move us a step closer to understanding how to intervene. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
ModelMate - A graphical user interface for model analysis
Banta, Edward R.
2011-01-01
ModelMate is a graphical user interface designed to facilitate use of model-analysis programs with models. This initial version of ModelMate supports one model-analysis program, UCODE_2005, and one model software program, MODFLOW-2005. ModelMate can be used to prepare input files for UCODE_2005, run UCODE_2005, and display analysis results. A link to the GW_Chart graphing program facilitates visual interpretation of results. ModelMate includes capabilities for organizing directories used with the parallel-processing capabilities of UCODE_2005 and for maintaining files in those directories to be identical to a set of files in a master directory. ModelMate can be used on its own or in conjunction with ModelMuse, a graphical user interface for MODFLOW-2005 and PHAST.
Inferring interventional predictions from observational learning data.
Meder, Bjorn; Hagmayer, York; Waldmann, Michael R
2008-02-01
Previous research has shown that people are capable of deriving correct predictions for previously unseen actions from passive observations of causal systems (Waldmann & Hagmayer, 2005). However, these studies were limited, since learning data were presented as tabulated data only, which may have turned the task into more of a reasoning task than a learning task. In two experiments, we therefore presented learners with trial-by-trial observational learning input referring to a complex causal model consisting of four events. To test the robustness of the capacity to derive correct observational and interventional inferences, we pitted causal order against the temporal order of learning events. The results show that people are, in principle, capable of deriving correct predictions after purely observational trial-by-trial learning, even with relatively complex causal models. However, conflicting temporal information can impair performance, particularly when the inferences require taking alternative causal pathways into account.
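For readers unfamiliar with the observational/interventional distinction the learners had to respect, the standard formulation for a common-cause structure is sketched below; U denotes an unobserved common cause, and the identities are textbook adjustment formulas rather than the specific four-event model used in the experiments.

```latex
% Observational vs. interventional prediction when U is a common cause of X and Y:
P(Y \mid X = x) \;=\; \sum_{u} P(Y \mid x, u)\, P(u \mid x),
\qquad
P(Y \mid \mathrm{do}(X = x)) \;=\; \sum_{u} P(Y \mid x, u)\, P(u).
% The two differ whenever X and U are dependent, which is why correct
% interventional predictions require the causal structure and not just the
% observed conditional probabilities.
```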
Three Cs in measurement models: causal indicators, composite indicators, and covariates.
Bollen, Kenneth A; Bauldry, Shawn
2011-09-01
In the last 2 decades attention to causal (and formative) indicators has grown. Accompanying this growth has been the belief that one can classify indicators into 2 categories: effect (reflective) indicators and causal (formative) indicators. We argue that the dichotomous view is too simple. Instead, there are effect indicators and 3 types of variables on which a latent variable depends: causal indicators, composite (formative) indicators, and covariates (the "Three Cs"). Causal indicators have conceptual unity, and their effects on latent variables are structural. Covariates are not concept measures, but are variables to control to avoid bias in estimating the relations between measures and latent variables. Composite (formative) indicators form exact linear combinations of variables that need not share a concept. Their coefficients are weights rather than structural effects, and composites are a matter of convenience. The failure to distinguish the Three Cs has led to confusion and questions, such as, Are causal and formative indicators different names for the same indicator type? Should an equation with causal or formative indicators have an error term? Are the coefficients of causal indicators less stable than effect indicators? Distinguishing between causal and composite indicators and covariates goes a long way toward eliminating this confusion. We emphasize the key role that subject matter expertise plays in making these distinctions. We provide new guidelines for working with these variable types, including identification of models, scaling latent variables, parameter estimation, and validity assessment. A running empirical example on self-perceived health illustrates our major points.
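A minimal equation-level sketch of the distinction drawn above, with illustrative symbols (a latent variable eta, determinants x_i, effect indicators y_j); the exact parameterization is not taken from the article.

```latex
% Causal indicators: the latent variable \eta depends structurally on the x_i
% and carries a disturbance \zeta (so the equation has an error term):
\eta = \gamma_1 x_1 + \cdots + \gamma_K x_K + \zeta
% Composite (formative) indicators: an exact weighted sum, with no disturbance:
C = w_1 x_1 + \cdots + w_K x_K
% Effect (reflective) indicators: each measure reflects the latent variable:
y_j = \lambda_j \eta + \epsilon_j
% Covariates enter the structural equations only as controls, not as measures.
```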
Formalizing Neurath's ship: Approximate algorithms for online causal learning.
Bramley, Neil R; Dayan, Peter; Griffiths, Thomas L; Lagnado, David A
2017-04-01
Higher-level cognition depends on the ability to learn models of the world. We can characterize this at the computational level as a structure-learning problem with the goal of best identifying the prevailing causal relationships among a set of relata. However, the computational cost of performing exact Bayesian inference over causal models grows rapidly as the number of relata increases. This implies that the cognitive processes underlying causal learning must be substantially approximate. A powerful class of approximations that focuses on the sequential absorption of successive inputs is captured by the Neurath's ship metaphor in philosophy of science, where theory change is cast as a stochastic and gradual process shaped as much by people's limited willingness to abandon their current theory when considering alternatives as by the ground truth they hope to approach. Inspired by this metaphor and by algorithms for approximating Bayesian inference in machine learning, we propose an algorithmic-level model of causal structure learning under which learners represent only a single global hypothesis that they update locally as they gather evidence. We propose a related scheme for understanding how, under these limitations, learners choose informative interventions that manipulate the causal system to help elucidate its workings. We find support for our approach in the analysis of 3 experiments. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
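The sketch below is a toy illustration of the single-hypothesis, local-edit style of structure learning described above; it is not the authors' algorithm, and the variable names, scoring function, and conservatism margin are invented for illustration.

```python
# Toy sketch (not the authors' exact algorithm): the learner maintains one
# graph and, after each datum, considers only graphs one edge-change away,
# switching only when the change beats the current hypothesis by a
# conservatism margin.

VARS = ("A", "B", "C")

def neighbours(edges):
    """Graphs reachable by toggling a single directed edge (acyclicity not checked)."""
    out = []
    for i in VARS:
        for j in VARS:
            if i == j:
                continue
            cand = set(edges)
            cand.symmetric_difference_update({(i, j)})
            out.append(frozenset(cand))
    return out

def update(edges, datum, score, conservatism=0.5):
    """Keep the current graph unless a one-edge change scores clearly better."""
    best, best_val = edges, score(edges, datum) + conservatism
    for cand in neighbours(edges):
        val = score(cand, datum)
        if val > best_val:
            best, best_val = cand, val
    return best

# Illustrative scorer: reward edges named in the datum, penalise graph size.
toy_score = lambda edges, datum: sum(e in edges for e in datum) - 0.3 * len(edges)
hypothesis = frozenset()
for datum in [{("A", "B")}, {("A", "B"), ("B", "C")}, {("B", "C")}]:
    hypothesis = update(hypothesis, datum, toy_score)
print(sorted(hypothesis))
```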
Task-Analytic Design of Graphic Presentations
1990-05-18
important premise of Larkin and Simon's work is that, when comparing alternative presentations, it is fruitful to characterize graphic-based problem solving...using the same information-processing models used to help understand problem solving using other representations [Newell and Simon, 1972]...during execution of graphic presentation-based problem-solving procedures. Chapter 2 reviews other work related to the problem of designing graphic
Model robustness as a confirmatory virtue: The case of climate science.
Lloyd, Elisabeth A
2015-02-01
I propose a distinct type of robustness, which I suggest can support a confirmatory role in scientific reasoning, contrary to the usual philosophical claims. In model robustness, repeated production of the empirically successful model prediction or retrodiction against a background of independently-supported and varying model constructions, within a group of models containing a shared causal factor, may suggest how confident we can be in the causal factor and predictions/retrodictions, especially once supported by a variety of evidence framework. I present climate models of greenhouse gas global warming of the 20th Century as an example, and emphasize climate scientists' discussions of robust models and causal aspects. The account is intended as applicable to a broad array of sciences that use complex modeling techniques. Copyright © 2014 Elsevier Ltd. All rights reserved.
Quantum Common Causes and Quantum Causal Models
NASA Astrophysics Data System (ADS)
Allen, John-Mark A.; Barrett, Jonathan; Horsman, Dominic C.; Lee, Ciarán M.; Spekkens, Robert W.
2017-07-01
Reichenbach's principle asserts that if two observed variables are found to be correlated, then there should be a causal explanation of these correlations. Furthermore, if the explanation is in terms of a common cause, then the conditional probability distribution over the variables given the complete common cause should factorize. The principle is generalized by the formalism of causal models, in which the causal relationships among variables constrain the form of their joint probability distribution. In the quantum case, however, the observed correlations in Bell experiments cannot be explained in the manner Reichenbach's principle would seem to demand. Motivated by this, we introduce a quantum counterpart to the principle. We demonstrate that under the assumption that quantum dynamics is fundamentally unitary, if a quantum channel with input A and outputs B and C is compatible with A being a complete common cause of B and C, then it must factorize in a particular way. Finally, we show how to generalize our quantum version of Reichenbach's principle to a formalism for quantum causal models and provide examples of how the formalism works.
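For reference, the classical form of the factorization condition that the quantum version generalizes can be written as follows (standard notation, not specific to the article).

```latex
% Classical Reichenbach condition: if A is a complete common cause of B and C,
% then conditioning on A screens B off from C,
P(B, C \mid A) \;=\; P(B \mid A)\, P(C \mid A).
% The paper's quantum counterpart replaces this with a factorization condition
% on the channel from A to B and C, derived under the assumption that quantum
% dynamics is fundamentally unitary.
```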
Reconstructing constructivism: causal models, Bayesian learning mechanisms, and the theory theory.
Gopnik, Alison; Wellman, Henry M
2012-11-01
We propose a new version of the "theory theory" grounded in the computational framework of probabilistic causal models and Bayesian learning. Probabilistic models allow a constructivist but rigorous and detailed approach to cognitive development. They also explain the learning of both more specific causal hypotheses and more abstract framework theories. We outline the new theoretical ideas, explain the computational framework in an intuitive and nontechnical way, and review an extensive but relatively recent body of empirical results that supports these ideas. These include new studies of the mechanisms of learning. Children infer causal structure from statistical information, through their own actions on the world and through observations of the actions of others. Studies demonstrate these learning mechanisms in children from 16 months to 4 years old and include research on causal statistical learning, informal experimentation through play, and imitation and informal pedagogy. They also include studies of the variability and progressive character of intuitive theory change, particularly theory of mind. These studies investigate both the physical and the psychological and social domains. We conclude with suggestions for further collaborative projects between developmental and computational cognitive scientists.
Bramley, Neil R; Lagnado, David A; Speekenbrink, Maarten
2015-05-01
Interacting with a system is key to uncovering its causal structure. A computational framework for interventional causal learning has been developed over the last decade, but how real causal learners might achieve or approximate the computations entailed by this framework is still poorly understood. Here we describe an interactive computer task in which participants were incentivized to learn the structure of probabilistic causal systems through free selection of multiple interventions. We develop models of participants' intervention choices and online structure judgments, using expected utility gain, probability gain, and information gain and introducing plausible memory and processing constraints. We find that successful participants are best described by a model that acts to maximize information (rather than expected score or probability of being correct); that forgets much of the evidence received in earlier trials; but that mitigates this by being conservative, preferring structures consistent with earlier stated beliefs. We explore 2 heuristics that partly explain how participants might be approximating these models without explicitly representing or updating a hypothesis space. (PsycINFO Database Record (c) 2015 APA, all rights reserved).
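A minimal sketch of the expected-information-gain quantity that the best-fitting model maximizes; the hypotheses, prior, and outcome likelihoods below are invented for illustration.

```python
import numpy as np

# Expected information gain of an intervention over a set of candidate causal
# hypotheses (illustrative numbers, not the task's actual causal systems).

def entropy(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def expected_information_gain(prior, likelihoods):
    """prior: (H,) P(hypothesis); likelihoods: (H, O) P(outcome | hypothesis, action)."""
    prior = np.asarray(prior, dtype=float)
    likelihoods = np.asarray(likelihoods, dtype=float)
    p_outcome = prior @ likelihoods                  # marginal P(outcome | action)
    gain = 0.0
    for o, p_o in enumerate(p_outcome):
        if p_o == 0:
            continue
        posterior = prior * likelihoods[:, o] / p_o
        gain += p_o * (entropy(prior) - entropy(posterior))
    return gain

# Two hypotheses, one intervention with two possible outcomes.
print(expected_information_gain([0.5, 0.5], [[0.9, 0.1], [0.2, 0.8]]))
```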
Modeling the Citation Network by Network Cosmology
Xie, Zheng; Ouyang, Zhenzheng; Zhang, Pengyuan; Yi, Dongyun; Kong, Dexing
2015-01-01
Citation between papers can be treated as a causal relationship. In addition, some citation networks have a number of similarities to the causal networks in network cosmology, e.g., the similar in- and out-degree distributions. Hence, it is possible to model the citation network using network cosmology. The causal network models built on homogeneous spacetimes have some restrictions when describing some phenomena in citation networks, e.g., the hot papers receive more citations than other simultaneously published papers. We propose an inhomogeneous causal network model to model the citation network, whose connection mechanism well expresses some features of citation. The node growth trend and degree distributions of the generated networks also fit those of some citation networks well. PMID:25807397
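As a point of reference, the sketch below grows a generic citation network by degree-biased attachment; it is not the authors' inhomogeneous causal network model, and the parameters are illustrative.

```python
import random

# Generic growing-citation-network sketch: each new paper cites m earlier
# papers chosen with probability proportional to in-degree + 1, so "hot"
# papers accumulate citations faster than other papers published at the same time.

def grow_citation_network(n_papers=1000, m=3, seed=0):
    rng = random.Random(seed)
    in_degree = [0] * n_papers
    edges = []                                  # (citing paper, cited paper)
    for new in range(m, n_papers):
        weights = [in_degree[p] + 1 for p in range(new)]
        cited = set()
        while len(cited) < m:
            cited.add(rng.choices(range(new), weights=weights, k=1)[0])
        for c in cited:
            in_degree[c] += 1
            edges.append((new, c))
    return edges, in_degree

edges, in_deg = grow_citation_network()
print("most-cited paper:", in_deg.index(max(in_deg)), "with", max(in_deg), "citations")
```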
A causal analysis framework for land-use change and the potential role of bioenergy policy
Efroymson, Rebecca A.; Kline, Keith L.; Angelsen, Arild; ...
2016-10-05
Here we propose a causal analysis framework to increase the reliability of land-use change (LUC) models and the accuracy of net greenhouse gas (GHG) emissions calculations for biofuels. The health-sciences-inspired framework is used here to determine probable causes of LUC, with an emphasis on bioenergy and deforestation. Calculations of net GHG emissions for LUC are critical in determining whether a fuel qualifies as a biofuel or advanced biofuel category under national (U.S., U.K.), state (California), and European Union regulations. Biofuel policymakers and scientists continue to discuss whether presumed indirect land-use change (ILUC) estimates, which often involve deforestation, should be included in GHG accounting for biofuel pathways. Current estimates of ILUC for bioenergy rely largely on economic simulation models that focus on causal pathways involving global commodity trade and use coarse land cover data with simple land classification systems. ILUC estimates are highly uncertain, partly because changes are not clearly defined and key causal links are not sufficiently included in the models. The proposed causal analysis framework begins with a definition of the change that has occurred and proceeds to a strength-of-evidence approach based on types of epidemiological evidence including plausibility of the relationship, completeness of the causal pathway, spatial co-occurrence, time order, analogous agents, simulation model results, and quantitative agent response relationships. Lastly, we discuss how LUC may be allocated among probable causes for policy purposes and how the application of the framework has the potential to increase the validity of LUC models and resolve ILUC and biofuel controversies.
Cox, Louis Anthony Tony
2017-08-01
Concentration-response (C-R) functions relating concentrations of pollutants in ambient air to mortality risks or other adverse health effects provide the basis for many public health risk assessments, benefits estimates for clean air regulations, and recommendations for revisions to existing air quality standards. The assumption that C-R functions relating levels of exposure and levels of response estimated from historical data usefully predict how future changes in concentrations would change risks has seldom been carefully tested. This paper critically reviews literature on C-R functions for fine particulate matter (PM2.5) and mortality risks. We find that most of them describe historical associations rather than valid causal models for predicting effects of interventions that change concentrations. The few papers that explicitly attempt to model causality rely on unverified modeling assumptions, casting doubt on their predictions about effects of interventions. A large literature on modern causal inference algorithms for observational data has been little used in C-R modeling. Applying these methods to publicly available data from Boston and the South Coast Air Quality Management District around Los Angeles shows that C-R functions estimated for one do not hold for the other. Changes in month-specific PM2.5 concentrations from one year to the next do not help to predict corresponding changes in average elderly mortality rates in either location. Thus, the assumption that estimated C-R relations predict effects of pollution-reducing interventions may not be true. Better causal modeling methods are needed to better predict how reducing air pollution would affect public health.
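The year-over-year check described above can be sketched as a simple first-difference regression; the series below are synthetic stand-ins for the Boston and South Coast data, and all numbers are illustrative.

```python
import numpy as np

# Schematic of the check: regress year-over-year changes in month-specific
# mortality on the corresponding changes in monthly PM2.5. A slope near zero
# means concentration changes do not help predict mortality changes, even if
# the levels themselves are correlated (here via shared seasonality).

rng = np.random.default_rng(0)
months = 120
season = np.sin(np.arange(months) * 2 * np.pi / 12)
pm25 = 10 + 3 * season + rng.normal(0, 1, months)        # synthetic ug/m3
mortality = 50 + 5 * season + rng.normal(0, 2, months)   # synthetic death counts

d_pm25 = pm25[12:] - pm25[:-12]      # change from the same month one year earlier
d_mort = mortality[12:] - mortality[:-12]

slope, intercept = np.polyfit(d_pm25, d_mort, 1)
print(f"levels correlation:  {np.corrcoef(pm25, mortality)[0, 1]:.2f}")
print(f"changes slope: {slope:.3f}, changes correlation: {np.corrcoef(d_pm25, d_mort)[0, 1]:.2f}")
```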
Stereoscopic 3D graphics generation
NASA Astrophysics Data System (ADS)
Li, Zhi; Liu, Jianping; Zan, Y.
1997-05-01
Stereoscopic display technology is a key technique in areas such as simulation, multimedia, entertainment, and virtual reality. Moreover, stereoscopic 3D graphics generation is an important part of a stereoscopic 3D display system. In this paper, we first describe the principle of stereoscopic display and summarize some methods for generating stereoscopic 3D graphics. Secondly, to overcome the problems of user-defined models (such as inconvenience and long modification cycles), we put forward a method based on vector graphics file definitions. This allows more direct design, simpler and easier model modification, and more convenient generation, and it makes full use of the graphics accelerator card. Finally, we discuss how to speed up the generation.
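A minimal sketch of one standard way to produce a stereoscopic pair, projecting the scene from two horizontally offset eye positions; the focal length and eye separation are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Pinhole projection of the same scene from two eye positions. The resulting
# horizontal disparity shrinks with depth, which is what creates the 3D percept.

def project(points, eye_x, focal=1.0):
    """Project 3D points for a camera at (eye_x, 0, 0) looking along +z."""
    pts = np.asarray(points, dtype=float)
    u = (pts[:, 0] - eye_x) * focal / pts[:, 2]
    v = pts[:, 1] * focal / pts[:, 2]
    return np.column_stack([u, v])

cube = np.array([[x, y, z] for x in (-1, 1) for y in (-1, 1) for z in (4, 6)], float)
ipd = 0.065                                  # eye separation in metres (assumed)
left, right = project(cube, -ipd / 2), project(cube, +ipd / 2)
print("horizontal disparities:", (left[:, 0] - right[:, 0]).round(4))
```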
Causality and headache triggers
Turner, Dana P.; Smitherman, Todd A.; Martin, Vincent T.; Penzien, Donald B.; Houle, Timothy T.
2013-01-01
Objective The objective of this study was to explore the conditions necessary to assign causal status to headache triggers. Background The term “headache trigger” is commonly used to label any stimulus that is assumed to cause headaches. However, the assumptions required for determining if a given stimulus in fact has a causal-type relationship in eliciting headaches have not been explicated. Methods A synthesis and application of Rubin’s Causal Model is applied to the context of headache causes. From this application the conditions necessary to infer that one event (trigger) causes another (headache) are outlined using basic assumptions and examples from relevant literature. Results Although many conditions must be satisfied for a causal attribution, three basic assumptions are identified for determining causality in headache triggers: 1) constancy of the sufferer; 2) constancy of the trigger effect; and 3) constancy of the trigger presentation. A valid evaluation of a potential trigger’s effect can only be undertaken once these three basic assumptions are satisfied during formal or informal studies of headache triggers. Conclusions Evaluating these assumptions is extremely difficult or infeasible in clinical practice, and satisfying them during natural experimentation is unlikely. Researchers, practitioners, and headache sufferers are encouraged to avoid natural experimentation to determine the causal effects of headache triggers. Instead, formal experimental designs or retrospective diary studies using advanced statistical modeling techniques provide the best approaches to satisfy the required assumptions and inform causal statements about headache triggers. PMID:23534872
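In potential-outcomes notation (the notation of Rubin's Causal Model), the trigger effect at issue can be written as a contrast between two potential outcomes, only one of which is observed on any given occasion.

```latex
% Potential-outcomes formulation of a trigger effect for a single exposure
% opportunity i (standard Rubin Causal Model notation):
\tau_i \;=\; Y_i(\text{trigger present}) - Y_i(\text{trigger absent}).
% Only one of the two potential outcomes is ever observed, which is why the
% three constancy assumptions above are needed before repeated informal
% self-experiments can support a causal reading.
```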
Youssofzadeh, Vahab; Prasad, Girijesh; Naeem, Muhammad; Wong-Lin, KongFatt
2016-01-01
Partial Granger causality (PGC) has been applied to analyse causal functional neural connectivity after effectively mitigating confounding influences caused by endogenous latent variables and exogenous environmental inputs. However, it is not known how this connectivity obtained from PGC evolves over time. Furthermore, PGC has yet to be tested on realistic nonlinear neural circuit models and multi-trial event-related potentials (ERPs) data. In this work, we first applied a time-domain PGC technique to evaluate simulated neural circuit models, and demonstrated that the PGC measure is more accurate and robust in detecting connectivity patterns as compared to conditional Granger causality and partial directed coherence, especially when the circuit is intrinsically nonlinear. Moreover, the connectivity in PGC settles faster into a stable and correct configuration over time. After method verification, we applied PGC to reveal the causal connections of ERP trials of a mismatch negativity auditory oddball paradigm. The PGC analysis revealed a significant bilateral but asymmetrical localised activity in the temporal lobe close to the auditory cortex, and causal influences in the frontal, parietal and cingulate cortical areas, consistent with previous studies. Interestingly, the time to reach a stable connectivity configuration (~250–300 ms) coincides with the deviation of ensemble ERPs of oddball from standard tones. Finally, using a sliding time window, we showed higher resolution dynamics of causal connectivity within an ERP trial. In summary, time-domain PGC is promising in deciphering directed functional connectivity in nonlinear and ERP trials accurately, and at a sufficiently early stage. This data-driven approach can reduce computational time, and determine the key architecture for neural circuit modeling.
The Feasibility of Using Causal Indicators in Educational Measurement
ERIC Educational Resources Information Center
Wang, Jue; Engelhard, George, Jr.
2016-01-01
The authors of the focus article describe an important issue related to the use and interpretation of causal indicators within the context of structural equation modeling (SEM). In the focus article, the authors illustrate with simulated data the effects of omitting a causal indicator. Since SEMs are used extensively in the social and behavioral…
Cause and Event: Supporting Causal Claims through Logistic Models
ERIC Educational Resources Information Center
O'Connell, Ann A.; Gray, DeLeon L.
2011-01-01
Efforts to identify and support credible causal claims have received intense interest in the research community, particularly over the past few decades. In this paper, we focus on the use of statistical procedures designed to support causal claims for a treatment or intervention when the response variable of interest is dichotomous. We identify…
Programs as Causal Models: Speculations on Mental Programs and Mental Representation
ERIC Educational Resources Information Center
Chater, Nick; Oaksford, Mike
2013-01-01
Judea Pearl has argued that counterfactuals and causality are central to intelligence, whether natural or artificial, and has helped create a rich mathematical and computational framework for formally analyzing causality. Here, we draw out connections between these notions and various current issues in cognitive science, including the nature of…
Campbell's and Rubin's Perspectives on Causal Inference
ERIC Educational Resources Information Center
West, Stephen G.; Thoemmes, Felix
2010-01-01
Donald Campbell's approach to causal inference (D. T. Campbell, 1957; W. R. Shadish, T. D. Cook, & D. T. Campbell, 2002) is widely used in psychology and education, whereas Donald Rubin's causal model (P. W. Holland, 1986; D. B. Rubin, 1974, 2005) is widely used in economics, statistics, medicine, and public health. Campbell's approach focuses on…
ERIC Educational Resources Information Center
Ding, Lin
2014-01-01
This study seeks to test the causal influences of reasoning skills and epistemologies on student conceptual learning in physics. A causal model, integrating multiple variables that were investigated separately in the prior literature, is proposed and tested through path analysis. These variables include student preinstructional reasoning skills…
From Blickets to Synapses: Inferring Temporal Causal Networks by Observation
ERIC Educational Resources Information Center
Fernando, Chrisantha
2013-01-01
How do human infants learn the causal dependencies between events? Evidence suggests that this remarkable feat can be achieved by observation of only a handful of examples. Many computational models have been produced to explain how infants perform causal inference without explicit teaching about statistics or the scientific method. Here, we…
Links between causal effects and causal association for surrogacy evaluation in a gaussian setting.
Conlon, Anna; Taylor, Jeremy; Li, Yun; Diaz-Ordaz, Karla; Elliott, Michael
2017-11-30
Two paradigms for the evaluation of surrogate markers in randomized clinical trials have been proposed: the causal effects paradigm and the causal association paradigm. Each of these paradigms relies on assumptions that must be made to proceed with estimation and to validate a candidate surrogate marker (S) for the true outcome of interest (T). We consider the setting in which S and T are Gaussian and are generated from structural models that include an unobserved confounder. Under the assumed structural models, we relate the quantities used to evaluate surrogacy within both the causal effects and causal association frameworks. We review some of the common assumptions made to aid in estimating these quantities and show that assumptions made within one framework can imply strong assumptions within the alternative framework. We demonstrate that there is a similarity, but not an exact correspondence, between the quantities used to evaluate surrogacy within each framework, and show that the conditions for identifiability of the surrogacy parameters are different from the conditions that lead to a correspondence of these quantities. Copyright © 2017 John Wiley & Sons, Ltd.
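One possible Gaussian structural model of the kind described, with treatment Z, surrogate S, true outcome T, and unobserved confounder U; the particular parameterization is illustrative and not taken from the paper.

```latex
% Illustrative Gaussian structural model with an unobserved confounder U:
U \sim \mathcal{N}(0, \sigma_U^2),\qquad
S = \alpha_S + \beta_S Z + \lambda_S U + \varepsilon_S,\qquad
T = \alpha_T + \beta_T Z + \gamma S + \lambda_T U + \varepsilon_T,
% with \varepsilon_S, \varepsilon_T independent, mean-zero Gaussian errors.
% The causal effects and causal association paradigms evaluate surrogacy
% through different functionals of these parameters.
```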
Causal Reasoning with Mental Models
2014-08-08
The initial rubric is equivalent to an exclusive disjunction between the two causal assertions. It yields the following two mental models: ...are important, whereas the functions of artifacts are important (Ahn, 1998). A genetic code is accordingly more critical to being a goat than
Repeated causal decision making.
Hagmayer, York; Meder, Björn
2013-01-01
Many of our decisions refer to actions that have a causal impact on the external environment. Such actions may not only allow for the mere learning of expected values or utilities but also for acquiring knowledge about the causal structure of our world. We used a repeated decision-making paradigm to examine what kind of knowledge people acquire in such situations and how they use their knowledge to adapt to changes in the decision context. Our studies show that decision makers' behavior is strongly contingent on their causal beliefs and that people exploit their causal knowledge to assess the consequences of changes in the decision problem. A high consistency between hypotheses about causal structure, causally expected values, and actual choices was observed. The experiments show that (a) existing causal hypotheses guide the interpretation of decision feedback, (b) consequences of decisions are used to revise existing causal beliefs, and (c) decision makers use the experienced feedback to induce a causal model of the choice situation even when they have no initial causal hypotheses, which (d) enables them to adapt their choices to changes of the decision problem. (PsycINFO Database Record (c) 2013 APA, all rights reserved).
Vineis, Paolo; Illari, Phyllis; Russo, Federica
2017-01-01
In the last decades, Systems Biology (including cancer research) has been driven by technology, statistical modelling and bioinformatics. In this paper we try to bring biological and philosophical thinking back. We thus aim at making different traditions of thought compatible: (a) causality in epidemiology and in philosophical theorizing, notably the "sufficient-component-cause framework" and the "mark transmission" approach; (b) new acquisitions about disease pathogenesis, e.g. the "branched model" in cancer, and the role of biomarkers in this process; (c) the burgeoning of omics research, with a large number of "signals" and of associations that need to be interpreted. In the paper we first summarize the current views on carcinogenesis, and then explore the relevance of current philosophical interpretations of "cancer causes". We try to offer a unifying framework to incorporate biomarkers and omic data into causal models, referring to a position called "evidential pluralism". According to this view, causal reasoning is based on both "evidence of difference-making" (e.g. associations) and "evidence of underlying biological mechanisms". We conceptualize the way scientists detect and trace signals in terms of information transmission, which is a generalization of the mark transmission theory developed by philosopher Wesley Salmon. Our approach helps us conceptualize how heterogeneous factors, such as micro- and macro-biological and psycho-social factors, are causally linked. This is important not only for understanding cancer etiology, but also for designing public health policies that target the right causal factors at the macro-level.
The Ising model coupled to 2d orders
NASA Astrophysics Data System (ADS)
Glaser, Lisa
2018-04-01
In this article we make first steps in coupling matter to causal set theory in the path integral. We explore the case of the Ising model coupled to the 2d discrete Einstein Hilbert action, restricted to the 2d orders. We probe the phase diagram in terms of the Wick rotation parameter β and the Ising coupling j and find that the matter and the causal sets together give rise to an interesting phase structure. The couplings give rise to five different phases. The causal sets take on random or crystalline characteristics as described in Surya (2012 Class. Quantum Grav. 29 132001) and the Ising model can be correlated or uncorrelated on the random orders and correlated, uncorrelated or anti-correlated on the crystalline orders. We find that at least one new phase transition arises, in which the Ising spins push the causal set into the crystalline phase.
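The toy sketch below runs Metropolis updates for an Ising model on an arbitrary graph; it is not the paper's path integral over 2d orders, and the random graph, coupling j, and inverse temperature beta are stand-ins chosen for illustration.

```python
import math
import random

# Metropolis sketch of an Ising model on an arbitrary undirected graph. The
# random graph below merely stands in for the link structure of a causal set.

def metropolis_ising(adj, j=1.0, beta=0.5, sweeps=200, seed=0):
    rng = random.Random(seed)
    n = len(adj)
    spins = [rng.choice((-1, 1)) for _ in range(n)]
    for _ in range(sweeps * n):
        i = rng.randrange(n)
        # Energy change for flipping spin i with H = -j * sum over links s_a s_b.
        d_e = 2.0 * j * spins[i] * sum(spins[k] for k in adj[i])
        if d_e <= 0 or rng.random() < math.exp(-beta * d_e):
            spins[i] = -spins[i]
    return spins

# Erdos-Renyi-style graph on 50 nodes standing in for causal-set links.
graph_rng = random.Random(1)
n = 50
adj = [[] for _ in range(n)]
for a in range(n):
    for b in range(a + 1, n):
        if graph_rng.random() < 0.1:
            adj[a].append(b)
            adj[b].append(a)

spins = metropolis_ising(adj)
print("magnetisation per spin:", sum(spins) / n)
```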
Gao, Xiangyun; Huang, Shupei; Sun, Xiaoqi; Hao, Xiaoqing; An, Feng
2018-03-01
Microscopic factors are the basis of macroscopic phenomena. We proposed a network analysis paradigm to study the macroscopic financial system from a microstructure perspective. We built the cointegration network model and the Granger causality network model based on econometrics and complex network theory and chose stock price time series of the real estate industry and its upstream and downstream industries as empirical sample data. Then, we analysed the cointegration network for understanding the steady long-term equilibrium relationships and analysed the Granger causality network for identifying the diffusion paths of the potential risks in the system. The results showed that the influence from a few key stocks can spread conveniently in the system. The cointegration network and Granger causality network are helpful to detect the diffusion path between the industries. We can also identify and intervene in the transmission medium to curb risk diffusion.
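A minimal sketch of the two network-building steps (cointegration edges and Granger-causality edges) on synthetic price series; the sector names, lag, and significance threshold are illustrative assumptions, not the paper's data or settings.

```python
import numpy as np
from statsmodels.tsa.stattools import coint, grangercausalitytests

# Synthetic price series standing in for real-estate and related-industry stocks.
rng = np.random.default_rng(0)
n_obs = 500
common = np.cumsum(rng.normal(size=n_obs))                 # shared stochastic trend
prices = {
    "real_estate": common + rng.normal(scale=2.0, size=n_obs),
    "steel":       common + rng.normal(scale=2.0, size=n_obs),
    "banking":     np.cumsum(rng.normal(size=n_obs)),      # independent random walk
}

names = list(prices)
coint_edges, granger_edges = [], []
for i, a in enumerate(names):
    for b in names[i + 1:]:
        # Undirected cointegration edge if the Engle-Granger test rejects no cointegration.
        if coint(prices[a], prices[b])[1] < 0.05:
            coint_edges.append((a, b))
for a in names:
    for b in names:
        if a == b:
            continue
        # Directed edge a -> b if a Granger-causes b at lag 1 (on differenced series).
        data = np.column_stack([np.diff(prices[b]), np.diff(prices[a])])
        p = grangercausalitytests(data, maxlag=1)[1][0]["ssr_ftest"][1]
        if p < 0.05:
            granger_edges.append((a, b))

print("cointegration edges:", coint_edges)
print("Granger-causality edges:", granger_edges)
```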
Cortical circuitry implementing graphical models.
Litvak, Shai; Ullman, Shimon
2009-11-01
In this letter, we develop and simulate a large-scale network of spiking neurons that approximates the inference computations performed by graphical models. Unlike previous related schemes, which used sum and product operations in either the log or linear domains, the current model uses an inference scheme based on the sum and maximization operations in the log domain. Simulations show that using these operations, a large-scale circuit, which combines populations of spiking neurons as basic building blocks, is capable of finding close approximations to the full mathematical computations performed by graphical models within a few hundred milliseconds. The circuit is general in the sense that it can be wired for any graph structure, it supports multistate variables, and it uses standard leaky integrate-and-fire neuronal units. Following previous work, which proposed relations between graphical models and the large-scale cortical anatomy, we focus on the cortical microcircuitry and propose how anatomical and physiological aspects of the local circuitry may map onto elements of the graphical model implementation. We discuss in particular the roles of three major types of inhibitory neurons (small fast-spiking basket cells, large layer 2/3 basket cells, and double-bouquet neurons), subpopulations of strongly interconnected neurons with their unique connectivity patterns in different cortical layers, and the possible role of minicolumns in the realization of the population-based maximum operation.
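The max-sum computation that the spiking circuit approximates can be illustrated on a small chain-structured model; the potentials below are made up, and the code shows only the log-domain sum and maximization operations, not the neural implementation.

```python
import numpy as np

# Max-sum (log-domain max-product, i.e. Viterbi) on a chain x1 - x2 - x3 with
# made-up potentials. Forward messages keep, for each state of the next
# variable, the best total log-score achievable so far.

log_unary = [np.log(np.array(p)) for p in ([0.6, 0.4], [0.3, 0.7], [0.5, 0.5])]
log_pair = np.log(np.array([[0.8, 0.2],
                            [0.2, 0.8]]))      # same coupling for both links

msg = log_unary[0]
backptr = []
for j in (1, 2):
    scores = msg[:, None] + log_pair + log_unary[j][None, :]
    backptr.append(np.argmax(scores, axis=0))  # best previous state per current state
    msg = np.max(scores, axis=0)

# Backward pass recovers the jointly most probable configuration.
best = [int(np.argmax(msg))]
for bp in reversed(backptr):
    best.append(int(bp[best[-1]]))
best.reverse()
print("MAP assignment:", best, "log-score:", float(np.max(msg)))
```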
Naaz, Farah; Chariker, Julia H.; Pani, John R.
2013-01-01
A study was conducted to test the hypothesis that instruction with graphically integrated representations of whole and sectional neuroanatomy is especially effective for learning to recognize neural structures in sectional imagery (such as MRI images). Neuroanatomy was taught to two groups of participants using computer graphical models of the human brain. Both groups learned whole anatomy first with a three-dimensional model of the brain. One group then learned sectional anatomy using two-dimensional sectional representations, with the expectation that there would be transfer of learning from whole to sectional anatomy. The second group learned sectional anatomy by moving a virtual cutting plane through the three-dimensional model. In tests of long-term retention of sectional neuroanatomy, the group with graphically integrated representation recognized more neural structures that were known to be challenging to learn. This study demonstrates the use of graphical representation to facilitate a more elaborated (deeper) understanding of complex spatial relations. PMID:24563579
Attiaoui, Imed; Toumi, Hassen; Ammouri, Bilel; Gargouri, Ilhem
2017-05-01
This research examines the causality (For the remainder of the paper, the notion of causality refers to Granger causality.) links among renewable energy consumption (REC), CO 2 emissions (CE), non-renewable energy consumption (NREC), and economic growth (GDP) using an autoregressive distributed lag model based on the pooled mean group estimation (ARDL-PMG) and applying Granger causality tests for a panel consisting of 22 African countries for the period between 1990 and 2011. There is unidirectional and irreversible short-run causality from CE to GDP. The causal direction between CE and REC is unobservable over the short-term. Moreover, we find unidirectional, short-run causality from REC to GDP. When testing per pair of variables, there are short-run bidirectional causalities among REC, CE, and GDP. However, if we add CE to the variables REC and NREC, the causality to GDP is observable, and causality from the pair REC and NREC to economic growth is neutral. Likewise, if we add NREC to the variables GDP and REC, there is causality. There are bidirectional long-run causalities among REC, CE, and GDP, which supports the feedback assumption. Causality from GDP to REC is not strong for the panel. If we test per pair of variables, the strong causality from GDP and CE to REC is neutral. The long-run PMG estimates show that NREC and gross domestic product increase CE, whereas REC decreases CE.
Interactive computer graphics and its role in control system design of large space structures
NASA Technical Reports Server (NTRS)
Reddy, A. S. S. R.
1985-01-01
This paper attempts to show the relevance of interactive computer graphics in the design of control systems to maintain attitude and shape of large space structures to accomplish the required mission objectives. The typical phases of control system design, starting from the physical model such as modeling the dynamics, modal analysis, and control system design methodology are reviewed and the need of the interactive computer graphics is demonstrated. Typical constituent parts of large space structures such as free-free beams and free-free plates are used to demonstrate the complexity of the control system design and the effectiveness of the interactive computer graphics.
Interactive Gaussian Graphical Models for Discovering Depth Trends in ChemCam Data
NASA Astrophysics Data System (ADS)
Oyen, D. A.; Komurlu, C.; Lanza, N. L.
2018-04-01
Interactive Gaussian graphical models discover surface compositional features on rocks in ChemCam targets. Our approach visualizes shot-to-shot relationships among LIBS observations, and identifies the wavelengths involved in the trend.
Causal Responsibility and Counterfactuals
Lagnado, David A; Gerstenberg, Tobias; Zultan, Ro'i
2013-01-01
How do people attribute responsibility in situations where the contributions of multiple agents combine to produce a joint outcome? The prevalence of over-determination in such cases makes this a difficult problem for counterfactual theories of causal responsibility. In this article, we explore a general framework for assigning responsibility in multiple agent contexts. We draw on the structural model account of actual causation (e.g., Halpern & Pearl, 2005) and its extension to responsibility judgments (Chockler & Halpern, 2004). We review the main theoretical and empirical issues that arise from this literature and propose a novel model of intuitive judgments of responsibility. This model is a function of both pivotality (whether an agent made a difference to the outcome) and criticality (how important the agent is perceived to be for the outcome, before any actions are taken). The model explains empirical results from previous studies and is supported by a new experiment that manipulates both pivotality and criticality. We also discuss possible extensions of this model to deal with a broader range of causal situations. Overall, our approach emphasizes the close interrelations between causality, counterfactuals, and responsibility attributions. PMID:23855451
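The abstract does not give the model's functional form, so the sketch below is only a guessed combination for illustration: a Chockler-and-Halpern-style pivotality discount multiplied by an ex ante criticality weight.

```python
# Toy responsibility score combining pivotality and criticality. The 1/(k+1)
# pivotality discount (in the spirit of Chockler & Halpern, 2004) and the
# multiplicative combination are assumptions made only for illustration.

def pivotality(changes_needed: int) -> float:
    """1 when the agent alone could have changed the outcome; discounted by the
    number k of other contributions that would also have to change."""
    return 1.0 / (changes_needed + 1)

def responsibility(changes_needed: int, criticality: float) -> float:
    """criticality: perceived ex ante importance of the agent, in [0, 1]."""
    return pivotality(changes_needed) * criticality

# Example: the outcome was over-determined, so one other contribution would
# have had to change for this agent to be pivotal; the agent was fully critical.
print(responsibility(changes_needed=1, criticality=1.0))   # 0.5
```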
Aristotelian Causality and the Teaching of Literary Theory.
ERIC Educational Resources Information Center
Finnegan, John D.
1982-01-01
Describes how the Aristotelian model of causality can be used to help college students systematically analyze the components, point of view, organization, and purpose of a literary theory. The literary theories of Plato, Aristotle, Longinus, Sidney, Pope, Wordsworth, Coleridge, and Shelley are analyzed, using this model. (AM)
2013-06-01
simulation of complex systems (Sterman 2000, Meadows 2008): a) Causal Loop Diagrams. A Causal Loop Diagram (CLD) is used to represent the feedback...structure of the dynamic system. CLDs consist of variables in the system being connected by arrows to show their causal influences and relationships. It is...distribution of orders will be included in the model. 6.4.2 Causal Loop Diagrams The CLD, as seen in Figure 5, is derived from the WDA constructs for the
Creating a Strong Foundation with Engineering Design Graphics.
ERIC Educational Resources Information Center
Newcomer, Jeffrey L.; McKell, Eric K.; Raudebaugh, Robert A.; Kelley, David S.
2001-01-01
Describes the two-course engineering design graphics sequence on introductory design and graphics topics. The first course focuses on conceptual design and the development of visualization and sketching skills while the second one concentrates on detail design and parametric modeling. (Contains 28 references.) (Author/ASK)
Sparse covariance estimation in heterogeneous samples
Rodríguez, Abel; Lenkoski, Alex; Dobra, Adrian
2015-01-01
Standard Gaussian graphical models implicitly assume that the conditional independence among variables is common to all observations in the sample. However, in practice, observations are usually collected from heterogeneous populations where such an assumption is not satisfied, leading in turn to nonlinear relationships among variables. To address such situations we explore mixtures of Gaussian graphical models; in particular, we consider both infinite mixtures and infinite hidden Markov models where the emission distributions correspond to Gaussian graphical models. Such models allow us to divide a heterogeneous population into homogenous groups, with each cluster having its own conditional independence structure. As an illustration, we study the trends in foreign exchange rate fluctuations in the pre-Euro era. PMID:26925189
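A minimal sketch of the within-cluster step of such a mixture: once observations are grouped, a sparse Gaussian graphical model is fit separately in each group. The cluster assignments and regularization level below are assumed for illustration; the mixture and hidden Markov machinery of the paper is not reproduced.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

# Two hypothetical clusters with different dependence structures; a sparse
# precision matrix is estimated within each cluster.
rng = np.random.default_rng(0)
X1 = rng.multivariate_normal([0, 0, 0], [[1, .8, 0], [.8, 1, 0], [0, 0, 1]], 200)
X2 = rng.multivariate_normal([0, 0, 0], [[1, 0, 0], [0, 1, .8], [0, .8, 1]], 200)

for name, X in [("cluster 1", X1), ("cluster 2", X2)]:
    model = GraphicalLasso(alpha=0.1).fit(X)
    # Nonzero off-diagonal precision entries encode that cluster's conditional
    # dependence (graphical model) edges.
    print(name)
    print(np.round(model.precision_, 2))
```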
Three Cs in Measurement Models: Causal Indicators, Composite Indicators, and Covariates
Bollen, Kenneth A.; Bauldry, Shawn
2013-01-01
In the last two decades attention to causal (and formative) indicators has grown. Accompanying this growth has been the belief that we can classify indicators into two categories, effect (reflective) indicators and causal (formative) indicators. This paper argues that the dichotomous view is too simple. Instead, there are effect indicators and three types of variables on which a latent variable depends: causal indicators, composite (formative) indicators, and covariates (the “three Cs”). Causal indicators have conceptual unity and their effects on latent variables are structural. Covariates are not concept measures, but are variables to control to avoid bias in estimating the relations between measures and latent variable(s). Composite (formative) indicators form exact linear combinations of variables that need not share a concept. Their coefficients are weights rather than structural effects and composites are a matter of convenience. The failure to distinguish the “three Cs” has led to confusion and questions such as: are causal and formative indicators different names for the same indicator type? Should an equation with causal or formative indicators have an error term? Are the coefficients of causal indicators less stable than effect indicators? Distinguishing between causal and composite indicators and covariates goes a long way toward eliminating this confusion. We emphasize the key role that subject matter expertise plays in making these distinctions. We provide new guidelines for working with these variable types, including identification of models, scaling latent variables, parameter estimation, and validity assessment. A running empirical example on self-perceived health illustrates our major points. PMID:21767021
NASTRAN analysis of Tokamak vacuum vessel using interactive graphics
NASA Technical Reports Server (NTRS)
Miller, A.; Badrian, M.
1978-01-01
Isoparametric quadrilateral and triangular elements were used to represent the vacuum vessel shell structure. For toroidally symmetric loadings, MPCs were employed across model boundaries and rigid format 24 was invoked. Nonsymmetric loadings required the use of the cyclic symmetry analysis available with rigid format 49. NASTRAN served as an important analysis tool in the Tokamak design effort by providing a reliable means for assessing structural integrity. Interactive graphics were employed in the finite element model generation and in the post-processing of results. It was felt that model generation and checkout with interactive graphics reduced the modelling effort and debugging man-hours significantly.
Graphical Models for Ordinal Data
Guo, Jian; Levina, Elizaveta; Michailidis, George; Zhu, Ji
2014-01-01
A graphical model for ordinal variables is considered, where it is assumed that the data are generated by discretizing the marginal distributions of a latent multivariate Gaussian distribution. The relationships between these ordinal variables are then described by the underlying Gaussian graphical model and can be inferred by estimating the corresponding concentration matrix. Direct estimation of the model is computationally expensive, but an approximate EM-like algorithm is developed to provide an accurate estimate of the parameters at a fraction of the computational cost. Numerical evidence based on simulation studies shows the strong performance of the algorithm, which is also illustrated on data sets on movie ratings and an educational survey. PMID:26120267
Three Cs in Measurement Models: Causal Indicators, Composite Indicators, and Covariates
ERIC Educational Resources Information Center
Bollen, Kenneth A.; Bauldry, Shawn
2011-01-01
In the last 2 decades attention to causal (and formative) indicators has grown. Accompanying this growth has been the belief that one can classify indicators into 2 categories: effect (reflective) indicators and causal (formative) indicators. We argue that the dichotomous view is too simple. Instead, there are effect indicators and 3 types of…
ERIC Educational Resources Information Center
Stoel, G. L.; van Drie, J. P.; van Boxtel, C. A. M.
2015-01-01
The present study seeks to develop a pedagogy aimed at fostering a student's ability to reason causally about history. The Model of Domain Learning was used as a framework to align domain-specific content with pedagogical principles. Developing causal historical reasoning was conceptualized as a multidimensional process, in which knowledge of…
Counterfactuals and Causal Models: Introduction to the Special Issue
ERIC Educational Resources Information Center
Sloman, Steven A.
2013-01-01
Judea Pearl won the 2010 Rumelhart Prize in computational cognitive science due to his seminal contributions to the development of Bayes nets and causal Bayes nets, frameworks that are central to multiple domains of the computational study of mind. At the heart of the causal Bayes nets formalism is the notion of a counterfactual, a representation…
Elaborations for the Validation of Causal Bridging Inferences in Text Comprehension
ERIC Educational Resources Information Center
Morishima, Yasunori
2016-01-01
The validation model of causal bridging inferences proposed by Singer and colleagues (e.g., Singer in "Can J Exp Psychol," 47(2):340-359, 1993) claims that before a causal bridging inference is accepted, it must be validated by existing knowledge. For example, to understand "Dorothy took the aspirins. Her pain went away," one…
Tools for Detecting Causality in Space Systems
NASA Astrophysics Data System (ADS)
Johnson, J.; Wing, S.
2017-12-01
Complex systems such as the solar and magnetospheric environment often exhibit patterns of behavior that suggest underlying organizing principles. Causality is a key organizing principle that is particularly difficult to establish in strongly coupled nonlinear systems, but essential for understanding and modeling the behavior of systems. While traditional methods of time-series analysis can identify linear correlations, they do not adequately quantify the distinction between causal and coincidental dependence. We discuss tools for detecting causality including: Granger causality, transfer entropy, conditional redundancy, and convergent cross maps. The tools are illustrated by applications to magnetospheric and solar physics including radiation belt, Dst (a magnetospheric state variable), substorm, and solar cycle dynamics.
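To make the first of these tool families concrete, the sketch below shows a minimal bivariate Granger-causality F-test built only on ordinary least squares; the lag order, the simulated signals, and the function name granger_f_test are assumptions for illustration and are not drawn from the article or its space-physics data.

```python
# Minimal bivariate Granger-causality test: does adding lags of x reduce the
# residual sum of squares when predicting y? Illustrative sketch only.
import numpy as np
from scipy import stats

def granger_f_test(x, y, p=2):
    """F-test of whether lags of x help predict y beyond lags of y."""
    n = len(y)
    rows = n - p
    Y = y[p:]
    lag_y = np.column_stack([y[p - k:n - k] for k in range(1, p + 1)])
    lag_x = np.column_stack([x[p - k:n - k] for k in range(1, p + 1)])
    X_r = np.column_stack([np.ones(rows), lag_y])          # restricted model
    X_u = np.column_stack([np.ones(rows), lag_y, lag_x])   # unrestricted model
    rss_r = np.sum((Y - X_r @ np.linalg.lstsq(X_r, Y, rcond=None)[0]) ** 2)
    rss_u = np.sum((Y - X_u @ np.linalg.lstsq(X_u, Y, rcond=None)[0]) ** 2)
    df1, df2 = p, rows - X_u.shape[1]
    f = ((rss_r - rss_u) / df1) / (rss_u / df2)
    return f, 1.0 - stats.f.cdf(f, df1, df2)

rng = np.random.default_rng(0)
x = rng.standard_normal(500)
y = np.convolve(x, [0.0, 0.8], mode="full")[:500] + 0.5 * rng.standard_normal(500)
print(granger_f_test(x, y, p=2))   # small p-value: x helps predict y
print(granger_f_test(y, x, p=2))   # large p-value: no reverse influence
```

As the abstract notes, a test of this kind only captures linear predictive dependence; the other listed tools (transfer entropy, conditional redundancy, convergent cross maps) target nonlinear coupling.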
Explaining quantum correlations through evolution of causal models
NASA Astrophysics Data System (ADS)
Harper, Robin; Chapman, Robert J.; Ferrie, Christopher; Granade, Christopher; Kueng, Richard; Naoumenko, Daniel; Flammia, Steven T.; Peruzzo, Alberto
2017-04-01
We propose a framework for the systematic and quantitative generalization of Bell's theorem using causal networks. We first consider the multiobjective optimization problem of matching observed data while minimizing the causal effect of nonlocal variables and prove an inequality for the optimal region that both strengthens and generalizes Bell's theorem. To solve the optimization problem (rather than simply bound it), we develop a genetic algorithm that treats causal networks as individuals. By applying our algorithm to a photonic Bell experiment, we demonstrate the trade-off between the quantitative relaxation of one or more local causality assumptions and the ability of data to match quantum correlations.
Integrating Surface Modeling into the Engineering Design Graphics Curriculum
ERIC Educational Resources Information Center
Hartman, Nathan W.
2006-01-01
It has been suggested there is a knowledge base that surrounds the use of 3D modeling within the engineering design process and correspondingly within engineering design graphics education. While solid modeling receives a great deal of attention and discussion relative to curriculum efforts, and rightly so, surface modeling is an equally viable 3D…
SPV: a JavaScript Signaling Pathway Visualizer.
Calderone, Alberto; Cesareni, Gianni
2018-03-24
The visualization of molecular interactions annotated in web resources is useful for offering users such information in a clear, intuitive layout. These interactions are frequently represented as binary interactions laid out in free space, where different entities, cellular compartments, and interaction types are hardly distinguishable. SPV (Signaling Pathway Visualizer) is a free open source JavaScript library which offers a series of pre-defined elements, compartments and interaction types meant to facilitate the representation of signaling pathways consisting of causal interactions, without neglecting simple protein-protein interaction networks. Freely available under the Apache version 2 license; Source code: https://github.com/Sinnefa/SPV_Signaling_Pathway_Visualizer_v1.0. Language: JavaScript; Web technology: Scalable Vector Graphics; Libraries: D3.js. sinnefa@gmail.com.
NASA Technical Reports Server (NTRS)
Menzel, Christopher; Mayer, Richard J.; Edwards, Douglas D.
1991-01-01
The Process Description Capture Method (IDEF3) is one of several Integrated Computer-Aided Manufacturing (ICAM) DEFinition methods developed by the Air Force to support systems engineering activities, and in particular, to support information systems development. These methods have evolved as a distillation of 'good practice' experience by information system developers and are designed to raise the performance level of the novice practitioner to one comparable with that of an expert. IDEF3 is meant to serve as a knowledge acquisition and requirements definition tool that structures the user's understanding of how a given process, event, or system works around process descriptions. A special purpose graphical language accompanying the method serves to highlight temporal precedence and causality relationships relative to the process or event being described.
Light reflection models for computer graphics.
Greenberg, D P
1989-04-14
During the past 20 years, computer graphic techniques for simulating the reflection of light have progressed so that today images of photorealistic quality can be produced. Early algorithms considered direct lighting only, but global illumination phenomena with indirect lighting, surface interreflections, and shadows can now be modeled with ray tracing, radiosity, and Monte Carlo simulations. This article describes the historical development of computer graphic algorithms for light reflection and pictorially illustrates what will be commonly available in the near future.
Causal Measurement Models: Can Criticism Stimulate Clarification?
ERIC Educational Resources Information Center
Markus, Keith A.
2016-01-01
In their 2016 work, Aguirre-Urreta et al. provided a contribution to the literature on causal measurement models that enhances clarity and stimulates further thinking. Aguirre-Urreta et al. presented a form of statistical identity involving mapping onto the portion of the parameter space involving the nomological net, relationships between the…
Explanatory Models of Illness: A Study of Within-Culture Variation
ERIC Educational Resources Information Center
Lynch, Elizabeth; Medin, Douglas
2006-01-01
The current studies explore causal models of heart attack and depression generated by American healers who use distinct explanatory frameworks. Causal chains leading to two illnesses, heart attack and depression, were elicited from participant groups: registered nurses (RNs), energy healers, RN energy healers, and undergraduates. The…
Resources, Instruction, and Research
ERIC Educational Resources Information Center
Cohen, David K.; Raudenbush, Stephen W.; Ball, Deborah Loewenberg
2003-01-01
Many researchers who study the relations between school resources and student achievement have worked from a causal model, which typically is implicit. In this model, some resource or set of resources is the causal variable and student achievement is the outcome. In a few recent, more nuanced versions, resource effects depend on intervening…
Structural Equations and Path Analysis for Discrete Data.
ERIC Educational Resources Information Center
Winship, Christopher; Mare, Robert D.
1983-01-01
Presented is an approach to causal models in which some or all variables are discretely measured, showing that path analytic methods permit quantification of causal relationships among variables with the same flexibility and power of interpretation as is feasible in models including only continuous variables. Examples are provided. (Author/IS)
Distinguishing Valid from Invalid Causal Indicator Models
ERIC Educational Resources Information Center
Cadogan, John W.; Lee, Nick
2016-01-01
In this commentary from Issue 14, n3, authors John Cadogan and Nick Lee applaud the paper by Aguirre-Urreta, Rönkkö, and Marakas "Measurement: Interdisciplinary Research and Perspectives", 14(3), 75-97 (2016), since their explanations and simulations work toward demystifying causal indicator models, which are often used by scholars…
Design preferences and cognitive styles: experimentation by automated website synthesis.
Leung, Siu-Wai; Lee, John; Johnson, Chris; Robertson, David
2012-06-29
This article aims to demonstrate computational synthesis of Web-based experiments in undertaking experimentation on relationships among the participants' design preference, rationale, and cognitive test performance. The exemplified experiments were computationally synthesised, including the websites as materials, experiment protocols as methods, and cognitive tests as protocol modules. This work also exemplifies the use of a website synthesiser as an essential instrument enabling the participants to explore different possible designs, which were generated on the fly, before selection of preferred designs. The participants were given interactive tree and table generators so that they could explore some different ways of presenting causality information in tables and trees as the visualisation formats. The participants gave their preference ratings for the available designs, as well as their rationale (criteria) for their design decisions. The participants were also asked to take four cognitive tests, which focus on the aspects of visualisation and analogy-making. The relationships among preference ratings, rationale, and the results of cognitive tests were analysed by conservative non-parametric statistics including the Wilcoxon test, Kruskal-Wallis test, and Kendall correlation. In the test, 41 of the total 64 participants preferred graphical (tree-form) to tabular presentation. Despite the popular preference for graphical presentation, the given tabular presentation was generally rated to be easier than graphical presentation to interpret, especially by those who scored lower in the visualisation and analogy-making tests. This piece of evidence helps generate a hypothesis that design preferences are related to specific cognitive abilities. Without the use of computational synthesis, the experiment setup and scientific results would be impractical to obtain.
Whither Causal Models in the Neuroscience of ADHD?
ERIC Educational Resources Information Center
Coghill, Dave; Nigg, Joel; Rothenberger, Aribert; Sonuga-Barke, Edmund; Tannock, Rosemary
2005-01-01
In this paper we examine the current status of the science of ADHD from a theoretical point of view. While the field has reached the point at which a number of causal models have been proposed, it remains some distance away from demonstrating the viability of such models empirically. We identify a number of existing barriers and make proposals as…
ERIC Educational Resources Information Center
Tighe, Elizabeth L.; Wagner, Richard K.; Schatschneider, Christopher
2015-01-01
This study demonstrates the utility of applying a causal indicator modeling framework to investigate important predictors of reading comprehension in third, seventh, and tenth grade students. The results indicated that a 4-factor multiple indicator multiple cause (MIMIC) model of reading comprehension provided adequate fit at each grade…
Inferring action structure and causal relationships in continuous sequences of human action.
Buchsbaum, Daphna; Griffiths, Thomas L; Plunkett, Dillon; Gopnik, Alison; Baldwin, Dare
2015-02-01
In the real world, causal variables do not come pre-identified or occur in isolation, but instead are embedded within a continuous temporal stream of events. A challenge faced by both human learners and machine learning algorithms is identifying subsequences that correspond to the appropriate variables for causal inference. A specific instance of this problem is action segmentation: dividing a sequence of observed behavior into meaningful actions, and determining which of those actions lead to effects in the world. Here we present a Bayesian analysis of how statistical and causal cues to segmentation should optimally be combined, as well as four experiments investigating human action segmentation and causal inference. We find that both people and our model are sensitive to statistical regularities and causal structure in continuous action, and are able to combine these sources of information in order to correctly infer both causal relationships and segmentation boundaries. Copyright © 2014. Published by Elsevier Inc.
Geology’s “Super Graphics” and the Public: Missed Opportunities for Geoscience Education
NASA Astrophysics Data System (ADS)
Clary, R. M.; Wandersee, J. H.
2009-12-01
The geosciences are very visual, as demonstrated by the illustration density of maps, graphs, photographs, and diagrams in introductory textbooks. As geoscience students progress, they are further exposed to advanced graphics, such as phase diagrams and subsurface seismic data visualizations. Photographs provide information from distant sites, while multivariate graphics supply a wealth of data for viewers to access. When used effectively, geology graphics have exceptional educational potential. However, geological graphic data are often presented in specialized formats, and are not easily interpreted by an uninformed viewer. In the Howe-Russell Geoscience Complex at Louisiana State University, there is a very large graphic (~ 30 ft x 6 ft) exhibited in a side hall, immediately off the main entrance hall. The graphic, divided into two obvious parts, displays in its lower section seismic data procured in the Gulf of Mexico, from near offshore Louisiana to the end of the continental shelf. The upper section of the graphic reveals drilling block information along the seismic line. Using Tufte’s model of graphic excellence and Paivio’s dual-coding theory, we analyzed the graphic in terms of data density, complexity, legibility, format, and multivariate presentation. We also observed viewers at the site on 5 occasions, and recorded their interactions with the graphic. This graphic can best be described as a Tufte “super graphic.” Its data are high in density and multivariate in nature. Various data sources are combined in a large format to provide a powerful example of a multitude of information within a convenient and condensed presentation. However, our analysis revealed that the graphic misses an opportunity to educate the non-geologist. The information and seismic “language” of the graphic is specific to the geology community, and the information is not interpreted for the lay viewer. The absence of title, descriptions, and symbol keys are detrimental. Terms are not defined. The absence of color keys and annotations is more likely to lead to an appreciation of graphic beauty, without concomitant scientific understanding. We further concluded that in its current location, constraints of space and reflective lighting prohibit the viewer from simultaneously accessing all subsurface data in a “big picture” view. The viewer is not able to fully comprehend the macro/micro aspects of the graphic design within the limited viewing space. The graphic is an example of geoscience education possibility, a possibility that is currently undermined and unrealized by lack of interpretation. Our analysis subsequently informed the development of a model to maximize the graphic’s educational potential, which can be applied to similar geological super graphics for enhanced public scientific understanding. Our model includes interactive displays that apply the auditory-visual dual coding approach to learning. Notations and aural explanations for geological features should increase viewer understanding, and produce an effective informal educational display.
Yu, Yuanyuan; Li, Hongkai; Sun, Xiaoru; Su, Ping; Wang, Tingting; Liu, Yi; Yuan, Zhongshang; Liu, Yanxun; Xue, Fuzhong
2017-12-28
Confounders can produce spurious associations between exposure and outcome in observational studies. For the majority of epidemiologists, adjusting for confounders with a logistic regression model is the habitual method, though it has problems in accuracy and precision. It is therefore important to highlight the problems of logistic regression and to search for an alternative method. Four causal diagram models were defined to summarize confounding equivalence. Both theoretical proofs and simulation studies were performed to verify whether conditioning on different confounding equivalence sets had the same bias-reducing potential and then to select the optimum adjusting strategy, in which the logistic regression model and the inverse probability weighting based marginal structural model (IPW-based-MSM) were compared. The “do-calculus” was used to calculate the true causal effect of exposure on outcome, and then the bias and standard error were used to evaluate the performances of different strategies. Adjusting for different sets of confounding equivalence, as judged by identical Markov boundaries, produced different bias-reducing potential in the logistic regression model. For the sets satisfying G-admissibility, adjusting for the set including all the confounders reduced the bias to the same extent as adjusting for the set containing the parent nodes of the outcome, while the bias after adjusting for the parent nodes of exposure was not equivalent to them. In addition, all causal effect estimations through logistic regression were biased, although the estimation after adjusting for the parent nodes of exposure was nearest to the true causal effect. However, conditioning on different confounding equivalence sets had the same bias-reducing potential under IPW-based-MSM. Compared with logistic regression, IPW-based-MSM could obtain unbiased causal effect estimation when the adjusted confounders satisfied G-admissibility, and the optimal strategy was to adjust for the parent nodes of the outcome, which obtained the highest precision. All adjustment strategies through logistic regression were biased for causal effect estimation, while IPW-based-MSM could always obtain unbiased estimation when the adjusted set satisfied G-admissibility. Thus, IPW-based-MSM is recommended for adjusting for the confounder set.
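As a rough illustration of the IPW-based-MSM strategy discussed above (not the authors' code or data), the sketch below fits a propensity-score model for a binary exposure, forms stabilized inverse-probability weights, and contrasts weighted outcome means; the simulated confounder L, exposure A, and outcome Y are hypothetical.

```python
# Hedged sketch of inverse-probability-weighted estimation of a marginal
# causal effect; all variables are simulated for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 20000
L = rng.standard_normal(n)                                            # confounder
A = rng.binomial(1, 1 / (1 + np.exp(-(0.8 * L))))                     # exposure depends on L
Y = rng.binomial(1, 1 / (1 + np.exp(-(-1.0 + 0.7 * A + 1.0 * L))))    # outcome depends on A and L

# 1. Propensity-score model P(A = 1 | L).
ps = LogisticRegression().fit(L.reshape(-1, 1), A).predict_proba(L.reshape(-1, 1))[:, 1]

# 2. Stabilized inverse-probability weights.
p_a = A.mean()
w = np.where(A == 1, p_a / ps, (1 - p_a) / (1 - ps))

# 3. Marginal structural model: the weighted difference in outcome means
#    approximates the marginal causal risk difference of A on Y.
rd = np.average(Y[A == 1], weights=w[A == 1]) - np.average(Y[A == 0], weights=w[A == 0])
print("IPW risk difference:", round(rd, 3))
```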
Learning a theory of causality.
Goodman, Noah D; Ullman, Tomer D; Tenenbaum, Joshua B
2011-01-01
The very early appearance of abstract knowledge is often taken as evidence for innateness. We explore the relative learning speeds of abstract and specific knowledge within a Bayesian framework and the role for innate structure. We focus on knowledge about causality, seen as a domain-general intuitive theory, and ask whether this knowledge can be learned from co-occurrence of events. We begin by phrasing the causal Bayes nets theory of causality and a range of alternatives in a logical language for relational theories. This allows us to explore simultaneous inductive learning of an abstract theory of causality and a causal model for each of several causal systems. We find that the correct theory of causality can be learned relatively quickly, often becoming available before specific causal theories have been learned--an effect we term the blessing of abstraction. We then explore the effect of providing a variety of auxiliary evidence and find that a collection of simple perceptual input analyzers can help to bootstrap abstract knowledge. Together, these results suggest that the most efficient route to causal knowledge may be to build in not an abstract notion of causality but a powerful inductive learning mechanism and a variety of perceptual supports. While these results are purely computational, they have implications for cognitive development, which we explore in the conclusion.
Reconstructing constructivism: Causal models, Bayesian learning mechanisms and the theory theory
Gopnik, Alison; Wellman, Henry M.
2012-01-01
We propose a new version of the “theory theory” grounded in the computational framework of probabilistic causal models and Bayesian learning. Probabilistic models allow a constructivist but rigorous and detailed approach to cognitive development. They also explain the learning of both more specific causal hypotheses and more abstract framework theories. We outline the new theoretical ideas, explain the computational framework in an intuitive and non-technical way, and review an extensive but relatively recent body of empirical results that supports these ideas. These include new studies of the mechanisms of learning. Children infer causal structure from statistical information, through their own actions on the world and through observations of the actions of others. Studies demonstrate these learning mechanisms in children from 16 months to 4 years old and include research on causal statistical learning, informal experimentation through play, and imitation and informal pedagogy. They also include studies of the variability and progressive character of intuitive theory change, particularly theory of mind. These studies investigate both the physical and psychological and social domains. We conclude with suggestions for further collaborative projects between developmental and computational cognitive scientists. PMID:22582739
Comparison of six methods for the detection of causality in a bivariate time series
NASA Astrophysics Data System (ADS)
Krakovská, Anna; Jakubík, Jozef; Chvosteková, Martina; Coufal, David; Jajcay, Nikola; Paluš, Milan
2018-04-01
In this comparative study, six causality detection methods were compared, namely, the Granger vector autoregressive test, the extended Granger test, the kernel version of the Granger test, the conditional mutual information (transfer entropy), the evaluation of cross mappings between state spaces, and an assessment of predictability improvement due to the use of mixed predictions. Seven test data sets were analyzed: linear coupling of autoregressive models, a unidirectional connection of two Hénon systems, a unidirectional connection of chaotic systems of Rössler and Lorenz type and of two different Rössler systems, an example of bidirectionally connected two-species systems, a fishery model as an example of two correlated observables without a causal relationship, and an example of mediated causality. We tested not only clean time series of 20,000 points but also noisy and short variants of the data. The standard and the extended Granger tests worked only for the autoregressive models. The remaining methods were more successful with the more complex test examples, although they differed considerably in their capability to reveal the presence and the direction of coupling and to distinguish causality from mere correlation.
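For a flavour of one of the information-theoretic methods in this comparison, here is a hedged sketch of a binned transfer-entropy estimator; the bin count, the single-step lag, and the simulated unidirectional coupling are illustrative assumptions rather than the study's test systems.

```python
# Rough sketch of binned transfer entropy TE(x -> y); illustrative only.
import numpy as np

def transfer_entropy(x, y, bins=8):
    """TE(x->y) = I(y_{t+1}; x_t | y_t) estimated from 3D histogram counts."""
    yf, yp, xp = y[1:], y[:-1], x[:-1]
    joint, _ = np.histogramdd(np.column_stack([yf, yp, xp]), bins=bins)
    p_xyz = joint / joint.sum()
    p_ff_p = p_xyz.sum(axis=2, keepdims=True)       # p(y_{t+1}, y_t)
    p_p_x = p_xyz.sum(axis=0, keepdims=True)        # p(y_t, x_t)
    p_p = p_xyz.sum(axis=(0, 2), keepdims=True)     # p(y_t)
    with np.errstate(divide="ignore", invalid="ignore"):
        ratio = (p_xyz * p_p) / (p_ff_p * p_p_x)
        terms = np.where(p_xyz > 0, p_xyz * np.log(ratio), 0.0)
    return terms.sum()

rng = np.random.default_rng(2)
x = rng.standard_normal(5000)
y = np.zeros(5000)
for t in range(1, 5000):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.3 * rng.standard_normal()
print("TE(x->y):", transfer_entropy(x, y))   # clearly positive
print("TE(y->x):", transfer_entropy(y, x))   # much smaller (no y -> x coupling)
```

Plug-in histogram estimates like this are positively biased for short or noisy series, which is one reason such comparisons matter.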
The mutual causality analysis between the stock and futures markets
NASA Astrophysics Data System (ADS)
Yao, Can-Zhong; Lin, Qing-Wen
2017-07-01
In this paper we employ the conditional Granger causality model to estimate the information flow, and find that the improved model outperforms the Granger causality model in revealing the asymmetric correlation between stocks and futures in the Chinese market. First, we find that information flows estimated by Granger causality tests from futures to stocks are greater than those from stocks to futures. Additionally, average correlation coefficients capture some important characteristics between stock prices and information flows over time. Further, we find that direct information flows estimated by conditional Granger causality tests from stocks to futures are greater than those from futures to stocks. Besides, the substantial increases of information flows and direct information flows exhibit a certain degree of synchronism with the occurrences of important events. Finally, the comparative analysis with the asymmetric ratio and the bootstrap technique demonstrates the slight asymmetry of information flows and the significant asymmetry of direct information flows. It reveals that the information flows from futures to stocks are slightly greater than those in the reverse direction, while the direct information flows from stocks to futures are significantly greater than those in the reverse direction.
EasyModeller: A graphical interface to MODELLER
2010-01-01
Background MODELLER is a program for automated protein homology modeling. It is one of the most widely used tools for homology or comparative modeling of protein three-dimensional structures, but most users find it difficult to start with MODELLER, as it is command-line based and requires knowledge of basic Python scripting to use efficiently. Findings The study was designed with the aim of developing the "EasyModeller" tool as a frontend graphical interface to MODELLER using Perl/Tk, which can be used as a standalone tool on the Windows platform with MODELLER and Python preinstalled. It helps inexperienced users to perform modeling, assessment, visualization, and optimization of protein models in a simple and straightforward way. Conclusion EasyModeller provides a straightforward graphical interface and functions as a stand-alone tool which can be used on a standard personal computer with Microsoft Windows as the operating system. PMID:20712861
"Head take you": causal attributions of mental illness in Jamaica.
Arthur, Carlotta M; Whitley, Rob
2015-02-01
Causal attributions are a key factor in explanatory models of illness; however, little research on causal attributions of mental illness has been conducted in developing nations in the Caribbean, including Jamaica. Explanatory models of mental illness may be important in understanding illness experience and be a crucial factor in mental health service seeking and utilization. We explored causal attributions of mental illness in Jamaica by conducting 20 focus groups, including 16 community samples, 2 patient samples, and 2 samples of caregivers of patients, with a total of 159 participants. The 5 most commonly endorsed causal attributions of mental illness are discussed: (a) drug-related causes, including ganja (marijuana); (b) biological causes, such as chemical imbalance, familial transmission, and "blood"; (c) psychological causes, including stress and thinking too much; (d) social causes, such as relationship problems and job loss; and (e) spiritual or religious causes, including Obeah. © The Author(s) 2014 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.
NASA Astrophysics Data System (ADS)
Ding, Lin
2014-12-01
This study seeks to test the causal influences of reasoning skills and epistemologies on student conceptual learning in physics. A causal model, integrating multiple variables that were investigated separately in the prior literature, is proposed and tested through path analysis. These variables include student preinstructional reasoning skills measured by the Classroom Test of Scientific Reasoning, pre- and postepistemological views measured by the Colorado Learning Attitudes about Science Survey, and pre- and postperformance on Newtonian concepts measured by the Force Concept Inventory. Students from a traditionally taught calculus-based introductory mechanics course at a research university participated in the study. Results largely support the postulated causal model and reveal strong influences of reasoning skills and preinstructional epistemology on student conceptual learning gains. Interestingly enough, postinstructional epistemology does not appear to have a significant influence on student learning gains. Moreover, pre- and postinstructional epistemology, although barely different from each other on average, have little causal connection between them.
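As a toy illustration of the path-analysis machinery referred to here (not the study's data or model), the sketch below estimates standardized path coefficients by ordinary least squares for a hypothetical three-variable chain: reasoning skill, pre-instruction epistemology, and conceptual learning gain.

```python
# Minimal path-analysis sketch: standardized OLS slopes serve as path
# coefficients. Variable names and data are invented for illustration.
import numpy as np

def standardized_path(X, y):
    """Return standardized regression (path) coefficients of y on columns of X."""
    Xz = (X - X.mean(axis=0)) / X.std(axis=0)
    yz = (y - y.mean()) / y.std()
    coef, *_ = np.linalg.lstsq(np.column_stack([np.ones(len(yz)), Xz]), yz, rcond=None)
    return coef[1:]

rng = np.random.default_rng(3)
n = 300
R = rng.standard_normal(n)                       # reasoning skill
E0 = 0.4 * R + rng.standard_normal(n)            # pre-instruction epistemology
G = 0.5 * R + 0.3 * E0 + rng.standard_normal(n)  # conceptual learning gain

print("paths into E0:", standardized_path(R.reshape(-1, 1), E0))
print("paths into G :", standardized_path(np.column_stack([R, E0]), G))
```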
Complex Causal Process Diagrams for Analyzing the Health Impacts of Policy Interventions
Joffe, Michael; Mindell, Jennifer
2006-01-01
Causal diagrams are rigorous tools for controlling confounding. They also can be used to describe complex causal systems, which is done routinely in communicable disease epidemiology. The use of change diagrams has advantages over static diagrams, because change diagrams are more tractable, relate better to interventions, and have clearer interpretations. Causal diagrams are a useful basis for modeling. They make assumptions explicit, provide a framework for analysis, generate testable predictions, explore the effects of interventions, and identify data gaps. Causal diagrams can be used to integrate different types of information and to facilitate communication both among public health experts and between public health experts and experts in other fields. Causal diagrams allow the use of instrumental variables, which can help control confounding and reverse causation. PMID:16449586
HiRel - Reliability/availability integrated workstation tool
NASA Technical Reports Server (NTRS)
Bavuso, Salvatore J.; Dugan, Joanne B.
1992-01-01
The HiRel software tool is described and demonstrated by application to the mission avionics subsystem of the Advanced System Integration Demonstrations (ASID) system that utilizes the PAVE PILLAR approach. HiRel marks another accomplishment toward the goal of producing a totally integrated computer-aided design (CAD) workstation design capability. Since a reliability engineer generally represents a reliability model graphically before it can be solved, the use of a graphical input description language increases productivity and decreases the incidence of error. The graphical postprocessor module HARPO makes it possible for reliability engineers to quickly analyze huge amounts of reliability/availability data to observe trends due to exploratory design changes. The addition of several powerful HARP modeling engines provides the user with a reliability/availability modeling capability for a wide range of system applications all integrated under a common interactive graphical input-output capability.
GPU-computing in econophysics and statistical physics
NASA Astrophysics Data System (ADS)
Preis, T.
2011-03-01
A recent trend in computer science and related fields is general purpose computing on graphics processing units (GPUs), which can yield impressive performance. With multiple cores connected by high memory bandwidth, today's GPUs offer resources for non-graphics parallel processing. This article provides a brief introduction into the field of GPU computing and includes examples. In particular computationally expensive analyses employed in financial market context are coded on a graphics card architecture which leads to a significant reduction of computing time. In order to demonstrate the wide range of possible applications, a standard model in statistical physics - the Ising model - is ported to a graphics card architecture as well, resulting in large speedup values.
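For readers unfamiliar with the Ising benchmark mentioned above, the following is a plain CPU reference sketch of single-spin Metropolis updates, the kind of inner loop such an article ports to a GPU; the lattice size, temperature, and sweep count are arbitrary choices, and the code makes no attempt at the parallel memory layout a GPU port would need.

```python
# CPU reference sketch of the 2D Ising model with Metropolis updates.
# Parameters are illustrative; this is not the article's GPU implementation.
import numpy as np

rng = np.random.default_rng(9)
L, T, sweeps = 32, 2.27, 200                 # lattice size, temperature, sweeps
spins = rng.choice([-1, 1], size=(L, L))

for _ in range(sweeps):
    for _ in range(L * L):
        i, j = rng.integers(0, L, size=2)
        nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j] +
              spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2 * spins[i, j] * nb            # energy change if this spin flips
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spins[i, j] *= -1

print("mean magnetization per spin:", abs(spins.mean()))
```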
Snowden, Jonathan M; Tilden, Ellen L; Odden, Michelle C
2018-06-08
In this article, we conclude our 3-part series by focusing on several concepts that have proven useful for formulating causal questions and inferring causal effects. The process of causal inference is of key importance for physiologic childbirth science, so each concept is grounded in content related to women at low risk for perinatal complications. A prerequisite to causal inference is determining that the question of interest is causal rather than descriptive or predictive. Another critical step in defining a high-impact causal question is assessing the state of existing research for evidence of causality. We introduce 2 causal frameworks that are useful for this undertaking, Hill's causal considerations and the sufficient-component cause model. We then provide 3 steps to aid perinatal researchers in inferring causal effects in a given study. First, the researcher should formulate a rigorous and clear causal question. We introduce an example of epidural analgesia and labor progression to demonstrate this process, including the central role of temporality. Next, the researcher should assess the suitability of the given data set to answer this causal question. In randomized controlled trials, data are collected with the express purpose of answering the causal question. Investigators using observational data should also ensure that their chosen causal question is answerable with the available data. Finally, investigators should design an analysis plan that targets the causal question of interest. Some data structures (eg, time-dependent confounding by labor progress when estimating the effect of epidural analgesia on postpartum hemorrhage) require specific analytical tools to control for bias and estimate causal effects. The assumptions of consistency, exchangeability, and positivity may be especially useful in carrying out these steps. Drawing on appropriate causal concepts and considering relevant assumptions strengthens our confidence that research has reduced the likelihood of alternative explanations (eg bias, chance) and estimated a causal effect. © 2018 by the American College of Nurse-Midwives.
Mage: A Tool for Developing Interactive Instructional Graphics
ERIC Educational Resources Information Center
Pavkovic, Stephen F.
2005-01-01
Mage is a graphics program developed for visualization of three-dimensional structures of proteins and other macromolecules. An application of the Mage program is reported here for developing interactive instructional graphics files (kinemages) of much smaller scale. Examples are given illustrating features of VSEPR models, permanent dipoles,…
A Graphics Design Framework to Visualize Multi-Dimensional Economic Datasets
ERIC Educational Resources Information Center
Chandramouli, Magesh; Narayanan, Badri; Bertoline, Gary R.
2013-01-01
This study implements a prototype graphics visualization framework to visualize multidimensional data. This graphics design framework serves as a "visual analytical database" for visualization and simulation of economic models. One of the primary goals of any kind of visualization is to extract useful information from colossal volumes of…
Using Graphic Organizers in Intercultural Education
ERIC Educational Resources Information Center
Ciascai, Liliana
2009-01-01
Graphic organizers are instruments of representation, illustration and modeling of information. In the educational practice they are used for building, and systematization of knowledge. Graphic organizers are instruments that addressed mostly visual learning style, but their use is beneficial to all learners. In this paper we illustrate the use of…
High Fidelity Images--How They Affect Learning.
ERIC Educational Resources Information Center
Kwinn, Ann
1997-01-01
Discusses the use of graphics in instruction and concludes that cosmetic and motivational graphics can be more realistic and detailed for affective goals, while schematic graphics may be best for the more cognitive functions of focusing attention and presenting actual content. Domains of learning, mental models, and visualization are examined.…
SWATMOD-PREP: Graphical user interface for preparing coupled SWAT-modflow simulations
USDA-ARS?s Scientific Manuscript database
This paper presents SWATMOD-Prep, a graphical user interface that couples a SWAT watershed model with a MODFLOW groundwater flow model. The interface is based on a recently published SWAT-MODFLOW code that couples the models via mapping schemes. The spatial layout of SWATMOD-Prep guides the user t...
Inouye, David I.; Ravikumar, Pradeep; Dhillon, Inderjit S.
2016-01-01
We develop Square Root Graphical Models (SQR), a novel class of parametric graphical models that provides multivariate generalizations of univariate exponential family distributions. Previous multivariate graphical models (Yang et al., 2015) did not allow positive dependencies for the exponential and Poisson generalizations. However, in many real-world datasets, variables clearly have positive dependencies. For example, the airport delay time in New York—modeled as an exponential distribution—is positively related to the delay time in Boston. With this motivation, we give an example of our model class derived from the univariate exponential distribution that allows for almost arbitrary positive and negative dependencies with only a mild condition on the parameter matrix—a condition akin to the positive definiteness of the Gaussian covariance matrix. Our Poisson generalization allows for both positive and negative dependencies without any constraints on the parameter values. We also develop parameter estimation methods using node-wise regressions with ℓ1 regularization and likelihood approximation methods using sampling. Finally, we demonstrate our exponential generalization on a synthetic dataset and a real-world dataset of airport delay times. PMID:27563373
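The node-wise regression idea mentioned above can be illustrated with a much simpler stand-in: ℓ1-regularized (lasso) neighbourhood selection on Gaussian data. The sketch below is only that stand-in, not the SQR model's exponential or Poisson node conditionals, and the data, regularization strength, and threshold are invented for illustration.

```python
# Hedged sketch of node-wise L1 neighbourhood selection for graph structure.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(4)
n, d = 500, 5
X = rng.standard_normal((n, d))
X[:, 1] += 0.8 * X[:, 0]          # induce edge 0-1
X[:, 3] += 0.7 * X[:, 2]          # induce edge 2-3

adj = np.zeros((d, d), dtype=bool)
for j in range(d):
    others = [k for k in range(d) if k != j]
    fit = Lasso(alpha=0.1).fit(X[:, others], X[:, j])   # regress node j on the rest
    adj[j, others] = np.abs(fit.coef_) > 1e-6

# Symmetrize with the "OR" rule: keep an edge if either regression selects it.
edges = np.argwhere(np.triu(adj | adj.T, k=1))
print("estimated edges:", edges.tolist())   # expect [[0, 1], [2, 3]]
```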
A graphical vector autoregressive modelling approach to the analysis of electronic diary data
2010-01-01
Background In recent years, electronic diaries are increasingly used in medical research and practice to investigate patients' processes and fluctuations in symptoms over time. To model dynamic dependence structures and feedback mechanisms between symptom-relevant variables, a multivariate time series method has to be applied. Methods We propose to analyse the temporal interrelationships among the variables by a structural modelling approach based on graphical vector autoregressive (VAR) models. We give a comprehensive description of the underlying concepts and explain how the dependence structure can be recovered from electronic diary data by a search over suitable constrained (graphical) VAR models. Results The graphical VAR approach is applied to the electronic diary data of 35 obese patients with and without binge eating disorder (BED). The dynamic relationships for the two subgroups between eating behaviour, depression, anxiety and eating control are visualized in two path diagrams. Results show that the two subgroups of obese patients with and without BED are distinguishable by the temporal patterns which influence their respective eating behaviours. Conclusion The use of the graphical VAR approach for the analysis of electronic diary data leads to a deeper insight into patient's dynamics and dependence structures. An increasing use of this modelling approach could lead to a better understanding of complex psychological and physiological mechanisms in different areas of medical care and research. PMID:20359333
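A stripped-down illustration of the graphical VAR idea (not the authors' search over constrained models): fit a VAR(1) by least squares and report the larger lagged coefficients as directed edges of a path diagram. The variable names echo the abstract, but the simulated data and the 0.2 threshold are assumptions.

```python
# Minimal graphical VAR(1) sketch on simulated diary-like data.
import numpy as np

rng = np.random.default_rng(5)
names = ["eating", "depression", "anxiety", "control"]
T, d = 400, 4
Y = np.zeros((T, d))
A_true = np.array([[0.5, 0.0, 0.0, 0.0],
                   [0.4, 0.5, 0.0, 0.0],   # eating(t-1) -> depression(t)
                   [0.0, 0.3, 0.5, 0.0],   # depression(t-1) -> anxiety(t)
                   [0.0, 0.0, 0.0, 0.5]])
for t in range(1, T):
    Y[t] = A_true @ Y[t - 1] + 0.5 * rng.standard_normal(d)

# Least-squares VAR(1): regress Y_t on Y_{t-1}.
B, *_ = np.linalg.lstsq(Y[:-1], Y[1:], rcond=None)
A_hat = B.T                                 # A_hat[i, j]: effect of j(t-1) on i(t)
for i in range(d):
    for j in range(d):
        if i != j and abs(A_hat[i, j]) > 0.2:
            print(f"{names[j]}(t-1) -> {names[i]}(t): {A_hat[i, j]:+.2f}")
```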
ERIC Educational Resources Information Center
Stoel, Gerhard L.; van Drie, Jannet P.; van Boxtel, Carla A. M.
2017-01-01
This article reports an experimental study on the effects of explicit teaching on 11th grade students' ability to reason causally in history. Underpinned by the model of domain learning, explicit teaching is conceptualized as multidimensional, focusing on strategies and second-order concepts to generate and verbalize causal explanations and…
ERIC Educational Resources Information Center
Wilks, Duffy; Ratheal, Juli D'Ann
2009-01-01
The authors provide a historical overview of the development of contemporary theories of counseling and psychology in relation to determinism, probabilistic causality, indeterminate free will, and moral and legal responsibility. They propose a unique model of behavioral causality that incorporates a theory of indeterminate free will, a concept…
ERIC Educational Resources Information Center
North Carolina State Dept. of Community Colleges, Raleigh.
A two-part articulation instructional objective guide for drafting (graphic communications) is provided. Part I contains summary information on seven blocks (courses) of instruction. They are as follow: introduction; basic technical drafting; problem solving in graphics; reproduction processes; freehand drawing and sketching; graphics composition;…
The pits and falls of graphical presentation.
Sperandei, Sandro
2014-01-01
Graphics are powerful tools to communicate research results and to gain information from data. However, researchers should be careful when deciding which data to plot and the type of graphic to use, as well as other details. The consequence of bad decisions in these features varies from making research results unclear to distortions of these results, through the creation of "chartjunk" with useless information. This paper is not another tutorial about "good graphics" and "bad graphics". Instead, it presents guidelines for graphic presentation of research results and some uncommon, but useful examples to communicate basic and complex data types, especially multivariate model results, which are commonly presented only by tables. By the end, there are no answers here, just ideas meant to inspire others on how to create their own graphics.
NASA Technical Reports Server (NTRS)
Montoya, R. J.; England, J. N.; Hatfield, J. J.; Rajala, S. A.
1981-01-01
The hardware configuration, software organization, and applications software for the NASA IKONAS color graphics display system are described. The systems were created at the Langley Research Center Display Device Laboratory to develop, evaluate, and demonstrate advanced generic concepts, technology, and systems integration techniques for electronic crew station systems of future civil aircraft. A minicomputer with 64K core memory acts as a host for a raster scan graphics display generator. The architectures of the hardware system and the graphics display system are provided. The applications software features a FORTRAN-based model of an aircraft, a display system, and the utility program for real-time communications. The model accepts inputs from a two-dimensional joystick and outputs a set of aircraft states. Ongoing and planned work for image segmentation/generation, specialized graphics procedures, and higher level language user interface are discussed.
The Causal Effects of Father Absence
McLanahan, Sara; Tach, Laura; Schneider, Daniel
2014-01-01
The literature on father absence is frequently criticized for its use of cross-sectional data and methods that fail to take account of possible omitted variable bias and reverse causality. We review studies that have responded to this critique by employing a variety of innovative research designs to identify the causal effect of father absence, including studies using lagged dependent variable models, growth curve models, individual fixed effects models, sibling fixed effects models, natural experiments, and propensity score matching models. Our assessment is that studies using more rigorous designs continue to find negative effects of father absence on offspring well-being, although the magnitude of these effects is smaller than what is found using traditional cross-sectional designs. The evidence is strongest and most consistent for outcomes such as high school graduation, children’s social-emotional adjustment, and adult mental health. PMID:24489431
Bayesian network modeling: A case study of an epidemiologic system analysis of cardiovascular risk.
Fuster-Parra, P; Tauler, P; Bennasar-Veny, M; Ligęza, A; López-González, A A; Aguiló, A
2016-04-01
An extensive, in-depth study of cardiovascular risk factors (CVRF) seems to be of crucial importance in the research of cardiovascular disease (CVD) in order to prevent (or reduce) the chance of developing or dying from CVD. The main focus of data analysis is on the use of models able to discover and understand the relationships between different CVRF. In this paper a report on applying Bayesian network (BN) modeling to discover the relationships among thirteen relevant epidemiological features of the heart age domain is presented, in order to analyze cardiovascular lost years (CVLY), cardiovascular risk score (CVRS), and metabolic syndrome (MetS). Furthermore, the induced BN was used to make inference taking into account three reasoning patterns: causal reasoning, evidential reasoning, and intercausal reasoning. Application of BN tools has led to the discovery of several direct and indirect relationships between different CVRF. The BN analysis showed several interesting results, among them: CVLY was highly influenced by smoking, with men being the group at highest risk in CVLY; MetS was highly influenced by physical activity (PA), with men again being the group at highest risk in MetS, while smoking did not show any influence. BNs produce an intuitive, transparent, graphical representation of the relationships between different CVRF. The ability of BNs to predict new scenarios when hypothetical information is introduced makes BN modeling an Artificial Intelligence (AI) tool of special interest in epidemiological studies. As CVD is multifactorial, the use of BNs seems to be an adequate modeling tool. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
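The causal and evidential reasoning patterns mentioned above can be shown on a toy fragment of such a network. The two-node example below (Smoking -> high CVLY) uses invented probabilities, not the study's induced BN, and simply applies Bayes' rule in both directions.

```python
# Toy Bayesian-network fragment illustrating predictive (causal) and
# diagnostic (evidential) reasoning; all probabilities are assumed.
p_smoke = 0.3                       # P(Smoking = yes)            (assumed)
p_high_given = {True: 0.55,         # P(high CVLY | smoker)       (assumed)
                False: 0.20}        # P(high CVLY | non-smoker)   (assumed)

# Causal (predictive) reasoning: from cause to effect.
p_high = p_smoke * p_high_given[True] + (1 - p_smoke) * p_high_given[False]

# Evidential (diagnostic) reasoning: from observed effect back to the cause.
p_smoke_given_high = p_smoke * p_high_given[True] / p_high

print(f"P(high CVLY)          = {p_high:.3f}")
print(f"P(smoker | high CVLY) = {p_smoke_given_high:.3f}")
```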
Causal Modeling of Secondary Science Students' Intentions to Enroll in Physics.
ERIC Educational Resources Information Center
Crawley, Frank E.; Black, Carolyn B.
1992-01-01
Reports a study using the causal modeling method to verify underlying causes of student interest in enrolling in physics as predicted by the theory of planned behavior. Families were identified as major referents in the social support system for physics enrollment. Course and extracurricular conflicts and fear of failure were primary beliefs…
Pretense, Counterfactuals, and Bayesian Causal Models: Why What Is Not Real Really Matters
ERIC Educational Resources Information Center
Weisberg, Deena S.; Gopnik, Alison
2013-01-01
Young children spend a large portion of their time pretending about non-real situations. Why? We answer this question by using the framework of Bayesian causal models to argue that pretending and counterfactual reasoning engage the same component cognitive abilities: disengaging with current reality, making inferences about an alternative…
ERIC Educational Resources Information Center
Liu, Jiangang; Li, Jun; Rieth, Cory A.; Huber, David E.; Tian, Jie; Lee, Kang
2011-01-01
The present study employed dynamic causal modeling to investigate the effective functional connectivity between regions of the neural network involved in top-down letter processing. We used an illusory letter detection paradigm in which participants detected letters while viewing pure noise images. When participants detected letters, the response…
ERIC Educational Resources Information Center
Weitlauf, Amy S.; Cole, David A.
2012-01-01
Attributional style models of depression in adults (Abramson et al. 1989, 1978) have been adapted for use with children; however, most applications do not consider that children's understanding of causal relations may be qualitatively different from that of adults. If children's causal attributions depend on children's level of cognitive…
ERIC Educational Resources Information Center
Hardeman, Wendy; Sutton, Stephen; Griffin, Simon; Johnston, Marie; White, Anthony; Wareham, Nicholas J.; Kinmonth, Ann Louise
2005-01-01
Theory-based intervention programmes to support health-related behaviour change aim to increase health impact and improve understanding of mechanisms of behaviour change. However, the science of intervention development remains at an early stage. We present a causal modelling approach to developing complex interventions for evaluation in…
Sex and Self-Control Theory: The Measures and Causal Model May Be Different
ERIC Educational Resources Information Center
Higgins, George E.; Tewksbury, Richard
2006-01-01
This study examines the distribution differences across sexes in key measures of self-control theory and differences in a causal model. Using cross-sectional data from juveniles ("n" = 1,500), the study shows mean-level differences in many of the self-control, risky behavior, and delinquency measures. Structural equation modeling…
Ward-Garrison, Christian; Markstrom, Steven L.; Hay, Lauren E.
2009-01-01
The U.S. Geological Survey Downsizer is a computer application that selects, downloads, verifies, and formats station-based time-series data for environmental-resource models, particularly the Precipitation-Runoff Modeling System. Downsizer implements the client-server software architecture. The client presents a map-based, graphical user interface that is intuitive to modelers; the server provides streamflow and climate time-series data from over 40,000 measurement stations across the United States. This report is the Downsizer user's manual and provides (1) an overview of the software design, (2) installation instructions, (3) a description of the graphical user interface, (4) a description of selected output files, and (5) troubleshooting information.
SpineCreator: a Graphical User Interface for the Creation of Layered Neural Models.
Cope, A J; Richmond, P; James, S S; Gurney, K; Allerton, D J
2017-01-01
There is a growing requirement in computational neuroscience for tools that permit collaborative model building, model sharing, combining existing models into a larger system (multi-scale model integration), and are able to simulate models using a variety of simulation engines and hardware platforms. Layered XML model specification formats solve many of these problems, however they are difficult to write and visualise without tools. Here we describe a new graphical software tool, SpineCreator, which facilitates the creation and visualisation of layered models of point spiking neurons or rate coded neurons without requiring the need for programming. We demonstrate the tool through the reproduction and visualisation of published models and show simulation results using code generation interfaced directly into SpineCreator. As a unique application for the graphical creation of neural networks, SpineCreator represents an important step forward for neuronal modelling.
Faes, L; Porta, A; Cucino, R; Cerutti, S; Antolini, R; Nollo, G
2004-06-01
Although the concept of transfer function is intrinsically related to an input-output relationship, the traditional and widely used estimation method merges both feedback and feedforward interactions between the two analyzed signals. This limitation may endanger the reliability of transfer function analysis in biological systems characterized by closed loop interactions. In this study, a method for estimating the transfer function between closed loop interacting signals was proposed and validated in the field of cardiovascular and cardiorespiratory variability. The two analyzed signals x and y were described by a bivariate autoregressive model, and the causal transfer function from x to y was estimated after imposing causality by setting to zero the model coefficients representative of the reverse effects from y to x. The method was tested in simulations reproducing linear open and closed loop interactions, showing a better adherence of the causal transfer function to the theoretical curves with respect to the traditional approach in presence of non-negligible reverse effects. It was then applied in ten healthy young subjects to characterize the transfer functions from respiration to heart period (RR interval) and to systolic arterial pressure (SAP), and from SAP to RR interval. In the first two cases, the causal and non-causal transfer function estimates were comparable, indicating that respiration, acting as exogenous signal, sets an open loop relationship upon SAP and RR interval. On the contrary, causal and traditional transfer functions from SAP to RR were significantly different, suggesting the presence of a considerable influence on the opposite causal direction. Thus, the proposed causal approach seems to be appropriate for the estimation of parameters, like the gain and the phase lag from SAP to RR interval, which have a large clinical and physiological relevance.
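A simplified sketch of the causal transfer-function idea is given below: only the feedforward branch from x to y is fitted (which is what remains after the reverse coefficients are forced to zero), and the frequency response is read off from the estimated AR coefficients. The model order, the simulated signals, and the exact parameterization are assumptions and differ from the full bivariate formulation used in the study.

```python
# Hedged sketch: causal transfer function from x to y estimated from a
# unidirectional AR model; illustrative parameters and data only.
import numpy as np

def causal_transfer_function(x, y, p=4, nfreq=128):
    n = len(y)
    Y = y[p:]
    Z = np.column_stack([y[p - k:n - k] for k in range(1, p + 1)] +
                        [x[p - k:n - k] for k in range(1, p + 1)])
    coef, *_ = np.linalg.lstsq(Z, Y, rcond=None)
    a, b = coef[:p], coef[p:]                      # y-lag and x-lag coefficients
    f = np.linspace(0, 0.5, nfreq)                 # normalized frequency
    e = np.exp(-2j * np.pi * np.outer(f, np.arange(1, p + 1)))
    H = (e @ b) / (1.0 - e @ a)                    # causal TF from x to y
    return f, H

rng = np.random.default_rng(6)
x = rng.standard_normal(2000)
y = np.zeros(2000)
for t in range(1, 2000):
    y[t] = 0.6 * y[t - 1] + 0.9 * x[t - 1] + 0.2 * rng.standard_normal()
f, H = causal_transfer_function(x, y)
print("gain at low frequency:", round(abs(H[1]), 2))   # roughly 0.9 / (1 - 0.6)
```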
NASA Astrophysics Data System (ADS)
Fournier, Frederic
The learning environment created during this research/development constitutes a micro-laboratory allowing students at the secondary and collegial levels to build a measurement system. This approach, based on the concrete manufacture of measuring instruments, showed that the student did not only acquire knowledge, but also developed know-how in the technology of measuring systems as well as know-how and knowledge in the experimental sciences. In conceptualizing and building his own measurement system, in a computer-assisted experimental environment, the student performs a scientific investigation in which he must induce a causal relationship between the different variables at stake. He must then isolate this relationship by building a scheme for the control of the variables and model it in algebraic and graphic form. We believe that this approach will allow students to better understand the physical phenomena they will be measuring. The prototypes and software used to build these measuring instruments were evaluated and redesigned at the functional and didactic levels in order to offer a learning environment that respects in every way the competence approach and the integration of science and technology.
Petersen, Douglas B; Brown, Catherine L; Ukrainetz, Teresa A; Wise, Christine; Spencer, Trina D; Zebre, Jennifer
2014-01-01
The purpose of this study was to investigate the effect of an individualized, systematic language intervention on the personal narratives of children with autism. A single-subject, multiple-baseline design across participants and behaviors was used to examine the effect of the intervention on language features of personal narratives. Three 6- to 8-year-old boys with autism participated in 12 individual intervention sessions that targeted 2-3 story grammar elements (e.g., problem, plan) and 3-4 linguistic complexity elements (e.g., causal subordination, adverbs) selected from each participant's baseline performance. Intervention involved repeated retellings of customized model narratives and the generation of personal narratives with a systematic reduction of visual and verbal scaffolding. Independent personal narratives generated at the end of each baseline, intervention, and maintenance session were analyzed for presence and sophistication of targeted features. Graphical and statistical results showed immediate improvement in targeted language features as a function of intervention. There was mixed evidence of maintenance 2 and 7 weeks after intervention. Children with autism can benefit from an individualized, systematic intervention targeting specific narrative language features. Greater intensity of intervention may be needed to gain enduring effects for some language features.
Cox, Tony; Popken, Douglas; Ricci, Paolo F
2013-01-01
Exposures to fine particulate matter (PM2.5) in air (C) have been suspected of contributing causally to increased acute (e.g., same-day or next-day) human mortality rates (R). We tested this causal hypothesis in 100 United States cities using the publicly available NMMAPS database. Although a significant, approximately linear, statistical C-R association exists in simple statistical models, closer analysis suggests that it is not causal. Surprisingly, conditioning on other variables that have been extensively considered in previous analyses (usually using splines or other smoothers to approximate their effects), such as month of the year and mean daily temperature, suggests that they create strong, nonlinear confounding that explains the statistical association between PM2.5 and mortality rates in this data set. As this finding disagrees with conventional wisdom, we apply several different techniques to examine it. Conditional independence tests for potential causation, non-parametric classification tree analysis, Bayesian Model Averaging (BMA), and Granger-Sims causality testing, show no evidence that PM2.5 concentrations have any causal impact on increasing mortality rates. This apparent absence of a causal C-R relation, despite their statistical association, has potentially important implications for managing and communicating the uncertain health risks associated with, but not necessarily caused by, PM2.5 exposures. PMID:23983662
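Of the techniques listed, Granger-style causality testing is the most straightforward to reproduce in outline; below is a minimal sketch using statsmodels on surrogate daily series (not the NMMAPS data), with the column names, lag order and generating process purely hypothetical. In a fuller analysis one would also condition on temperature and season, as the paper emphasizes.

import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

# Surrogate daily series standing in for mortality (R) and PM2.5 (C);
# mortality is generated independently of PM2.5 here, so the test should
# find no evidence of Granger causality.
rng = np.random.default_rng(1)
n = 1000
pm25 = rng.gamma(shape=2.0, scale=10.0, size=n)   # hypothetical exposure series
mortality = 50 + rng.normal(0, 5, n)              # hypothetical daily deaths

df = pd.DataFrame({"mortality": mortality, "pm25": pm25})
# Tests whether the series in the second column helps predict the first
res = grangercausalitytests(df[["mortality", "pm25"]], maxlag=3)
print(res[1][0]["ssr_ftest"])   # (F statistic, p-value, df_denom, df_num) at lag 1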
Confirmatory Analytic Tests of Three Causal Models Relating Job Perceptions to Job Satisfaction.
1984-12-01
The report tests three causal models relating job perceptions to job satisfaction: (a) a model in which job perceptions and job satisfaction are reciprocally related; (b) a precognitive-recursive model in which job perceptions occur after job satisfaction in the causal order and are effects but not causes of job satisfaction; and (c) a precognitive model (remainder of the abstract truncated in the source).
Zhang, Qin; Yao, Quanying
2018-05-01
The dynamic uncertain causality graph (DUCG) is a newly presented framework for uncertain causality representation and probabilistic reasoning. It has been successfully applied to online fault diagnosis of large, complex industrial systems and to disease diagnosis. This paper extends the DUCG to model more complex cases than could previously be represented, e.g., cases in which statistical data fall into different groups with or without overlap, and in which domain knowledge and actions (new variables with uncertain causalities) are introduced. In other words, this paper proposes to use -mode, -mode, and -mode of the DUCG to model such complex cases and then transform them into either the standard -mode or the standard -mode. In the former situation, if no directed cyclic graph is involved, the transformed result is simply a Bayesian network (BN), and existing inference methods for BNs can be applied. In the latter situation, an inference method based on the DUCG is proposed. Examples are provided to illustrate the methodology.
Rummo, Pasquale E; Guilkey, David K; Ng, Shu Wen; Meyer, Katie A; Popkin, Barry M; Reis, Jared P; Shikany, James M; Gordon-Larsen, Penny
2017-12-01
The relationship between food environment exposures and diet behaviours is unclear, possibly because the majority of studies ignore potential residual confounding. We used 20 years (1985-1986, 1992-1993, 2005-2006) of data from the Coronary Artery Risk Development in Young Adults (CARDIA) study across four US cities (Birmingham, Alabama; Chicago, Illinois; Minneapolis, Minnesota; Oakland, California) and instrumental variables (IV) regression to obtain causal estimates of longitudinal associations between the percentage of neighbourhood food outlets (per total food outlets within 1 km network distance of respondent residence) and an a priori diet quality score, with higher scores indicating higher diet quality. To assess the presence and magnitude of bias related to residual confounding, we compared results from causal models (IV regression) to non-causal models, including ordinary least squares regression, which does not account for residual confounding at all, and fixed-effects regression, which only controls for time-invariant unmeasured characteristics. The mean diet quality score across follow-up was 63.4 (SD=12.7). A 10% increase in fast food restaurants (relative to full-service restaurants) was associated with a lower diet quality score over time using IV regression (β=-1.01, 95% CI -1.99 to -0.04); estimates were attenuated using non-causal models. The percentage of neighbourhood convenience and grocery stores (relative to supermarkets) was not associated with diet quality in any model, but estimates from non-causal models were similarly attenuated compared with causal models. Ignoring residual confounding may generate biased estimated effects of neighbourhood food outlets on diet outcomes and may have contributed to weak findings in the food environment literature. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
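For readers unfamiliar with the estimator, a minimal sketch of two-stage least squares, the workhorse behind IV regression, on simulated data (not the CARDIA data or the authors' instrument set); all variable names and coefficients are hypothetical.

import numpy as np

def two_stage_least_squares(y, X, Z):
    """2SLS: regress the endogenous regressors X on instruments Z (first stage),
    then regress the outcome y on the fitted values (second stage).
    X and Z should include a constant column."""
    gamma, *_ = np.linalg.lstsq(Z, X, rcond=None)   # first-stage projection
    X_hat = Z @ gamma
    beta, *_ = np.linalg.lstsq(X_hat, y, rcond=None)  # second-stage OLS
    return beta

# Hypothetical example: the exposure is confounded by u, z is a valid instrument
rng = np.random.default_rng(2)
n = 5000
z = rng.normal(size=n)                       # instrument
u = rng.normal(size=n)                       # unobserved confounder
exposure = 0.9 * z + u + rng.normal(size=n)
diet_score = 63.4 - 1.0 * exposure + 2.0 * u + rng.normal(size=n)

X = np.column_stack([np.ones(n), exposure])
Z = np.column_stack([np.ones(n), z])
print(two_stage_least_squares(diet_score, X, Z))   # second entry close to -1.0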
Causality in Psychiatry: A Hybrid Symptom Network Construct Model
Young, Gerald
2015-01-01
Causality or etiology in psychiatry is marked by standard biomedical, reductionistic models (symptoms reflect the construct involved) that inform approaches to nosology, or classification, such as in the DSM-5 [Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition; (1)]. However, network approaches to symptom interaction [i.e., symptoms are formative of the construct; e.g., (2), for posttraumatic stress disorder (PTSD)] are being developed that speak to bottom-up processes in mental disorder, in contrast to the typical top-down psychological construct approach. The present article presents a hybrid top-down, bottom-up model of the relationship between symptoms and mental disorder, viewing symptom expression and their causal complex as a reciprocally dynamic system with multiple levels, from lower-order symptoms in interaction to higher-order constructs affecting them. The hybrid model hinges on good understanding of systems theory in which it is embedded, so that the article reviews in depth non-linear dynamical systems theory (NLDST). The article applies the concept of emergent circular causality (3) to symptom development, as well. Conclusions consider that symptoms vary over several dimensions, including: subjectivity; objectivity; conscious motivation effort; and unconscious influences, and the degree to which individual (e.g., meaning) and universal (e.g., causal) processes are involved. The opposition between science and skepticism is a complex one that the article addresses in final comments. PMID:26635639
Hoss, Frauke; London, Alex John
2016-12-01
This paper presents a proof of concept for a graphical models approach to assessing the moral coherence and moral robustness of systems of social interactions. "Moral coherence" refers to the degree to which the rights and duties of agents within a system are effectively respected when agents in the system comply with the rights and duties that are recognized as in force for the relevant context of interaction. "Moral robustness" refers to the degree to which a system of social interaction is configured to ensure that the interests of agents are effectively respected even in the face of noncompliance. Using the case of conscientious objection of pharmacists to filling prescriptions for emergency contraception as an example, we illustrate how a graphical models approach can help stakeholders identify structural weaknesses in systems of social interaction and evaluate the relative merits of alternate organizational structures. By illustrating the merits of a graphical models approach we hope to spur further developments in this area.
Magnotti, John F; Beauchamp, Michael S
2017-02-01
Audiovisual speech integration combines information from auditory speech (talker's voice) and visual speech (talker's mouth movements) to improve perceptual accuracy. However, if the auditory and visual speech emanate from different talkers, integration decreases accuracy. Therefore, a key step in audiovisual speech perception is deciding whether auditory and visual speech have the same source, a process known as causal inference. A well-known illusion, the McGurk Effect, consists of incongruent audiovisual syllables, such as auditory "ba" + visual "ga" (AbaVga), that are integrated to produce a fused percept ("da"). This illusion raises two fundamental questions: first, given the incongruence between the auditory and visual syllables in the McGurk stimulus, why are they integrated; and second, why does the McGurk effect not occur for other, very similar syllables (e.g., AgaVba). We describe a simplified model of causal inference in multisensory speech perception (CIMS) that predicts the perception of arbitrary combinations of auditory and visual speech. We applied this model to behavioral data collected from 60 subjects perceiving both McGurk and non-McGurk incongruent speech stimuli. The CIMS model successfully predicted both the audiovisual integration observed for McGurk stimuli and the lack of integration observed for non-McGurk stimuli. An identical model without causal inference failed to accurately predict perception for either form of incongruent speech. The CIMS model uses causal inference to provide a computational framework for studying how the brain performs one of its most important tasks, integrating auditory and visual speech cues to allow us to communicate with others.
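The abstract does not spell out the CIMS equations; the sketch below shows the generic Bayesian causal-inference step that such models build on, namely the posterior probability that auditory and visual cues share a common cause under Gaussian noise. All parameter values and names are assumptions, not the authors' fitted model.

import numpy as np

def p_common_cause(x_a, x_v, sigma_a, sigma_v, sigma_p, p_common=0.5):
    """Posterior probability that auditory cue x_a and visual cue x_v share a
    single cause, assuming Gaussian cue noise and a zero-mean Gaussian prior
    with standard deviation sigma_p over the latent source."""
    var_a, var_v, var_p = sigma_a**2, sigma_v**2, sigma_p**2
    # Likelihood of the cue pair under a common cause (C = 1)
    var_c1 = var_a * var_v + var_a * var_p + var_v * var_p
    like_c1 = np.exp(-0.5 * ((x_a - x_v)**2 * var_p
                             + x_a**2 * var_v + x_v**2 * var_a) / var_c1) \
              / (2 * np.pi * np.sqrt(var_c1))
    # Likelihood under independent causes (C = 2)
    like_c2 = np.exp(-0.5 * (x_a**2 / (var_a + var_p)
                             + x_v**2 / (var_v + var_p))) \
              / (2 * np.pi * np.sqrt((var_a + var_p) * (var_v + var_p)))
    return like_c1 * p_common / (like_c1 * p_common + like_c2 * (1 - p_common))

# Similar cues favor integration; discrepant cues favor separate causes
print(p_common_cause(0.2, 0.3, sigma_a=1.0, sigma_v=1.0, sigma_p=2.0))
print(p_common_cause(-2.0, 2.0, sigma_a=1.0, sigma_v=1.0, sigma_p=2.0))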
Akio, Suzuki; Shunji, Awazu
2010-04-01
In order to examine the importance of fully representing graphic information items in graphic aids to facilitate comprehension of explanatory texts, we randomly assigned fifty university students to the following four groups: (a) participants who study the text without the aid, (b) participants who study the text with the aid, whose literal (key words) and graphic (arrows, boxes, etc.) information items are fully displayed, (c) participants who study the text with the aid, whose graphic information items are fully displayed but whose literal information items are partially displayed, and (d) participants who study the text with the aid, whose literal and graphic information items are partially displayed. The results of two kinds of comprehension tests (textbase and situation model) revealed that groups (b) and (c) outperformed groups (a) and (d). These findings suggest that graphic aids can facilitate students' text comprehension when graphic information items are fully displayed and literal information items are displayed either fully or partially; however, the aid cannot facilitate comprehension when both literal and graphic elements are displayed partially.
A Study of Current Trends and Issues Related to Technical/Engineering Design Graphics.
ERIC Educational Resources Information Center
Clark, Aaron C.; Scales, Alice
2000-01-01
Presents results from a survey of engineering design graphics educators who responded to questions related to current trends and issues in the profession of graphics education. Concludes that there is a clear trend in institutions towards the teaching of constraint-based modeling and computer-aided manufacturing. (Author/YDS)
ERIC Educational Resources Information Center
Meznarich, R. A.; Shava, R. C.; Lightner, S. L.
2009-01-01
Engineering design graphics courses taught in colleges or universities should provide and equip students preparing for employment with the basic occupational graphics skill competences required by engineering and technology disciplines. Academic institutions should introduce and include topics that cover the newer and more efficient graphics…
NASA Technical Reports Server (NTRS)
Bavuso, Salvatore J.; Rothmann, Elizabeth; Mittal, Nitin; Koppen, Sandra Howell
1994-01-01
The Hybrid Automated Reliability Predictor (HARP) integrated Reliability (HiRel) tool system for reliability/availability prediction offers a toolbox of integrated reliability/availability programs that can be used to customize the user's application in a workstation or nonworkstation environment. HiRel consists of interactive graphical input/output programs and four reliability/availability modeling engines that provide analytical and simulative solutions to a wide host of highly reliable fault-tolerant system architectures and is also applicable to electronic systems in general. The tool system was designed at the outset to be compatible with most computing platforms and operating systems, and some programs have been beta tested within the aerospace community for over 8 years. This document is a user's guide for the HiRel graphical preprocessor Graphics Oriented (GO) program. GO is a graphical user interface for the HARP engine that enables the drawing of reliability/availability models on a monitor. A mouse is used to select fault tree gates or Markov graphical symbols from a menu for drawing.
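For context on what the underlying engines compute, a minimal illustration (not HARP's algorithms) of the Markov side of such reliability/availability models: transient state probabilities of a small hypothetical two-unit system obtained from its generator matrix; all rates are assumed values.

import numpy as np
from scipy.linalg import expm

# Hypothetical 3-state model: 0 = both units up, 1 = one unit failed, 2 = system down
lam, mu = 1e-3, 1e-1          # assumed failure and repair rates per hour
Q = np.array([[-2 * lam,      2 * lam,  0.0],
              [      mu, -(mu + lam),   lam],
              [     0.0,          0.0,  0.0]])   # state 2 is absorbing

p0 = np.array([1.0, 0.0, 0.0])     # start with both units up
t = 1000.0                          # mission time in hours
p_t = p0 @ expm(Q * t)              # state probabilities at time t
print("unreliability at t:", p_t[2])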
2014-09-01
Excerpt (truncated): semiempirical and ray-optical models; for example, the semiempirical COST-Walfisch-Ikegami model (3) estimates the received power predominantly on the ... [the excerpt also cites Rick, T.; Mathur, R., "Fast Edge-Diffraction-Based Radio Wave Propagation Model for Graphics Hardware"].
Graphic model of the processes involved in the production of casegood furniture
Kristen G. Hoff; Subhash C. Sarin; R. Bruce Anderson; R. Bruce Anderson
1992-01-01
Imports from foreign furniture manufacturers are on the rise, and American manufacturers must take advantage of recent technological advances to regain their lost market share. To facilitate the implementation of these technologies for improving productivity and quality, a graphic model of the wood furniture production process is presented using the IDEF modeling...
Causal networks clarify productivity-richness interrelations, bivariate plots do not
Grace, James B.; Adler, Peter B.; Harpole, W. Stanley; Borer, Elizabeth T.; Seabloom, Eric W.
2014-01-01
We urge ecologists to consider productivity–richness relationships through the lens of causal networks to advance our understanding beyond bivariate analysis. Further, we emphasize that models based on a causal network conceptualization can also provide more meaningful guidance for conservation management than can a bivariate perspective. Measuring only two variables does not permit the evaluation of complex ideas nor resolve debates about underlying mechanisms.
ERIC Educational Resources Information Center
Alonso-Tapia, Jesus; Villa, Jose Luis
1999-01-01
Examines the viability of using hypothetical problems that require the application of causal models for their solution as a method for assessing understanding in the social sciences. Explains that this method was used to describe how seventh-grade students understand causal factors affecting the "discovery and colonization of America." (CMK)
Hagmayer, York; Engelmann, Neele
2014-01-01
Cognitive psychological research focuses on causal learning and reasoning while cognitive anthropological and social science research tend to focus on systems of beliefs. Our aim was to explore how these two types of research can inform each other. Cognitive psychological theories (causal model theory and causal Bayes nets) were used to derive predictions for systems of causal beliefs. These predictions were then applied to lay theories of depression as a specific test case. A systematic literature review on causal beliefs about depression was conducted, including original, quantitative research. Thirty-six studies investigating 13 non-Western and 32 Western cultural groups were analyzed by classifying assumed causes and preferred forms of treatment into common categories. Relations between beliefs and treatment preferences were assessed. Substantial agreement between cultural groups was found with respect to the impact of observable causes. Stress was generally rated as most important. Less agreement resulted for hidden, especially supernatural causes. Causal beliefs were clearly related to treatment preferences in Western groups, while evidence was mostly lacking for non-Western groups. Overall predictions were supported, but there were considerable methodological limitations. Pointers to future research, which may combine studies on causal beliefs with experimental paradigms on causal reasoning, are given. PMID:25505432
Hung-Pin, Lin
2014-01-01
The purpose of this paper is to investigate the short-run and long-run causality between renewable energy (RE) consumption and economic growth (EG) in nine OECD countries over the period 1982 to 2011. To examine the linkage, this paper uses the autoregressive distributed lag (ARDL) bounds testing approach to cointegration and vector error-correction models to test the causal relationship between variables. Co-integration and causal relationships are found in five countries: the United States of America (USA), Japan, Germany, Italy, and the United Kingdom (UK). The overall results indicate that (1) a short-run unidirectional causality runs from EG to RE in Italy and the UK; (2) long-run unidirectional causalities run from RE to EG for Germany, Italy, and the UK; (3) a long-run unidirectional causality runs from EG to RE in the USA and Japan; (4) both long-run and strong unidirectional causalities run from RE to EG for Germany and the UK; and (5) both long-run and strong unidirectional causalities run from EG to RE only in the USA. Further evidence reveals that policies for renewable energy conservation may have no impact on economic growth in France, Denmark, Portugal, and Spain. PMID:24558343
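The ARDL bounds tests and vector error-correction models used in the paper are not reproduced here; as a rough illustration of testing for a long-run relationship between two series, an Engle-Granger cointegration test can be run with statsmodels on simulated data (not the OECD series); series names and the generating process are hypothetical.

import numpy as np
from statsmodels.tsa.stattools import coint

# Simulated series standing in for renewable energy consumption and GDP;
# both share a common stochastic trend, so they should test as cointegrated.
rng = np.random.default_rng(3)
n = 200
trend = np.cumsum(rng.normal(size=n))                    # common random-walk component
re_consumption = 2.0 * trend + rng.normal(scale=0.5, size=n)
gdp = 5.0 * trend + rng.normal(scale=0.5, size=n)

t_stat, p_value, crit = coint(re_consumption, gdp)
print(p_value)   # a small p-value suggests a long-run (cointegrating) relationship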
Causal network in a deafferented non-human primate brain.
Balasubramanian, Karthikeyan; Takahashi, Kazutaka; Hatsopoulos, Nicholas G
2015-01-01
De-afferented/efferented neural ensembles can undergo causal changes when interfaced to neuroprosthetic devices. These changes occur via recruitment or isolation of neurons, alterations in functional connectivity within the ensemble and/or changes in the role of neurons, i.e., excitatory/inhibitory. In this work, the emergence of a causal network and changes in its dynamics are demonstrated for a deafferented brain region exposed to BMI (brain-machine interface) learning. The BMI controlled a robot for reach-and-grasp behavior; the motor cortical regions used for the BMI had been deafferented by chronic amputation, and ensembles of neurons were decoded for velocity control of the multi-DOF robot. A generalized linear model-framework based Granger causality (GLM-GC) technique was used to estimate the ensemble connectivity. Model selection was based on the AIC (Akaike Information Criterion).
Structural nested mean models for assessing time-varying effect moderation.
Almirall, Daniel; Ten Have, Thomas; Murphy, Susan A
2010-03-01
This article considers the problem of assessing causal effect moderation in longitudinal settings in which treatment (or exposure) is time varying and so are the covariates said to moderate its effect. Intermediate causal effects that describe time-varying causal effects of treatment conditional on past covariate history are introduced and considered as part of Robins' structural nested mean model. Two estimators of the intermediate causal effects, and their standard errors, are presented and discussed: The first is a proposed two-stage regression estimator. The second is Robins' G-estimator. The results of a small simulation study that begins to shed light on the small versus large sample performance of the estimators, and on the bias-variance trade-off between the two estimators are presented. The methodology is illustrated using longitudinal data from a depression study.
Yang, Ming-Chin; Tung, Yu-Chi
2006-01-01
The aim of this article is to examine whether the causal relationships among the performance indicators of the balanced scorecard (BSC) framework hold in hospitals. Data were collected from all twenty-one general hospitals in a public hospital system and their supervising agency for the 3-year period 2000-2002. The results of the path analyses identified significant causal relationships among the four perspectives of the BSC model. We also verified the relationships among indicators within each perspective, some of which varied over time. We conclude that hospital administrators can use path analysis to help them identify and manage leading indicators when adopting the BSC model. However, they should also validate causal relationships between leading and lagging indicators periodically because the management environment changes constantly.
Structural identifiability of cyclic graphical models of biological networks with latent variables.
Wang, Yulin; Lu, Na; Miao, Hongyu
2016-06-13
Graphical models have long been used to describe biological networks for a variety of important tasks such as the determination of key biological parameters, and the structure of a graphical model ultimately determines whether such unknown parameters can be unambiguously obtained from experimental observations (i.e., the identifiability problem). Because of limited resources or technical capacity, complex biological networks are usually only partially observed in experiments, which introduces latent variables into the corresponding graphical models. A number of previous studies have tackled the parameter identifiability problem for graphical models such as linear structural equation models (SEMs) with or without latent variables. However, the limited resolution and efficiency of existing approaches calls for further development of novel structural identifiability analysis algorithms. An efficient structural identifiability analysis algorithm is developed in this study for a broad range of network structures. The proposed method adopts Wright's path coefficient method to generate identifiability equations in the form of symbolic polynomials, and then converts these symbolic equations to binary matrices (called identifiability matrices). Several matrix operations are introduced for identifiability matrix reduction with system equivalency maintained. Based on the reduced identifiability matrices, the structural identifiability of each parameter is determined. A number of benchmark models are used to verify the validity of the proposed approach. Finally, the network module for influenza A virus replication is employed as a real example to illustrate the application of the proposed approach in practice. The proposed approach can deal with cyclic networks with latent variables. The key advantage is that it intentionally avoids symbolic computation and is thus highly efficient. Also, this method is capable of determining the identifiability of each single parameter and is thus of higher resolution in comparison with many existing approaches. Overall, this study provides a basis for systematic examination and refinement of graphical models of biological networks from the identifiability point of view, and it has significant potential to be extended to more complex network structures or high-dimensional systems.
The power of possibility: causal learning, counterfactual reasoning, and pretend play
Buchsbaum, Daphna; Bridgers, Sophie; Skolnick Weisberg, Deena; Gopnik, Alison
2012-01-01
We argue for a theoretical link between the development of an extended period of immaturity in human evolution and the emergence of powerful and wide-ranging causal learning mechanisms, specifically the use of causal models and Bayesian learning. We suggest that exploratory childhood learning, childhood play in particular, and causal cognition are closely connected. We report an empirical study demonstrating one such connection—a link between pretend play and counterfactual causal reasoning. Preschool children given new information about a causal system made very similar inferences both when they considered counterfactuals about the system and when they engaged in pretend play about it. Counterfactual cognition and causally coherent pretence were also significantly correlated even when age, general cognitive development and executive function were controlled for. These findings link a distinctive human form of childhood play and an equally distinctive human form of causal inference. We speculate that, during human evolution, computations that were initially reserved for solving particularly important ecological problems came to be used much more widely and extensively during the long period of protected immaturity. PMID:22734063
Non-Gaussian Methods for Causal Structure Learning.
Shimizu, Shohei
2018-05-22
Causal structure learning is one of the most exciting new topics in the fields of machine learning and statistics. In many empirical sciences, including prevention science, the causal mechanisms underlying various phenomena need to be studied. Nevertheless, in many cases, classical methods for causal structure learning are not capable of estimating the causal structure of variables. This is because they explicitly or implicitly assume Gaussianity of the data and typically utilize only the covariance structure. In many applications, however, non-Gaussian data are obtained, which means that the data distribution may carry more information than the covariance matrix can capture. Thus, many new methods have recently been proposed for using the non-Gaussian structure of data to infer the causal structure of variables. This paper introduces prevention scientists to such causal structure learning methods, particularly those based on the linear, non-Gaussian, acyclic model known as LiNGAM. These non-Gaussian data analysis tools can fully estimate the underlying causal structures of variables under their assumptions, even in the presence of unobserved common causes, in contrast to other approaches. A simulated example is also provided.
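A minimal sketch of the idea behind LiNGAM-style methods (not the LiNGAM algorithm itself): with linear effects and non-Gaussian noise, the regression residual is independent of the regressor only in the true causal direction, so a simple higher-order dependence score can distinguish x -> y from y -> x. The score, data and names below are illustrative assumptions.

import numpy as np

def dependence_score(cause, effect):
    """Toy score for the direction 'cause -> effect': regress effect on cause and
    measure higher-order dependence between regressor and residual.  In the true
    causal direction (linear model, non-Gaussian noise) the residual is
    independent of the regressor, so the score should be small."""
    cause = (cause - cause.mean()) / cause.std()
    b = np.dot(cause, effect) / np.dot(cause, cause)
    resid = effect - b * cause
    resid = (resid - resid.mean()) / resid.std()
    return abs(np.corrcoef(cause, resid**2)[0, 1]) + \
           abs(np.corrcoef(cause**2, resid)[0, 1])

rng = np.random.default_rng(4)
x = rng.exponential(1.0, 50000) - 1.0      # skewed (non-Gaussian) cause
e = rng.exponential(1.0, 50000) - 1.0      # skewed noise
y = 0.8 * x + e

print("x -> y:", dependence_score(x, y))   # should be clearly smaller
print("y -> x:", dependence_score(y, x))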
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stevens, E.J.; McNeilly, G.S.
The existing National Center for Atmospheric Research (NCAR) code in the Hamburg Oceanic Carbon Cycle Circulation Model and the Hamburg Large-Scale Geostrophic Ocean General Circulation Model was modernized and reduced in size while still producing an equivalent end result. A reduction in the size of the existing code from more than 50,000 lines to approximately 7,500 lines in the new code has made the new code much easier to maintain. The existing code in the Hamburg model uses legacy NCAR graphics (including even emulated CALCOMP subroutines) to display graphical output. The new code uses only current (version 3.1) NCAR subroutines.
Computer graphics testbed to simulate and test vision systems for space applications
NASA Technical Reports Server (NTRS)
Cheatham, John B.; Wu, Chris K.; Lin, Y. H.
1991-01-01
A system was developed for displaying computer graphics images of space objects, and its use was demonstrated as a testbed for evaluating vision systems for space applications. In order to evaluate vision systems, it is desirable to be able to control all factors involved in creating the images used for processing by the vision system. Considerable time and expense is involved in building accurate physical models of space objects. Also, precise location of the model relative to the viewer and accurate location of the light source require additional effort. As part of this project, graphics models of space objects such as the Solarmax satellite were created so that the user can control the light direction and the relative position of the object and the viewer. The work is also aimed at providing control of hue, shading, noise and shadows for use in demonstrating and testing image processing techniques. The simulated camera data can provide XYZ coordinates, pitch, yaw, and roll for the models. A physical model is also being used to provide comparison of camera images with the graphics images.
Accelerating Computation of DCM for ERP in MATLAB by External Function Calls to the GPU
Wang, Wei-Jen; Hsieh, I-Fan; Chen, Chun-Chuan
2013-01-01
This study aims to improve the performance of Dynamic Causal Modelling for Event Related Potentials (DCM for ERP) in MATLAB by using external function calls to a graphics processing unit (GPU). DCM for ERP is an advanced method for studying neuronal effective connectivity. DCM utilizes an iterative procedure, the expectation maximization (EM) algorithm, to find the optimal parameters given a set of observations and the underlying probability model. As the EM algorithm is computationally demanding and the analysis faces possible combinatorial explosion of models to be tested, we propose a parallel computing scheme using the GPU to achieve a fast estimation of DCM for ERP. The computation of DCM for ERP is dynamically partitioned and distributed to threads for parallel processing, according to the DCM model complexity and the hardware constraints. The performance efficiency of this hardware-dependent thread arrangement strategy was evaluated using the synthetic data. The experimental data were used to validate the accuracy of the proposed computing scheme and quantify the time saving in practice. The simulation results show that the proposed scheme can accelerate the computation by a factor of 155 for the parallel part. For experimental data, the speedup factor is about 7 per model on average, depending on the model complexity and the data. This GPU-based implementation of DCM for ERP gives qualitatively the same results as the original MATLAB implementation does at the group level analysis. In conclusion, we believe that the proposed GPU-based implementation is very useful for users as a fast screen tool to select the most likely model and may provide implementation guidance for possible future clinical applications such as online diagnosis. PMID:23840507
Aspects of Synthetic Vision Display Systems and the Best Practices of the NASA's SVS Project
NASA Technical Reports Server (NTRS)
Bailey, Randall E.; Kramer, Lynda J.; Jones, Denise R.; Young, Steven D.; Arthur, Jarvis J.; Prinzel, Lawrence J.; Glaab, Louis J.; Harrah, Steven D.; Parrish, Russell V.
2008-01-01
NASA's Synthetic Vision Systems (SVS) Project conducted research aimed at eliminating visibility-induced errors and low visibility conditions as causal factors in civil aircraft accidents while enabling the operational benefits of clear-day flight operations regardless of actual outside visibility. SVS takes advantage of many enabling technologies to achieve this capability including, for example, the Global Positioning System (GPS), data links, radar, imaging sensors, geospatial databases, advanced display media and three-dimensional video graphics processors. Integration of these technologies to achieve the SVS concept provides pilots with high-integrity information that improves situational awareness with respect to terrain, obstacles, traffic, and flight path. This paper emphasizes the system aspects of SVS - true systems, rather than just terrain on a flight display - and documents from an historical viewpoint many of the best practices that evolved during the SVS Project from the perspective of some of the NASA researchers most heavily involved in its execution. The Integrated SVS Concepts are envisagements of what production-grade Synthetic Vision systems might, or perhaps should, be in order to provide the desired functional capabilities that eliminate low visibility as a causal factor in accidents and enable clear-day operational benefits regardless of visibility conditions.
Guo, Hongbin; Renaut, Rosemary A; Chen, Kewei; Reiman, Eric M
2010-01-01
Graphical analysis methods are widely used in positron emission tomography quantification because of their simplicity and model independence. But they may, particularly for reversible kinetics, lead to bias in the estimated parameters. The source of the bias is commonly attributed to noise in the data. Assuming a two-tissue compartmental model, we investigate the bias that originates from modeling error. This bias is an intrinsic property of the simplified linear models used for limited scan durations, and it is exaggerated by random noise and numerical quadrature error. Conditions are derived under which Logan's graphical method either over- or under-estimates the distribution volume in the noise-free case. The bias caused by modeling error is quantified analytically. The presented analysis shows that the bias of graphical methods is inversely proportional to the dissociation rate. Furthermore, visual examination of the linearity of the Logan plot is not sufficient for guaranteeing that equilibrium has been reached. A new model which retains the elegant properties of graphical analysis methods is presented, along with a numerical algorithm for its solution. We perform simulations with the fibrillar amyloid β radioligand [11C] benzothiazole-aniline using published data from the University of Pittsburgh and Rotterdam groups. The results show that the proposed method significantly reduces the bias due to modeling error. Moreover, the results for data acquired over a 70 minutes scan duration are at least as good as those obtained using existing methods for data acquired over a 90 minutes scan duration. PMID:20493196
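For reference, a minimal sketch of Logan's graphical method on a synthetic one-tissue example (not the authors' proposed model or the benzothiazole-aniline data); the kinetic constants, input function and names are assumed.

import numpy as np

def logan_dv(t, c_tissue, c_plasma, t_star):
    """Logan graphical analysis: the slope of the late-time linear portion of
    the Logan plot estimates the (total) distribution volume."""
    cum_t = np.concatenate([[0.0], np.cumsum(np.diff(t) * 0.5 * (c_tissue[1:] + c_tissue[:-1]))])
    cum_p = np.concatenate([[0.0], np.cumsum(np.diff(t) * 0.5 * (c_plasma[1:] + c_plasma[:-1]))])
    mask = (t >= t_star) & (c_tissue > 0)
    x = cum_p[mask] / c_tissue[mask]
    y = cum_t[mask] / c_tissue[mask]
    slope, intercept = np.polyfit(x, y, 1)
    return slope

# Synthetic one-tissue example with known distribution volume K1/k2 = 0.5/0.1 = 5
t = np.linspace(0.0, 90.0, 200)                 # minutes
c_plasma = 10.0 * np.exp(-0.05 * t)             # assumed plasma input function
K1, k2 = 0.5, 0.1
dt = t[1] - t[0]
c_tissue = K1 * dt * np.convolve(c_plasma, np.exp(-k2 * t))[:len(t)]
print(logan_dv(t, c_tissue, c_plasma, t_star=30.0))   # close to 5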
Kim, Hui Taek; Ahn, Tae Young; Jang, Jae Hoon; Kim, Kang Hee; Lee, Sung Jae; Jung, Duk Young
2017-03-01
Three-dimensional (3D) computed tomography imaging is now being used to generate 3D models for planning orthopaedic surgery, but the process remains time consuming and expensive. For chronic radial head dislocation, we have designed a graphic overlay approach that employs selected 3D computer images and widely available software to simplify the process of osteotomy site selection. We studied 5 patients (2 traumatic and 3 congenital) with unilateral radial head dislocation. These patients were treated with surgery based on traditional radiographs, but they also had full sets of 3D CT imaging done both before and after their surgery; these 3D CT images form the basis for this study. From the 3D CT images, each patient generated 3 sets of 3D-printed bone models: 2 copies of the preoperative condition, and 1 copy of the postoperative condition. One set of the preoperative models was then actually osteotomized and fixed in the manner suggested by our graphic technique. Arcs of rotation of the 3 sets of 3D-printed bone models were then compared. Arcs of rotation of the 3 groups of bone models were significantly different, with the models osteotomized according to our graphic technique having the widest arcs. For chronic radial head dislocation, our graphic overlay approach simplifies the selection of the osteotomy site(s). Three-dimensional-printed bone models suggest that this approach could improve range of motion of the forearm in actual surgical practice. Level IV, therapeutic study.
Alloy Design Workbench-Surface Modeling Package Developed
NASA Technical Reports Server (NTRS)
Abel, Phillip B.; Noebe, Ronald D.; Bozzolo, Guillermo H.; Good, Brian S.; Daugherty, Elaine S.
2003-01-01
NASA Glenn Research Center's Computational Materials Group has integrated a graphical user interface with in-house-developed surface modeling capabilities, with the goal of using computationally efficient atomistic simulations to aid the development of advanced aerospace materials, through the modeling of alloy surfaces, surface alloys, and segregation. The software is also ideal for modeling nanomaterials, since surface and interfacial effects can dominate material behavior and properties at this level. Through the combination of an accurate atomistic surface modeling methodology and an efficient computational engine, it is now possible to directly model these types of surface phenomenon and metallic nanostructures without a supercomputer. Fulfilling a High Operating Temperature Propulsion Components (HOTPC) project level-I milestone, a graphical user interface was created for a suite of quantum approximate atomistic materials modeling Fortran programs developed at Glenn. The resulting "Alloy Design Workbench-Surface Modeling Package" (ADW-SMP) is the combination of proven quantum approximate Bozzolo-Ferrante-Smith (BFS) algorithms (refs. 1 and 2) with a productivity-enhancing graphical front end. Written in the portable, platform independent Java programming language, the graphical user interface calls on extensively tested Fortran programs running in the background for the detailed computational tasks. Designed to run on desktop computers, the package has been deployed on PC, Mac, and SGI computer systems. The graphical user interface integrates two modes of computational materials exploration. One mode uses Monte Carlo simulations to determine lowest energy equilibrium configurations. The second approach is an interactive "what if" comparison of atomic configuration energies, designed to provide real-time insight into the underlying drivers of alloying processes.
The stochastic system approach for estimating dynamic treatments effect.
Commenges, Daniel; Gégout-Petit, Anne
2015-10-01
The problem of assessing the effect of a treatment on a marker in observational studies raises the difficulty that attribution of the treatment may depend on the observed marker values. As an example, we focus on the analysis of the effect of highly active antiretroviral therapy (HAART) on CD4 counts. This problem has been treated using marginal structural models relying on the counterfactual/potential response formalism. Another approach to causality is based on dynamical models, and causal influence has been formalized in the framework of the Doob-Meyer decomposition of stochastic processes. Causal inference, however, needs assumptions that we detail in this paper, and we call this approach to causality the "stochastic system" approach. First we treat this problem in discrete time, then in continuous time. This approach allows incorporating biological knowledge naturally. When working in continuous time, the mechanistic approach involves distinguishing the model for the system from the model for the observations. Indeed, biological systems live in continuous time, and mechanisms can be expressed in the form of a system of differential equations, while observations are taken at discrete times. Inference in mechanistic models is challenging, particularly from a numerical point of view, but these models can yield much richer and more reliable results.
Prior knowledge driven Granger causality analysis on gene regulatory network discovery
Yao, Shun; Yoo, Shinjae; Yu, Dantong
2015-08-28
Our study focuses on discovering gene regulatory networks from time series gene expression data using the Granger causality (GC) model. However, the number of available time points (T) usually is much smaller than the number of target genes (n) in biological datasets. The widely applied pairwise GC model (PGC) and other regularization strategies can lead to a significant number of false identifications when n>>T. In this study, we proposed a new method, viz., CGC-2SPR (CGC using two-step prior Ridge regularization), to resolve the problem by incorporating prior biological knowledge about a target gene data set. In our simulation experiments, the proposed CGC-2SPR method showed significant performance improvement in terms of accuracy over other widely used GC modeling (PGC, Ridge and Lasso) and MI-based (MRNET and ARACNE) methods. In addition, we applied CGC-2SPR to a real biological dataset, i.e., the yeast metabolic cycle, and discovered more true positive edges with CGC-2SPR than with the other existing methods. In our research, we noticed a "1+1>2" effect when we combined prior knowledge and gene expression data to discover regulatory networks. Based on causality networks, we made a functional prediction that the Abm1 gene (whose functions were previously unknown) might be related to the yeast's responses to different levels of glucose. In conclusion, our research improves causality modeling by combining heterogeneous knowledge, which is well aligned with the future direction in systems biology. Furthermore, we proposed a method of Monte Carlo significance estimation (MCSE) to calculate edge significances, which gives statistical meaning to the discovered causality networks. All of our data and source codes will be available under the link https://bitbucket.org/dtyu/granger-causality/wiki/Home.
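The CGC-2SPR algorithm itself is not reproduced here; the sketch below shows the ridge-regularized Granger-causality core it builds on, scoring a candidate regulator by the reduction in residual variance of a target series (in the paper, prior knowledge would additionally shape the penalty). All names, parameters and the simulated series are hypothetical.

import numpy as np

def ridge_fit(X, y, alpha):
    """Closed-form ridge regression (no intercept; inputs roughly centered)."""
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_features), X.T @ y)

def gc_improvement(target, candidate, others, lag=2, alpha=1.0):
    """Granger-style score: reduction in residual variance of the target when
    lags of the candidate regulator are added to lags of the target (and of
    any other conditioning genes), using ridge-regularized regressions."""
    series = [target] + list(others)
    n_obs = len(target)
    def lagged(series_list):
        cols = [s[lag - k - 1:n_obs - k - 1] for s in series_list for k in range(lag)]
        return np.column_stack(cols)
    y = target[lag:]
    X_reduced = lagged(series)
    X_full = np.column_stack([X_reduced, lagged([candidate])])
    r_reduced = y - X_reduced @ ridge_fit(X_reduced, y, alpha)
    r_full = y - X_full @ ridge_fit(X_full, y, alpha)
    return np.log(np.var(r_reduced) / np.var(r_full))   # > 0 suggests influence

rng = np.random.default_rng(5)
n_time = 300
regulator = rng.standard_normal(n_time)
target = np.zeros(n_time)
for t in range(1, n_time):
    target[t] = 0.5 * target[t - 1] + 0.7 * regulator[t - 1] + 0.3 * rng.standard_normal()
print(gc_improvement(target, regulator, others=[], lag=2, alpha=1.0))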
Causal Analysis to Enhance Creative Problem-Solving: Performance and Effects on Mental Models
ERIC Educational Resources Information Center
Hester, Kimberly S.; Robledo, Issac C.; Barrett, Jamie D.; Peterson, David R.; Hougen, Dean P.; Day, Eric A.; Mumford, Michael D.
2012-01-01
In recent years, it has become apparent that knowledge is a critical component of creative thought. One form of knowledge that might be particularly important to creative thought relies on the mental models people employ to understand novel, ill-defined problems. In this study, undergraduates were given training in the use of causal relationships…
A Causal Contiguity Effect That Persists across Time Scales
ERIC Educational Resources Information Center
Kilic, Asli; Criss, Amy H.; Howard, Marc W.
2013-01-01
The contiguity effect refers to the tendency to recall an item from nearby study positions of the just recalled item. Causal models of contiguity suggest that recalled items are used as probes, causing a change in the memory state for subsequent recall attempts. Noncausal models of the contiguity effect assume the memory state is unaffected by…
Hindsight Bias Doesn't Always Come Easy: Causal Models, Cognitive Effort, and Creeping Determinism
ERIC Educational Resources Information Center
Nestler, Steffen; Blank, Hartmut; von Collani, Gernot
2008-01-01
Creeping determinism, a form of hindsight bias, refers to people's hindsight perceptions of events as being determined or inevitable. This article proposes, on the basis of a causal-model theory of creeping determinism, that the underlying processes are effortful, and hence creeping determinism should disappear when individuals lack the cognitive…
Bayes and blickets: Effects of knowledge on causal induction in children and adults
Griffiths, Thomas L.; Sobel, David M.; Tenenbaum, Joshua B.; Gopnik, Alison
2011-01-01
People are adept at inferring novel causal relations, even from only a few observations. Prior knowledge about the probability of encountering causal relations of various types and the nature of the mechanisms relating causes and effects plays a crucial role in these inferences. We test a formal account of how this knowledge can be used and acquired, based on analyzing causal induction as Bayesian inference. Five studies explored the predictions of this account with adults and 4-year-olds, using tasks in which participants learned about the causal properties of a set of objects. The studies varied the two factors that our Bayesian approach predicted should be relevant to causal induction: the prior probability with which causal relations exist, and the assumption of a deterministic or a probabilistic relation between cause and effect. Adults’ judgments (Experiments 1, 2, and 4) were in close correspondence with the quantitative predictions of the model, and children’s judgments (Experiments 3 and 5) agreed qualitatively with this account. PMID:21972897
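A minimal sketch of Bayesian causal induction in a blicket-detector style task, illustrating the two factors the studies vary (the prior probability that an object is a blicket and a deterministic versus probabilistic detector); the events, prior and noisy-OR parameterization below are assumptions, not the authors' exact model.

import numpy as np
from itertools import product

def posterior_blicket(events, prior=0.3, power=1.0, background=0.0):
    """Enumerate hypotheses about which objects are blickets and return the
    posterior probability that each object is one.  'events' is a list of
    (objects_on_machine, machine_activated) pairs; the detector follows a
    noisy-OR with causal strength 'power' and background rate 'background'."""
    objects = sorted({o for objs, _ in events for o in objs})
    post = {o: 0.0 for o in objects}
    norm = 0.0
    for hypothesis in product([0, 1], repeat=len(objects)):
        blickets = {o: h for o, h in zip(objects, hypothesis)}
        p_h = np.prod([prior if h else 1 - prior for h in hypothesis])
        like = 1.0
        for objs, activated in events:
            n_blickets = sum(blickets[o] for o in objs)
            p_act = 1 - (1 - background) * (1 - power) ** n_blickets
            like *= p_act if activated else 1 - p_act
        norm += p_h * like
        for o in objects:
            if blickets[o]:
                post[o] += p_h * like
    return {o: post[o] / norm for o in objects}

# A and B together activate the machine; B alone does not
events = [(("A", "B"), True), (("B",), False)]
print(posterior_blicket(events, prior=0.3, power=1.0))   # deterministic detector
print(posterior_blicket(events, prior=0.3, power=0.8))   # probabilistic detector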
Partial Granger causality--eliminating exogenous inputs and latent variables.
Guo, Shuixia; Seth, Anil K; Kendrick, Keith M; Zhou, Cong; Feng, Jianfeng
2008-07-15
Attempts to identify causal interactions in multivariable biological time series (e.g., gene data, protein data, physiological data) can be undermined by the confounding influence of environmental (exogenous) inputs. Compounding this problem, we are commonly only able to record a subset of all related variables in a system. These recorded variables are likely to be influenced by unrecorded (latent) variables. To address this problem, we introduce a novel variant of a widely used statistical measure of causality--Granger causality--that is inspired by the definition of partial correlation. Our 'partial Granger causality' measure is extensively tested with toy models, both linear and nonlinear, and is applied to experimental data: in vivo multielectrode array (MEA) local field potentials (LFPs) recorded from the inferotemporal cortex of sheep. Our results demonstrate that partial Granger causality can reveal the underlying interactions among elements in a network in the presence of exogenous inputs and latent variables in many cases where the existing conditional Granger causality fails.
3D Graphics For Interactive Surgical Simulation And Implant Design
NASA Astrophysics Data System (ADS)
Dev, P.; Fellingham, L. L.; Vassiliadis, A.; Woolson, S. T.; White, D. N.; Young, S. L.
1984-10-01
The combination of user-friendly, highly interactive software, 3D graphics, and the high-resolution detailed views of anatomy afforded by X-ray computer tomography and magnetic resonance imaging can provide surgeons with the ability to plan and practice complex surgeries. In addition to providing a realistic and manipulable 3D graphics display, this system can drive a milling machine in order to produce physical models of the anatomy or prosthetic devices and implants which have been designed using its interactive graphics editing facilities.
Improving aircraft conceptual design - A PHIGS interactive graphics interface for ACSYNT
NASA Technical Reports Server (NTRS)
Wampler, S. G.; Myklebust, A.; Jayaram, S.; Gelhausen, P.
1988-01-01
A CAD interface has been created for the 'ACSYNT' aircraft conceptual design code that permits the execution and control of the design process via interactive graphics menus. This CAD interface was coded entirely with the new three-dimensional graphics standard, the Programmer's Hierarchical Interactive Graphics System. The CAD/ACSYNT system is designed for use by state-of-the-art high-speed imaging work stations. Attention is given to the approaches employed in modeling, data storage, and rendering.
Quantum computation with indefinite causal structures
NASA Astrophysics Data System (ADS)
Araújo, Mateus; Guérin, Philippe Allard; Baumeler, Ämin
2017-11-01
One way to study the physical plausibility of closed timelike curves (CTCs) is to examine their computational power. This has been done for Deutschian CTCs (D-CTCs) and postselection CTCs (P-CTCs), with the result that they allow for the efficient solution of problems in PSPACE and PP, respectively. Since these are extremely powerful complexity classes, which are not expected to be solvable in reality, this can be taken as evidence that these models for CTCs are pathological. This problem is closely related to the nonlinearity of these models, which also allows, for example, cloning quantum states, in the case of D-CTCs, or distinguishing nonorthogonal quantum states, in the case of P-CTCs. In contrast, the process matrix formalism allows one to model indefinite causal structures in a linear way, getting rid of these effects and raising the possibility that its computational power is rather tame. In this paper, we show that process matrices correspond to a linear particular case of P-CTCs, and therefore that their computational power is upper bounded by that of PP. We show, furthermore, a family of processes that can violate causal inequalities but nevertheless can be simulated by a causally ordered quantum circuit with only a constant overhead, showing that indefinite causality is not necessarily hard to simulate.
Design of fuzzy cognitive maps using neural networks for predicting chaotic time series.
Song, H J; Miao, C Y; Shen, Z Q; Roel, W; Maja, D H; Francky, C
2010-12-01
As a powerful paradigm for knowledge representation and a simulation mechanism applicable to numerous research and application fields, Fuzzy Cognitive Maps (FCMs) have attracted a great deal of attention from various research communities. However, traditional FCMs do not provide efficient methods to determine the states of the investigated system and to quantify causalities, which are the very foundation of the FCM theory. Therefore, in many cases, constructing FCMs for complex causal systems greatly depends on expert knowledge. The manually developed models have a substantial shortcoming due to model subjectivity and difficulties with assessing their reliability. In this paper, we propose a fuzzy neural network to enhance the learning ability of FCMs so that the automatic determination of membership functions and quantification of causalities can be incorporated with the inference mechanism of conventional FCMs. In this manner, FCM models of the investigated systems can be automatically constructed from data, and therefore are independent of the experts. Furthermore, we employ mutual subsethood to define and describe the causalities in FCMs. It provides more explicit interpretation for causalities in FCMs and makes the inference process easier to understand. To validate the performance, the proposed approach is tested in predicting chaotic time series. The simulation studies show the effectiveness of the proposed approach. Copyright © 2010 Elsevier Ltd. All rights reserved.
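For context, a minimal sketch of conventional FCM inference (a squashed weighted-sum state update), which is the mechanism the proposed fuzzy neural network learns weights and memberships for; the map, weights and update variant below are assumed, not taken from the paper.

import numpy as np

def fcm_simulate(W, state, steps=20, lam=1.0):
    """Iterate a fuzzy cognitive map: each concept's next activation is a
    sigmoid of the weighted sum of current activations plus its own activation,
    with W[i, j] the causal weight from concept i to concept j."""
    trajectory = [state]
    for _ in range(steps):
        state = 1.0 / (1.0 + np.exp(-lam * (state @ W + state)))
        trajectory.append(state)
    return np.array(trajectory)

# Hypothetical 3-concept map: C0 promotes C1, C1 promotes C2, C2 inhibits C0
W = np.array([[ 0.0, 0.7, 0.0],
              [ 0.0, 0.0, 0.6],
              [-0.8, 0.0, 0.0]])
states = fcm_simulate(W, state=np.array([0.9, 0.1, 0.1]))
print(states[-1])   # activation levels after the final iteration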
Advanced graphical user interface for multi-physics simulations using AMST
NASA Astrophysics Data System (ADS)
Hoffmann, Florian; Vogel, Frank
2017-07-01
Numerical modelling of particulate matter has gained much popularity in recent decades. Advanced Multi-physics Simulation Technology (AMST) is a state-of-the-art three-dimensional numerical modelling technique combining the eXtended Discrete Element Method (XDEM) with Computational Fluid Dynamics (CFD) and Finite Element Analysis (FEA) [1]. One major limitation of this code is the lack of a graphical user interface (GUI), meaning that all pre-processing has to be done directly in an HDF5 file. This contribution presents the first graphical pre-processor developed for AMST.
Vatsavai, Ranga Raju; Graesser, Jordan B.; Bhaduri, Budhendra L.
2016-07-05
A programmable media includes a graphical processing unit in communication with a memory element. The graphical processing unit is configured to detect one or more settlement regions from a high resolution remote sensed image based on the execution of programming code. The graphical processing unit identifies one or more settlements through the execution of the programming code that executes a multi-instance learning algorithm that models portions of the high resolution remote sensed image. The identification is based on spectral bands transmitted by a satellite and on selected designations of the image patches.
Representing Learning With Graphical Models
NASA Technical Reports Server (NTRS)
Buntine, Wray L.; Lum, Henry, Jr. (Technical Monitor)
1994-01-01
Probabilistic graphical models are being used widely in artificial intelligence, for instance, in diagnosis and expert systems, as a unified qualitative and quantitative framework for representing and reasoning with probabilities and independencies. Their development and use spans several fields including artificial intelligence, decision theory and statistics, and provides an important bridge between these communities. This paper shows by way of example that these models can be extended to machine learning, neural networks and knowledge discovery by representing the notion of a sample on the graphical model. Not only does this allow a flexible variety of learning problems to be represented, it also provides the means for representing the goal of learning and opens the way for the automatic development of learning algorithms from specifications.
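A minimal illustration (not Buntine's formalism) of representing the sample on a graphical model: a parameter node with a plate of i.i.d. observation nodes, where "learning" is simply posterior inference over the parameter node. The Beta-Bernoulli choice and the data are assumptions made for the example.

import numpy as np

# Graphical model: theta ~ Beta(a, b); x_1..x_N | theta ~ Bernoulli(theta),
# with the N observation nodes drawn inside a plate.  Learning theta is
# ordinary inference over the parameter node given the observed sample.
a, b = 1.0, 1.0                           # prior pseudo-counts (assumed)
data = np.array([1, 0, 1, 1, 1, 0, 1])    # observed sample
a_post = a + data.sum()
b_post = b + len(data) - data.sum()
print("posterior mean of theta:", a_post / (a_post + b_post))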
Item Screening in Graphical Loglinear Rasch Models
ERIC Educational Resources Information Center
Kreiner, Svend; Christensen, Karl Bang
2011-01-01
In behavioural sciences, local dependence and DIF are common, and purification procedures that eliminate items with these weaknesses often result in short scales with poor reliability. Graphical loglinear Rasch models (Kreiner & Christensen, in "Statistical Methods for Quality of Life Studies," ed. by M. Mesbah, F.C. Cole & M.T.…
Parallelized CCHE2D flow model with CUDA Fortran on Graphics Process Units
USDA-ARS?s Scientific Manuscript database
This paper presents the CCHE2D implicit flow model parallelized using CUDA Fortran programming technique on Graphics Processing Units (GPUs). A parallelized implicit Alternating Direction Implicit (ADI) solver using Parallel Cyclic Reduction (PCR) algorithm on GPU is developed and tested. This solve...
Structural and Functional Model of Organization of Geometric and Graphic Training of the Students
ERIC Educational Resources Information Center
Poluyanov, Valery B.; Pyankova, Zhanna A.; Chukalkina, Marina I.; Smolina, Ekaterina S.
2016-01-01
The topicality of the investigated problem is stipulated by the social need for training competitive engineers with a high level of graphical literacy; especially geometric and graphic training of students and its projected results in a competence-based approach; individual characteristics and interests of the students, as well as methodological…
Using Every Word and Image: Framing Graphic Novel Instruction in the Expanded Four Resources Model
ERIC Educational Resources Information Center
Meyer, Carla K.; Jiménez, Laura M.
2017-01-01
In many classrooms, teachers have started to incorporate graphic novels in classroom instruction. However, research has suggested that some readers may have limited understanding of how to read graphic novels, which can create challenges for teachers using the medium. Drawing from a larger study, this article highlights two cases, an expert…
ERIC Educational Resources Information Center
Guerra, Norma S.
2009-01-01
Graphic organizers are powerful visual tools. The representation provides dimension and relationship to ideas and a framework for organization and elaboration. The LIBRE Stick Figure Tool is a graphic organizer for the problem-solving application of the LIBRE Model counseling approach. It resembles a "stick person" and offers the teacher and…
Visual Debugging of Object-Oriented Systems With the Unified Modeling Language
2004-03-01
A Comparison of Learning Style Models and Assessment Instruments for University Graphics Educators
ERIC Educational Resources Information Center
Harris, La Verne Abe; Sadowski, Mary S.; Birchman, Judy A.
2006-01-01
Kolb (2004) and others have defined learning style as a preference by which students learn and remember what they have learned. This presentation will include a summary of learning style research published in the "Engineering Design Graphics Journal" over the past 15 years on the topic of learning styles and graphics education. The…
Analogical and category-based inference: a theoretical integration with Bayesian causal models.
Holyoak, Keith J; Lee, Hee Seung; Lu, Hongjing
2010-11-01
A fundamental issue for theories of human induction is to specify constraints on potential inferences. For inferences based on shared category membership, an analogy, and/or a relational schema, it appears that the basic goal of induction is to make accurate and goal-relevant inferences that are sensitive to uncertainty. People can use source information at various levels of abstraction (including both specific instances and more general categories), coupled with prior causal knowledge, to build a causal model for a target situation, which in turn constrains inferences about the target. We propose a computational theory in the framework of Bayesian inference and test its predictions (parameter-free for the cases we consider) in a series of experiments in which people were asked to assess the probabilities of various causal predictions and attributions about a target on the basis of source knowledge about generative and preventive causes. The theory proved successful in accounting for systematic patterns of judgments about interrelated types of causal inferences, including evidence that analogical inferences are partially dissociable from overall mapping quality.
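As a rough illustration of how knowledge about generative and preventive causes can be combined into a predictive probability, here is a common noisy-OR / noisy-AND-NOT construction; it is not necessarily the authors' exact parameter-free model, and the numbers are invented.

```python
def predict_effect(generative, preventive, background=0.1):
    """P(effect) given the strengths of present causes: generative causes
    combine by noisy-OR, and each preventive cause independently blocks
    the effect (noisy-AND-NOT). Illustrative only."""
    p = background
    for w in generative:
        p = 1.0 - (1.0 - p) * (1.0 - w)
    for w in preventive:
        p *= (1.0 - w)
    return p

# A target with one strong generative and one weak preventive cause
print(predict_effect(generative=[0.8], preventive=[0.3]))  # ~0.57
```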
A Community Based Systems Diagram of Obesity Causes.
Allender, Steven; Owen, Brynle; Kuhlberg, Jill; Lowe, Janette; Nagorcka-Smith, Phoebe; Whelan, Jill; Bell, Colin
2015-01-01
Application of systems thinking to the development, implementation and evaluation of childhood obesity prevention efforts represents the cutting edge of community-based prevention. We report on an approach to developing a system-oriented community perspective on the causes of obesity. Group model building sessions were conducted in a rural Australian community to address increasing childhood obesity. Stakeholders (n = 12) built a community model that progressed from connection circles to causal loop diagrams using scripts from the system dynamics literature. Participants began this work by identifying changes over time in the causes and effects of childhood obesity within their community. The initial causal loop diagram was then reviewed and elaborated by 50 community leaders over a full-day session. The process created a causal loop diagram representing community perceptions of the determinants and causes of obesity. The causal loop diagram can be broken down into four separate domains: social influences; fast food and junk food; participation in sport; and general physical activity. This causal loop diagram can provide the basis for community-led planning of a prevention response that engages with multiple levels of existing settings and systems.
Kernel canonical-correlation Granger causality for multiple time series
NASA Astrophysics Data System (ADS)
Wu, Guorong; Duan, Xujun; Liao, Wei; Gao, Qing; Chen, Huafu
2011-04-01
Canonical-correlation analysis as a multivariate statistical technique has been applied to multivariate Granger causality analysis to infer information flow in complex systems. It shows unique appeal and great superiority over the traditional vector autoregressive method, due to the simplified procedure that detects causal interaction between multiple time series, and the avoidance of potential model estimation problems. However, it is limited to the linear case. Here, we extend the framework of canonical correlation to include the estimation of multivariate nonlinear Granger causality for drawing inference about directed interaction. Its feasibility and effectiveness are verified on simulated data.
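For reference, a sketch of the basic bivariate linear Granger test that such canonical-correlation and kernel extensions generalize, implemented with ordinary least squares and an F-test; the data are synthetic.

```python
import numpy as np
from scipy import stats

def granger_f_test(x, y, p=2):
    """Does y Granger-cause x? Compare an AR(p) model of x against a model
    that adds p lags of y, via an F-test on the residual sums of squares."""
    n = len(x)
    T = n - p
    lags_x = np.column_stack([x[p - k:n - k] for k in range(1, p + 1)])
    lags_y = np.column_stack([y[p - k:n - k] for k in range(1, p + 1)])
    target = x[p:]
    ones = np.ones((T, 1))
    X_restricted = np.hstack([ones, lags_x])
    X_full = np.hstack([ones, lags_x, lags_y])

    rss = lambda X: np.sum((target - X @ np.linalg.lstsq(X, target, rcond=None)[0]) ** 2)
    rss_r, rss_f = rss(X_restricted), rss(X_full)
    df1, df2 = p, T - X_full.shape[1]
    F = ((rss_r - rss_f) / df1) / (rss_f / df2)
    return F, stats.f.sf(F, df1, df2)

# Toy example: y drives x with a one-step delay
rng = np.random.default_rng(0)
y = rng.standard_normal(500)
x = 0.8 * np.roll(y, 1) + 0.2 * rng.standard_normal(500)
print(granger_f_test(x, y, p=2))   # small p-value: y -> x
```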
Flamm, Christoph; Graef, Andreas; Pirker, Susanne; Baumgartner, Christoph; Deistler, Manfred
2013-01-01
Granger causality is a useful concept for studying causal relations in networks. However, numerical problems occur when applying the corresponding methodology to high-dimensional time series showing co-movement, e.g. EEG recordings or economic data. In order to deal with these shortcomings, we propose a novel method for the causal analysis of such multivariate time series based on Granger causality and factor models. We present the theoretical background, successfully assess our methodology with the help of simulated data and show a potential application in EEG analysis of epileptic seizures. PMID:23354014
Inferring causal molecular networks: empirical assessment through a community-based effort.
Hill, Steven M; Heiser, Laura M; Cokelaer, Thomas; Unger, Michael; Nesser, Nicole K; Carlin, Daniel E; Zhang, Yang; Sokolov, Artem; Paull, Evan O; Wong, Chris K; Graim, Kiley; Bivol, Adrian; Wang, Haizhou; Zhu, Fan; Afsari, Bahman; Danilova, Ludmila V; Favorov, Alexander V; Lee, Wai Shing; Taylor, Dane; Hu, Chenyue W; Long, Byron L; Noren, David P; Bisberg, Alexander J; Mills, Gordon B; Gray, Joe W; Kellen, Michael; Norman, Thea; Friend, Stephen; Qutub, Amina A; Fertig, Elana J; Guan, Yuanfang; Song, Mingzhou; Stuart, Joshua M; Spellman, Paul T; Koeppl, Heinz; Stolovitzky, Gustavo; Saez-Rodriguez, Julio; Mukherjee, Sach
2016-04-01
It remains unclear whether causal, rather than merely correlational, relationships in molecular networks can be inferred in complex biological settings. Here we describe the HPN-DREAM network inference challenge, which focused on learning causal influences in signaling networks. We used phosphoprotein data from cancer cell lines as well as in silico data from a nonlinear dynamical model. Using the phosphoprotein data, we scored more than 2,000 networks submitted by challenge participants. The networks spanned 32 biological contexts and were scored in terms of causal validity with respect to unseen interventional data. A number of approaches were effective, and incorporating known biology was generally advantageous. Additional sub-challenges considered time-course prediction and visualization. Our results suggest that learning causal relationships may be feasible in complex settings such as disease states. Furthermore, our scoring approach provides a practical way to empirically assess inferred molecular networks in a causal sense.
DspaceOgre 3D Graphics Visualization Tool
NASA Technical Reports Server (NTRS)
Jain, Abhinandan; Myin, Steven; Pomerantz, Marc I.
2011-01-01
This general-purpose 3D graphics visualization C++ tool is designed for visualization of simulation and analysis data for articulated mechanisms. Examples of such systems are vehicles, robotic arms, biomechanics models, and biomolecular structures. DspaceOgre builds upon the open-source Ogre3D graphics visualization library. It provides additional classes to support the management of complex scenes involving multiple viewpoints and different scene groups, and it can be used as a remote graphics server. The software supports adding programs at the graphics processing unit (GPU) level for better performance, and it improves upon the messaging interface it exposes for use as a visualization server.
Motivation and burnout among top amateur rugby players.
Cresswell, Scott L; Eklund, Robert C
2005-03-01
Self-determination theory has proven to be a useful theoretical explanation of the occurrence of ill-being on a variety of accounts. Self-determination theory may also provide a useful explanation of the occurrence of athlete burnout. To date, limited evidence exists to support links between motivation and burnout. To examine relationships and potential causal directions among burnout and types of motivation differing in degree of self-determination. Data were collected on burnout using the Athlete Burnout Questionnaire and Sport Motivation Scale from 392 top amateur male rugby players. Structural equation modeling procedures were employed to evaluate a measurement model and three conceptually grounded structural models. One conceptual model specified concomitant (noncausal) relationships between burnout and motivations varying in self-determination. The other conceptual models specified causal pathways between burnout and the three motivation variables considered in the investigation (i.e., intrinsic motivation, external regulation, and amotivation). Within the models, amotivation, the least self-determined type of motivation, had a large positive association with burnout. Externally regulated motivation had trivial and nonsignificant relationships with burnout. Self-determined forms of motivation (i.e., intrinsic motivation) exhibited significant negative associations with burnout. Overall the results support the potential utility of a self-determination theory explanation of burnout. As all models displayed reasonable and comparable fits, further research is required to establish the nature (concomitant vs directional causal vs reciprocal causal) of the relationship between burnout and motivation.
NASA Astrophysics Data System (ADS)
Hill, C.
2008-12-01
Low-cost graphics cards today use many relatively simple compute cores to deliver memory bandwidth of more than 100 GB/s and theoretical floating point performance of more than 500 GFlop/s. Right now this performance is, however, only accessible to highly parallel algorithm implementations that (i) can use a hundred or more concurrently executing 32-bit floating point cores, (ii) can work with graphics memory that resides on the graphics card side of the graphics bus and (iii) can be partially expressed in a language that can be compiled by a graphics programming tool. In this talk we describe our experiences implementing a complete, but relatively simple, time-dependent shallow-water equations simulation targeting a cluster of 30 computers, each hosting one graphics card. The implementation takes into account the considerations (i), (ii) and (iii) listed previously. We code our algorithm as a series of numerical kernels. Each kernel is designed to be executed by multiple threads of a single process. Kernels are passed memory blocks to compute over, which can be persistent blocks of memory on a graphics card. Each kernel is individually implemented using the NVIDIA CUDA language but driven from a higher-level supervisory code that is almost identical to a standard model driver. The supervisory code controls the overall simulation timestepping, but is written to minimize data transfer between main memory and graphics memory (a massive performance bottleneck on current systems). Using the recipe outlined we can boost the performance of our cluster by nearly an order of magnitude, relative to the same algorithm executing only on the cluster CPUs. Achieving this performance boost requires that many threads are available to each graphics processor for execution within each numerical kernel and that the simulation's working set of data can fit into the graphics card memory. As we describe, this puts interesting upper and lower bounds on the problem sizes for which this technology is currently most useful. However, many interesting problems fit within this envelope. Looking forward, we extrapolate our experience to estimate full-scale ocean model performance and applicability. Finally, we describe preliminary hybrid mixed 32-bit and 64-bit experiments with graphics cards that support 64-bit arithmetic, albeit at lower performance.
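A hedged Python sketch of the design principle described here, with CuPy standing in for hand-written CUDA kernels: device-resident state is updated by elementwise "kernels" over many timesteps and copied back to host memory only once. The 1-D shallow-water update is deliberately oversimplified and illustrative only.

```python
import numpy as np
import cupy as cp   # assumes a CUDA-capable GPU with CuPy installed

nx, nt, g, dx, dt = 1024, 5000, 9.81, 10.0, 0.1

# State lives in GPU memory for the whole run, mirroring the goal of
# minimizing host<->device transfers across the graphics bus.
h = cp.ones(nx) + 0.1 * cp.exp(-((cp.arange(nx) - nx / 2) ** 2) / 200.0)
u = cp.zeros(nx)

for _ in range(nt):
    # very simplified periodic 1-D shallow-water update ("kernels")
    dhdx = (cp.roll(h, -1) - cp.roll(h, 1)) / (2 * dx)
    dudx = (cp.roll(u, -1) - cp.roll(u, 1)) / (2 * dx)
    u -= dt * g * dhdx
    h -= dt * cp.mean(h) * dudx

result = cp.asnumpy(h)   # single transfer back to host memory
print(result.max())
```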
Employing heat maps to mine associations in structured routine care data.
Toddenroth, Dennis; Ganslandt, Thomas; Castellanos, Ixchel; Prokosch, Hans-Ulrich; Bürkle, Thomas
2014-02-01
Mining the electronic medical record (EMR) has the potential to deliver new medical knowledge about causal effects, which are hidden in statistical associations between different patient attributes. It is our goal to detect such causal mechanisms within current research projects which include e.g. the detection of determinants of imminent ICU readmission. An iterative statistical approach to examine each set of considered attribute pairs delivers potential answers but is difficult to interpret. Therefore, we aimed to improve the interpretation of the resulting matrices by the use of heat maps. We propose strategies to adapt heat maps for the search for associations and causal effects within routine EMR data. Heat maps visualize tabulated metric datasets as grid-like choropleth maps, and thus present measures of association between numerous attribute pairs clearly arranged. Basic assumptions about plausible exposures and outcomes are used to allocate distinct attribute sets to both matrix dimensions. The image then avoids certain redundant graphical elements and provides a clearer picture of the supposed associations. Specific color schemes have been chosen to incorporate preexisting information about similarities between attributes. The use of measures of association as a clustering input has been taken as a trigger to apply transformations which ensure that distance metrics always assume finite values and treat positive and negative associations in the same way. To evaluate the general capability of the approach, we conducted analyses of simulated datasets and assessed diagnostic and procedural codes in a large routine care dataset. Simulation results demonstrate that the proposed clustering procedure rearranges attributes similar to simulated statistical associations. Thus, heat maps are an excellent tool to indicate whether associations concern the same attributes or different ones, and whether affected attribute sets conform to any preexisting relationship between attributes. The dendrograms help in deciding if contiguous sequences of attributes effectively correspond to homogeneous attribute associations. The exemplary analysis of a routine care dataset revealed patterns of associations that follow plausible medical constellations for several diseases and the associated medical procedures and activities. Cases with breast cancer (ICD C50), for example, appeared to be associated with radiation therapy (8-52). In cross check, approximately 60 percent of the attribute pairs in this dataset showed a strong negative association, which can be explained by diseases treated in a medical specialty which routinely does not perform the respective procedures in these cases. The corresponding diagram clearly reflects these relationships in the shape of coherent subareas. We could demonstrate that heat maps of measures of association are effective for the visualization of patterns in routine care EMRs. The adjustable method for the assignment of attributes to image dimensions permits a balance between the display of ample information and a favorable level of graphical complexity. The scope of the search can be adapted by the use of pre-existing assumptions about plausible effects to select exposure and outcome attributes. Thus, the proposed method promises to simplify the detection of undiscovered causal effects within routine EMR data. Copyright © 2013 Elsevier B.V. All rights reserved.
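A minimal sketch of the general technique, assuming hypothetical binary diagnosis and procedure attributes: compute an association matrix between exposure and outcome attributes and render it as a clustered heat map (here with seaborn's clustermap). The column names and the planted association are invented, not taken from the study's data.

```python
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

# Hypothetical binary routine-care attributes (columns are illustrative only)
rng = np.random.default_rng(1)
df = pd.DataFrame(rng.integers(0, 2, size=(500, 6)),
                  columns=["C50", "radiation_8_52", "E11", "insulin",
                           "I50", "icu_readmit"])
df["radiation_8_52"] = df["radiation_8_52"] | df["C50"]   # plant one association

exposures = ["C50", "E11", "I50"]
outcomes = ["radiation_8_52", "insulin", "icu_readmit"]

# Association matrix (phi coefficient = Pearson r for binary attributes),
# with exposures on one axis and outcomes on the other
assoc = pd.DataFrame({o: [df[e].corr(df[o]) for e in exposures] for o in outcomes},
                     index=exposures)

# Clustered heat map: dendrograms group similarly behaving attributes
sns.clustermap(assoc, cmap="vlag", center=0, annot=True)
plt.show()
```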
Disentangling the causal relationships between work-home interference and employee health.
van Hooff, Madelon L M; Geurts, Sabine A E; Taris, Toon W; Kompier, Michiel A J; Dikkers, Josje S E; Houtman, Irene L D; van den Heuvel, Floor M M
2005-02-01
The present study was designed to investigate the causal relationships between (time- and strain-based) work-home interference and employee health. The effort-recovery theory provided the theoretical basis for this study. Two-phase longitudinal data (with a 1-year time lag) were gathered from 730 Dutch police officers to test the following hypotheses with structural equation modeling: (i) work-home interference predicts health deterioration, (ii) health complaints precede increased levels of such interference, and (iii) both processes operate. The relationship between stable and changed levels of work-home interference across time and their relationships with the course of health were tested with a group-by-time analysis of variance. Four subgroups were created that differed in starting point and the development of work-home interference across time. The normal causal model, in which strain-based (but not time-based) work-home interference was longitudinally related to increased health complaints 1 year later, fit the data well and significantly better than the reversed causal model. Although the reciprocal model also provided a good fit, it was less parsimonious than the normal causal model. In addition, both an increment in (strain-based) work-home interference across time and a long-lasting experience of high (strain-based) work-home interference were associated with a deterioration in health. It was concluded that (strain-based) work-home interference acts as a precursor of health impairment and that different patterns of (strain-based) work-home interference across time are related to different health courses. Particularly long-term experience of (strain-based) work-home interference seems responsible for an accumulation of health complaints.
NASA Astrophysics Data System (ADS)
Ben Mbarek, Mounir; Saidi, Kais; Amamri, Mounira
2018-07-01
This document investigates the causal relationship between nuclear energy (NE), pollutant emissions (CO2 emissions), gross domestic product (GDP) and renewable energy (RE) using dynamic panel data models for a global panel consisting of 18 countries (developed and developing) covering the 1990-2013 period. Our results indicate that there is cointegration between the variables. The unit root tests suggest that all the variables are stationary in first differences. The paper further examines the links using Granger causality analysis within a vector error correction model, which indicates a unidirectional relationship running from GDP per capita to pollutant emissions for the developed and developing countries. However, there is a unidirectional causality from GDP per capita to RE in the short and long run. This finding confirms the conservation hypothesis. Similarly, there is no causality between NE and GDP per capita.
Cartographic symbol library considering symbol relations based on anti-aliasing graphic library
NASA Astrophysics Data System (ADS)
Mei, Yang; Li, Lin
2007-06-01
Cartographic visualization represents geographic information in map form, which enables us to retrieve useful geospatial information. In a digital environment, the cartographic symbol library is the basis of cartographic visualization and an essential component of a Geographic Information System. Existing cartographic symbol libraries have two flaws: display quality and the adjustment of symbol relations. Statistical data presented in this paper indicate that aliasing is a major factor affecting symbol display quality on graphic display devices. Therefore, effective graphic anti-aliasing methods based on a new anti-aliasing algorithm are presented and encapsulated in an anti-aliasing graphic library in the form of a Component Object Model. Furthermore, cartographic visualization should represent feature relations by correctly adjusting symbol relations, in addition to displaying individual features. Current cartographic symbol libraries, however, do not have this capability. This paper presents a cartographic symbol design model to implement the adjustment of symbol relations. Consequently, a cartographic symbol library based on this design model can provide cartographic visualization with relation-adjusting capability. The anti-aliasing graphic library and the cartographic symbol library are evaluated with samples, and the results show that both libraries offer better efficiency and effectiveness.
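The paper's anti-aliasing algorithm is not reproduced here; the sketch below shows a generic coverage-based approach, where each pixel's intensity is the fraction of subpixel samples that fall within half the line width of the segment.

```python
import numpy as np

def aa_line(w, h, p0, p1, width=1.0, samples=4):
    """Rasterize a line segment with coverage-based anti-aliasing:
    pixel intensity = fraction of subpixel samples within width/2 of the segment."""
    img = np.zeros((h, w))
    (x0, y0), (x1, y1) = p0, p1
    d = np.array([x1 - x0, y1 - y0], float)
    L2 = d @ d
    offs = (np.arange(samples) + 0.5) / samples          # subpixel sample offsets
    for y in range(h):
        for x in range(w):
            hits = 0
            for oy in offs:
                for ox in offs:
                    p = np.array([x + ox - x0, y + oy - y0])
                    t = np.clip((p @ d) / L2, 0.0, 1.0)   # nearest point on segment
                    if np.linalg.norm(p - t * d) <= width / 2:
                        hits += 1
            img[y, x] = hits / samples ** 2
    return img

print(np.round(aa_line(8, 8, (0.5, 0.5), (7.5, 5.5)), 2))
```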
NASA Astrophysics Data System (ADS)
Schröter, Kai; Elmer, Florian; Trieselmann, Werner; Kreibich, Heidi; Kunz, Michael; Khazai, Bijan; Dransch, Doris; Wenzel, Friedemann; Zschau, Jochen; Merz, Bruno; Mühr, Bernhard; Kunz-Plapp, Tina; Möhrle, Stella; Bessel, Tina; Fohringer, Joachim
2014-05-01
The Central European flood of June 2013 is one of the most severe flood events that have occurred in Central Europe in the past decades. All major German river basins were affected (Rhine, Danube, and Elbe, as well as the smaller Weser catchment). In terms of spatial extent and event magnitude, it was the most severe event at least since 1950. Within the current research focus on near real time forensic disaster analysis, the Center for Disaster Management and Risk Reduction Technology (CEDIM) assessed and analysed the multiple facets of the flood event from the beginning. The aim is to describe the on-going event, analyse the event sources, link the physical characteristics to the impact and consequences of the event and to understand the root causes that turn the physical event into a disaster (or prevent it from becoming disastrous). For the near real time component of this research, tools for rapid assessment and concise presentation of analysis results are essential. This contribution provides a graphical summary of the results of the CEDIM-FDA analyses on the June 2013 flood. It demonstrates the potential of visual representations for improving the communication and hence usability of findings in a rapid, intelligible and expressive way as a valuable supplement to usual event reporting. It is based on analyses of the hydrometeorological sources, the flood pathways (from satellite imagery and data extraction from social media), the resilience of the affected regions, and causal loss analysis. The prototypical representation of the FDA results for the June 2013 flood provides an important step in the development of graphical event templates for the visualisation of forensic disaster analyses. These are intended to become a standard component of future CEDIM-FDA event activities.
ERIC Educational Resources Information Center
Benbenishty, Rami; Astor, Ron Avi; Roziner, Ilan; Wrabel, Stephani L.
2016-01-01
The present study explores the causal link between school climate, school violence, and a school's general academic performance over time using a school-level, cross-lagged panel autoregressive modeling design. We hypothesized that reductions in school violence and climate improvement would lead to schools' overall improved academic performance.…
Taking a systems approach to ecological systems
Grace, James B.
2015-01-01
Increasingly, there is interest in a systems-level understanding of ecological problems, which requires the evaluation of more complex, causal hypotheses. In this issue of the Journal of Vegetation Science, Soliveres et al. use structural equation modeling to test a causal network hypothesis about how tree canopies affect understorey communities. Historical analysis suggests structural equation modeling has been under-utilized in ecology.
ERIC Educational Resources Information Center
Bartolucci, Francesco; Pennoni, Fulvia; Vittadini, Giorgio
2016-01-01
We extend to the longitudinal setting a latent class approach that was recently introduced by Lanza, Coffman, and Xu to estimate the causal effect of a treatment. The proposed approach enables an evaluation of multiple treatment effects on subpopulations of individuals from a dynamic perspective, as it relies on a latent Markov (LM) model that is…
Examining a Causal Model of Early Drug Involvement Among Inner City Junior High School Youths.
ERIC Educational Resources Information Center
Dembo, Richard; And Others
Reflecting the need to construct more inclusive, socially and culturally relevant conceptions of drug use than currently exist, the determinants of drug involvement among inner-city youths within the context of a causal model were investigated. The drug involvement of the Black and Puerto Rican junior high school girls and boys was hypothesized to…
Flexible Environmental Modeling with Python and Open-GIS
NASA Astrophysics Data System (ADS)
Pryet, Alexandre; Atteia, Olivier; Delottier, Hugo; Cousquer, Yohann
2015-04-01
Numerical modeling now represents a prominent task in environmental studies. During the last decades, numerous commercial programs have been made available to environmental modelers. These software applications offer user-friendly graphical user interfaces that allow efficient management of many case studies. However, they suffer from a lack of flexibility, and closed-source policies impede source code review and enhancement for original studies. Advanced modeling studies require flexible tools capable of managing thousands of model runs for parameter optimization, uncertainty and sensitivity analysis. In addition, there is a growing need for the coupling of various numerical models, associating, for instance, groundwater flow modeling with multi-species geochemical reactions. Researchers have produced hundreds of powerful open-source command-line programs. However, there is a need for a flexible graphical user interface allowing efficient processing of the geospatial data that comes along with any environmental study. Here, we present the advantages of using the free and open-source QGIS platform and the Python scripting language for conducting environmental modeling studies. The interactive graphical user interface is first used for the visualization and pre-processing of input geospatial datasets. The Python scripting language is then employed for further input data processing, calls to one or several models, and post-processing of model outputs. Model results are eventually sent back to the GIS program, processed and visualized. This approach combines the advantages of interactive graphical interfaces and the flexibility of the Python scripting language for data processing and model calls. The numerous Python modules available facilitate geospatial data processing and numerical analysis of model outputs. Once the input data have been prepared with the graphical user interface, models may be run thousands of times from the command line with sequential or parallel calls. We illustrate this approach with several case studies in groundwater hydrology and geochemistry and provide links to several Python libraries that facilitate pre- and post-processing operations.
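A small sketch of the command-line batch-run pattern described here (parameter sweep, one model call per parameter set, parallel execution). The model executable name and parameter-file format are placeholders, not any specific groundwater code.

```python
import itertools
import subprocess
from concurrent.futures import ProcessPoolExecutor
from pathlib import Path

MODEL_EXE = "./groundwater_model"        # placeholder command-line model

def run_case(params):
    """Write a parameter file, call the model once, return the run directory."""
    k, recharge = params
    run_dir = Path(f"runs/k{k}_r{recharge}")
    run_dir.mkdir(parents=True, exist_ok=True)
    (run_dir / "params.txt").write_text(
        f"hydraulic_conductivity {k}\nrecharge {recharge}\n")
    subprocess.run([MODEL_EXE, "params.txt"], cwd=run_dir, check=True)
    return run_dir

if __name__ == "__main__":
    grid = itertools.product([1e-5, 1e-4, 1e-3], [50, 100, 200])   # parameter sweep
    with ProcessPoolExecutor(max_workers=4) as pool:               # parallel model calls
        for done in pool.map(run_case, grid):
            print("finished", done)
```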
Model Evaluation of Continuous Data Pharmacometric Models: Metrics and Graphics
Nguyen, THT; Mouksassi, M‐S; Holford, N; Al‐Huniti, N; Freedman, I; Hooker, AC; John, J; Karlsson, MO; Mould, DR; Pérez Ruixo, JJ; Plan, EL; Savic, R; van Hasselt, JGC; Weber, B; Zhou, C; Comets, E
2017-01-01
This article represents the first in a series of tutorials on model evaluation in nonlinear mixed effect models (NLMEMs), from the International Society of Pharmacometrics (ISoP) Model Evaluation Group. Numerous tools are available for evaluation of NLMEM, with a particular emphasis on visual assessment. This first basic tutorial focuses on presenting graphical evaluation tools of NLMEM for continuous data. It illustrates graphs for correct or misspecified models, discusses their pros and cons, and recalls the definition of metrics used. PMID:27884052
Kendler, K. S.; Jacobson, K.; Myers, J. M.; Eaves, L. J.
2014-01-01
Background Conduct disorder (CD) and peer deviance (PD) both powerfully predict future externalizing behaviors. Although levels of CD and PD are strongly correlated, the causal relationship between them has remained controversial and has not been examined by a genetically informative study. Method Levels of CD and PD were assessed in 746 adult male–male twin pairs at personal interview for ages 8–11, 12–14 and 15–17 years using a life history calendar. Model fitting was performed using the Mx program. Results The best-fit model indicated an active developmental relationship between CD and PD including forward transmission of both traits over time and strong causal relationships between CD and PD within time periods. The best-fit model indicated that the causal relationship for genetic risk factors was from CD to PD and was constant over time. For common environmental factors, the causal pathways ran from PD to CD and were stronger in earlier than later age periods. Conclusions A genetically informative model revealed causal pathways difficult to elucidate by other methods. Genes influence risk for CD, which, through social selection, impacts on the deviance of peers. Shared environment, through family and community processes, encourages or discourages adolescent deviant behavior, which, via social influence, alters risk for CD. Social influence is more important than social selection in childhood, but by late adolescence social selection becomes predominant. These findings have implications for prevention efforts for CD and associated externalizing disorders. PMID:17935643
PRay - A graphical user interface for interactive visualization and modification of rayinvr models
NASA Astrophysics Data System (ADS)
Fromm, T.
2016-01-01
PRay is a graphical user interface for interactive displaying and editing of velocity models for seismic refraction. It is optimized for editing rayinvr models but can also be used as a dynamic viewer for ray tracing results from other software. The main features are the graphical editing of nodes and fast adjusting of the display (stations and phases). It can be extended by user-defined shell scripts and links to phase picking software. PRay is open source software written in the scripting language Perl, runs on Unix-like operating systems including Mac OS X and provides a version controlled source code repository for community development (https://sourceforge.net/projects/pray-plot-rayinvr/).
Exploratory Causal Analysis in Bivariate Time Series Data
NASA Astrophysics Data System (ADS)
McCracken, James M.
Many scientific disciplines rely on observational data of systems for which it is difficult (or impossible) to implement controlled experiments, and data analysis techniques are required for identifying causal information and relationships directly from observational data. This need has led to the development of many different time series causality approaches and tools including transfer entropy, convergent cross-mapping (CCM), and Granger causality statistics. In this thesis, the existing time series causality method of CCM is extended by introducing a new method called pairwise asymmetric inference (PAI). It is found that CCM may provide counter-intuitive causal inferences for simple dynamics with strong intuitive notions of causality, and the CCM causal inference can be a function of physical parameters that are seemingly unrelated to the existence of a driving relationship in the system. For example, a CCM causal inference might alternate between "voltage drives current" and "current drives voltage" as the frequency of the voltage signal is changed in a series circuit with a single resistor and inductor. PAI is introduced to address both of these limitations. Many of the current approaches in the time series causality literature are not computationally straightforward to apply, do not follow directly from assumptions of probabilistic causality, depend on assumed models for the time series generating process, or rely on embedding procedures. A new approach, called causal leaning, is introduced in this work to avoid these issues. The leaning is found to provide causal inferences that agree with intuition for both simple systems and more complicated empirical examples, including space weather data sets. The leaning may provide a clearer interpretation of the results than those from existing time series causality tools. A practicing analyst can explore the literature to find many proposals for identifying drivers and causal connections in time series data sets, but little research exists on how these tools compare to each other in practice. This work introduces and defines exploratory causal analysis (ECA) to address this issue, along with the concept of data causality in the taxonomy of causal studies introduced in this work. The motivation is to provide a framework for exploring potential causal structures in time series data sets. ECA is used on several synthetic and empirical data sets, and it is found that all of the tested time series causality tools agree with each other (and intuitive notions of causality) for many simple systems but can provide conflicting causal inferences for more complicated systems. It is proposed that such disagreements between different time series causality tools during ECA might provide deeper insight into the data than could be found otherwise.
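For orientation, a bare-bones sketch of convergent cross-mapping (the method PAI extends), omitting the library-length convergence test: a delay embedding of Y is used to cross-map X, and high skill suggests that X influences Y. The coupled system below is synthetic.

```python
import numpy as np

def ccm_skill(x, y, E=3, tau=1):
    """Cross-map X from the delay embedding of Y (bare-bones CCM):
    high skill suggests information about X is encoded in Y, i.e. X drives Y."""
    n = len(y) - (E - 1) * tau
    M = np.column_stack([y[i * tau:i * tau + n] for i in range(E)])  # shadow manifold of Y
    target = x[(E - 1) * tau:]
    preds = np.empty(n)
    for t in range(n):
        d = np.linalg.norm(M - M[t], axis=1)
        d[t] = np.inf
        nn = np.argsort(d)[:E + 1]                       # E+1 nearest neighbours
        w = np.exp(-d[nn] / (d[nn][0] + 1e-12))
        preds[t] = np.sum(w * target[nn]) / np.sum(w)
    return np.corrcoef(preds, target)[0, 1]

# Toy coupled system in which x drives y
rng = np.random.default_rng(2)
x = np.sin(np.linspace(0, 60, 1000)) + 0.1 * rng.standard_normal(1000)
y = np.zeros_like(x)
for t in range(1, len(x)):
    y[t] = 0.7 * y[t - 1] + 0.5 * x[t - 1]
print("cross-map skill (x -> y):", round(ccm_skill(x, y), 2))
```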
Schomerus, G; Matschinger, H; Angermeyer, M C
2014-01-01
There is an ongoing debate whether biological illness explanations improve tolerance towards persons with mental illness or not. Several theoretical models have been proposed to predict the relationship between causal beliefs and social acceptance. This study uses path models to compare different theoretical predictions regarding attitudes towards persons with schizophrenia, depression and alcohol dependence. In a representative population survey in Germany (n = 3642), we elicited agreement with belief in biogenetic causes, current stress and childhood adversities as causes of either disorder as described in an unlabelled case vignette. We further elicited potentially mediating attitudes related to different theories about the consequences of biogenetic causal beliefs (attribution theory: onset responsibility, offset responsibility; genetic essentialism: differentness, dangerousness; genetic optimism: treatability) and social acceptance. For each vignette condition, we calculated a multiple mediator path model containing all variables. Biogenetic beliefs were associated with lower social acceptance in schizophrenia and depression, and with higher acceptance in alcohol dependence. In schizophrenia and depression, perceived differentness and dangerousness mediated the largest indirect effects, the consequences of biogenetic causal explanations thus being in accordance with the predictions of genetic essentialism. Psychosocial causal beliefs had differential effects: belief in current stress as a cause was associated with higher acceptance in schizophrenia, while belief in childhood adversities resulted in lower acceptance of a person with depression. Biological causal explanations seem beneficial in alcohol dependence, but harmful in schizophrenia and depression. The negative correlates of believing in childhood adversities as a cause of depression merit further exploration.
Havlicek, Martin; Jan, Jiri; Brazdil, Milan; Calhoun, Vince D.
2015-01-01
Increasing interest in understanding dynamic interactions of brain neural networks leads to the formulation of sophisticated connectivity analysis methods. Recent studies have applied Granger causality based on standard multivariate autoregressive (MAR) modeling to assess brain connectivity. Nevertheless, one important flaw of this commonly proposed method is that it requires the analyzed time series to be stationary, whereas this assumption is usually violated due to the weakly nonstationary nature of functional magnetic resonance imaging (fMRI) time series. Therefore, we propose an approach to dynamic Granger causality in the frequency domain for evaluating functional network connectivity in fMRI data. The effectiveness and robustness of the dynamic approach were significantly improved by combining forward and backward Kalman filters, which improved estimates compared to standard time-invariant MAR modeling. In our method, the functional networks were first detected by independent component analysis (ICA), a computational method for separating a multivariate signal into maximally independent components. Then the measure of Granger causality was evaluated using generalized partial directed coherence, which is suitable for bivariate as well as multivariate data. Moreover, this metric provides identification of causal relations in the frequency domain, which allows one to distinguish the frequency components related to the experimental paradigm. The procedure of evaluating Granger causality via dynamic MAR was demonstrated on simulated time series as well as on two sets of group fMRI data collected during an auditory sensorimotor (SM) or auditory oddball discrimination (AOD) task. Finally, a comparison with the results obtained from a standard time-invariant MAR model was provided. PMID:20561919
The Graphical Representation of the Digital Astronaut Physiology Backbone
NASA Technical Reports Server (NTRS)
Briers, Demarcus
2010-01-01
This report summarizes my internship project with the NASA Digital Astronaut Project to analyze the Digital Astronaut (DA) physiology backbone model. The Digital Astronaut Project (DAP) applies integrated physiology models to support space biomedical operations, and to assist NASA researchers in closing knowledge gaps related to human physiologic responses to space flight. The DA physiology backbone is a set of integrated physiological equations and functions that model the interacting systems of the human body. The current release of the model is HumMod (Human Model) version 1.5 and was developed over forty years at the University of Mississippi Medical Center (UMMC). The physiology equations and functions are scripted in an XML schema specifically designed for physiology modeling by Dr. Thomas G. Coleman at UMMC. Currently it is difficult to examine the physiology backbone without being knowledgeable of the XML schema. While investigating and documenting the tags and algorithms used in the XML schema, I proposed a standard methodology for a graphical representation. This standard methodology may be used to transcribe graphical representations from the DA physiology backbone. In turn, the graphical representations can allow examination of the physiological functions and equations without the need to be familiar with the computer programming languages or markup languages used by DA modeling software.
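A hedged sketch of how a dependency graph for graphical representation could be extracted from a physiology-equation XML file. The schema below is entirely invented for illustration; the real HumMod XML tags and structure differ.

```python
import xml.etree.ElementTree as ET
import networkx as nx

# Entirely hypothetical schema -- the actual HumMod XML tags are different.
XML = """
<model>
  <variable name="CardiacOutput">
    <uses name="HeartRate"/><uses name="StrokeVolume"/>
  </variable>
  <variable name="MeanArterialPressure">
    <uses name="CardiacOutput"/><uses name="TotalPeripheralResistance"/>
  </variable>
</model>
"""

graph = nx.DiGraph()
for var in ET.fromstring(XML).findall("variable"):
    for dep in var.findall("uses"):
        graph.add_edge(dep.get("name"), var.get("name"))   # dependency -> variable

print(list(graph.edges))
# A diagram could then be produced with, e.g., nx.draw_networkx(graph)
```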
Graphic gambling warnings: how they affect emotions, cognitive responses and attitude change.
Muñoz, Yaromir; Chebat, Jean-Charles; Borges, Adilson
2013-09-01
The present study focuses on the effects of graphic warnings related to excessive gambling. It is based upon a theoretical model derived from both the Protection Motivation Theory (PMT) and the Elaboration Likelihood Model (ELM). We focus on video lottery terminals (VLTs), one of the most hazardous formats in the gaming industry. Our cohort consisted of 103 actual gamblers who reported regular previous gambling activity on VLTs. We assess the effectiveness of graphic warnings vs. text-only warnings and the effectiveness of two major arguments (i.e., family vs. financial disruption). A 2 × 2 factorial design was used to test the direct and combined effects of two variables (i.e., warning content and presence vs. absence of a graphic). It was found that the presence of a graphic enhances both cognitive appraisal and fear, and has positive effects on the Depth of Information Processing. In addition, graphic content combined with the family-disruption argument is more effective for changing attitudes and complying with the warning than the other combinations of the manipulated variables. It is proposed that ELM and PMT complement each other to explain the effects of warnings. Theoretical and practical implications are discussed.
Implementing Project Based Learning Approach to Graphic Design Course
ERIC Educational Resources Information Center
Riyanti, Menul Teguh; Erwin, Tuti Nuriah; Suriani, S. H.
2017-01-01
The purpose of this study was to develop a learning model based Commercial Graphic Design Drafting project-based learning approach, was chosen as a strategy in the learning product development research. University students as the target audience of this model are the students of the fifth semester Visual Communications Design Studies Program…
GRASP - A Prototype Interactive Graphic Sawing Program - (Forest Products Journal)
Luis G. Occeña; Daniel L. Schmoldt
1996-01-01
A versatile microcomputer-based interactive graphics sawing program has been developed as a tool for modeling various hardwood processes, from bucking and topping to log sawing, lumber edging, secondary processing, and even veneering. The microcomputer platform makes the tool affordable and accessible. A solid modeling basis provides the tool with a sound geometrical...
Incorporating Solid Modeling and Team-Based Design into Freshman Engineering Graphics.
ERIC Educational Resources Information Center
Buchal, Ralph O.
2001-01-01
Describes the integration of these topics through a major team-based design and computer aided design (CAD) modeling project in freshman engineering graphics at the University of Western Ontario. Involves n=250 students working in teams of four to design and document an original Lego toy. Includes 12 references. (Author/YDS)
GRASP - A Prototype Interactive Graphic Sawing Program - (MU-IE Technical Report)
Luis G. Occeña; Daniel L. Schmoldt
1995-01-01
A versatile microcomputer-based interactive graphics program has been developed as a tool for modeling various hardwood processes, from bucking and topping to log sawing, lumber edging, secondary processing, even veneering. The microcomputer platform makes the tool affordable and accessible.A solid modeling basis provides the tool with a sound geometrical and...
Inductive Reasoning about Causally Transmitted Properties
ERIC Educational Resources Information Center
Shafto, Patrick; Kemp, Charles; Bonawitz, Elizabeth Baraff; Coley, John D.; Tenenbaum, Joshua B.
2008-01-01
Different intuitive theories constrain and guide inferences in different contexts. Formalizing simple intuitive theories as probabilistic processes operating over structured representations, we present a new computational model of category-based induction about causally transmitted properties. A first experiment demonstrates undergraduates'…
Encoding dependence in Bayesian causal networks
USDA-ARS?s Scientific Manuscript database
Bayesian networks (BNs) represent complex, uncertain spatio-temporal dynamics by propagation of conditional probabilities between identifiable states with a testable causal interaction model. Typically, they assume random variables are discrete in time and space with a static network structure that ...
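Not the framework described in this entry, but a minimal illustration of the underlying mechanism: conditional probabilities propagated through a small causal chain by enumerating the joint distribution. The states and numbers are invented.

```python
import itertools

# Causal chain: Rain -> WetSoil -> Runoff (binary states; illustrative CPTs)
P_rain = {True: 0.3, False: 0.7}
P_wet = {True: {True: 0.9, False: 0.1},          # P(wet | rain)
         False: {True: 0.2, False: 0.8}}
P_runoff = {True: {True: 0.7, False: 0.3},       # P(runoff | wet)
            False: {True: 0.05, False: 0.95}}

def joint(r, w, o):
    """Joint probability factorized along the causal chain."""
    return P_rain[r] * P_wet[r][w] * P_runoff[w][o]

# Query P(Runoff | Rain=True) by summing the joint over the hidden node
num = sum(joint(True, w, True) for w in (True, False))
den = sum(joint(True, w, o) for w, o in itertools.product((True, False), repeat=2))
print(round(num / den, 3))   # 0.635
```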
Structure induction in diagnostic causal reasoning.
Meder, Björn; Mayrhofer, Ralf; Waldmann, Michael R
2014-07-01
Our research examines the normative and descriptive adequacy of alternative computational models of diagnostic reasoning from single effects to single causes. Many theories of diagnostic reasoning are based on the normative assumption that inferences from an effect to its cause should reflect solely the empirically observed conditional probability of cause given effect. We argue against this assumption, as it neglects alternative causal structures that may have generated the sample data. Our structure induction model of diagnostic reasoning takes into account the uncertainty regarding the underlying causal structure. A key prediction of the model is that diagnostic judgments should not only reflect the empirical probability of cause given effect but should also depend on the reasoner's beliefs about the existence and strength of the link between cause and effect. We confirmed this prediction in 2 studies and showed that our theory better accounts for human judgments than alternative theories of diagnostic reasoning. Overall, our findings support the view that in diagnostic reasoning people go "beyond the information given" and use the available data to make inferences on the (unobserved) causal rather than on the (observed) data level. (c) 2014 APA, all rights reserved.
Early prediction of extreme stratospheric polar vortex states based on causal precursors
NASA Astrophysics Data System (ADS)
Kretschmer, Marlene; Runge, Jakob; Coumou, Dim
2017-08-01
Variability in the stratospheric polar vortex (SPV) can influence the tropospheric circulation and thereby winter weather. Early predictions of extreme SPV states are thus important to improve forecasts of winter weather including cold spells. However, dynamical models are usually restricted in lead time because they poorly capture low-frequency processes. Empirical models often suffer from overfitting problems as the relevant physical processes and time lags are often not well understood. Here we introduce a novel empirical prediction method by uniting a response-guided community detection scheme with a causal discovery algorithm. This way, we objectively identify causal precursors of the SPV at subseasonal lead times and find them to be in good agreement with known physical drivers. A linear regression prediction model based on the causal precursors can explain most SPV variability (r2 = 0.58), and our scheme correctly predicts 58% (46%) of extremely weak SPV states for lead times of 1-15 (16-30) days with false-alarm rates of only approximately 5%. Our method can be applied to any variable relevant for (sub)seasonal weather forecasts and could thus help improving long-lead predictions.
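A toy sketch of the prediction step described here: a linear regression on precursor indices at a subseasonal lead, evaluated by hit and false-alarm rates for extreme states. The precursor time series and coefficients are synthetic stand-ins, not the authors' causal precursors.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
n, lead = 2000, 15                                    # predict 15 days ahead
precursors = rng.standard_normal((n, 3))              # stand-ins for causal precursor indices
beta = np.array([0.6, -0.4, 0.3])
spv = np.zeros(n)
spv[lead:] = precursors[:-lead] @ beta + 0.7 * rng.standard_normal(n - lead)

X, y = precursors[:-lead], spv[lead:]                 # precursors at t, SPV index at t + lead
split = int(0.7 * len(y))
model = LinearRegression().fit(X[:split], y[:split])
pred, obs = model.predict(X[split:]), y[split:]
print("r^2:", round(model.score(X[split:], obs), 2))

thr = np.quantile(obs, 0.1)                           # "extremely weak vortex" threshold
hit_rate = np.mean(pred[obs < thr] < thr)             # extremes that were predicted
false_alarm = np.mean(obs[pred < thr] >= thr)         # predicted extremes that did not occur
print("hits:", round(hit_rate, 2), "false alarms:", round(false_alarm, 2))
```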
Prediction versus aetiology: common pitfalls and how to avoid them.
van Diepen, Merel; Ramspek, Chava L; Jager, Kitty J; Zoccali, Carmine; Dekker, Friedo W
2017-04-01
Prediction research is a distinct field of epidemiologic research, which should be clearly separated from aetiological research. Both prediction and aetiology make use of multivariable modelling, but the underlying research aim and interpretation of results are very different. Aetiology aims at uncovering the causal effect of a specific risk factor on an outcome, adjusting for confounding factors that are selected based on pre-existing knowledge of causal relations. In contrast, prediction aims at accurately predicting the risk of an outcome using multiple predictors collectively, where the final prediction model is usually based on statistically significant, but not necessarily causal, associations in the data at hand. In both scientific and clinical practice, however, the two are often confused, resulting in poor-quality publications with limited interpretability and applicability. A major problem is the frequently encountered aetiological interpretation of prediction results, where individual variables in a prediction model are attributed causal meaning. This article stresses the differences in use and interpretation of aetiological and prediction studies, and gives examples of common pitfalls. © The Author 2017. Published by Oxford University Press on behalf of ERA-EDTA. All rights reserved.
Merlo, Domenico F; Filiberti, Rosangela; Kobernus, Michael; Bartonova, Alena; Gamulin, Marija; Ferencic, Zeljko; Dusinska, Maria; Fucic, Aleksandra
2012-06-28
Development of graphical/visual presentations of cancer etiology caused by environmental stressors is a process that requires combining the complex biological interactions between xenobiotics in living and occupational environment with genes (gene-environment interaction) and genomic and non-genomic based disease specific mechanisms in living organisms. Traditionally, presentation of causal relationships includes the statistical association between exposure to one xenobiotic and the disease corrected for the effect of potential confounders. Within the FP6 project HENVINET, we aimed at considering together all known agents and mechanisms involved in development of selected cancer types. Selection of cancer types for causal diagrams was based on the corpus of available data and reported relative risk (RR). In constructing causal diagrams the complexity of the interactions between xenobiotics was considered a priority in the interpretation of cancer risk. Additionally, gene-environment interactions were incorporated such as polymorphisms in genes for repair and for phase I and II enzymes involved in metabolism of xenobiotics and their elimination. Information on possible age or gender susceptibility is also included. Diagrams are user friendly thanks to multistep access to information packages and the possibility of referring to related literature and a glossary of terms. Diagrams cover both chemical and physical agents (ionizing and non-ionizing radiation) and provide basic information on the strength of the association between type of exposure and cancer risk reported by human studies and supported by mechanistic studies. Causal diagrams developed within HENVINET project represent a valuable source of information for professionals working in the field of environmental health and epidemiology, and as educational material for students. Cancer risk results from a complex interaction of environmental exposures with inherited gene polymorphisms, genetic burden collected during development and non genomic capacity of response to environmental insults. In order to adopt effective preventive measures and the associated regulatory actions, a comprehensive investigation of cancer etiology is crucial. Variations and fluctuations of cancer incidence in human populations do not necessarily reflect environmental pollution policies or population distribution of polymorphisms of genes known to be associated with increased cancer risk. Tools which may be used in such a comprehensive research, including molecular biology applied to field studies, require a methodological shift from the reductionism that has been used until recently as a basic axiom in interpretation of data. The complexity of the interactions between cells, genes and the environment, i.e. the resonance of the living matter with the environment, can be synthesized by systems biology. Within the HENVINET project such philosophy was followed in order to develop interactive causal diagrams for the investigation of cancers with possible etiology in environmental exposure. Causal diagrams represent integrated knowledge and seed tool for their future development and development of similar diagrams for other environmentally related diseases such as asthma or sterility. In this paper development and application of causal diagrams for cancer are presented and discussed.
Lamontagne, Maxime; Timens, Wim; Hao, Ke; Bossé, Yohan; Laviolette, Michel; Steiling, Katrina; Campbell, Joshua D; Couture, Christian; Conti, Massimo; Sherwood, Karen; Hogg, James C; Brandsma, Corry-Anke; van den Berge, Maarten; Sandford, Andrew; Lam, Stephen; Lenburg, Marc E; Spira, Avrum; Paré, Peter D; Nickle, David; Sin, Don D; Postma, Dirkje S
2014-11-01
COPD is a complex chronic disease with poorly understood pathogenesis. Integrative genomic approaches have the potential to elucidate the biological networks underlying COPD and lung function. We recently combined genome-wide genotyping and gene expression in 1111 human lung specimens to map expression quantitative trait loci (eQTL). To determine causal associations between COPD and lung function-associated single nucleotide polymorphisms (SNPs) and lung tissue gene expression changes in our lung eQTL dataset. We evaluated causality between SNPs and gene expression for three COPD phenotypes: FEV(1)% predicted, FEV(1)/FVC and COPD as a categorical variable. Different models were assessed in the three cohorts independently and in a meta-analysis. SNPs associated with a COPD phenotype and gene expression were subjected to causal pathway modelling and manual curation. In silico analyses evaluated functional enrichment of biological pathways among newly identified causal genes. Biologically relevant causal genes were validated in two separate gene expression datasets of lung tissues and bronchial airway brushings. High reliability causal relations were found in SNP-mRNA-phenotype triplets for FEV(1)% predicted (n=169) and FEV(1)/FVC (n=80). Several genes of potential biological relevance for COPD were revealed. eQTL-SNPs upregulating cystatin C (CST3) and CD22 were associated with worse lung function. Signalling pathways enriched with causal genes included xenobiotic metabolism, apoptosis, protease-antiprotease and oxidant-antioxidant balance. By using integrative genomics and analysing the relationships of COPD phenotypes with SNPs and gene expression in lung tissue, we identified CST3 and CD22 as potential causal genes for airflow obstruction. This study also augmented the understanding of previously described COPD pathways. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
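A toy mediation-style comparison in the spirit of SNP-mRNA-phenotype causal triplets: if the SNP's effect on the phenotype attenuates once expression is conditioned on, the data are consistent with the SNP acting through expression. This is a simplified stand-in for the authors' causal pathway modelling, and all data are simulated.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 1000
snp = rng.integers(0, 3, n)                    # eQTL genotype coded 0/1/2
expr = 0.8 * snp + rng.standard_normal(n)      # gene expression (e.g. a CST3-like transcript)
fev1 = -0.5 * expr + rng.standard_normal(n)    # lung-function phenotype

# Total SNP effect on the phenotype vs. effect after conditioning on expression:
# strong attenuation is consistent with SNP -> expression -> phenotype.
total = sm.OLS(fev1, sm.add_constant(snp)).fit()
adjusted = sm.OLS(fev1, sm.add_constant(np.column_stack([snp, expr]))).fit()
print("SNP effect, total:   ", round(total.params[1], 3))
print("SNP effect, adjusted:", round(adjusted.params[1], 3))
```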
Marken, Richard S; Horth, Brittany
2011-06-01
Experimental research in psychology is based on an open-loop causal model which assumes that sensory input causes behavioral output. This model was tested in a tracking experiment where participants were asked to control a cursor, keeping it aligned with a target by moving a mouse to compensate for disturbances of differing difficulty. Since cursor movements (inputs) are the only observable cause of mouse movements (outputs), the open-loop model predicts that there will be a correlation between input and output that increases as tracking performance improves. In fact, the correlation between sensory input and motor output is very low regardless of the quality of tracking performance; causality, in terms of the effect of input on output, does not seem to imply correlation in this situation. This surprising result can be explained by a closed-loop model which assumes that input is causing output while output is causing input.
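A small simulation reproducing the qualitative result (not the authors' experiment): in a compensatory tracking loop, the output nearly mirrors the unseen disturbance while its correlation with the cursor input stays low, even though the input is the proximal cause of the output.

```python
import numpy as np

rng = np.random.default_rng(5)
T, gain = 5000, 0.2
disturbance = np.cumsum(rng.standard_normal(T)) * 0.05   # slowly varying disturbance
output = np.zeros(T)       # "mouse" position
cursor = np.zeros(T)       # cursor position = output + disturbance (the only visible input)

for t in range(1, T):
    cursor[t] = output[t - 1] + disturbance[t]
    # controller nudges the mouse to cancel the cursor error (target = 0)
    output[t] = output[t - 1] - gain * cursor[t]

print("corr(input, output):      ",
      round(np.corrcoef(cursor[1:], output[1:])[0, 1], 2))    # low
print("corr(disturbance, output):",
      round(np.corrcoef(disturbance[1:], output[1:])[0, 1], 2))  # close to -1
```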
Graphical function mapping as a new way to explore cause-and-effect chains
Evans, Mary Anne
2016-01-01
Graphical function mapping provides a simple method for improving communication within interdisciplinary research teams and between scientists and nonscientists. This article introduces graphical function mapping using two examples and discusses its usefulness. Function mapping projects the outcome of one function into another to show the combined effect. Using this mathematical property in a simpler, even cartoon-like, graphical way allows the rapid combination of multiple information sources (models, empirical data, expert judgment, and guesses) in an intuitive visual to promote further discussion, scenario development, and clear communication.
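As a rough illustration of the function-mapping idea described above, one response curve can be projected into another to show the combined effect of a chain. The sketch below (Python) uses hypothetical functions and variable names that are not from the article; the curve shapes are arbitrary assumptions.

    import numpy as np

    # Hypothetical chain: nutrient load -> algal biomass -> habitat quality.
    # Each mapping could come from a model, data, expert judgment, or a guess.
    def algae_from_nutrients(n):
        return n / (1.0 + n)            # assumed rising, saturating response

    def habitat_from_algae(a):
        return np.exp(-3.0 * a)         # assumed declining response

    nutrients = np.linspace(0.0, 5.0, 11)
    combined = habitat_from_algae(algae_from_nutrients(nutrients))   # composed mapping
    for n, h in zip(nutrients, combined):
        print(f"nutrient load {n:4.1f} -> habitat quality {h:5.3f}")

Plotting the composed curve (habitat quality against nutrient load) gives the kind of cartoon-like visual the article advocates for discussion with nonscientists.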
Yue, Chen; Chen, Shaojie; Sair, Haris I; Airan, Raag; Caffo, Brian S
2015-09-01
Data reproducibility is a critical issue in all scientific experiments. In this manuscript, the problem of quantifying the reproducibility of graphical measurements is considered. The image intra-class correlation coefficient (I2C2) is generalized and the graphical intra-class correlation coefficient (GICC) is proposed for this purpose. The concept of the GICC is based on multivariate probit-linear mixed effect models. A Markov chain Monte Carlo EM (MCMC-EM) algorithm is used for estimating the GICC. Simulation results with varied settings are demonstrated and our method is applied to the KIRBY21 test-retest dataset.
Inferring causal molecular networks: empirical assessment through a community-based effort
Hill, Steven M.; Heiser, Laura M.; Cokelaer, Thomas; Unger, Michael; Nesser, Nicole K.; Carlin, Daniel E.; Zhang, Yang; Sokolov, Artem; Paull, Evan O.; Wong, Chris K.; Graim, Kiley; Bivol, Adrian; Wang, Haizhou; Zhu, Fan; Afsari, Bahman; Danilova, Ludmila V.; Favorov, Alexander V.; Lee, Wai Shing; Taylor, Dane; Hu, Chenyue W.; Long, Byron L.; Noren, David P.; Bisberg, Alexander J.; Mills, Gordon B.; Gray, Joe W.; Kellen, Michael; Norman, Thea; Friend, Stephen; Qutub, Amina A.; Fertig, Elana J.; Guan, Yuanfang; Song, Mingzhou; Stuart, Joshua M.; Spellman, Paul T.; Koeppl, Heinz; Stolovitzky, Gustavo; Saez-Rodriguez, Julio; Mukherjee, Sach
2016-01-01
Inferring molecular networks is a central challenge in computational biology. However, it has remained unclear whether causal, rather than merely correlational, relationships can be effectively inferred in complex biological settings. Here we describe the HPN-DREAM network inference challenge that focused on learning causal influences in signaling networks. We used phosphoprotein data from cancer cell lines as well as in silico data from a nonlinear dynamical model. Using the phosphoprotein data, we scored more than 2,000 networks submitted by challenge participants. The networks spanned 32 biological contexts and were scored in terms of causal validity with respect to unseen interventional data. A number of approaches were effective and incorporating known biology was generally advantageous. Additional sub-challenges considered time-course prediction and visualization. Our results constitute the most comprehensive assessment of causal network inference in a mammalian setting carried out to date and suggest that learning causal relationships may be feasible in complex settings such as disease states. Furthermore, our scoring approach provides a practical way to empirically assess the causal validity of inferred molecular networks. PMID:26901648
NASA Astrophysics Data System (ADS)
Pearl, Judea
2000-03-01
Written by one of the pre-eminent researchers in the field, this book provides a comprehensive exposition of modern analysis of causation. It shows how causality has grown from a nebulous concept into a mathematical theory with significant applications in the fields of statistics, artificial intelligence, philosophy, cognitive science, and the health and social sciences. Pearl presents a unified account of the probabilistic, manipulative, counterfactual and structural approaches to causation, and devises simple mathematical tools for analyzing the relationships between causal connections, statistical associations, actions and observations. The book will open the way for including causal analysis in the standard curriculum of statistics, artificial intelligence, business, epidemiology, social science and economics. Students in these areas will find natural models, simple identification procedures, and precise mathematical definitions of causal concepts that traditional texts have tended to evade or make unduly complicated. This book will be of interest to professionals and students in a wide variety of fields. Anyone who wishes to elucidate meaningful relationships from data, predict effects of actions and policies, assess explanations of reported events, or form theories of causal understanding and causal speech will find this book stimulating and invaluable.
ERIC Educational Resources Information Center
Jensen, Eva
2014-01-01
If students really understand the systems they study, they would be able to tell how changes in the system would affect a result. This demands that the students understand the mechanisms that drive its behaviour. The study investigates potential merits of learning how to explicitly model the causal structure of systems. The approach and…
Johnson, Jason K.; Oyen, Diane Adele; Chertkov, Michael; ...
2016-12-01
Inference and learning of graphical models are both well-studied problems in statistics and machine learning that have found many applications in science and engineering. However, exact inference is intractable in general graphical models, which suggests the problem of seeking the best approximation to a collection of random variables within some tractable family of graphical models. In this paper, we focus on the class of planar Ising models, for which exact inference is tractable using techniques of statistical physics. Based on these techniques and recent methods for planarity testing and planar embedding, we propose a greedy algorithm for learning the best planar Ising model to approximate an arbitrary collection of binary random variables (possibly from sample data). Given the set of all pairwise correlations among variables, we select a planar graph and optimal planar Ising model defined on this graph to best approximate that set of correlations. Finally, we demonstrate our method in simulations and for two applications: modeling senate voting records and identifying geo-chemical depth trends from Mars rover data.
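A minimal sketch (Python, assuming the networkx library for the planarity test) of the planarity-constrained greedy selection step: candidate edges are ranked by the magnitude of their pairwise correlation and accepted only if the graph stays planar. It deliberately omits the statistical-physics inference and optimal parameter fitting of the actual method, and the data and threshold below are toy assumptions.

    import numpy as np
    import networkx as nx

    rng = np.random.default_rng(0)
    n_vars, n_samp = 6, 2000
    X = np.empty((n_samp, n_vars), dtype=int)
    X[:, 0] = rng.choice([-1, 1], n_samp)
    for k in range(1, n_vars):                       # toy chain: neighbours tend to agree
        flip = rng.random(n_samp) < 0.3
        X[:, k] = np.where(flip, -X[:, k - 1], X[:, k - 1])
    C = np.corrcoef(X, rowvar=False)

    # Greedy step: add candidate edges in order of |correlation|, keeping the graph planar.
    pairs = sorted(((abs(C[i, j]), i, j)
                    for i in range(n_vars) for j in range(i + 1, n_vars)), reverse=True)
    G = nx.Graph()
    G.add_nodes_from(range(n_vars))
    for w, i, j in pairs:
        if w < 0.1:                                  # ignore negligible correlations
            break
        G.add_edge(i, j)
        if not nx.check_planarity(G)[0]:
            G.remove_edge(i, j)                      # reject edges that break planarity
    print(sorted(G.edges()))                         # chain edges should dominate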
Behavior systems and reinforcement: an integrative approach.
Timberlake, W
1993-01-01
Most traditional conceptions of reinforcement are based on a simple causal model in which responding is strengthened by the presentation of a reinforcer. I argue that reinforcement is better viewed as the outcome of constraint of a functioning causal system comprised of multiple interrelated causal sequences, complex linkages between causes and effects, and a set of initial conditions. Using a simplified system conception of the reinforcement situation, I review the similarities and drawbacks of traditional reinforcement models and analyze the recent contributions of cognitive, regulatory, and ecological approaches. Finally, I show how the concept of behavior systems can begin to incorporate both traditional and recent conceptions of reinforcement in an integrative approach. PMID:8354963
Surles, M C; Richardson, J S; Richardson, D C; Brooks, F P
1994-02-01
We describe a new paradigm for modeling proteins in interactive computer graphics systems--continual maintenance of a physically valid representation, combined with direct user control and visualization. This is achieved by a fast algorithm for energy minimization, capable of real-time performance on all atoms of a small protein, plus graphically specified user tugs. The modeling system, called Sculpt, rigidly constrains bond lengths, bond angles, and planar groups (similar to existing interactive modeling programs), while it applies elastic restraints to minimize the potential energy due to torsions, hydrogen bonds, and van der Waals and electrostatic interactions (similar to existing batch minimization programs), and user-specified springs. The graphical interface can show bad and/or favorable contacts, and individual energy terms can be turned on or off to determine their effects and interactions. Sculpt finds a local minimum of the total energy that satisfies all the constraints using an augmented Lagrange-multiplier method; calculation time increases only linearly with the number of atoms because the matrix of constraint gradients is sparse and banded. On a 100-MHz MIPS R4000 processor (Silicon Graphics Indigo), Sculpt achieves 11 updates per second on a 20-residue fragment and 2 updates per second on an 80-residue protein, using all atoms except non-H-bonding hydrogens, and without electrostatic interactions. Applications of Sculpt are described: to reverse the direction of bundle packing in a designed 4-helix bundle protein, to fold up a 2-stranded beta-ribbon into an approximate beta-barrel, and to design the sequence and conformation of a 30-residue peptide that mimics one partner of a protein subunit interaction. Computer models that are both interactive and physically realistic (within the limitations of a given force field) have 2 significant advantages: (1) they make feasible the modeling of very large changes (such as needed for de novo design), and (2) they help the user understand how different energy terms interact to stabilize a given conformation. The Sculpt paradigm combines many of the best features of interactive graphical modeling, energy minimization, and actual physical models, and we propose it as an especially productive way to use current and future increases in computer speed.
VORTAB - A data-tablet method of developing input data for the VORLAX program
NASA Technical Reports Server (NTRS)
Denn, F. M.
1979-01-01
A method of developing an input data file for use in the aerodynamic analysis of a complete airplane with the VORLAX computer program is described. The hardware consists of an interactive graphics terminal equipped with a graphics tablet. Software includes graphics routines from the Tektronix PLOT 10 package as well as the VORTAB program described. The user determines the size and location of each of the major panels for the aircraft before using the program. Data are entered both from the terminal keyboard and the graphics tablet. The size of the resulting data file is dependent on the complexity of the model and can vary from ten to several hundred card images. After the data are entered, two programs, READB and PLOTB, are executed which plot the configuration, allowing visual inspection of the model.
Bhui, Kamaldeep; Bhugra, Dinesh; Goldberg, David
2002-01-01
The literature on the primary care assessment of mental distress among Indian subcontinent origin patients suggests frequent presentations to general practitioners, but rarely for recognisable psychiatric disorders. This study investigates whether cultural variations in patients' causal explanatory models account for cultural variations in the assessment of non-psychotic mental disorders in primary care. In a two-phase survey, 272 Punjabi and 269 English subjects were screened. The second phase was completed by 209 and 180 subjects, respectively. Causal explanatory models were elicited as explanations of two vignette scenarios. One of these emphasised a somatic presentation and the other anxiety symptoms. Psychiatric disorder was assessed by GPs on a Likert scale and by a psychiatrist on the Clinical Interview Schedule. Punjabis more commonly expressed medical/somatic and religious beliefs. General practitioners were more likely to assess any subject giving psychological explanations to vignette A, and English subjects giving religious explanations to vignette B, as having a significant psychiatric disorder. Where medical/somatic explanations of distress were most prevalent in response to the somatic vignette, psychological, religious and work explanations were less prevalent among Punjabis but not among English subjects. Causal explanations did not fully explain cultural differences in assessments. General practitioners' assessments and causal explanations are related and influenced by culture, but causal explanations do not fully explain cultural differences in assessments.
Roelstraete, Bjorn; Rosseel, Yves
2012-04-30
Partial Granger causality was introduced by Guo et al. (2008), who showed that it could better eliminate the influence of latent variables and exogenous inputs than conditional G-causality. In the recent literature we can find some reviews and applications of this type of Granger causality (e.g. Smith et al., 2011; Bressler and Seth, 2010; Barrett et al., 2010). These articles apparently do not take into account a serious flaw in the original work on partial G-causality, namely the negative F values that were reported and even proven to be plausible. In our opinion, this undermines the credibility of the obtained results and thus the validity of the approach. Our study aims to further validate partial G-causality and to explain why negative partial Granger causality estimates were reported. Time series were simulated from the same toy model as used in the original paper, and partial and conditional causal measures were compared in the presence of confounding variables. Inference was done parametrically and using non-parametric block bootstrapping. We counter the proof that partial Granger F values can be negative, but the main conclusion of the original article remains. In the presence of unknown latent and exogenous influences, it appears that partial G-causality will better eliminate their influence than conditional G-causality, at least when non-parametric inference is used. Copyright © 2012 Elsevier B.V. All rights reserved.
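For readers comparing the quantities involved, the following is a bare-bones conditional Granger causality F statistic obtained from nested least-squares VAR fits (Python/NumPy). It is a generic sketch, not the partial G-causality estimator of Guo et al. nor the block-bootstrap inference used in the study; the lag order and simulated system are arbitrary assumptions.

    import numpy as np

    def granger_f(x, y, z, p=2):
        # F statistic for "y Granger-causes x" conditional on z, with lag order p.
        T = len(x)
        idx = np.arange(p, T)
        lag = lambda s, k: s[idx - k]
        Z_restr = np.column_stack([np.ones(T - p)]
                                  + [lag(x, k) for k in range(1, p + 1)]
                                  + [lag(z, k) for k in range(1, p + 1)])
        Z_full = np.column_stack([Z_restr] + [lag(y, k) for k in range(1, p + 1)])
        target = x[p:]
        rss = lambda Z: np.sum((target - Z @ np.linalg.lstsq(Z, target, rcond=None)[0]) ** 2)
        rss_r, rss_f = rss(Z_restr), rss(Z_full)
        return ((rss_r - rss_f) / p) / (rss_f / (T - p - Z_full.shape[1]))

    rng = np.random.default_rng(1)
    T = 400
    z = rng.standard_normal(T)
    y = np.zeros(T)
    x = np.zeros(T)
    for t in range(1, T):
        y[t] = 0.8 * z[t - 1] + 0.5 * rng.standard_normal()   # y driven by z
        x[t] = 0.8 * y[t - 1] + 0.5 * rng.standard_normal()   # x driven by lagged y
    print(granger_f(x, y, z))   # y -> x remains detectable after conditioning on z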
Causality Analysis of fMRI Data Based on the Directed Information Theory Framework.
Wang, Zhe; Alahmadi, Ahmed; Zhu, David C; Li, Tongtong
2016-05-01
This paper aims to conduct fMRI-based causality analysis in brain connectivity by exploiting the directed information (DI) theory framework. Unlike the well-known Granger causality (GC) analysis, which relies on the linear prediction technique, the DI theory framework does not have any modeling constraints on the sequences to be evaluated and ensures estimation convergence. Moreover, it can be used to generate the GC graphs. In this paper, first, we introduce the core concepts in the DI framework. Second, we present how to conduct causality analysis using DI measures between two time series. We provide the detailed procedure on how to calculate the DI for two finite time series. The two major steps involved here are optimal bin size selection for data digitization and probability estimation. Finally, we demonstrate the applicability of DI-based causality analysis using both the simulated data and experimental fMRI data, and compare the results with those of the GC analysis. Our analysis indicates that GC analysis is effective in detecting linear or nearly linear causal relationships, but may have difficulty in capturing nonlinear causal relationships. On the other hand, DI-based causality analysis is more effective in capturing both linear and nonlinear causal relationships. Moreover, it is observed that brain connectivity among different regions generally involves dynamic two-way information transmissions between them. Our results show that when bidirectional information flow is present, DI is more effective than GC at quantifying the overall causal relationship.
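As a concrete (and much simplified) counterpart to the procedure described above, the sketch below computes a plug-in, first-order approximation to the directed-information rate for binary sequences, I(Y_t ; X_{t-1} | Y_{t-1}); it sidesteps the paper's bin-size selection for continuous fMRI data and uses toy simulated sequences.

    import numpy as np
    from collections import Counter

    def di_rate_order1(x, y):
        # Plug-in estimate (in bits) of I(Y_t ; X_{t-1} | Y_{t-1}) for binary sequences,
        # a first-order approximation to the directed-information rate X -> Y.
        n = len(x) - 1
        p_xyz = {k: v / n for k, v in Counter(zip(x[:-1], y[:-1], y[1:])).items()}
        def marg(keep):
            m = Counter()
            for key, p in p_xyz.items():
                m[tuple(v for v, kept in zip(key, keep) if kept)] += p
            return m
        p_xy = marg((True, True, False))    # p(x_{t-1}, y_{t-1})
        p_yz = marg((False, True, True))    # p(y_{t-1}, y_t)
        p_y = marg((False, True, False))    # p(y_{t-1})
        return sum(p * np.log2(p * p_y[(b,)] / (p_xy[(a, b)] * p_yz[(b, c)]))
                   for (a, b, c), p in p_xyz.items())

    rng = np.random.default_rng(2)
    x = rng.integers(0, 2, 5000)
    y = np.empty_like(x)
    y[0] = 0
    y[1:] = x[:-1] ^ (rng.random(len(x) - 1) < 0.1).astype(int)   # y copies x with 10% flips
    print(round(di_rate_order1(x, y), 3), round(di_rate_order1(y, x), 3))   # X->Y >> Y->X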
Human sense utilization method on real-time computer graphics
NASA Astrophysics Data System (ADS)
Maehara, Hideaki; Ohgashi, Hitoshi; Hirata, Takao
1997-06-01
We are developing an adjustment method for real-time computer graphics that exploits human sensibility technologically, in order to obtain effective graphics that give the audience the various senses intended by the producer. In general, the production of real-time computer graphics requires much adjustment of various parameters, such as 3D object models, their motions, attributes, view angle and parallax, so that the graphics give the audience superior effects such as the realism of materials, a sense of experience, and so on. It is also known that adjusting these parameters by trial and error is costly. A graphics producer often evaluates his graphics in order to improve them; for example, they may lack a 'sense of speed' or need to be given more of a 'sense of settling down.' On the other hand, we can learn how the parameters of computer graphics affect such senses by statistically analyzing several samples of computer graphics that convey different senses. Drawing on these two facts, we designed a method that adjusts the parameters by inputting the desired levels of each sense into a computer. Using this method, real-time computer graphics can be adjusted more effectively than by the conventional trial-and-error approach.
Iturria-Medina, Yasser; Carbonell, Félix M; Sotero, Roberto C; Chouinard-Decorte, Francois; Evans, Alan C
2017-05-15
Generative models focused on multifactorial causal mechanisms in brain disorders are scarce and generally based on limited data. Despite the biological importance of the multiple interacting processes, their effects remain poorly characterized from an integrative analytic perspective. Here, we propose a spatiotemporal multifactorial causal model (MCM) of brain (dis)organization and therapeutic intervention that accounts for local causal interactions, effects propagation via physical brain networks, cognitive alterations, and identification of optimum therapeutic interventions. In this article, we focus on describing the model and applying it at the population-based level for studying late onset Alzheimer's disease (LOAD). By interrelating six different neuroimaging modalities and cognitive measurements, this model accurately predicts spatiotemporal alterations in brain amyloid-β (Aβ) burden, glucose metabolism, vascular flow, resting state functional activity, structural properties, and cognitive integrity. The results suggest that a vascular dysregulation may be the most likely initial pathologic event leading to LOAD. Nevertheless, they also suggest that LOAD is not caused by a unique dominant biological factor (e.g. vascular or Aβ) but by the complex interplay among multiple relevant direct interactions. Furthermore, using theoretical control analysis of the identified population-based multifactorial causal network, we show the crucial advantage of using combinatorial over single-target treatments, explain why one-target Aβ based therapies might fail to improve clinical outcomes, and propose an efficiency ranking of possible LOAD interventions. Although still requiring further validation at the individual level, this work presents the first analytic framework for dynamic multifactorial brain (dis)organization that may both explain the pathologic evolution of progressive neurological disorders and operationalize the influence of multiple interventional strategies. Copyright © 2017 Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Whitman, David L.; Terry, Ronald E.
1985-01-01
Demonstrating petroleum engineering concepts in undergraduate laboratories often requires expensive and time-consuming experiments. To eliminate these problems, a graphical simulation technique was developed for junior-level laboratories which illustrate vapor-liquid equilibrium and the use of mathematical modeling. A description of this…
Nguyen, Huong Thi Thu; Kitaoka, Kazuyo; Sukigara, Masune; Thai, Anh Lan
2018-03-01
This study aimed to create a Vietnamese version of both the Maslach Burnout Inventory-General Survey (MBI-GS) and the Areas of Worklife Scale (AWS) to assess the burnout state of Vietnamese clinical nurses and to develop a causal model of burnout of clinical nurses. We conducted a descriptive design using a cross-sectional survey. The questionnaire was distributed by hand directly by nursing departments to 500 clinical nurses in three hospitals. The Vietnamese MBI-GS and AWS were then examined for reliability and validity. We used the revised exhaustion +1 burnout classification to assess burnout state. We performed path analysis to develop a Vietnamese causal model based on the original model from Leiter and Maslach's theory. We found that both scales were reliable and valid for assessing burnout. Among nurse participants, the percentage of severe burnout was 0.7% and of burnout 15.8%, and 17.2% of nurses were exhausted. The best predictor of burnout was the "on-duty work schedule," under which clinical nurses have to work for 24 hours. In the causal model, we also found similar and differing pathways in comparison with the original model. The Vietnamese MBI-GS and AWS were applicable to research on occupational stress. Nearly one-fifth of Vietnamese clinical nurses were working in a burnout state. The causal model suggested a range of factors resulting in burnout, and it is necessary to consider specific solutions to prevent the burnout problem. Copyright © 2018. Published by Elsevier B.V.
Automatic programming of simulation models
NASA Technical Reports Server (NTRS)
Schroer, Bernard J.; Tseng, Fan T.; Zhang, Shou X.; Dwan, Wen S.
1990-01-01
The concepts of software engineering were used to improve the simulation modeling environment. Emphasis was placed on the application of an element of rapid prototyping, or automatic programming, to assist the modeler in defining the problem specification. Then, once the problem specification has been defined, an automatic code generator is used to write the simulation code. The following two domains were selected for evaluating the concepts of software engineering for discrete event simulation: a manufacturing domain and a spacecraft countdown network sequence. The specific tasks were to: (1) define the software requirements for a graphical user interface to the Automatic Manufacturing Programming System (AMPS); (2) develop a graphical user interface for AMPS; and (3) compare the AMPS graphical interface with the AMPS interactive user interface.
Causal inference and longitudinal data: a case study of religion and mental health.
VanderWeele, Tyler J; Jackson, John W; Li, Shanshan
2016-11-01
We provide an introduction to causal inference with longitudinal data and discuss the complexities of analysis and interpretation when exposures can vary over time. We consider what types of causal questions can be addressed with the standard regression-based analyses and what types of covariate control and control for the prior values of outcome and exposure must be made to reason about causal effects. We also consider newer classes of causal models, including marginal structural models, that can assess questions of the joint effects of time-varying exposures and can take into account feedback between the exposure and outcome over time. Such feedback renders cross-sectional data ineffective for drawing inferences about causation. The challenges are illustrated by analyses concerning potential effects of religious service attendance on depression, in which there may in fact be effects in both directions with service attendance preventing the subsequent depression, but depression itself leading to lower levels of the subsequent religious service attendance. Longitudinal designs, with careful control for prior exposures, outcomes, and confounders, and suitable methodology, will strengthen research on mental health, religion and health, and in the biomedical and social sciences generally.
[The dual process model of addiction. Towards an integrated model?].
Vandermeeren, R; Hebbrecht, M
2012-01-01
Neurobiology and cognitive psychology have provided us with a dual process model of addiction. According to this model, behavior is considered to be the dynamic result of a combination of automatic and controlling processes. In cases of addiction the balance between these two processes is severely disturbed. Automated processes will continue to produce impulses that ensure the continuance of addictive behavior. Weak, reflective or controlling processes are both the reason for and the result of the inability to forgo addiction. To identify features that are common to current neurocognitive insights into addiction and psychodynamic views on addiction. The picture that emerges from research is not clear. There is some evidence that attentional bias has a causal effect on addiction. There is no evidence that automatic associations have a causal effect, but there is some evidence that automatic action-tendencies do have a causal effect. Current neurocognitive views on the dual process model of addiction can be integrated with an evidence-based approach to addiction and with psychodynamic views on addiction.
Update: Advancement of Contact Dynamics Modeling for Human Spaceflight Simulation Applications
NASA Technical Reports Server (NTRS)
Brain, Thomas A.; Kovel, Erik B.; MacLean, John R.; Quiocho, Leslie J.
2017-01-01
Pong is a new software tool developed at the NASA Johnson Space Center that advances interference-based geometric contact dynamics based on 3D graphics models. The Pong software consists of three parts: a set of scripts to extract geometric data from 3D graphics models, a contact dynamics engine that provides collision detection and force calculations based on the extracted geometric data, and a set of scripts for visualizing the dynamics response with the 3D graphics models. The contact dynamics engine can be linked with an external multibody dynamics engine to provide an integrated multibody contact dynamics simulation. This paper provides a detailed overview of Pong including the overall approach and modeling capabilities, which encompasses force generation from contact primitives and friction to computational performance. Two specific Pong-based examples of International Space Station applications are discussed, and the related verification and validation using this new tool are also addressed.
A Bayesian Approach to Surrogacy Assessment Using Principal Stratification in Clinical Trials
Li, Yun; Taylor, Jeremy M.G.; Elliott, Michael R.
2011-01-01
Summary A surrogate marker (S) is a variable that can be measured earlier and often easier than the true endpoint (T) in a clinical trial. Most previous research has been devoted to developing surrogacy measures to quantify how well S can replace T or examining the use of S in predicting the effect of a treatment (Z). However, the research often requires one to fit models for the distribution of T given S and Z. It is well known that such models do not have causal interpretations because the models condition on a post-randomization variable S. In this paper, we directly model the relationship among T, S and Z using a potential outcomes framework introduced by Frangakis and Rubin (2002). We propose a Bayesian estimation method to evaluate the causal probabilities associated with the cross-classification of the potential outcomes of S and T when S and T are both binary. We use a log-linear model to directly model the association between the potential outcomes of S and T through the odds ratios. The quantities derived from this approach always have causal interpretations. However, this causal model is not identifiable from the data without additional assumptions. To reduce the non-identifiability problem and increase the precision of statistical inferences, we assume monotonicity and incorporate prior belief that is plausible in the surrogate context by using prior distributions. We also explore the relationship among the surrogacy measures based on traditional models and this counterfactual model. The method is applied to the data from a glaucoma treatment study. PMID:19673864
Antibiotics and antibiotic resistance in agroecosystems: State of the science
USDA-ARS?s Scientific Manuscript database
We propose a simple causal model depicting relationships involved in dissemination of antibiotics and antibiotic resistance in agroecosystems and potential effects on human health, functioning of natural ecosystems, and agricultural productivity. Available evidence for each causal link is briefly su...
Urbanism and Life Satisfaction among the Aged.
ERIC Educational Resources Information Center
Liang, Jersey; Warfel, Becky L.
1983-01-01
Examined the impact of urbanism on the causal mechanisms by which life satisfaction is determined using a causal model that incorporates urbanism as a polytomous variable. Urbanism was found to have indirect main effects as well as interaction effects on life satisfaction. (Author/JAC)
Li, Haojie; Graham, Daniel J
2016-08-01
This paper estimates the causal effect of 20mph zones on road casualties in London. Potential confounders in the key relationship of interest are included within outcome regression and propensity score models, and the models are then combined to form a doubly robust estimator. A total of 234 treated zones and 2844 potential control zones are included in the data sample. The propensity score model is used to select a viable control group which has common support in the covariate distributions. We compare the doubly robust estimates with those obtained using three other methods: inverse probability weighting, regression adjustment, and propensity score matching. The results indicate that 20mph zones have had a significant causal impact on road casualty reduction in both absolute and proportional terms. Copyright © 2016 Elsevier Ltd. All rights reserved.
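A generic doubly robust (augmented inverse-probability-weighted) estimator of an average treatment effect, combining an outcome regression with a propensity score model, looks roughly like the Python sketch below. The simulated data, scikit-learn models and effect size are illustrative assumptions, not the zone-level specification used in the study.

    import numpy as np
    from sklearn.linear_model import LinearRegression, LogisticRegression

    rng = np.random.default_rng(3)
    n = 2000
    x = rng.standard_normal((n, 3))                            # observed confounders
    ps_true = 1 / (1 + np.exp(-x @ np.array([0.8, -0.5, 0.3])))
    a = rng.binomial(1, ps_true)                               # binary "treatment" indicator
    y = -1.0 * a + x @ np.array([1.0, 1.0, -0.5]) + rng.standard_normal(n)   # true effect -1

    # Outcome regressions within each arm, plus a propensity score model.
    m1 = LinearRegression().fit(x[a == 1], y[a == 1]).predict(x)
    m0 = LinearRegression().fit(x[a == 0], y[a == 0]).predict(x)
    e = LogisticRegression().fit(x, a).predict_proba(x)[:, 1]

    # Doubly robust (AIPW) estimate of the average treatment effect.
    ate_dr = np.mean(a * (y - m1) / e - (1 - a) * (y - m0) / (1 - e) + m1 - m0)
    print(ate_dr)                                              # close to the true value of -1

The estimate remains consistent if either the outcome model or the propensity model is correctly specified, which is the 'doubly robust' property exploited above.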
Penalized regression procedures for variable selection in the potential outcomes framework
Ghosh, Debashis; Zhu, Yeying; Coffman, Donna L.
2015-01-01
A recent topic of much interest in causal inference is model selection. In this article, we describe a framework in which to consider penalized regression approaches to variable selection for causal effects. The framework leads to a simple ‘impute, then select’ class of procedures that is agnostic to the type of imputation algorithm as well as penalized regression used. It also clarifies how model selection involves a multivariate regression model for causal inference problems, and that these methods can be applied for identifying subgroups in which treatment effects are homogeneous. Analogies and links with the literature on machine learning methods, missing data and imputation are drawn. A difference LASSO algorithm is defined, along with its multiple imputation analogues. The procedures are illustrated using a well-known right heart catheterization dataset. PMID:25628185
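An illustrative 'impute, then select' sketch in the spirit described above: arm-specific outcome models impute individual-level treatment contrasts, and a LASSO then selects the covariates that modify the effect. This is a single-imputation simplification with an arbitrary penalty, not the paper's difference LASSO with multiple imputation.

    import numpy as np
    from sklearn.linear_model import LinearRegression, Lasso

    rng = np.random.default_rng(4)
    n, p = 1000, 10
    X = rng.standard_normal((n, p))
    a = rng.binomial(1, 0.5, n)
    # Toy truth: the treatment effect is modified only by the first two covariates.
    y = X @ np.ones(p) + a * (1.0 + 2.0 * X[:, 0] - 1.5 * X[:, 1]) + rng.standard_normal(n)

    # Step 1 ("impute"): arm-specific outcome models impute both potential outcomes.
    mu1 = LinearRegression().fit(X[a == 1], y[a == 1]).predict(X)
    mu0 = LinearRegression().fit(X[a == 0], y[a == 0]).predict(X)
    delta = mu1 - mu0                       # imputed individual treatment contrasts

    # Step 2 ("select"): penalized regression of the contrasts on covariates.
    sel = Lasso(alpha=0.1).fit(X, delta)
    print(np.round(sel.coef_, 2))           # nonzero mainly for the first two covariates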
Span graphics display utilities handbook, first edition
NASA Technical Reports Server (NTRS)
Gallagher, D. L.; Green, J. L.; Newman, R.
1985-01-01
The Space Physics Analysis Network (SPAN) is a computer network connecting scientific institutions throughout the United States. This network provides an avenue for timely, correlative research between investigators, in a multidisciplinary approach to space physics studies. An objective in the development of SPAN is to make available direct and simplified procedures that scientists can use, without specialized training, to exchange information over the network. Information exchanges include raw and processed data, analysis programs, correspondence, documents, and graphic images. This handbook details procedures that can be used to exchange graphic images over SPAN. The intent is to periodically update this handbook to reflect the constantly changing facilities available on SPAN. The utilities described within reflect an earnest attempt to provide useful descriptions of working utilities that can be used to transfer graphic images across the network. Whether graphic images are representative of satellite observations or theoretical modeling, and whether graphic images are of a device-dependent or device-independent type, the SPAN graphics display utilities handbook will be the user's guide to graphic image exchange.
On an Additive Semigraphoid Model for Statistical Networks With Application to Pathway Analysis.
Li, Bing; Chun, Hyonho; Zhao, Hongyu
2014-09-01
We introduce a nonparametric method for estimating non-Gaussian graphical models based on a new statistical relation called additive conditional independence, which is a three-way relation among random vectors that resembles the logical structure of conditional independence. Additive conditional independence allows us to use a one-dimensional kernel regardless of the dimension of the graph, which not only avoids the curse of dimensionality but also simplifies computation. It also gives rise to a parallel structure to the Gaussian graphical model that replaces the precision matrix by an additive precision operator. The estimators derived from additive conditional independence cover the recently introduced nonparanormal graphical model as a special case, but outperform it when the Gaussian copula assumption is violated. We compare the new method with existing ones by simulations and in genetic pathway analysis.
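The additive conditional independence estimator itself is not part of standard libraries, but the nonparanormal baseline mentioned above can be sketched compactly: each margin is mapped to Gaussian scores via ranks, and a sparse precision matrix is then estimated. The data, penalty selection and library calls below (SciPy, scikit-learn) are illustrative assumptions.

    import numpy as np
    from scipy.stats import norm, rankdata
    from sklearn.covariance import GraphicalLassoCV

    rng = np.random.default_rng(5)
    Sigma = np.array([[1, .6, 0, 0], [.6, 1, 0, 0], [0, 0, 1, .5], [0, 0, .5, 1]])
    Z = rng.multivariate_normal(np.zeros(4), Sigma, size=500)
    X = np.exp(Z)                                   # non-Gaussian margins, Gaussian copula

    # Nonparanormal step: rank-transform each margin to approximate Gaussian scores.
    U = np.apply_along_axis(rankdata, 0, X) / (X.shape[0] + 1)
    G = norm.ppf(U)

    prec = GraphicalLassoCV().fit(G).precision_
    print(np.round(prec, 2))                        # near-zero entries suggest missing edges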
CONFIG: Qualitative simulation tool for analyzing behavior of engineering devices
NASA Technical Reports Server (NTRS)
Malin, Jane T.; Basham, Bryan D.; Harris, Richard A.
1987-01-01
To design failure management expert systems, engineers mentally analyze the effects of failures and procedures as they propagate through device configurations. CONFIG is a generic device modeling tool for use in discrete event simulation, to support such analyses. CONFIG permits graphical modeling of device configurations and qualitative specification of local operating modes of device components. Computation requirements are reduced by focussing the level of component description on operating modes and failure modes, and specifying qualitative ranges of variables relative to mode transition boundaries. Simulation processing occurs only when modes change or variables cross qualitative boundaries. Device models are built graphically, using components from libraries. Components are connected at ports by graphical relations that define data flow. The core of a component model is its state transition diagram, which specifies modes of operation and transitions among them.
Cox, L A; Ricci, P F
2005-04-01
Causal inference of exposure-response relations from data is a challenging aspect of risk assessment with important implications for public and private risk management. Such inference, which is fundamentally empirical and based on exposure (or dose)-response models, seldom arises from a single set of data; rather, it requires integrating heterogeneous information from diverse sources and disciplines including epidemiology, toxicology, and cell and molecular biology. The causal aspects we discuss focus on these three aspects: drawing sound inferences about causal relations from one or more observational studies; addressing and resolving biases that can affect a single multivariate empirical exposure-response study; and applying the results from these considerations to the microbiological risk management of human health risks and benefits of a ban on antibiotic use in animals, in the context of banning enrofloxacin or macrolides, antibiotics used against bacterial illnesses in poultry, and the effects of such bans on changing the risk of human food-borne campylobacteriosis infections. The purposes of this paper are to describe novel causal methods for assessing empirical causation and inference; exemplify how to deal with biases that routinely arise in multivariate exposure- or dose-response modeling; and provide a simplified discussion of a case study of causal inference using microbial risk analysis as an example. The case study supports the conclusion that the human health benefits from a ban are unlikely to be greater than the excess human health risks that it could create, even when accounting for uncertainty. We conclude that quantitative causal analysis of risks is a preferable to qualitative assessments because it does not involve unjustified loss of information and is sound under the inferential use of risk results by management.
ERIC Educational Resources Information Center
Lefevre, Pierre; de Suremain, Charles-Edouard; Rubin de Celis, Emma; Sejas, Edgar
2004-01-01
The paper discusses the utility of constructing causal models in focus groups. This was experienced as a complement to in-depth ethnographic research on the differing perceptions of caretakers and health professionals regarding child growth and development in Peru and Bolivia. The rationale, advantages, difficulties and necessary adaptations of…
Causes and correlations in cambium phenology: towards an integrated framework of xylogenesis.
Rossi, Sergio; Morin, Hubert; Deslauriers, Annie
2012-03-01
Although habitually considered as a whole, xylogenesis is a complex process of division and maturation of a pool of cells where the relationship between the phenological phases generating such a growth pattern remains essentially unknown. This study investigated the causal relationships in cambium phenology of black spruce [Picea mariana (Mill.) BSP] monitored for 8 years on four sites of the boreal forest of Quebec, Canada. The dependency links connecting the timing of xylem cell differentiation and cell production were defined and the resulting causal model was analysed with d-sep tests and generalized mixed models with repeated measurements, and tested with Fisher's C statistics to determine whether and how causality propagates through the measured variables. The higher correlations were observed between the dates of emergence of the first developing cells and between the ending of the differentiation phases, while the number of cells was significantly correlated with all phenological phases. The model with eight dependency links was statistically valid for explaining the causes and correlations between the dynamics of cambium phenology. Causal modelling suggested that the phenological phases involved in xylogenesis are closely interconnected by complex relationships of cause and effect, with the onset of cell differentiation being the main factor directly or indirectly triggering all successive phases of xylem maturation.
The good, the bad, and the timely: how temporal order and moral judgment influence causal selection
Reuter, Kevin; Kirfel, Lara; van Riel, Raphael; Barlassina, Luca
2014-01-01
Causal selection is the cognitive process through which one or more elements in a complex causal structure are singled out as actual causes of a certain effect. In this paper, we report on an experiment in which we investigated the role of moral and temporal factors in causal selection. Our results are as follows. First, when presented with a temporal chain in which two human agents perform the same action one after the other, subjects tend to judge the later agent to be the actual cause. Second, the impact of temporal location on causal selection is almost canceled out if the later agent did not violate a norm while the former did. We argue that this is due to the impact that judgments of norm violation have on causal selection—even if the violated norm has nothing to do with the obtaining effect. Third, moral judgments about the effect influence causal selection even in the case in which agents could not have foreseen the effect and did not intend to bring it about. We discuss our findings in connection to recent theories of the role of moral judgment in causal reasoning, on the one hand, and to probabilistic models of temporal location, on the other. PMID:25477851
Giang, Kim Bao; Chung, Le Hong; Minh, Hoang Van; Kien, Vu Duy; Giap, Vu Van; Hinh, Nguyen Duc; Cuong, Nguyen Manh; Manh, Pham Duc; Duc, Ha Anh; Yang, Jui-Chen
2016-01-01
Graphic health warnings (GHW) on tobacco packages have proven to be effective in increasing quit attempts among smokers and reducing initial smoking among adolescents. This research aimed to examine the relative importance of different attributes of graphic health warnings on tobacco packages in Viet Nam. A discrete choice experiment (DCE) design was applied with a conditional logit model. In addition, a ranking method was used to order the GHW labels from the least to the most dreadful. Based on the results of the DCE model, graphic type was shown to be the most important attribute, followed by cost and coverage area of the GHW. The least important attribute was the position of the GHW. Among the 5 graphic types (internal lung cancer image, external damaged teeth, abstract image, human suffering image and text), the image of lung cancer was found to have the strongest influence on both smokers and non-smokers. With the ranking method, the images of throat cancer and heart disease were considered the most dreadful. GHWs should be designed with these attributes in mind, to maximise their influence on purchase decisions among both smokers and non-smokers.
The Analysis of the Contribution of Human Factors to the In-Flight Loss of Control Accidents
NASA Technical Reports Server (NTRS)
Ancel, Ersin; Shih, Ann T.
2012-01-01
In-flight loss of control (LOC) is currently the leading cause of fatal accidents based on various commercial aircraft accident statistics. As the Next Generation Air Transportation System (NextGen) emerges, new contributing factors leading to LOC are anticipated. The NASA Aviation Safety Program (AvSP), along with other aviation agencies and communities are actively developing safety products to mitigate the LOC risk. This paper discusses the approach used to construct a generic integrated LOC accident framework (LOCAF) model based on a detailed review of LOC accidents over the past two decades. The LOCAF model is comprised of causal factors from the domain of human factors, aircraft system component failures, and atmospheric environment. The multiple interdependent causal factors are expressed in an Object-Oriented Bayesian belief network. In addition to predicting the likelihood of LOC accident occurrence, the system-level integrated LOCAF model is able to evaluate the impact of new safety technology products developed in AvSP. This provides valuable information to decision makers in strategizing NASA's aviation safety technology portfolio. The focus of this paper is on the analysis of human causal factors in the model, including the contributions from flight crew and maintenance workers. The Human Factors Analysis and Classification System (HFACS) taxonomy was used to develop human related causal factors. The preliminary results from the baseline LOCAF model are also presented.
The Causal Foundations of Structural Equation Modeling
2012-02-16
NASA Technical Reports Server (NTRS)
Hudlicka, Eva; Corker, Kevin
1988-01-01
In this paper, a problem-solving system which uses a multilevel causal model of its domain is described. The system functions in the role of a pilot's assistant in the domain of commercial air transport emergencies. The model represents causal relationships among the aircraft subsystems, the effectors (engines, control surfaces), the forces that act on an aircraft in flight (thrust, lift), and the aircraft's flight profile (speed, altitude, etc.). The causal relationships are represented at three levels of abstraction: Boolean, qualitative, and quantitative, and reasoning about causes and effects can take place at each of these levels. Since processing at each level has different characteristics with respect to speed, the type of data required, and the specificity of the results, the problem-solving system can adapt to a wide variety of situations. The system is currently being implemented in the KEE(TM) development environment on a Symbolics Lisp machine.
NASA Technical Reports Server (NTRS)
Nelson, D. P.
1981-01-01
A graphical presentation of the aerodynamic data acquired during coannular nozzle performance wind tunnel tests is given. The graphical data consist of plots of nozzle gross thrust coefficient, fan nozzle discharge coefficient, and primary nozzle discharge coefficient. Normalized model component static pressure distributions are presented as a function of primary total pressure, fan total pressure, and ambient static pressure for selected operating conditions. In addition, the supersonic cruise configuration data include plots of nozzle efficiency and secondary-to-fan total pressure pumping characteristics. Supersonic and subsonic cruise data are given.
Computer animation of modal and transient vibrations
NASA Technical Reports Server (NTRS)
Lipman, Robert R.
1987-01-01
An interactive computer graphics processor is described that is capable of generating input to animate modal and transient vibrations of finite element models on an interactive graphics system. The results from NASTRAN can be postprocessed such that a three dimensional wire-frame picture, in perspective, of the finite element mesh is drawn on the graphics display. Modal vibrations of any mode shape or transient motions over any range of steps can be animated. The finite element mesh can be color-coded by any component of displacement. Viewing parameters and the rate of vibration of the finite element model can be interactively updated while the structure is vibrating.
Software for Data Analysis with Graphical Models
NASA Technical Reports Server (NTRS)
Buntine, Wray L.; Roy, H. Scott
1994-01-01
Probabilistic graphical models are being used widely in artificial intelligence and statistics, for instance, in diagnosis and expert systems, as a framework for representing and reasoning with probabilities and independencies. They come with corresponding algorithms for performing statistical inference. This offers a unifying framework for prototyping and/or generating data analysis algorithms from graphical specifications. This paper illustrates the framework with an example and then presents some basic techniques for the task: problem decomposition and the calculation of exact Bayes factors. Other tools already developed, such as automatic differentiation, Gibbs sampling, and use of the EM algorithm, make this a broad basis for the generation of data analysis software.
NASA Technical Reports Server (NTRS)
Apodaca, Tony; Porter, Tom
1989-01-01
The two worlds of interactive graphics and realistic graphics have remained separate. Fast graphics hardware runs simple algorithms and generates simple looking images. Photorealistic image synthesis software runs slowly on large expensive computers. The time has come for these two branches of computer graphics to merge. The speed and expense of graphics hardware is no longer the barrier to the wide acceptance of photorealism. There is every reason to believe that high quality image synthesis will become a standard capability of every graphics machine, from superworkstation to personal computer. The significant barrier has been the lack of a common language, an agreed-upon set of terms and conditions, for 3-D modeling systems to talk to 3-D rendering systems for computing an accurate rendition of that scene. Pixar has introduced RenderMan to serve as that common language. RenderMan, specifically the extensibility it offers in shading calculations, is discussed.
Zhang, Xiao-Fei; Ou-Yang, Le; Yan, Hong
2017-08-15
Understanding how gene regulatory networks change under different cellular states is important for revealing insights into network dynamics. Gaussian graphical models, which assume that the data follow a joint normal distribution, have been used recently to infer differential networks. However, the distributions of the omics data are non-normal in general. Furthermore, although much biological knowledge (or prior information) has been accumulated, most existing methods ignore the valuable prior information. Therefore, new statistical methods are needed to relax the normality assumption and make full use of prior information. We propose a new differential network analysis method to address the above challenges. Instead of using Gaussian graphical models, we employ a non-paranormal graphical model that can relax the normality assumption. We develop a principled model to take into account the following prior information: (i) a differential edge less likely exists between two genes that do not participate together in the same pathway; (ii) changes in the networks are driven by certain regulator genes that are perturbed across different cellular states and (iii) the differential networks estimated from multi-view gene expression data likely share common structures. Simulation studies demonstrate that our method outperforms other graphical model-based algorithms. We apply our method to identify the differential networks between platinum-sensitive and platinum-resistant ovarian tumors, and the differential networks between the proneural and mesenchymal subtypes of glioblastoma. Hub nodes in the estimated differential networks rediscover known cancer-related regulator genes and contain interesting predictions. The source code is at https://github.com/Zhangxf-ccnu/pDNA. szuouyl@gmail.com. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
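A toy sketch of the general idea of differential network estimation, contrasting two sparse precision matrices estimated in two cellular states; it ignores the non-paranormal modelling and the three kinds of prior information that the pDNA method incorporates, and the penalty and threshold below are arbitrary.

    import numpy as np
    from sklearn.covariance import GraphicalLasso

    rng = np.random.default_rng(6)
    cov_a = np.eye(5); cov_a[0, 1] = cov_a[1, 0] = 0.6      # edge 0-1 only in state A
    cov_b = np.eye(5); cov_b[2, 3] = cov_b[3, 2] = 0.6      # edge 2-3 only in state B
    Xa = rng.multivariate_normal(np.zeros(5), cov_a, 300)
    Xb = rng.multivariate_normal(np.zeros(5), cov_b, 300)

    Pa = GraphicalLasso(alpha=0.05).fit(Xa).precision_
    Pb = GraphicalLasso(alpha=0.05).fit(Xb).precision_
    diff = np.abs(Pa - Pb)
    np.fill_diagonal(diff, 0)
    print(np.argwhere(np.triu(diff > 0.2)))                 # differential edges: (0,1) and (2,3)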
Surface and deep structures in graphics comprehension.
Schnotz, Wolfgang; Baadte, Christiane
2015-05-01
Comprehension of graphics can be considered as a process of schema-mediated structure mapping from external graphics on internal mental models. Two experiments were conducted to test the hypothesis that graphics possess a perceptible surface structure as well as a semantic deep structure both of which affect mental model construction. The same content was presented to different groups of learners by graphics from different perspectives with different surface structures but the same deep structure. Deep structures were complementary: major features of the learning content in one experiment became minor features in the other experiment, and vice versa. Text was held constant. Participants were asked to read, understand, and memorize the learning material. Furthermore, they were either instructed to process the material from the perspective supported by the graphic or from an alternative perspective, or they received no further instruction. After learning, they were asked to recall the learning content from different perspectives by completing graphs of different formats as accurately as possible. Learners' recall was more accurate if the format of recall was the same as the learning format which indicates surface structure influences. However, participants also showed more accurate recall when they remembered the content from a perspective emphasizing the deep structure, regardless of the graphics format presented before. This included better recall of what they had not seen than of what they really had seen before. That is, deep structure effects overrode surface effects. Depending on context conditions, stimulation of additional cognitive processing by instruction had partially positive and partially negative effects.
ERIC Educational Resources Information Center
de Rooij, Mark; Heiser, Willem J.
2005-01-01
Although RC(M)-association models have become a generally useful tool for the analysis of cross-classified data, the graphical representation resulting from such an analysis can at times be misleading. The relationships present between row category points and column category points cannot be interpreted by inter point distances but only through…
A Monthly Water-Balance Model Driven By a Graphical User Interface
McCabe, Gregory J.; Markstrom, Steven L.
2007-01-01
This report describes a monthly water-balance model driven by a graphical user interface, referred to as the Thornthwaite monthly water-balance program. Computations of monthly water-balance components of the hydrologic cycle are made for a specified location. The program can be used as a research tool, an assessment tool, and a tool for classroom instruction.
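A minimal sketch of a Thornthwaite-style monthly water balance: temperature-based potential evapotranspiration feeding a simple soil-moisture bucket. The soil capacity, omitted day-length correction and accounting rules are placeholder assumptions, not those of the USGS program.

    import numpy as np

    def thornthwaite_pet(temp_c, day_length_corr=1.0):
        # Monthly potential evapotranspiration (mm) from mean monthly temperature (deg C).
        t = np.maximum(temp_c, 0.0)
        I = np.sum((t / 5.0) ** 1.514)                      # annual heat index
        a = 6.75e-7 * I**3 - 7.71e-5 * I**2 + 1.792e-2 * I + 0.49239
        return 16.0 * day_length_corr * (10.0 * t / I) ** a

    temp = np.array([-2, 0, 5, 11, 16, 20, 23, 22, 17, 11, 5, 0], dtype=float)
    precip = np.array([60, 55, 70, 80, 90, 85, 75, 70, 80, 85, 75, 65], dtype=float)
    pet = thornthwaite_pet(temp)

    capacity, storage = 150.0, 150.0                        # soil-moisture bucket (mm)
    for month, (p, e) in enumerate(zip(precip, pet), start=1):
        storage += p - e                                    # simplification: AET taken as PET
        runoff = max(storage - capacity, 0.0)               # excess becomes runoff
        storage = min(max(storage, 0.0), capacity)
        print(f"month {month:2d}: PET {e:6.1f}  runoff {runoff:6.1f}  storage {storage:6.1f}")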
Rinnan, Asmund; Bruun, Sander; Lindedam, Jane; ...
2017-02-07
Here, the combination of NIR spectroscopy and chemometrics is a powerful correlation method for predicting the chemical constituents in biological matrices, such as the glucose and xylose content of straw. However, difficulties arise when it comes to predicting enzymatic glucose and xylose release potential, which is matrix dependent. Further complications are caused by xylose and glucose release potential being highly intercorrelated. This study emphasizes the importance of understanding the causal relationship between the model and the constituent of interest. It investigates the possibility of using near-infrared spectroscopy to evaluate the ethanol potential of wheat straw by analyzing more than 1000 samples from different wheat varieties and growth conditions. During the calibration model development, the prime emphasis was to investigate the correlation structure between the two major quality traits for saccharification of wheat straw: glucose and xylose release. The large sample set enabled a versatile and robust calibration model to be developed, showing that the prediction model for xylose release is based on a causal relationship with the NIR spectral data. In contrast, the prediction of glucose release was found to be highly dependent on the intercorrelation with xylose release. If this correlation is broken, the model performance breaks down. A simple method was devised for avoiding this breakdown and can be applied to any large dataset for investigating the causality or lack of causality of a prediction model.
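A toy sketch of the kind of multivariate calibration involved, using partial least squares (scikit-learn) on synthetic 'spectra' in which the glucose signal exists only through its correlation with xylose; the data and component count are fabricated for illustration and are not the calibration protocol of the study.

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(7)
    spectra = rng.standard_normal((300, 100))                 # stand-in for NIR spectra
    xylose = spectra[:, :10].sum(axis=1) + 0.1 * rng.standard_normal(300)
    glucose = 0.9 * xylose + 0.3 * rng.standard_normal(300)   # only linked via xylose

    X_tr, X_te, y_tr, y_te = train_test_split(
        spectra, np.column_stack([glucose, xylose]), random_state=0)
    pls = PLSRegression(n_components=5).fit(X_tr, y_tr)
    pred = pls.predict(X_te)
    for i, name in enumerate(["glucose", "xylose"]):
        r = np.corrcoef(pred[:, i], y_te[:, i])[0, 1]
        print(f"{name}: R^2 ~ {r ** 2:.2f}")

In this toy the glucose model predicts well only because glucose tracks xylose; breaking that intercorrelation would degrade it, which mirrors the point made in the abstract.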
Retrieving hydrological connectivity from empirical causality in karst systems
NASA Astrophysics Data System (ADS)
Delforge, Damien; Vanclooster, Marnik; Van Camp, Michel; Poulain, Amaël; Watlet, Arnaud; Hallet, Vincent; Kaufmann, Olivier; Francis, Olivier
2017-04-01
Because of their complexity, karst systems exhibit nonlinear dynamics. Moreover, when one attempts to model a karst, its largely hidden behavior complicates the choice of the most suitable model. Therefore, both intensive investigation methods and nonlinear data analysis are needed to reveal the underlying hydrological connectivity as a prior for a consistent, physically based modelling approach. Convergent Cross Mapping (CCM), a recent method, promises to identify causal relationships between time series belonging to the same dynamical system. The method is based on phase-space reconstruction and is suitable for nonlinear dynamics. As an empirical causation-detection method, it could be used to highlight the hidden complexity of a karst system by revealing its inner hydrological and dynamical connectivity. Hence, if one can link causal relationships to physical processes, the method should show great potential to support physically based model structure selection. We present the results of numerical experiments using karst model blocks combined in different structures to generate time series from actual rainfall series. CCM is applied between the time series to investigate whether the empirical causation detection is consistent with the hydrological connectivity suggested by the karst model.
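A minimal sketch of the CCM idea on a toy unidirectionally coupled system (the embedding dimension, delay, and coupled logistic maps below are assumptions, not the karst model blocks of the study): when x drives y, x should be recoverable by cross-mapping from y's reconstructed manifold, but not the other way around.

```python
# Minimal Convergent Cross Mapping (CCM) sketch, simplex-style; illustrative only.
import numpy as np

def embed(x, E=3, tau=1):
    """Time-delay embedding: row t is (x_t, x_{t-tau}, ..., x_{t-(E-1)tau})."""
    n = len(x) - (E - 1) * tau
    return np.column_stack([x[(E - 1) * tau - i * tau : (E - 1) * tau - i * tau + n]
                            for i in range(E)])

def ccm_skill(source, target, E=3, tau=1):
    """Cross-map 'source' from the shadow manifold of 'target'.
    Higher skill suggests that 'source' causally forces 'target'."""
    M = embed(target, E, tau)
    src = source[(E - 1) * tau:]                        # align with embedded rows
    preds = np.empty(len(M))
    for i, point in enumerate(M):
        d = np.linalg.norm(M - point, axis=1)
        d[i] = np.inf                                   # exclude the point itself
        nbrs = np.argsort(d)[: E + 1]                   # E+1 nearest neighbours
        w = np.exp(-d[nbrs] / max(d[nbrs[0]], 1e-12))   # exponential simplex weights
        preds[i] = np.sum(w * src[nbrs]) / np.sum(w)
    return np.corrcoef(preds, src)[0, 1]

# Toy unidirectional coupling: x drives y, so y's history should cross-map x well.
x, y = np.zeros(500), np.zeros(500)
x[0], y[0] = 0.4, 0.2
for t in range(499):
    x[t + 1] = x[t] * (3.8 - 3.8 * x[t])                # logistic driver
    y[t + 1] = y[t] * (3.5 - 3.5 * y[t] - 0.1 * x[t])   # driven series
print("skill x->y (cross-map x from y's manifold):", round(ccm_skill(x, y), 2))
print("skill y->x (cross-map y from x's manifold):", round(ccm_skill(y, x), 2))
```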
Instrumental variables as bias amplifiers with general outcome and confounding.
Ding, P; VanderWeele, T J; Robins, J M
2017-06-01
Drawing causal inferences from observational studies is a central pillar of many disciplines. One sufficient condition for identifying the causal effect is that the treatment-outcome relationship is unconfounded conditional on the observed covariates. It is often believed that the more covariates we condition on, the more plausible this unconfoundedness assumption becomes. This belief has had a huge impact on practical causal inference, suggesting that we should adjust for all pretreatment covariates. However, when there is unmeasured confounding between the treatment and outcome, estimators that adjust for some pretreatment covariate might have greater bias than estimators that do not adjust for it. Such a covariate is called a bias amplifier; instrumental variables, which are independent of the confounder and affect the outcome only through the treatment, are one example. Previously, theoretical results for this phenomenon had been established only for linear models. We fill this gap in the literature by providing a general theory, showing that the phenomenon occurs under a wide class of models satisfying certain monotonicity assumptions. We further show that when the treatment follows an additive or multiplicative model conditional on the instrumental variable and the confounder, these monotonicity assumptions can be interpreted as the signs of the arrows in the causal diagrams.
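A small linear-Gaussian simulation (coefficients and sample size are arbitrary choices, not taken from the paper) illustrating the phenomenon: with an unmeasured confounder present, conditioning on an instrument increases the bias of the regression estimate of the treatment effect.

```python
# Illustrative simulation of bias amplification by an instrumental variable.
import numpy as np

rng = np.random.default_rng(42)
n, true_effect = 200_000, 1.0

Z = rng.normal(size=n)                       # instrument: affects Y only through T
U = rng.normal(size=n)                       # unmeasured confounder of T and Y
T = 2.0 * Z + 1.0 * U + rng.normal(size=n)   # treatment
Y = true_effect * T + 1.5 * U + rng.normal(size=n)

def ols_coef_of_T(*covariates):
    """OLS of Y on T plus the given covariates; return the coefficient on T."""
    X = np.column_stack((np.ones(n), T) + covariates)
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return beta[1]

print("true effect:              ", true_effect)
print("unadjusted estimate:      ", round(ols_coef_of_T(), 3))       # moderately biased
print("estimate adjusting for Z: ", round(ols_coef_of_T(Z), 3))      # bias is amplified
```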
Derakhshanrad, Seyed Alireza; Piven, Emily; Ghoochani, Bahareh Zeynalzadeh
2017-10-01
Walter J. Freeman pioneered the neurodynamic model of brain activity when he described brain dynamics for cognitive information transfer as a process of circular causality across intention, meaning, and perception (IMP) levels. This view contributed substantially to the establishment of the Intention, Meaning, and Perception Model of Neuro-occupation in occupational therapy. As described by the model, the IMP levels are three components of the brain dynamics system, with nonlinear connections that enable cognitive function to be processed in a circular-causality fashion, known as the Cognitive Process of Circular Causality (CPCC). Although considerable research has been devoted to studying brain dynamics with sophisticated computerized imaging techniques, less attention has been paid to studying it through the adaptation process of thoughts and behaviors. To explore how CPCC manifests in thinking and behavioral patterns, a qualitative case study was conducted with two matched female participants who had experienced strokes and who were comparable in age, affected side, and other characteristics, except for their resilience and motivational behaviors. CPCC was compared between the two participants by matrix analysis, using content analysis with pre-determined categories. Different patterns of thinking and behavior may have occurred because of disparate regulation of CPCC between the two participants.
An automation simulation testbed
NASA Technical Reports Server (NTRS)
Cook, George E.; Sztipanovits, Janos; Biegl, Csaba; Karsai, Gabor; Springfield, James F.; Mutammara, Atheel
1988-01-01
The work being done in porting ROBOSIM (a graphical simulation system developed jointly by NASA-MSFC and Vanderbilt University) to the HP350SRX graphics workstation is described. New ROBOSIM features, such as collision detection and new kinematics simulation methods, are also discussed. Based on the experience gained with ROBOSIM, a new graphical structural modeling environment is suggested, intended to be part of a new knowledge-based multiple-aspect modeling testbed. The knowledge-based modeling methodologies and tools already available are described. Three case studies in the area of Space Station automation are also reported. First, a geometrical structural model of the station, developed using the ROBOSIM package, is presented. Next, the possible application areas of an integrated modeling environment in the testing of different Space Station operations are discussed. One of these application areas is the modeling of the Environmental Control and Life Support System (ECLSS), one of the most complex subsystems of the station. Using the multiple-aspect modeling methodology, a fault propagation model of this system is being built and is described.
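As a schematic of what a fault propagation model encodes (the component names and edges below are invented and are not the ECLSS model described in the report), propagation over a directed dependency graph can be computed by simple reachability:

```python
# Illustrative fault propagation over a directed dependency graph (hypothetical edges).
from collections import deque

# Edges point from a faulty component to components its failure can affect.
PROPAGATION = {
    "co2_removal":      ["cabin_air_quality"],
    "water_pump":       ["coolant_loop", "humidity_control"],
    "coolant_loop":     ["cabin_temperature"],
    "humidity_control": ["cabin_air_quality"],
}

def propagate(fault_source):
    """Breadth-first propagation: return every component a fault can reach."""
    affected, queue = set(), deque([fault_source])
    while queue:
        node = queue.popleft()
        for downstream in PROPAGATION.get(node, []):
            if downstream not in affected:
                affected.add(downstream)
                queue.append(downstream)
    return affected

print("water_pump fault affects:", sorted(propagate("water_pump")))
```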
Stochastic Spectral Descent for Discrete Graphical Models
Carlson, David; Hsieh, Ya-Ping; Collins, Edo; ...
2015-12-14
Interest in deep probabilistic graphical models has increased in recent years, due to their state-of-the-art performance on many machine learning applications. Such models are typically trained with the stochastic gradient method, which can take a significant number of iterations to converge. Since the computational cost of gradient estimation is prohibitive even for modestly sized models, training becomes slow and practically usable models are kept small. In this paper we propose a new, largely tuning-free algorithm to address this problem. Our approach derives novel majorization bounds based on the Schatten-∞ norm. Intriguingly, the minimizers of these bounds can be interpreted as gradient methods in a non-Euclidean space. We thus propose using a stochastic gradient method in non-Euclidean space. We provide simple conditions under which our algorithm is guaranteed to converge, and we demonstrate empirically that it leads to dramatically faster training and improved predictive ability compared to stochastic gradient descent for both directed and undirected graphical models.
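A minimal sketch of the non-Euclidean update this abstract alludes to, assuming the Schatten-∞ (spectral) geometry used in spectral descent methods: the Euclidean step along the gradient G is replaced by a step along ||G||_S1 · UVᵀ from the SVD of G. The toy quadratic objective and step sizes are invented for illustration; this is not the authors' algorithm with its majorization bounds.

```python
# Sketch of a spectral (non-Euclidean) gradient step vs. the Euclidean one.
import numpy as np

def euclidean_step(W, G, step):
    return W - step * G

def spectral_step(W, G, step):
    # Steepest-descent direction under the Schatten-inf norm (assumed geometry):
    # replace G by (nuclear norm of G) * U V^T from its SVD.
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    return W - step * s.sum() * (U @ Vt)      # s.sum() is the Schatten-1 (nuclear) norm

rng = np.random.default_rng(0)
A = rng.normal(size=(20, 10))                 # toy target matrix
W_e = np.zeros_like(A)
W_s = np.zeros_like(A)
for _ in range(50):
    G_e, G_s = W_e - A, W_s - A               # gradient of 0.5 * ||W - A||_F^2
    W_e = euclidean_step(W_e, G_e, step=0.1)
    W_s = spectral_step(W_s, G_s, step=0.01)
print("Euclidean residual:", round(np.linalg.norm(W_e - A), 3))
print("Spectral  residual:", round(np.linalg.norm(W_s - A), 3))
```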
NASA Astrophysics Data System (ADS)
Zhu, Aichun; Wang, Tian; Snoussi, Hichem
2018-03-01
This paper addresses the problems of graphical-model-based human pose estimation in still images, including the diversity of appearances and confounding background clutter. We present a new architecture for estimating human pose using a Convolutional Neural Network (CNN). First, a Relative Mixture Deformable Model (RMDM) is defined for each pair of connected parts to compute the relative spatial information in the graphical model. Second, a Local Multi-Resolution Convolutional Neural Network (LMR-CNN) is proposed to train and learn the multi-scale representation of each body part by combining different levels of part context. Third, an LMR-CNN-based hierarchical model is defined to explore the context information of limb parts. Finally, experimental results demonstrate the effectiveness of the proposed deep learning approach for human pose estimation.
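For orientation only, a generic fully convolutional part detector that outputs one heatmap per joint is sketched below; it is not the paper's RMDM or LMR-CNN, and the layer sizes and number of joints are illustrative assumptions.

```python
# Generic heatmap-based body-part detector (illustrative; not the LMR-CNN/RMDM model).
import torch
import torch.nn as nn

class TinyPartDetector(nn.Module):
    def __init__(self, num_joints=14):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                              # halve spatial resolution
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
        )
        # 1x1 convolution maps features to one heatmap per joint
        self.head = nn.Conv2d(64, num_joints, kernel_size=1)

    def forward(self, images):
        return self.head(self.features(images))          # (N, num_joints, H/2, W/2)

model = TinyPartDetector()
heatmaps = model(torch.randn(2, 3, 128, 128))
print(heatmaps.shape)                                     # torch.Size([2, 14, 64, 64])
```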
ERIC Educational Resources Information Center
McCaffrey, Daniel F.; Ridgeway, Greg; Morral, Andrew R.
2004-01-01
Causal effect modeling with naturalistic rather than experimental data is challenging. In observational studies participants in different treatment conditions may also differ on pretreatment characteristics that influence outcomes. Propensity score methods can theoretically eliminate these confounds for all observed covariates, but accurate…
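A generic propensity-score weighting sketch on simulated data (not the authors' estimator or data) shows the basic mechanics the abstract refers to: estimate treatment probabilities from pretreatment covariates, weight the comparison group, and compare the adjusted outcome difference with the confounded naive difference.

```python
# Generic propensity-score weighting on simulated, confounded data (illustrative only).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n = 5_000
x = rng.normal(size=(n, 2))                                   # pretreatment covariates
p_treat = 1.0 / (1.0 + np.exp(-(x[:, 0] + 0.5 * x[:, 1])))    # selection on covariates
t = rng.binomial(1, p_treat)
y = 2.0 * t + x[:, 0] + 0.5 * x[:, 1] + rng.normal(size=n)    # true effect = 2.0

ps = LogisticRegression().fit(x, t).predict_proba(x)[:, 1]    # estimated propensity scores
w = np.where(t == 1, 1.0, ps / (1.0 - ps))                    # ATT-style odds weights

naive = y[t == 1].mean() - y[t == 0].mean()
adjusted = y[t == 1].mean() - np.average(y[t == 0], weights=w[t == 0])
print(f"naive difference:    {naive:.2f}")                    # confounded
print(f"weighted difference: {adjusted:.2f}")                 # close to 2.0
```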