Deep Supervised, but Not Unsupervised, Models May Explain IT Cortical Representation
Khaligh-Razavi, Seyed-Mahdi; Kriegeskorte, Nikolaus
2014-01-01
Inferior temporal (IT) cortex in human and nonhuman primates serves visual object recognition. Computational object-vision models, although continually improving, do not yet reach human performance. It is unclear to what extent the internal representations of computational models can explain the IT representation. Here we investigate a wide range of computational model representations (37 in total), testing their categorization performance and their ability to account for the IT representational geometry. The models include well-known neuroscientific object-recognition models (e.g. HMAX, VisNet) along with several models from computer vision (e.g. SIFT, GIST, self-similarity features, and a deep convolutional neural network). We compared the representational dissimilarity matrices (RDMs) of the model representations with the RDMs obtained from human IT (measured with fMRI) and monkey IT (measured with cell recording) for the same set of stimuli (not used in training the models). Better performing models were more similar to IT in that they showed greater clustering of representational patterns by category. In addition, better performing models also more strongly resembled IT in terms of their within-category representational dissimilarities. Representational geometries were significantly correlated between IT and many of the models. However, the categorical clustering observed in IT was largely unexplained by the unsupervised models. The deep convolutional network, which was trained by supervision with over a million category-labeled images, reached the highest categorization performance and also best explained IT, although it did not fully explain the IT data. Combining the features of this model with appropriate weights and adding linear combinations that maximize the margin between animate and inanimate objects and between faces and other objects yielded a representation that fully explained our IT data. Overall, our results suggest that explaining IT requires computational features trained through supervised learning to emphasize the behaviorally important categorical divisions prominently reflected in IT. PMID:25375136
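As an illustration of the representational-similarity comparison described in this abstract, the sketch below computes a representational dissimilarity matrix (RDM) for a model layer and for an IT response matrix and correlates the two geometries. The response matrices, their sizes, and the 1-minus-correlation distance are assumed for illustration; this is not the authors' code.

```python
import numpy as np
from scipy.stats import spearmanr

def rdm(responses):
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between the response patterns evoked by each pair of stimuli."""
    return 1.0 - np.corrcoef(responses)

rng = np.random.default_rng(0)
model_responses = rng.normal(size=(96, 500))   # hypothetical: 96 stimuli x 500 model features
it_responses = rng.normal(size=(96, 200))      # hypothetical: 96 stimuli x 200 voxels/cells

model_rdm, it_rdm = rdm(model_responses), rdm(it_responses)

# Compare the two geometries on the off-diagonal entries (Spearman, as is common for RDMs).
iu = np.triu_indices(96, k=1)
rho, _ = spearmanr(model_rdm[iu], it_rdm[iu])
print(f"model-IT RDM correlation: {rho:.3f}")   # near zero here; real model/IT data would be plugged in
```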
Representing, Running, and Revising Mental Models: A Computational Model
ERIC Educational Resources Information Center
Friedman, Scott; Forbus, Kenneth; Sherin, Bruce
2018-01-01
People use commonsense science knowledge to flexibly explain, predict, and manipulate the world around them, yet we lack computational models of how this commonsense science knowledge is represented, acquired, utilized, and revised. This is an important challenge for cognitive science: Building higher order computational models in this area will…
Computational Constraints in Cognitive Theories of Forgetting
Ecker, Ullrich K. H.; Lewandowsky, Stephan
2012-01-01
This article highlights some of the benefits of computational modeling for theorizing in cognition. We demonstrate how computational models have been used recently to argue that (1) forgetting in short-term memory is based on interference not decay, (2) forgetting in list-learning paradigms is more parsimoniously explained by a temporal distinctiveness account than by various forms of consolidation, and (3) intrusion asymmetries that appear when information is learned in different contexts can be explained by temporal context reinstatement rather than labilization and reconsolidation processes. PMID:23091467
Combining Feature Selection and Integration—A Neural Model for MT Motion Selectivity
Beck, Cornelia; Neumann, Heiko
2011-01-01
Background: The computation of pattern motion in visual area MT based on motion input from area V1 has been investigated in many experiments and models attempting to replicate the main mechanisms. Two different core conceptual approaches were developed to explain the findings. In integrationist models the key mechanism to achieve pattern selectivity is the nonlinear integration of V1 motion activity. In contrast, selectionist models focus on the motion computation at positions with 2D features. Methodology/Principal Findings: Recent experiments revealed that neither of the two concepts alone is sufficient to explain all experimental data and that most of the existing models cannot account for the complex behaviour found. MT pattern selectivity changes over time for stimuli like type II plaids from vector average to the direction computed with an intersection of constraints rule or by feature tracking. Also, the spatial arrangement of the stimulus within the receptive field of an MT cell plays a crucial role. We propose a recurrent neural model showing how feature integration and selection can be combined into one common architecture to explain these findings. The key features of the model are the computation of 1D and 2D motion in model area V1 subpopulations that are integrated in model MT cells using feedforward and feedback processing. Our results are also in line with findings concerning the solution of the aperture problem. Conclusions/Significance: We propose a new neural model for MT pattern computation and motion disambiguation that is based on a combination of feature selection and integration. The model can explain a range of recent neurophysiological findings including temporally dynamic behaviour. PMID:21814543
Let Documents Talk to Each Other: A Computer Model for Connection of Short Documents.
ERIC Educational Resources Information Center
Chen, Z.
1993-01-01
Discusses the integration of scientific texts through the connection of documents and describes a computer model that can connect short documents. Information retrieval and artificial intelligence are discussed; a prototype system of the model is explained; and the model is compared to other computer models. (17 references) (LRW)
Cognitive Model Exploration and Optimization: A New Challenge for Computational Science
2010-03-01
the generation and analysis of computational cognitive models to explain various aspects of cognition. Typically the behavior of these models...computational scale of a workstation, so we have turned to high performance computing (HPC) clusters and volunteer computing for large-scale...computational resources. The majority of applications on the Department of Defense HPC clusters focus on solving partial differential equations (Post
An analysis of the viscous flow through a compact radial turbine by the average passage approach
NASA Technical Reports Server (NTRS)
Heidmann, James D.; Beach, Timothy A.
1990-01-01
A steady, three-dimensional viscous average passage computer code is used to analyze the flow through a compact radial turbine rotor. The code models the flow as spatially periodic from blade passage to blade passage. Results from the code using varying computational models are compared with each other and with experimental data. These results include blade surface velocities and pressures, exit vorticity and entropy contour plots, shroud pressures, and spanwise exit total temperature, total pressure, and swirl distributions. The three computational models used are inviscid, viscous with no blade clearance, and viscous with blade clearance. It is found that modeling viscous effects improves correlation with experimental data, while modeling hub and tip clearances further improves some comparisons. Experimental results such as a local maximum of exit swirl, reduced exit total pressures at the walls, and exit total temperature magnitudes are explained by interpretation of the flow physics and computed secondary flows. Trends in the computed blade loading diagrams are similarly explained.
A common stochastic accumulator with effector-dependent noise can explain eye-hand coordination
Gopal, Atul; Viswanathan, Pooja
2015-01-01
The computational architecture that enables the flexible coupling between otherwise independent eye and hand effector systems is not understood. By using a drift diffusion framework, in which variability of the reaction time (RT) distribution scales with mean RT, we tested the ability of a common stochastic accumulator to explain eye-hand coordination. Using a combination of behavior, computational modeling and electromyography, we show how a single stochastic accumulator to threshold, followed by noisy effector-dependent delays, explains eye-hand RT distributions and their correlation, while an alternate independent, interactive eye and hand accumulator model does not. Interestingly, the common accumulator model did not explain the RT distributions of the same subjects when they made eye and hand movements in isolation. Taken together, these data suggest that a dedicated circuit underlies coordinated eye-hand planning. PMID:25568161
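A toy simulation of the common-accumulator account summarized above: a single noisy accumulator rises to one threshold per trial, and eye and hand reaction times are that shared decision time plus independent effector-specific delays. All parameter values are assumed for illustration; this is a sketch of the idea, not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(1)

def common_accumulator_rt(n_trials=2000, drift=0.25, noise=1.0, threshold=30.0, dt=1.0):
    """Simulate one drift-diffusion accumulator per trial; return eye and hand RTs (ms)."""
    eye_rt, hand_rt = [], []
    for _ in range(n_trials):
        x, t = 0.0, 0.0
        while x < threshold:                      # accumulate noisy evidence to a single threshold
            x += drift * dt + noise * np.sqrt(dt) * rng.normal()
            t += dt
        eye_rt.append(t + rng.normal(60, 10))     # effector-dependent delays (assumed values)
        hand_rt.append(t + rng.normal(110, 25))
    return np.array(eye_rt), np.array(hand_rt)

eye, hand = common_accumulator_rt()
print("eye-hand RT correlation:", np.corrcoef(eye, hand)[0, 1].round(2))
```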
Cognitive Model Exploration and Optimization: A New Challenge for Computational Science
2010-01-01
Introduction Research in cognitive science often involves the generation and analysis of computational cognitive models to explain various...HPC) clusters and volunteer computing for large-scale computational resources. The majority of applications on the Department of Defense HPC... clusters focus on solving partial differential equations (Post, 2009). These tend to be lean, fast models with little noise. While we lack specific
A simple computational algorithm of model-based choice preference.
Toyama, Asako; Katahira, Kentaro; Ohira, Hideki
2017-08-01
A broadly used computational framework posits that two learning systems operate in parallel during the learning of choice preferences-namely, the model-free and model-based reinforcement-learning systems. In this study, we examined another possibility, through which model-free learning is the basic system and model-based information is its modulator. Accordingly, we proposed several modified versions of a temporal-difference learning model to explain the choice-learning process. Using the two-stage decision task developed by Daw, Gershman, Seymour, Dayan, and Dolan (2011), we compared their original computational model, which assumes a parallel learning process, and our proposed models, which assume a sequential learning process. Choice data from 23 participants showed a better fit with the proposed models. More specifically, the proposed eligibility adjustment model, which assumes that the environmental model can weight the degree of the eligibility trace, can explain choices better under both model-free and model-based controls and has a simpler computational algorithm than the original model. In addition, the forgetting learning model and its variation, which assume changes in the values of unchosen actions, substantially improved the fits to the data. Overall, we show that a hybrid computational model best fits the data. The parameters used in this model succeed in capturing individual tendencies with respect to both model use in learning and exploration behavior. This computational model provides novel insights into learning with interacting model-free and model-based components.
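A rough schematic of the "eligibility adjustment" idea described above: a temporal-difference update in which the eligibility trace is scaled by a model-based weight. The function, the two-state layout, and all parameter values are hypothetical simplifications, not the authors' model.

```python
import numpy as np

def td_update(q, trace, state, action, reward, next_value,
              alpha=0.1, gamma=1.0, lam=0.9, w_mb=0.5):
    """One TD update in which the eligibility trace is scaled by a model-based
    weight w_mb (0 = ignore earlier choices, 1 = full trace)."""
    trace *= gamma * lam * w_mb                              # model-based knowledge modulates credit assignment
    trace[state, action] += 1.0
    delta = reward + gamma * next_value - q[state, action]   # TD prediction error
    q += alpha * delta * trace
    return q, trace

# Toy two-stage episode: action 0 in stage 0, then action 1 in stage 1, reward 1 at the end.
q = np.zeros((2, 2)); trace = np.zeros((2, 2))
q, trace = td_update(q, trace, state=0, action=0, reward=0.0, next_value=q[1].max())
q, trace = td_update(q, trace, state=1, action=1, reward=1.0, next_value=0.0)
print(q)   # the stage-0 action also gains value, in proportion to w_mb * lam
```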
Kiper, Pawel; Szczudlik, Andrzej; Venneri, Annalena; Stozek, Joanna; Luque-Moreno, Carlos; Opara, Jozef; Baba, Alfonc; Agostini, Michela; Turolla, Andrea
2016-10-15
Computational approaches for modelling the central nervous system (CNS) aim to develop theories on processes occurring in the brain that allow the transformation of all information needed for the execution of motor acts. Computational models have been proposed in several fields, to interpret not only the CNS functioning, but also its efferent behaviour. Computational model theories can provide insights into neuromuscular and brain function allowing us to reach a deeper understanding of neuroplasticity. Neuroplasticity is the process occurring in the CNS that is able to permanently change both structure and function due to interaction with the external environment. To understand such a complex process several paradigms related to motor learning and computational modeling have been put forward. These paradigms have been explained through several internal model concepts, and supported by neurophysiological and neuroimaging studies. Therefore, it has been possible to make theories about the basis of different learning paradigms according to known computational models. Here we review the computational models and motor learning paradigms used to describe the CNS and neuromuscular functions, as well as their role in the recovery process. These theories have the potential to provide a way to rigorously explain all the potential of CNS learning, providing a basis for future clinical studies.
Viejo, Guillaume; Khamassi, Mehdi; Brovelli, Andrea; Girard, Benoît
2015-01-01
Current learning theory provides a comprehensive description of how humans and other animals learn, and places behavioral flexibility and automaticity at the heart of adaptive behaviors. However, the computations supporting the interactions between goal-directed and habitual decision-making systems are still poorly understood. Previous functional magnetic resonance imaging (fMRI) results suggest that the brain hosts complementary computations that may differentially support goal-directed and habitual processes in the form of a dynamical interplay rather than a serial recruitment of strategies. To better elucidate the computations underlying flexible behavior, we develop a dual-system computational model that can predict both performance (i.e., participants' choices) and modulations in reaction times during learning of a stimulus–response association task. The habitual system is modeled with a simple Q-Learning algorithm (QL). For the goal-directed system, we propose a new Bayesian Working Memory (BWM) model that searches for information in the history of previous trials in order to minimize Shannon entropy. We propose a model for QL and BWM coordination such that the expensive memory manipulation is under control of, among others, the level of convergence of the habitual learning. We test the ability of QL or BWM alone to explain human behavior, and compare them with the performance of model combinations, to highlight the need for such combinations to explain behavior. Two of the tested combination models are derived from the literature, and the last is our new proposal. In conclusion, all subjects were better explained by model combinations, and the majority of them were explained by our new coordination proposal. PMID:26379518
Boundary Condition for Modeling Semiconductor Nanostructures
NASA Technical Reports Server (NTRS)
Lee, Seungwon; Oyafuso, Fabiano; von Allmen, Paul; Klimeck, Gerhard
2006-01-01
A recently proposed boundary condition for atomistic computational modeling of semiconductor nanostructures (particularly, quantum dots) is an improved alternative to two prior such boundary conditions. As explained, this boundary condition helps to reduce the amount of computation while maintaining accuracy.
NASA Astrophysics Data System (ADS)
Mahaboob, B.; Venkateswarlu, B.; Sankar, J. Ravi; Balasiddamuni, P.
2017-11-01
This paper uses matrix calculus techniques to obtain the Nonlinear Least Squares Estimator (NLSE), Maximum Likelihood Estimator (MLE) and linear pseudo model for the nonlinear regression model. David Pollard and Peter Radchenko [1] explained analytic techniques to compute the NLSE. The present research paper, however, introduces an innovative method to compute the NLSE using principles of multivariate calculus. This study is concerned with very new optimization techniques used to compute the MLE and NLSE. Anh [2] derived the NLSE and MLE of a heteroscedastic regression model. Lemcoff [3] discussed a procedure to obtain a linear pseudo model for a nonlinear regression model. In this research article a new technique is developed to obtain the linear pseudo model for the nonlinear regression model using multivariate calculus. The linear pseudo model of Edmond Malinvaud [4] is explained in a very different way in this paper. David Pollard et al. used empirical process techniques to study the asymptotics of the LSE (least-squares estimation) for fitting a nonlinear regression function in 2006. Jae Myung [13] provided a good conceptual introduction to maximum likelihood estimation in his work "Tutorial on maximum likelihood estimation".
Modeling Education on the Real World.
ERIC Educational Resources Information Center
Hunter, Beverly
1983-01-01
Discusses educational applications of computer simulation and model building for grades K to 8, with emphasis on the usefulness of the computer simulation language, micro-DYNAMO, for programing and understanding the models which help to explain social and natural phenomena. A new textbook for junior-senior high school students is noted. (EAO)
Modeling of a latent fault detector in a digital system
NASA Technical Reports Server (NTRS)
Nagel, P. M.
1978-01-01
Methods of modeling the detection time or latency period of a hardware fault in a digital system are proposed that explain how a computer detects faults in a computational mode. The objectives were to study how software reacts to a fault, to account for as many variables as possible affecting detection and to forecast a given program's detecting ability prior to computation. A series of experiments were conducted on a small emulated microprocessor with fault injection capability. Results indicate that the detecting capability of a program largely depends on the instruction subset used during computation and the frequency of its use and has little direct dependence on such variables as fault mode, number set, degree of branching and program length. A model is discussed which employs an analog with balls in an urn to explain the rate at which subsequent repetitions of an instruction or instruction set detect a given fault.
Human Modeling for Ground Processing Human Factors Engineering Analysis
NASA Technical Reports Server (NTRS)
Stambolian, Damon B.; Lawrence, Brad A.; Stelges, Katrine S.; Steady, Marie-Jeanne O.; Ridgwell, Lora C.; Mills, Robert E.; Henderson, Gena; Tran, Donald; Barth, Tim
2011-01-01
There have been many advancements and accomplishments over the last few years using human modeling for human factors engineering analysis for design of spacecraft. The key methods used for this are motion capture and computer generated human models. The focus of this paper is to explain the human modeling currently used at Kennedy Space Center (KSC), and to explain the future plans for human modeling for future spacecraft designs.
Vassena, Eliana; Deraeve, James; Alexander, William H
2017-10-01
Human behavior is strongly driven by the pursuit of rewards. In daily life, however, benefits mostly come at a cost, often requiring that effort be exerted to obtain potential benefits. Medial PFC (MPFC) and dorsolateral PFC (DLPFC) are frequently implicated in the expectation of effortful control, showing increased activity as a function of predicted task difficulty. Such activity partially overlaps with expectation of reward and has been observed both during decision-making and during task preparation. Recently, novel computational frameworks have been developed to explain activity in these regions during cognitive control, based on the principle of prediction and prediction error (predicted response-outcome [PRO] model [Alexander, W. H., & Brown, J. W. Medial prefrontal cortex as an action-outcome predictor. Nature Neuroscience, 14, 1338-1344, 2011], hierarchical error representation [HER] model [Alexander, W. H., & Brown, J. W. Hierarchical error representation: A computational model of anterior cingulate and dorsolateral prefrontal cortex. Neural Computation, 27, 2354-2410, 2015]). Despite the broad explanatory power of these models, it is not clear whether they can also accommodate effects related to the expectation of effort observed in MPFC and DLPFC. Here, we propose a translation of these computational frameworks to the domain of effort-based behavior. First, we discuss how the PRO model, based on prediction error, can explain effort-related activity in MPFC, by reframing effort-based behavior in a predictive context. We propose that MPFC activity reflects monitoring of motivationally relevant variables (such as effort and reward), by coding expectations and discrepancies from such expectations. Moreover, we derive behavioral and neural model-based predictions for healthy controls and clinical populations with impairments of motivation. Second, we illustrate the possible translation to effort-based behavior of the HER model, an extended version of PRO model based on hierarchical error prediction, developed to explain MPFC-DLPFC interactions. We derive behavioral predictions that describe how effort and reward information is coded in PFC and how changing the configuration of such environmental information might affect decision-making and task performance involving motivation.
USING COMPUTER MODELS TO DETERMINE THE EFFECT OF STORAGE ON WATER QUALITY
Studies have indicated that water quality is degraded as a result of long residence times in storage tanks, highlighting the importance of tank design, location, and operation. Computer models, developed to explain some of the mixing and distribution issues associated with tank...
A Computer Model of the Cardiovascular System for Effective Learning.
ERIC Educational Resources Information Center
Rothe, Carl F.
1979-01-01
Described is a physiological model which solves a set of interacting, possibly nonlinear, differential equations through numerical integration on a digital computer. Sample printouts are supplied and explained for effects on the components of a cardiovascular system when exercise, hemorrhage, and cardiac failure occur. (CS)
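The abstract describes a physiological model solved by numerical integration of interacting differential equations. The sketch below shows the same style of computation on an invented two-compartment pressure-flow toy; all parameters and the waveform are assumed, and it is not the cardiovascular model from the article.

```python
import numpy as np

def simulate(t_end=10.0, dt=0.001, r=1.0, c_a=1.2, c_v=20.0, heart_rate=1.2):
    """Euler integration of a toy two-compartment circulation: arterial and venous
    pressures coupled by a peripheral resistance and a pulsatile pump."""
    p_a, p_v = 100.0, 5.0
    history = []
    for step in range(int(t_end / dt)):
        t = step * dt
        pump = 90.0 * max(0.0, np.sin(2 * np.pi * heart_rate * t))  # pump output (assumed waveform)
        flow = (p_a - p_v) / r                                      # flow through peripheral resistance
        p_a += dt * (pump - flow) / c_a                             # arterial compliance
        p_v += dt * (flow - pump) / c_v                             # venous compliance
        history.append((t, p_a, p_v))
    return np.array(history)

out = simulate()
print("final arterial / venous pressure:", out[-1, 1:].round(1))
```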
Practice Makes Perfect: Using a Computer-Based Business Simulation in Entrepreneurship Education
ERIC Educational Resources Information Center
Armer, Gina R. M.
2011-01-01
This article explains the use of a specific computer-based simulation program as a successful experiential learning model and as a way to increase student motivation while augmenting conventional methods of business instruction. This model is based on established adult learning principles.
ERIC Educational Resources Information Center
Freudenthal, Daniel; Pine, Julian; Gobet, Fernando
2010-01-01
In this study, we use corpus analysis and computational modelling techniques to compare two recent accounts of the OI stage: Legate & Yang's (2007) Variational Learning Model and Freudenthal, Pine & Gobet's (2006) Model of Syntax Acquisition in Children. We first assess the extent to which each of these accounts can explain the level of OI errors…
Cultural Commonalities and Differences in Spatial Problem-Solving: A Computational Analysis
ERIC Educational Resources Information Center
Lovett, Andrew; Forbus, Kenneth
2011-01-01
A fundamental question in human cognition is how people reason about space. We use a computational model to explore cross-cultural commonalities and differences in spatial cognition. Our model is based upon two hypotheses: (1) the structure-mapping model of analogy can explain the visual comparisons used in spatial reasoning; and (2) qualitative,…
Computer-Based Tutoring of Visual Concepts: From Novice to Experts.
ERIC Educational Resources Information Center
Sharples, Mike
1991-01-01
Description of ways in which computers might be used to teach visual concepts discusses hypermedia systems; describes computer-generated tutorials; explains the use of computers to create learning aids such as concept maps, feature spaces, and structural models; and gives examples of visual concept teaching in medical education. (10 references)…
A Four-Stage Model for Planning Computer-Based Instruction.
ERIC Educational Resources Information Center
Morrison, Gary R.; Ross, Steven M.
1988-01-01
Describes a flexible planning process for developing computer based instruction (CBI) in which the CBI design is implemented on paper between the lesson design and the program production. A four-stage model is explained, including (1) an initial flowchart, (2) storyboards, (3) a detailed flowchart, and (4) an evaluation. (16 references)…
Using CO5BOLD models to predict the effects of granulation on colours.
NASA Astrophysics Data System (ADS)
Bonifacio, P.; Caffau, E.; Ludwig, H.-G.; Steffen, M.; Castelli, F.; Gallagher, A. J.; Prakapavičius, D.; Kučinskas, A.; Cayrel, R.; Freytag, B.; Plez, B.; Homeier, D.
In order to investigate the effects of granulation on fluxes and colours, we computed the emerging fluxes from the models in the CO5BOLD grid with metallicities [M/H]=0.0,-1.0,-2.0 and -3.0. These fluxes have been used to compute colours in different photometric systems. We explain here how our computations have been performed and provide some results.
A model for diagnosing and explaining multiple disorders.
Jamieson, P W
1991-08-01
The ability to diagnose multiple interacting disorders and explain them in a coherent causal framework has only partially been achieved in medical expert systems. This paper proposes a causal model for diagnosing and explaining multiple disorders whose key elements are: physician-directed hypotheses generation, object-oriented knowledge representation, and novel explanation heuristics. The heuristics modify and link the explanations to make the physician aware of diagnostic complexities. A computer program incorporating the model currently is in use for diagnosing peripheral nerve and muscle disorders. The program successfully diagnoses and explains interactions between diseases in terms of underlying pathophysiologic concepts. The model offers a new architecture for medical domains where reasoning from first principles is difficult but explanation of disease interactions is crucial for the system's operation.
A Parametric Computational Analysis into Galvanic Coupling Intrabody Communication.
Callejon, M Amparo; Del Campo, P; Reina-Tosina, Javier; Roa, Laura M
2017-08-02
Intrabody Communication (IBC) uses the human body tissues as transmission media for electrical signals to interconnect personal health devices in wireless body area networks. The main goal of this work is to conduct a computational analysis covering some bioelectric issues that still have not been fully explained, such as the modeling of the skin-electrode impedance, the differences associated with the use of constant voltage or current excitation modes, or the influence on attenuation of the subject's anthropometrical and bioelectric properties. With this aim, a computational finite element model has been developed, allowing the IBC channel attenuation as well as the electric field and current density through arm tissues to be computed as a function of these parameters. As a conclusion, this parametric analysis has in turn permitted us to disclose some knowledge about the causes and effects of the above-mentioned issues, thus explaining and complementing previous results reported in the literature.
1986-03-01
... described threat. The actual model used in this study is an MEASIC computer program, written and run on an Apple Macintosh computer. It is described in ... mechanics of the computer program that models the warheads' flight time sequence; it will be helpful to explain some of the elements of the sequence.
Modeling Reality - How Computers Mirror Life
NASA Astrophysics Data System (ADS)
Bialynicki-Birula, Iwo; Bialynicka-Birula, Iwona
2005-01-01
The book Modeling Reality covers a wide range of fascinating subjects, accessible to anyone who wants to learn about the use of computer modeling to solve a diverse range of problems, but who does not possess a specialized training in mathematics or computer science. The material presented is pitched at the level of high-school graduates, even though it covers some advanced topics (cellular automata, Shannon's measure of information, deterministic chaos, fractals, game theory, neural networks, genetic algorithms, and Turing machines). These advanced topics are explained in terms of well known simple concepts: Cellular automata - Game of Life, Shannon's formula - Game of twenty questions, Game theory - Television quiz, etc. The book is unique in explaining in a straightforward, yet complete, fashion many important ideas, related to various models of reality and their applications. Twenty-five programs, written especially for this book, are provided on an accompanying CD. They greatly enhance its pedagogical value and make learning of even the more complex topics an enjoyable pleasure.
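One of the book's motivating examples, cellular automata via the Game of Life, is easy to sketch. The single-step implementation and glider below are illustrative only and are not taken from the book's accompanying CD.

```python
import numpy as np

def life_step(grid):
    """One Game-of-Life update: count the eight neighbours of every cell
    (with wrap-around edges) and apply Conway's birth/survival rules."""
    neighbours = sum(np.roll(np.roll(grid, dy, 0), dx, 1)
                     for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                     if (dy, dx) != (0, 0))
    return ((neighbours == 3) | ((grid == 1) & (neighbours == 2))).astype(int)

glider = np.zeros((8, 8), dtype=int)
glider[1, 2] = glider[2, 3] = glider[3, 1] = glider[3, 2] = glider[3, 3] = 1
for _ in range(4):                      # four steps move the glider one cell diagonally
    glider = life_step(glider)
print(glider)
```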
A Neural Model of How the Brain Computes Heading from Optic Flow in Realistic Scenes
ERIC Educational Resources Information Center
Browning, N. Andrew; Grossberg, Stephen; Mingolla, Ennio
2009-01-01
Visually-based navigation is a key competence during spatial cognition. Animals avoid obstacles and approach goals in novel cluttered environments using optic flow to compute heading with respect to the environment. Most navigation models try either to explain data or to demonstrate navigational competence in real-world environments without regard…
Bayesian models: A statistical primer for ecologists
Hobbs, N. Thompson; Hooten, Mevin B.
2015-01-01
Bayesian modeling has become an indispensable tool for ecological research because it is uniquely suited to deal with complexity in a statistically coherent way. This textbook provides a comprehensive and accessible introduction to the latest Bayesian methods—in language ecologists can understand. Unlike other books on the subject, this one emphasizes the principles behind the computations, giving ecologists a big-picture understanding of how to implement this powerful statistical approach. Bayesian Models is an essential primer for non-statisticians. It begins with a definition of probability and develops a step-by-step sequence of connected ideas, including basic distribution theory, network diagrams, hierarchical models, Markov chain Monte Carlo, and inference from single and multiple models. This unique book places less emphasis on computer coding, favoring instead a concise presentation of the mathematical statistics needed to understand how and why Bayesian analysis works. It also explains how to write out properly formulated hierarchical Bayesian models and use them in computing, research papers, and proposals. This primer enables ecologists to understand the statistical principles behind Bayesian modeling and apply them to research, teaching, policy, and management. The book presents the mathematical and statistical foundations of Bayesian modeling in language accessible to non-statisticians; covers basic distribution theory, network diagrams, hierarchical models, Markov chain Monte Carlo, and more; deemphasizes computer coding in favor of basic principles; and explains how to write out properly factored statistical expressions representing Bayesian models.
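As a minimal illustration of the Markov chain Monte Carlo machinery the book covers, here is a random-walk Metropolis sampler for the mean of a normal sample. The data, prior, and step size are assumed for the example and are unrelated to the book's material.

```python
import numpy as np

rng = np.random.default_rng(4)
data = rng.normal(2.0, 1.0, size=50)

def log_posterior(mu):
    """Unnormalised log posterior: N(0, 10^2) prior on mu, N(mu, 1) likelihood."""
    return -0.5 * (mu / 10.0) ** 2 - 0.5 * np.sum((data - mu) ** 2)

def metropolis(n_steps=5000, step_size=0.5):
    """Random-walk Metropolis sampler for the mean of the data."""
    mu, samples = 0.0, []
    for _ in range(n_steps):
        proposal = mu + step_size * rng.normal()
        if np.log(rng.uniform()) < log_posterior(proposal) - log_posterior(mu):
            mu = proposal                      # accept the proposal
        samples.append(mu)
    return np.array(samples)

chain = metropolis()
print("posterior mean ≈", chain[1000:].mean().round(2))   # discard burn-in
```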
Mathematical neuroscience: from neurons to circuits to systems.
Gutkin, Boris; Pinto, David; Ermentrout, Bard
2003-01-01
Applications of mathematics and computational techniques to our understanding of neuronal systems are provided. Reduction of membrane models to simplified canonical models demonstrates how neuronal spike-time statistics follow from simple properties of neurons. Averaging over space allows one to derive a simple model for the whisker barrel circuit and use this to explain and suggest several experiments. Spatio-temporal pattern formation methods are applied to explain the patterns seen in the early stages of drug-induced visual hallucinations.
NASA Technical Reports Server (NTRS)
Gibson, A. F.
1983-01-01
A system of computer programs has been developed to model general three-dimensional surfaces. Surfaces are modeled as sets of parametric bicubic patches. There are also capabilities to transform coordinates, to compute mesh/surface intersection normals, and to format input data for a transonic potential flow analysis. A graphical display of surface models and intersection normals is available. There are additional capabilities to regulate point spacing on input curves and to compute surface intersection curves. Internal details of the implementation of this system are explained, and maintenance procedures are specified.
A Simple Explanation of Complexation
ERIC Educational Resources Information Center
Elliott, J. Richard
2010-01-01
The topics of solution thermodynamics, activity coefficients, and complex formation are introduced through computational exercises and sample applications. The presentation is designed to be accessible to freshmen in a chemical engineering computations course. The MOSCED model is simplified to explain complex formation in terms of hydrogen…
Computational models for predicting interactions with membrane transporters.
Xu, Y; Shen, Q; Liu, X; Lu, J; Li, S; Luo, C; Gong, L; Luo, X; Zheng, M; Jiang, H
2013-01-01
Membrane transporters, including two members: ATP-binding cassette (ABC) transporters and solute carrier (SLC) transporters, are proteins that play important roles in facilitating molecules into and out of cells. Consequently, these transporters can be major determinants of the therapeutic efficacy, toxicity and pharmacokinetics of a variety of drugs. Considering the time and expense that bio-experiments take, research should be driven by evaluation of efficacy and safety. Computational methods arise as a complementary choice. In this article, we provide an overview of the contribution that computational methods have made to the transporter field in the past decades. At the beginning, we present a brief introduction to the structure and function of the major members of the two transporter families. In the second part, we focus on widely used computational methods in different aspects of transporter research. In the absence of a high-resolution structure for most transporters, homology modeling is a useful tool to interpret experimental data and potentially guide experimental studies. We summarize reported homology modeling in this review. Research in computational methods covers the major members of the transporters and a variety of topics including the classification of substrates and/or inhibitors, prediction of protein-ligand interactions, constitution of the binding pocket, phenotypes of non-synonymous single-nucleotide polymorphisms, and conformation analysis that tries to explain the mechanism of action. As an example, one of the most important transporters, P-gp, is elaborated on to explain the differences and advantages of various computational models. In the third part, the challenges of developing computational methods to get reliable predictions, as well as potential future directions in transporter-related modeling, are discussed.
Mental health assessment: Inference, explanation, and coherence.
Thagard, Paul; Larocque, Laurette
2018-06-01
Mental health professionals such as psychiatrists and psychotherapists assess their patients by identifying disorders that explain their symptoms. This assessment requires an inference to the best explanation that compares different disorders with respect to how well they explain the available evidence. Such comparisons are captured by the theory of explanatory coherence that states 7 principles for evaluating competing hypotheses in the light of evidence. The computational model ECHO shows how explanatory coherence can be efficiently computed. We show the applicability of explanatory coherence to mental health assessment by modelling a case of psychiatric interviewing and a case of psychotherapeutic evaluation. We argue that this approach is more plausible than Bayesian inference and hermeneutic interpretation.
ERIC Educational Resources Information Center
Denenberg, Ray
1985-01-01
Discusses the need for standards allowing computer-to-computer communication and gives examples of technical issues. The seven-layer framework of the Open Systems Interconnection (OSI) Reference Model is explained and illustrated. Sidebars feature public data networks and Recommendation X.25, OSI standards, OSI layer functions, and a glossary.…
From Turing machines to computer viruses.
Marion, Jean-Yves
2012-07-28
Self-replication is one of the fundamental aspects of computing where a program or a system may duplicate, evolve and mutate. Our point of view is that Kleene's (second) recursion theorem is essential to understand self-replication mechanisms. An interesting example of self-replication codes is given by computer viruses. This was initially explained in the seminal works of Cohen and of Adleman in the 1980s. In fact, the different variants of recursion theorems provide and explain constructions of self-replicating codes and, as a result, of various classes of malware. None of the results are new from the point of view of computability theory. We now propose a self-modifying register machine as a model of computation in which we can effectively deal with the self-reproduction and in which new offsprings can be activated as independent organisms.
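A concrete illustration of the self-replication that the recursion theorem guarantees is a quine, a program that prints its own source. The two-line Python example below is a standard construction included here only as an illustration; it is not taken from the article.

```python
s = 's = %r\nprint(s %% s)'
print(s % s)
```

Running it prints the two lines verbatim, which is the benign core of the self-reproduction mechanism that the article relates to computer viruses.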
ERIC Educational Resources Information Center
Feinberg, William E.
1988-01-01
This article describes a Monte Carlo computer simulation of affirmative action employment policies. The counterintuitive results of the model are explained through a thought device involving urns and marbles. States that such model simulations have implications for social policy. (BSR)
ERIC Educational Resources Information Center
Kucukozer, Huseyin; Korkusuz, M. Emin; Kucukozer, H. Asuman; Yurumezoglu, Kemal
2009-01-01
This study has examined the impact of teaching certain basic concepts of astronomy through a predict-observe-explain strategy, which includes three-dimensional (3D) computer modeling and observations on conceptual changes seen in sixth-grade elementary school children (aged 11-13; number of students: 131). A pre- and postastronomy instruction…
The Effects of 3D Computer Modelling on Conceptual Change about Seasons and Phases of the Moon
ERIC Educational Resources Information Center
Kucukozer, Huseyin
2008-01-01
In this study, prospective science teachers' misconceptions about the seasons and the phases of the Moon were determined, and then the effects of 3D computer modelling on their conceptual changes were investigated. The topics were covered in two classes with a total of 76 students using a predict-observe-explain strategy supported by 3D computer…
A Primer on Simulation and Gaming.
ERIC Educational Resources Information Center
Barton, Richard F.
In a primer intended for the administrative professions, for the behavioral sciences, and for education, simulation and its various aspects are defined, illustrated, and explained. Man-model simulation, man-computer simulation, all-computer simulation, and analysis are discussed as techniques for studying object systems (parts of the "real…
ERIC Educational Resources Information Center
Teo, Timothy
2010-01-01
Purpose: The purpose of this paper is to examine the effect of gender on pre-service teachers' computer attitudes. Design/methodology/approach: A total of 157 pre-service teachers completed a survey questionnaire measuring their responses to four constructs which explain computer attitude. These were administered during the teaching term where…
Explaining the DAMPE e+e- excess using the Higgs triplet model with a vector dark matter
NASA Astrophysics Data System (ADS)
Chen, Chuan-Hung; Chiang, Cheng-Wei; Nomura, Takaaki
2018-03-01
We explain the e+e- excess observed by the DAMPE Collaboration using a dark matter model based upon the Higgs triplet model and an additional hidden SU(2)_X gauge symmetry. Two of the SU(2)_X gauge bosons are stable due to a residual discrete symmetry and serve as the dark matter candidate. We search the parameter space for regions that can explain the observed relic abundance, and compute the flux of e+e- coming from a nearby dark matter subhalo. With the inclusion of background cosmic rays, we show that the model can render a good fit to the entire energy spectrum covering the AMS-02, Fermi-LAT, CALET and DAMPE data.
Donato, David I.
2012-01-01
This report presents the mathematical expressions and the computational techniques required to compute maximum-likelihood estimates for the parameters of the National Descriptive Model of Mercury in Fish (NDMMF), a statistical model used to predict the concentration of methylmercury in fish tissue. The expressions and techniques reported here were prepared to support the development of custom software capable of computing NDMMF parameter estimates more quickly and using less computer memory than is currently possible with available general-purpose statistical software. Computation of maximum-likelihood estimates for the NDMMF by numerical solution of a system of simultaneous equations through repeated Newton-Raphson iterations is described. This report explains the derivation of the mathematical expressions required for computational parameter estimation in sufficient detail to facilitate future derivations for any revised versions of the NDMMF that may be developed.
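The report computes maximum-likelihood estimates by repeated Newton-Raphson iterations on a system of simultaneous equations. The sketch below applies the same kind of iteration to a toy logistic-regression likelihood on synthetic data; it illustrates the numerical technique only and is not the NDMMF or the report's software.

```python
import numpy as np

def logistic_mle_newton(X, y, n_iter=25):
    """Maximum-likelihood estimation for logistic regression by Newton-Raphson:
    repeatedly solve the score equations using the Hessian, a small-scale analogue
    of the simultaneous-equation iterations described in the report."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))       # predicted probabilities
        score = X.T @ (y - p)                     # gradient of the log-likelihood
        hessian = -(X.T * (p * (1 - p))) @ X      # matrix of second derivatives
        beta -= np.linalg.solve(hessian, score)   # Newton-Raphson update
    return beta

rng = np.random.default_rng(5)
X = np.column_stack([np.ones(200), rng.normal(size=200)])
true_beta = np.array([-0.5, 1.5])
y = (rng.uniform(size=200) < 1.0 / (1.0 + np.exp(-X @ true_beta))).astype(float)
print(logistic_mle_newton(X, y).round(2))        # recovers roughly (-0.5, 1.5)
```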
A conceptual and computational model of moral decision making in human and artificial agents.
Wallach, Wendell; Franklin, Stan; Allen, Colin
2010-07-01
Recently, there has been a resurgence of interest in general, comprehensive models of human cognition. Such models aim to explain higher-order cognitive faculties, such as deliberation and planning. Given a computational representation, the validity of these models can be tested in computer simulations such as software agents or embodied robots. The push to implement computational models of this kind has created the field of artificial general intelligence (AGI). Moral decision making is arguably one of the most challenging tasks for computational approaches to higher-order cognition. The need for increasingly autonomous artificial agents to factor moral considerations into their choices and actions has given rise to another new field of inquiry variously known as Machine Morality, Machine Ethics, Roboethics, or Friendly AI. In this study, we discuss how LIDA, an AGI model of human cognition, can be adapted to model both affective and rational features of moral decision making. Using the LIDA model, we will demonstrate how moral decisions can be made in many domains using the same mechanisms that enable general decision making. Comprehensive models of human cognition typically aim for compatibility with recent research in the cognitive and neural sciences. Global workspace theory, proposed by the neuropsychologist Bernard Baars (1988), is a highly regarded model of human cognition that is currently being computationally instantiated in several software implementations. LIDA (Franklin, Baars, Ramamurthy, & Ventura, 2005) is one such computational implementation. LIDA is both a set of computational tools and an underlying model of human cognition, which provides mechanisms that are capable of explaining how an agent's selection of its next action arises from bottom-up collection of sensory data and top-down processes for making sense of its current situation. We will describe how the LIDA model helps integrate emotions into the human decision-making process, and we will elucidate a process whereby an agent can work through an ethical problem to reach a solution that takes account of ethically relevant factors.
Fundamentals and Recent Developments in Approximate Bayesian Computation
Lintusaari, Jarno; Gutmann, Michael U.; Dutta, Ritabrata; Kaski, Samuel; Corander, Jukka
2017-01-01
Bayesian inference plays an important role in phylogenetics, evolutionary biology, and in many other branches of science. It provides a principled framework for dealing with uncertainty and quantifying how it changes in the light of new evidence. For many complex models and inference problems, however, only approximate quantitative answers are obtainable. Approximate Bayesian computation (ABC) refers to a family of algorithms for approximate inference that makes a minimal set of assumptions by only requiring that sampling from a model is possible. We explain here the fundamentals of ABC, review the classical algorithms, and highlight recent developments. [ABC; approximate Bayesian computation; Bayesian inference; likelihood-free inference; phylogenetics; simulator-based models; stochastic simulation models; tree-based models.] PMID:28175922
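A minimal rejection-ABC sketch of the kind of algorithm reviewed above: draw parameters from the prior, simulate data, and keep draws whose summary statistic lands near the observed one. The toy model (normal with unknown mean), prior, summary, and tolerance are assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
observed = rng.normal(3.0, 1.0, size=100)          # "data" from an unknown mean
obs_summary = observed.mean()

def abc_rejection(n_samples=20000, tolerance=0.1):
    """Rejection ABC: sample a parameter from the prior, simulate data,
    keep the parameter if the simulated summary is close to the observed one."""
    accepted = []
    for _ in range(n_samples):
        mu = rng.uniform(-10, 10)                   # prior
        sim = rng.normal(mu, 1.0, size=100)         # simulator (the only model access needed)
        if abs(sim.mean() - obs_summary) < tolerance:
            accepted.append(mu)
    return np.array(accepted)

posterior = abc_rejection()
print(len(posterior), "accepted; posterior mean ≈", posterior.mean().round(2))
```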
Tawhai, Merryn H.; Clark, Alys R.; Burrowes, Kelly S.
2011-01-01
Biophysically-based computational models provide a tool for integrating and explaining experimental data, observations, and hypotheses. Computational models of the pulmonary circulation have evolved from minimal and efficient constructs that have been used to study individual mechanisms that contribute to lung perfusion, to sophisticated multi-scale and -physics structure-based models that predict integrated structure-function relationships within a heterogeneous organ. This review considers the utility of computational models in providing new insights into the function of the pulmonary circulation, and their application in clinically motivated studies. We review mathematical and computational models of the pulmonary circulation based on their application; we begin with models that seek to answer questions in basic science and physiology and progress to models that aim to have clinical application. In looking forward, we discuss the relative merits and clinical relevance of computational models: what important features are still lacking; and how these models may ultimately be applied to further increasing our understanding of the mechanisms occurring in disease of the pulmonary circulation. PMID:22034608
Why are some STEM fields more gender balanced than others?
Cheryan, Sapna; Ziegler, Sianna A; Montoya, Amanda K; Jiang, Lily
2017-01-01
Women obtain more than half of U.S. undergraduate degrees in biology, chemistry, and mathematics, yet they earn less than 20% of computer science, engineering, and physics undergraduate degrees (National Science Foundation, 2014a). Gender differences in interest in computer science, engineering, and physics appear even before college. Why are women represented in some science, technology, engineering, and mathematics (STEM) fields more than others? We conduct a critical review of the most commonly cited factors explaining gender disparities in STEM participation and investigate whether these factors explain differential gender participation across STEM fields. Math performance and discrimination influence who enters STEM, but there is little evidence to date that these factors explain why women's underrepresentation is relatively worse in some STEM fields. We introduce a model with three overarching factors to explain the larger gender gaps in participation in computer science, engineering, and physics than in biology, chemistry, and mathematics: (a) masculine cultures that signal a lower sense of belonging to women than men, (b) a lack of sufficient early experience with computer science, engineering, and physics, and (c) gender gaps in self-efficacy. Efforts to increase women's participation in computer science, engineering, and physics may benefit from changing masculine cultures and providing students with early experiences that signal equally to both girls and boys that they belong and can succeed in these fields.
A Bayesian Attractor Model for Perceptual Decision Making
Bitzer, Sebastian; Bruineberg, Jelle; Kiebel, Stefan J.
2015-01-01
Even for simple perceptual decisions, the mechanisms that the brain employs are still under debate. Although current consensus states that the brain accumulates evidence extracted from noisy sensory information, open questions remain about how this simple model relates to other perceptual phenomena such as flexibility in decisions, decision-dependent modulation of sensory gain, or confidence about a decision. We propose a novel approach of how perceptual decisions are made by combining two influential formalisms into a new model. Specifically, we embed an attractor model of decision making into a probabilistic framework that models decision making as Bayesian inference. We show that the new model can explain decision making behaviour by fitting it to experimental data. In addition, the new model combines for the first time three important features: First, the model can update decisions in response to switches in the underlying stimulus. Second, the probabilistic formulation accounts for top-down effects that may explain recent experimental findings of decision-related gain modulation of sensory neurons. Finally, the model computes an explicit measure of confidence which we relate to recent experimental evidence for confidence computations in perceptual decision tasks. PMID:26267143
The Child as Econometrician: A Rational Model of Preference Understanding in Children
Lucas, Christopher G.; Griffiths, Thomas L.; Xu, Fei; Fawcett, Christine; Gopnik, Alison; Kushnir, Tamar; Markson, Lori; Hu, Jane
2014-01-01
Recent work has shown that young children can learn about preferences by observing the choices and emotional reactions of other people, but there is no unified account of how this learning occurs. We show that a rational model, built on ideas from economics and computer science, explains the behavior of children in several experiments, and offers new predictions as well. First, we demonstrate that when children use statistical information to learn about preferences, their inferences match the predictions of a simple econometric model. Next, we show that this same model can explain children's ability to learn that other people have preferences similar to or different from their own and use that knowledge to reason about the desirability of hidden objects. Finally, we use the model to explain a developmental shift in preference understanding. PMID:24667309
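A schematic of the kind of simple econometric preference model described above: choices follow a softmax over utilities, and an observer inverts that rule to infer which utility profile best explains the observed choices. The options, candidate utility profiles, temperature, and uniform prior are hypothetical and not the authors' model.

```python
import numpy as np

def choice_probabilities(utilities, beta=2.0):
    """Luce/softmax choice rule used in simple econometric models of preference."""
    u = beta * np.asarray(utilities, dtype=float)
    e = np.exp(u - u.max())
    return e / e.sum()

def infer_preference(choices, options, candidate_utilities, beta=2.0):
    """Posterior over candidate utility assignments given observed choices,
    assuming a uniform prior (a toy Bayesian inversion of the choice rule)."""
    log_post = np.zeros(len(candidate_utilities))
    for i, util in enumerate(candidate_utilities):
        probs = choice_probabilities([util[o] for o in options], beta)
        log_post[i] = sum(np.log(probs[options.index(c)]) for c in choices)
    post = np.exp(log_post - log_post.max())
    return post / post.sum()

# An agent repeatedly picks the toy "duck" over "ball": which utility profile explains this best?
options = ["duck", "ball"]
candidates = [{"duck": 1.0, "ball": 0.0}, {"duck": 0.0, "ball": 1.0}, {"duck": 0.5, "ball": 0.5}]
print(infer_preference(["duck", "duck", "duck"], options, candidates))
```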
ERIC Educational Resources Information Center
Jiang, L. Crystal; Bazarova, Natalie N.; Hancock, Jeffrey T.
2011-01-01
The present research investigated whether the attribution process through which people explain self-disclosures differs in text-based computer-mediated interactions versus face to face, and whether differences in causal attributions account for the increased intimacy frequently observed in mediated communication. In the experiment participants…
Combining-Ability Determinations for Incomplete Mating Designs
E.B. Snyder
1975-01-01
It is shown how general combining ability values (GCA's) from cross-, open-, and self-pollinated progeny can be derived in a single analysis. Breeding values are employed to facilitate explaining genetic models of the expected family means and the derivation of the GCA's. A FORTRAN computer program also includes computation of specific combining ability...
Modeling Cross-Situational Word-Referent Learning: Prior Questions
ERIC Educational Resources Information Center
Yu, Chen; Smith, Linda B.
2012-01-01
Both adults and young children possess powerful statistical computation capabilities--they can infer the referent of a word from highly ambiguous contexts involving many words and many referents by aggregating cross-situational statistical information across contexts. This ability has been explained by models of hypothesis testing and by models of…
A Computer Model for Soda Bottle Oscillations: "The Bottelator".
ERIC Educational Resources Information Center
Soltzberg, Leonard J.; And Others
1997-01-01
Presents a model to explain the behavior of oscillatory phenomena found in the soda bottle oscillator. Describes recording the oscillations, and the design of the model based on the qualitative explanation of the oscillations. Illustrates a variety of physiochemical concepts including far-from-equilibrium oscillations, feedback, solubility and…
A feedback model of figure-ground assignment.
Domijan, Drazen; Setić, Mia
2008-05-30
A computational model is proposed in order to explain how bottom-up and top-down signals are combined into a unified perception of figure and background. The model is based on the interaction between the ventral and the dorsal stream. The dorsal stream computes saliency based on boundary signals provided by the simple and the complex cortical cells. Output from the dorsal stream is projected to the surface network which serves as a blackboard on which the surface representation is formed. The surface network is a recurrent network which segregates different surfaces by assigning different firing rates to them. The figure is labeled by the maximal firing rate. Computer simulations showed that the model correctly assigns figural status to the surface with a smaller size, a greater contrast, convexity, surroundedness, horizontal-vertical orientation and a higher spatial frequency content. The simple gradient of activity in the dorsal stream enables the simulation of the new principles of the lower region and the top-bottom polarity. The model also explains how the exogenous attention and the endogenous attention may reverse the figural assignment. Due to the local excitation in the surface network, neural activity at the cued region will spread over the whole surface representation. Therefore, the model implements the object-based attentional selection.
Reconstructing constructivism: causal models, Bayesian learning mechanisms, and the theory theory.
Gopnik, Alison; Wellman, Henry M
2012-11-01
We propose a new version of the "theory theory" grounded in the computational framework of probabilistic causal models and Bayesian learning. Probabilistic models allow a constructivist but rigorous and detailed approach to cognitive development. They also explain the learning of both more specific causal hypotheses and more abstract framework theories. We outline the new theoretical ideas, explain the computational framework in an intuitive and nontechnical way, and review an extensive but relatively recent body of empirical results that supports these ideas. These include new studies of the mechanisms of learning. Children infer causal structure from statistical information, through their own actions on the world and through observations of the actions of others. Studies demonstrate these learning mechanisms in children from 16 months to 4 years old and include research on causal statistical learning, informal experimentation through play, and imitation and informal pedagogy. They also include studies of the variability and progressive character of intuitive theory change, particularly theory of mind. These studies investigate both the physical and the psychological and social domains. We conclude with suggestions for further collaborative projects between developmental and computational cognitive scientists.
Contemporary cybernetics and its facets of cognitive informatics and computational intelligence.
Wang, Yingxu; Kinsner, Witold; Zhang, Du
2009-08-01
This paper explores the architecture, theoretical foundations, and paradigms of contemporary cybernetics from perspectives of cognitive informatics (CI) and computational intelligence. The modern domain and the hierarchical behavioral model of cybernetics are elaborated at the imperative, autonomic, and cognitive layers. The CI facet of cybernetics is presented, which explains how the brain may be mimicked in cybernetics via CI and neural informatics. The computational intelligence facet is described with a generic intelligence model of cybernetics. The compatibility between natural and cybernetic intelligence is analyzed. A coherent framework of contemporary cybernetics is presented toward the development of transdisciplinary theories and applications in cybernetics, CI, and computational intelligence.
From Occasional Choices to Inevitable Musts: A Computational Model of Nicotine Addiction
Metin, Selin; Sengor, N. Serap
2012-01-01
Although there is considerable work on the neural mechanisms of reward-based learning and decision making, and most of it notes that addiction can be explained by malfunction in these cognitive processes, there are very few computational models. This paper focuses on nicotine addiction, and a computational model of nicotine addiction is proposed based on the neurophysiological basis of addiction. The model comprises different levels ranging from the molecular basis to the systems level, and it demonstrates three possible behavioral patterns: addict, nonaddict, and indecisive. The dynamical behavior of the proposed model is investigated with tools used in analyzing nonlinear dynamical systems, and the relation between the behavioral patterns and the dynamics of the system is discussed. PMID:23251144
How Captain Amerika uses neural networks to fight crime
NASA Technical Reports Server (NTRS)
Rogers, Steven K.; Kabrisky, Matthew; Ruck, Dennis W.; Oxley, Mark E.
1994-01-01
Artificial neural network models can make amazing computations. These models are explained along with their application in problems associated with fighting crime. Specific problems addressed are identification of people using face recognition, speaker identification, and fingerprint and handwriting analysis (biometric authentication).
NASA Astrophysics Data System (ADS)
Aiken, John; Schatz, Michael; Burk, John; Caballero, Marcos; Thoms, Brian
2012-03-01
We describe the assessment of computational modeling in a ninth grade classroom in the context of the Arizona Modeling Instruction physics curriculum. Using a high-level programming environment (VPython), students develop computational models to predict the motion of objects under a variety of physical situations (e.g., constant net force), to simulate real-world phenomena (e.g., a car crash), and to visualize abstract quantities (e.g., acceleration). The impact of teaching computation is evaluated through a proctored assignment that asks the students to complete a provided program to represent the correct motion. Using questions isomorphic to the Force Concept Inventory, we gauge students' understanding of force in relation to the simulation. The students are given an open-ended essay question that asks them to explain the steps they would use to model a physical situation. We also investigate the attitudes and prior experiences of each student using the Computation Modeling in Physics Attitudinal Student Survey (COMPASS) developed at Georgia Tech, as well as a prior computational experiences survey.
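As a concrete reference point for the kind of model the students build, here is a minimal sketch in plain Python (not the students' actual VPython code; the mass, force, and step size are illustrative) of a time-stepping model for motion under a constant net force.

```python
# Minimal sketch (plain Python, not the students' actual VPython code) of a
# time-stepping model for motion under a constant net force. Values are
# illustrative, not taken from the study.
mass = 2.0            # kg
force = 10.0          # N, constant net force
position, velocity = 0.0, 0.0
dt = 0.01             # s
t = 0.0

for _ in range(500):                       # 500 steps of 0.01 s = 5 s
    acceleration = force / mass            # Newton's second law
    velocity += acceleration * dt          # Euler-Cromer update
    position += velocity * dt
    t += dt

print(f"after {t:.2f} s: x = {position:.1f} m, v = {velocity:.1f} m/s")
# For comparison, the analytic result is x = 0.5*(F/m)*t**2 = 62.5 m and
# v = (F/m)*t = 25 m/s; the small difference is the numerical time-step error.
```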
A Computer Simulation to Help in Teaching Induction Phenomena
ERIC Educational Resources Information Center
Mihas, Pavlos
2003-01-01
The motion of a magnet through a coil is analysed through a model of magnetic monopoles. The magnetic flux of a monopole passing through a loop is explained, as is its rate of change. By superposing the voltages produced by the monopoles on the coils, the shape of the voltage versus time graph is explained. Also examined is the interaction of…
ERIC Educational Resources Information Center
Subiaul, Francys; Zimmermann, Laura; Renner, Elizabeth; Schilder, Brian; Barr, Rachel
2016-01-01
During the first 5 years of life, the versatility, breadth, and fidelity with which children imitate change dramatically. Currently, there is no model to explain what underlies such significant changes. To that end, the present study examined whether a task-independent but domain-specific--elemental--imitation mechanism explains performance across…
Cloud Computing Value Chains: Understanding Businesses and Value Creation in the Cloud
NASA Astrophysics Data System (ADS)
Mohammed, Ashraf Bany; Altmann, Jörn; Hwang, Junseok
Based on the promising developments in Cloud Computing technologies in recent years, commercial computing resource services (e.g. Amazon EC2) and software-as-a-service offerings (e.g. Salesforce.com) came into existence. However, the relatively weak business exploitation, participation, and adoption of other Cloud Computing services remain the main challenges. The vague value structures seem to be hindering business adoption and the creation of sustainable business models around the technology. Using an extensive analysis of existing Cloud business models, Cloud services, stakeholder relations, market configurations and value structures, this chapter develops a reference model for value chains in the Cloud. Although this model is theoretically based on Porter's value chain theory, the proposed Cloud value chain model is adapted to fit the diversity of business service scenarios in Cloud computing markets. Using this model, different service scenarios are explained. Our findings suggest new services, business opportunities, and policy practices for realizing more adoption and value creation paths in the Cloud.
Multi-keV x-ray sources from metal-lined cylindrical hohlraums
NASA Astrophysics Data System (ADS)
Jacquet, L.; Girard, F.; Primout, M.; Villette, B.; Stemmler, Ph.
2012-08-01
As multi-keV x-ray sources, plastic hohlraums with inner walls coated with titanium, copper, and germanium were fired on Omega in September 2009. For all the targets, the measured and calculated multi-keV x-ray power time histories are in good qualitative agreement. Under the same irradiation conditions, the measured multi-keV x-ray conversion rates are ˜6%-8% for titanium, ˜2% for copper, and ˜0.5% for germanium. For the titanium and copper hohlraums, the measured conversion rates are about two times higher than those given by hydroradiative computations. Conversely, for the germanium hohlraum, rather good agreement is found between measured and computed conversion rates. To explain these findings, multi-keV integrated emissivities calculated with RADIOM [M. Busquet, Phys. Fluids 85, 4191 (1993)], the nonlocal-thermal-equilibrium atomic physics model used in our computations, have been compared with emissivities obtained from several other models. These comparisons provide an attractive way to explain the discrepancies between the experimental and calculated quantitative results.
A Symbolic Model of the Nonconscious Acquisition of Information.
ERIC Educational Resources Information Center
Ling, Charles X.; Marinov, Marin
1994-01-01
Challenges Smolensky's theory that human intuitive/nonconscious cognitive processes can only be accurately explained in terms of subsymbolic computations in artificial neural networks. Symbolic learning models of two cognitive tasks involving nonconscious acquisition of information are presented: learning production rules and artificial finite…
Modeling the non-grey-body thermal emission from the full moon
NASA Technical Reports Server (NTRS)
Vogler, Karl J.; Johnson, Paul E.; Shorthill, Richard W.
1991-01-01
The present series of thermophysical computer models for solid-surfaced planetary bodies, whose surface roughness is modeled as paraboloidal craters of specified depth/diameter ratio, attempts to characterize the non-grey-body brightness temperature spectra of the moon and of the Galilean satellites. This modeling, in which nondiffuse radiation properties and surface roughness are included for rigorous analysis of scattered and reemitted radiation within a crater, explains to first order the behavior of both limb scans and disk-integrated IR brightness temperature spectra for the full moon. Only negative surface relief can explain the deviation of lunar thermal emission from smooth Lambert-surface expectations.
The Use of a Relational Database in Qualitative Research on Educational Computing.
ERIC Educational Resources Information Center
Winer, Laura R.; Carriere, Mario
1990-01-01
Discusses the use of a relational database as a data management and analysis tool for nonexperimental qualitative research, and describes the use of the Reflex Plus database in the Vitrine 2001 project in Quebec to study computer-based learning environments. Information systems are also discussed, and the use of a conceptual model is explained.…
Automatic Generation of Just-in-Time Online Assessments from Software Design Models
ERIC Educational Resources Information Center
Zualkernan, Imran A.; El-Naaj, Salim Abou; Papadopoulos, Maria; Al-Amoudi, Budoor K.; Matthews, Charles E.
2009-01-01
Computer software is pervasive in today's society. The rate at which new versions of computer software products are released is phenomenal when compared to the release rate of new products in traditional industries such as aircraft building. This rapid rate of change can partially explain why most certifications in the software industry are…
Reconstructing Constructivism: Causal Models, Bayesian Learning Mechanisms, and the Theory Theory
ERIC Educational Resources Information Center
Gopnik, Alison; Wellman, Henry M.
2012-01-01
We propose a new version of the "theory theory" grounded in the computational framework of probabilistic causal models and Bayesian learning. Probabilistic models allow a constructivist but rigorous and detailed approach to cognitive development. They also explain the learning of both more specific causal hypotheses and more abstract framework…
A Schema Theory Account of Some Cognitive Processes in Complex Learning. Technical Report No. 81.
ERIC Educational Resources Information Center
Munro, Allen; Rigney, Joseph W.
Procedural semantics models have diminished the distinction between data structures and procedures in computer simulations of human intelligence. This development has theoretical consequences for models of cognition. One type of procedural semantics model, called schema theory, is presented, and a variety of cognitive processes are explained in…
Context, Cortex, and Dopamine: A Connectionist Approach to Behavior and Biology in Schizophrenia.
ERIC Educational Resources Information Center
Cohen, Jonathan D.; Servan-Schreiber, David
1992-01-01
Using a connectionist framework, it is possible to develop models exploring effects of biologically relevant variables on behavior. The ability of such models to explain schizophrenic behavior in terms of biological disturbances is considered, and computer models are presented that simulate normal and schizophrenic behavior in an attentional task.…
A Model for Critical Games Literacy
ERIC Educational Resources Information Center
Apperley, Tom; Beavis, Catherine
2013-01-01
This article outlines a model for teachers to use in teaching both computer games and videogames in the classroom. The model illustrates the connections between in-game actions and youth gaming culture. The article explains how the out-of-school knowledge building, creation and collaboration that occur in gaming and gaming culture have an impact on…
Computational Models of Anterior Cingulate Cortex: At the Crossroads between Prediction and Effort.
Vassena, Eliana; Holroyd, Clay B; Alexander, William H
2017-01-01
In the last two decades the anterior cingulate cortex (ACC) has become one of the most investigated areas of the brain. Extensive neuroimaging evidence suggests countless functions for this region, ranging from conflict and error coding, to social cognition, pain and effortful control. In response to this burgeoning amount of data, a proliferation of computational models has tried to characterize the neurocognitive architecture of ACC. Early seminal models provided a computational explanation for a relatively circumscribed set of empirical findings, mainly accounting for EEG and fMRI evidence. More recent models have focused on ACC's contribution to effortful control. In parallel to these developments, several proposals attempted to explain within a single computational framework a wider variety of empirical findings that span different cognitive processes and experimental modalities. Here we critically evaluate these modeling attempts, highlighting the continued need to reconcile the array of disparate ACC observations within a coherent, unifying framework.
Jackson, M E; Gnadt, J W
1999-03-01
The object-oriented graphical programming language LabView was used to implement the numerical solution to a computational model of saccade generation in primates. The computational model simulates the activity and connectivity of anatomical structures known to be involved in saccadic eye movements. The LabView program provides a graphical user interface to the model that makes it easy to observe and modify the behavior of each element of the model. Essential elements of the source code of the LabView program are presented and explained. A copy of the model is available for download from the internet.
Samlan, Robin A.; Story, Brad H.; Bunton, Kate
2014-01-01
Purpose: To determine 1) how specific vocal fold structural and vibratory features relate to breathy voice quality and 2) the relation of perceived breathiness to four acoustic correlates of breathiness. Method: A computational, kinematic model of the vocal fold medial surfaces was used to specify features of vocal fold structure and vibration in a manner consistent with breathy voice. Four model parameters were altered: vocal process separation, surface bulging, vibratory nodal point, and epilaryngeal constriction. Twelve naïve listeners rated breathiness of 364 samples relative to a reference. The degree of breathiness was then compared to 1) the underlying kinematic profile and 2) four acoustic measures: cepstral peak prominence (CPP), harmonics-to-noise ratio, and two measures of spectral slope. Results: Vocal process separation alone accounted for 61.4% of the variance in perceptual rating. Adding nodal point ratio and bulging to the equation increased the explained variance to 88.7%. The acoustic measure CPP accounted for 86.7% of the variance in perceived breathiness, and explained variance increased to 92.6% with the addition of one spectral slope measure. Conclusions: Breathiness ratings were best explained kinematically by the degree of vocal process separation and acoustically by CPP. PMID:23785184
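The abstract identifies cepstral peak prominence (CPP) as the acoustic measure that best predicts perceived breathiness. The sketch below shows one common way CPP is computed (a peak-minus-trend measure on the real cepstrum); it is a hedged illustration, not the authors' analysis pipeline, and the sampling rate, pitch range, and toy signals are assumptions made for the example.

```python
import numpy as np

def cepstral_peak_prominence(frame, fs, f0_range=(60.0, 300.0)):
    """Hedged sketch of a common CPP computation (not the authors' exact
    pipeline): height of the cepstral peak within the plausible pitch range,
    measured relative to a linear trend fitted over that quefrency range."""
    frame = frame * np.hamming(len(frame))
    spectrum_db = 20 * np.log10(np.abs(np.fft.rfft(frame)) + 1e-12)
    cepstrum = np.fft.irfft(spectrum_db)             # real cepstrum (dB domain)
    quefrency = np.arange(len(cepstrum)) / fs        # seconds
    lo, hi = 1.0 / f0_range[1], 1.0 / f0_range[0]    # search window
    in_range = (quefrency >= lo) & (quefrency <= hi)
    peak_idx = np.argmax(np.where(in_range, cepstrum, -np.inf))
    slope, intercept = np.polyfit(quefrency[in_range], cepstrum[in_range], 1)
    baseline = slope * quefrency[peak_idx] + intercept
    return cepstrum[peak_idx] - baseline             # CPP in dB

# Toy check: a strongly periodic frame should yield a higher CPP than a
# noisier, breathy-like frame with the same fundamental frequency.
fs = 16000
t = np.arange(int(0.04 * fs)) / fs
rng = np.random.default_rng(0)
periodic = sum(np.sin(2 * np.pi * 120 * k * t) / k for k in range(1, 11))
breathy = 0.3 * np.sin(2 * np.pi * 120 * t) + rng.normal(0, 0.5, t.size)
print("CPP periodic frame:", round(cepstral_peak_prominence(periodic, fs), 1))
print("CPP breathy frame :", round(cepstral_peak_prominence(breathy, fs), 1))
```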
Quantum lattice model solver HΦ
NASA Astrophysics Data System (ADS)
Kawamura, Mitsuaki; Yoshimi, Kazuyoshi; Misawa, Takahiro; Yamaji, Youhei; Todo, Synge; Kawashima, Naoki
2017-08-01
HΦ [aitch-phi] is a program package based on the Lanczos-type eigenvalue solution applicable to a broad range of quantum lattice models, i.e., arbitrary quantum lattice models with two-body interactions, including the Heisenberg model, the Kitaev model, the Hubbard model and the Kondo-lattice model. While it works well on PCs and PC-clusters, HΦ also runs efficiently on massively parallel computers, which considerably extends the tractable range of system sizes. In addition, unlike most existing packages, HΦ supports finite-temperature calculations through the method of thermal pure quantum (TPQ) states. In this paper, we explain the theoretical background and user interface of HΦ. We also show benchmark results of HΦ on supercomputers such as the K computer at RIKEN Advanced Institute for Computational Science (AICS) and SGI ICE XA (Sekirei) at the Institute for Solid State Physics (ISSP).
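As a toy illustration of the Lanczos-type eigenvalue solution that HΦ is built around, the following NumPy sketch (dense matrices, not HΦ itself) estimates the ground-state energy of a small open spin-1/2 Heisenberg chain and checks it against full diagonalization; the chain length and iteration count are arbitrary choices for the example.

```python
import numpy as np

# Toy NumPy sketch (dense, not HΦ itself): Lanczos estimate of the ground-state
# energy of a small open spin-1/2 Heisenberg chain vs. full diagonalization.
sx = np.array([[0.0, 0.5], [0.5, 0.0]])
sy = np.array([[0.0, -0.5j], [0.5j, 0.0]])
sz = np.array([[0.5, 0.0], [0.0, -0.5]])
identity = np.eye(2)

def site_operator(op, site, n_sites):
    """Embed a single-site operator at the given site of an n-site chain."""
    out = np.array([[1.0 + 0j]])
    for i in range(n_sites):
        out = np.kron(out, op if i == site else identity)
    return out

def heisenberg_chain(n_sites, J=1.0):
    dim = 2 ** n_sites
    H = np.zeros((dim, dim), dtype=complex)
    for i in range(n_sites - 1):
        for op in (sx, sy, sz):
            H += J * site_operator(op, i, n_sites) @ site_operator(op, i + 1, n_sites)
    return H

def lanczos_ground_energy(H, n_steps=60, seed=0):
    """Plain Lanczos iteration (no reorthogonalization) for the lowest eigenvalue."""
    rng = np.random.default_rng(seed)
    v = rng.normal(size=H.shape[0]) + 0j
    v /= np.linalg.norm(v)
    v_prev = np.zeros_like(v)
    beta = 0.0
    alphas, betas = [], []
    for _ in range(n_steps):
        w = H @ v - beta * v_prev
        alpha = np.vdot(v, w).real
        alphas.append(alpha)
        w -= alpha * v
        beta = np.linalg.norm(w)
        if beta < 1e-12:
            break
        betas.append(beta)
        v_prev, v = v, w / beta
    k = len(alphas)
    T = np.diag(alphas) + np.diag(betas[:k - 1], 1) + np.diag(betas[:k - 1], -1)
    return np.linalg.eigvalsh(T)[0]

H = heisenberg_chain(8)
print("Lanczos ground-state energy:", round(lanczos_ground_energy(H), 6))
print("Exact   ground-state energy:", round(np.linalg.eigvalsh(H)[0], 6))
```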
The Inversion of Sensory Processing by Feedback Pathways: A Model of Visual Cognitive Functions.
ERIC Educational Resources Information Center
Harth, E.; And Others
1987-01-01
Explains the hierarchic structure of the mammalian visual system. Proposes a model in which feedback pathways serve to modify sensory stimuli in ways that enhance and complete sensory input patterns. Investigates the functioning of the system through computer simulations. (ML)
Venkataramani, PrasannaVenkhatesh; Gopal, Atul; Murthy, Aditya
2018-03-01
Although race models have been extensively used to study inhibitory control, the mechanisms that enable change of reach plans in the context of race models remain unexplored. We used a redirect task in which targets occasionally changed their locations to study the control of reaching movements during movement planning and execution phases. We tested nine different race model architectures that could explain the redirect behavior of reaching movements. We show that an independent GO-STOP-GO model that reflects a plan-abort-re-plan strategy involving non-interacting elements successfully explained the various behavioral measures such as the compensation function and the pattern of error response reaction times. By extending the same race model to the execution phase, we could explain the extent and the pattern of hypometric trials. Interestingly, the race model also provided evidence that redirecting a movement during planning and execution shared the same inhibitory mechanism. Taken together, this study demonstrates the applicability of an independent race model to understand the computational mechanisms underlying the control of reach movements. © 2018 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
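To make the race architecture concrete, here is a hedged Monte Carlo sketch of an independent GO-STOP-GO race of the kind evaluated in the paper. The accumulator rates, thresholds, and delays are illustrative stand-ins, not the fitted values from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def finish_time(mean_rate, sd_rate, threshold=100.0):
    """Time for a linear accumulator with a trial-random rate to reach threshold
    (LATER-style; rates in units/ms, parameters illustrative only)."""
    rate = max(rng.normal(mean_rate, sd_rate), 1e-3)
    return threshold / rate

def redirect_trial(tsd, stop_delay=50.0):
    """One redirect trial: the initial GO races a STOP that starts at the
    target-step delay (tsd) plus an afferent delay. Returns True if the first
    movement escapes (a non-compensated error), False if it is redirected."""
    go1 = finish_time(0.5, 0.1)                  # initial reach plan
    stop = tsd + stop_delay + finish_time(1.0, 0.2)
    return go1 < stop                            # GO1 wins -> reach to old target

tsds = np.arange(0, 301, 50)                     # target-step delays in ms
for tsd in tsds:
    p_error = np.mean([redirect_trial(tsd) for _ in range(5000)])
    print(f"TSD {tsd:3d} ms -> P(non-compensated) = {p_error:.2f}")
# The rising probability of non-compensated responses with TSD is the
# compensation function used to compare candidate race architectures.
```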
ERIC Educational Resources Information Center
Gepner, Ivan
2001-01-01
Explains the mechanism of producing dynamic computer pages, which is based on three technologies: (1) the document object model; (2) cascading stylesheets; and (3) JavaScript. Discusses the applications of these techniques in genetics and developmental biology. (YDS)
Theoretical Investigation of oxides for batteries and fuel cell applications
NASA Astrophysics Data System (ADS)
Ganesh, Panchapakesan; Lubimtsev, Andrew A.; Balachandran, Janakiraman
I will present theoretical studies of Li-ion and proton-conducting oxides using a combination of theory and computation involving Density Functional Theory-based atomistic modeling, cluster-expansion studies, global optimization, high-throughput computations and machine-learning-based investigation of ionic transport in oxide materials. In Li-ion intercalated oxides, we explain the experimentally observed (Nature Materials 12, 518-522 (2013)) 'intercalation pseudocapacitance' phenomenon, and explain why Nb2O5 is special in showing this behavior when Li-ions are intercalated (J. Mater. Chem. A, 2013, 1, 14951-14956) but not when Na-ions are used. In addition, we explore Li-ion intercalation theoretically in the VO2 (B) phase, which is somewhat structurally similar to Nb2O5, and predict an interesting role of site-trapping on the voltage and capacity of the material, validated by ongoing experiments. Computations of proton-conducting oxides explain why Y-doped BaZrO3, one of the fastest proton-conducting oxides, shows a decrease in conductivity above 20% Y-doping. Further, using high-throughput computations and machine learning tools we discover general principles to improve proton conductivity. Acknowledgements: LDRD at ORNL and CNMS at ORNL
Explaining neural signals in human visual cortex with an associative learning model.
Jiang, Jiefeng; Schmajuk, Nestor; Egner, Tobias
2012-08-01
"Predictive coding" models posit a key role for associative learning in visual cognition, viewing perceptual inference as a process of matching (learned) top-down predictions (or expectations) against bottom-up sensory evidence. At the neural level, these models propose that each region along the visual processing hierarchy entails one set of processing units encoding predictions of bottom-up input, and another set computing mismatches (prediction error or surprise) between predictions and evidence. This contrasts with traditional views of visual neurons operating purely as bottom-up feature detectors. In support of the predictive coding hypothesis, a recent human neuroimaging study (Egner, Monti, & Summerfield, 2010) showed that neural population responses to expected and unexpected face and house stimuli in the "fusiform face area" (FFA) could be well-described as a summation of hypothetical face-expectation and -surprise signals, but not by feature detector responses. Here, we used computer simulations to test whether these imaging data could be formally explained within the broader framework of a mathematical neural network model of associative learning (Schmajuk, Gray, & Lam, 1996). Results show that FFA responses could be fit very closely by model variables coding for conditional predictions (and their violations) of stimuli that unconditionally activate the FFA. These data document that neural population signals in the ventral visual stream that deviate from classic feature detection responses can formally be explained by associative prediction and surprise signals.
NASA Astrophysics Data System (ADS)
Neves, Rui Gomes; Teodoro, Vítor Duarte
2012-09-01
A teaching approach aiming at an epistemologically balanced integration of computational modelling in science and mathematics education is presented. The approach is based on interactive engagement learning activities built around computational modelling experiments that span the range of different kinds of modelling from explorative to expressive modelling. The activities are designed to make a progressive introduction to scientific computation without requiring prior development of a working knowledge of programming, generate and foster the resolution of cognitive conflicts in the understanding of scientific and mathematical concepts and promote performative competency in the manipulation of different and complementary representations of mathematical models. The activities are supported by interactive PDF documents which explain the fundamental concepts, methods and reasoning processes using text, images and embedded movies, and include free space for multimedia enriched student modelling reports and teacher feedback. To illustrate, an example from physics implemented in the Modellus environment and tested in undergraduate university general physics and biophysics courses is discussed.
A System Computational Model of Implicit Emotional Learning
Puviani, Luca; Rama, Sidita
2016-01-01
Nowadays, the experimental study of emotional learning is commonly based on classical conditioning paradigms and models, which have been thoroughly investigated in the last century. Unfortunately, models based on classical conditioning are unable to explain or predict important psychophysiological phenomena, such as the failure of the extinction of emotional responses in certain circumstances (for instance, those observed in evaluative conditioning, in post-traumatic stress disorders and in panic attacks). In this manuscript, starting from the experimental results available from the literature, a computational model of implicit emotional learning based both on prediction-error computation and on statistical inference is developed. The model quantitatively predicts (a) the occurrence of evaluative conditioning, (b) the dynamics and the resistance-to-extinction of the traumatic emotional responses, (c) the mathematical relation between classical conditioning and unconditioned stimulus revaluation. Moreover, we discuss how the derived computational model can lead to the development of new animal models for resistant-to-extinction emotional reactions and novel methodologies of emotion modulation. PMID:27378898
A computational cognitive model of syntactic priming.
Reitter, David; Keller, Frank; Moore, Johanna D
2011-01-01
The psycholinguistic literature has identified two syntactic adaptation effects in language production: rapidly decaying short-term priming and long-lasting adaptation. To explain both effects, we present an ACT-R model of syntactic priming based on a wide-coverage, lexicalized syntactic theory that explains priming as facilitation of lexical access. In this model, two well-established ACT-R mechanisms, base-level learning and spreading activation, account for long-term adaptation and short-term priming, respectively. Our model simulates incremental language production and in a series of modeling studies, we show that it accounts for (a) the inverse frequency interaction; (b) the absence of a decay in long-term priming; and (c) the cumulativity of long-term adaptation. The model also explains the lexical boost effect and the fact that it only applies to short-term priming. We also present corpus data that verify a prediction of the model, that is, that the lexical boost affects all lexical material, rather than just heads. Copyright © 2011 Cognitive Science Society, Inc.
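For readers unfamiliar with the two ACT-R mechanisms named above, the sketch below evaluates the standard textbook equations for base-level learning and spreading activation; the decay rate, timestamps, and associative strengths are illustrative, not the parameters fitted in the paper.

```python
import math

def base_level_activation(presentation_times, now, decay=0.5):
    """Standard ACT-R base-level learning: B = ln(sum_j (now - t_j)**-d).
    Recent and frequent presentations raise activation (long-term adaptation)."""
    return math.log(sum((now - t) ** (-decay) for t in presentation_times))

def total_activation(base, source_weights, assoc_strengths):
    """Base-level activation plus spreading activation from the current
    context: A = B + sum_j W_j * S_ji (short-term priming)."""
    return base + sum(w * s for w, s in zip(source_weights, assoc_strengths))

# A syntactic construction used at 10 s and 60 s, evaluated at 65 s vs. 300 s:
recent = total_activation(base_level_activation([10, 60], 65),
                          source_weights=[0.5], assoc_strengths=[1.2])
later = total_activation(base_level_activation([10, 60], 300),
                         source_weights=[0.0], assoc_strengths=[1.2])
print(f"just after use: {recent:.2f}   minutes later: {later:.2f}")
# The spreading-activation term vanishes as soon as the priming context is
# gone, while the base-level term decays slowly: a toy mirror of short-term
# priming versus long-term adaptation in the model.
```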
Students' use of atomic and molecular models in learning chemistry
NASA Astrophysics Data System (ADS)
O'Connor, Eileen Ann
1997-09-01
The objective of this study was to investigate the development of introductory college chemistry students' use of atomic and molecular models to explain physical and chemical phenomena. The study was conducted during the first semester of the course at a "University and College II" public institution (Carnegie Commission of Higher Education, 1973). Students' use of models was observed during one-on-one interviews conducted over the course of the semester. The approach to introductory chemistry emphasized models. Students were exposed to over two hundred and fifty atomic and molecular models during lectures, were assigned text readings that used over a thousand models, and worked interactively with dozens of models on the computer. These models illustrated various features of the spatial organization of valence electrons and nuclei in atoms and molecules. Despite extensive exposure to models in lectures, in the textbook, and in computer-based activities, the students in the study based their explanations in large part on a simple Bohr model (electrons arranged in concentric circles around the nuclei)--a model that had not been introduced in the course. Students used visual information from their models to construct their explanations, while overlooking inter-atomic and intra-molecular forces, which are not represented explicitly in the models. In addition, students often explained phenomena by adding separate pieces of information about the topic without integrating or logically relating this information into a cohesive explanation. The results of the study demonstrate that despite the extensive use of models in chemistry instruction, students do not necessarily apply them appropriately in explaining chemical and physical phenomena. The results of this study suggest that for the power of models as aids to learning to be more fully realized, chemistry professors must give more attention to the selection, use, integration, and limitations of models in their instruction.
Automated Tutoring in Interactive Environments: A Task-Centered Approach.
ERIC Educational Resources Information Center
Wolz, Ursula; And Others
1989-01-01
Discusses tutoring and consulting functions in interactive computer environments. Tutoring strategies are considered, the expert model and the user model are described, and GENIE (Generated Informative Explanations)--an answer generating system for the Berkeley Unix Mail system--is explained as an example of an automated consulting system. (33…
Modelling Cognitive Style in a Peer Help Network.
ERIC Educational Resources Information Center
Bull, Susan; McCalla, Gord
2002-01-01
Explains I-Help, a computer-based peer help network where students can ask and answer questions about assignments and courses based on the metaphor of a help desk. Highlights include cognitive style; user modeling in I-Help; matching helpers to helpees; and types of questions. (Contains 64 references.) (LRW)
Telecommunications and the Classroom: Where We've Been and Where We Should Be Going.
ERIC Educational Resources Information Center
Goldberg, Fred S.
1988-01-01
Discussion of the use of telecommunications highlights projects designed by the New York City Board of Education to investigate telecommunications alternatives for the classroom. Telecommunications systems models are described, including electronic bulletin boards and networking; and instructional models are explained, including computer mediated…
Multipulse control of saccadic eye movements
NASA Technical Reports Server (NTRS)
Lehman, S. L.; Stark, L.
1981-01-01
We present three conclusions regarding the neural control of saccadic eye movements, resulting from comparisons between recorded movements and computer simulations. The controller signal to the muscles is probably a multipulse-step. This kind of signal drives the fastest model trajectories. Finally, multipulse signals explain differences between model and electrophysiological results.
ERIC Educational Resources Information Center
Ramsey, Gregory W.
2010-01-01
This dissertation proposes and tests a theory explaining how people make decisions to achieve a goal in a specific task environment. The theory is represented as a computational model and implemented as a computer program. The task studied was primary care physicians treating patients with type 2 diabetes. Some physicians succeed in achieving…
ERIC Educational Resources Information Center
Vest, David; Tajchman, Ron
A study explained the manner in which a computer-assisted tutorial was built and assessed the utility of the courseware. The tutorial was designed to demonstrate the efficacy of good organization in informing the audience about a topic and provide appropriate models for the presentation of the well-organized informative speech. The topic of the…
A Synchronization Account of False Recognition
ERIC Educational Resources Information Center
Johns, Brendan T.; Jones, Michael N.; Mewhort, Douglas J. K.
2012-01-01
We describe a computational model to explain a variety of results in both standard and false recognition. A key attribute of the model is that it uses plausible semantic representations for words, built through exposure to a linguistic corpus. A study list is encoded in the model as a gist trace, similar to the proposal of fuzzy trace theory…
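A hedged toy sketch of the gist-trace idea follows (not the authors' model, and with random vectors standing in for corpus-derived semantic representations): the study list is encoded as the sum of its word vectors, and a probe's familiarity is its cosine similarity to that gist, so a semantically related lure can score nearly as high as, or higher than, a studied item.

```python
import numpy as np

# Toy sketch of the gist-trace idea (not the authors' actual model): words get
# semantic vectors, the study list is encoded as the sum of its vectors, and a
# probe's familiarity is its cosine similarity to that gist. Random vectors
# stand in for corpus-derived representations.
rng = np.random.default_rng(1)
dim = 300
vocab = ["bed", "rest", "awake", "dream", "sleep", "chair"]
base = {w: rng.normal(size=dim) for w in vocab}
# Make the critical lure "sleep" semantically close to the studied words.
for w in ["bed", "rest", "awake", "dream"]:
    base[w] = 0.6 * base[w] + 0.4 * base["sleep"]

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

study_list = ["bed", "rest", "awake", "dream"]
gist = sum(base[w] for w in study_list)           # gist trace of the study list

for probe in ["bed", "sleep", "chair"]:           # old item, related lure, new item
    print(f"{probe:>6}: familiarity = {cosine(base[probe], gist):.2f}")
# Studied words and the related lure both score far above the unrelated item;
# the lure can rival or exceed studied items, which is the false-recognition
# pattern such models aim to capture.
```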
Computational models of epileptiform activity.
Wendling, Fabrice; Benquet, Pascal; Bartolomei, Fabrice; Jirsa, Viktor
2016-02-15
We reviewed computer models that have been developed to reproduce and explain epileptiform activity. Unlike other already-published reviews on computer models of epilepsy, the proposed overview starts from the various types of epileptiform activity encountered during both interictal and ictal periods. Computational models proposed so far in the context of partial and generalized epilepsies are classified according to the following taxonomy: neural mass, neural field, detailed network and formal mathematical models. Insights gained about interictal epileptic spikes and high-frequency oscillations, about fast oscillations at seizure onset, about seizure initiation and propagation, about spike-wave discharges and about status epilepticus are described. This review shows the richness and complementarity of the various modeling approaches as well as the fruitful contribution of the computational neuroscience community in the field of epilepsy research. It shows that models have progressively gained acceptance and are now considered as an efficient way of integrating structural, functional and pathophysiological data about neural systems into "coherent and interpretable views". The advantages, limitations and future of modeling approaches are discussed. Perspectives in epilepsy research and clinical epileptology indicate that very promising directions are foreseen, like model-guided experiments or model-guided therapeutic strategy, among others. Copyright © 2015 Elsevier B.V. All rights reserved.
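As a minimal example of the neural-mass class covered in the review, the sketch below simulates a Wilson-Cowan-style excitatory/inhibitory pair (a generic toy, not any specific model from the review). Depending on the external drive, the pair either settles onto a limit cycle or into a steady state, a crude caricature of transitions between steady and rhythmic activity; all parameters are illustrative.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def simulate_ei_pair(drive, t_max=1.0, dt=1e-4):
    """Toy Wilson-Cowan-style excitatory/inhibitory pair (a generic member of
    the neural-mass family, not a specific published model). Returns the
    excitatory rate trace for a given extra excitatory drive."""
    w_ee, w_ei, w_ie, w_ii = 8.0, 10.0, 10.0, 0.0    # illustrative couplings
    tau_e, tau_i = 0.005, 0.010                      # time constants (s)
    E, I = 0.6, 0.4
    trace = np.empty(int(t_max / dt))
    for step in range(trace.size):
        dE = (-E + sigmoid(w_ee * E - w_ei * I + 1.0 + drive)) / tau_e
        dI = (-I + sigmoid(w_ie * E - w_ii * I - 5.0)) / tau_i
        E += dt * dE
        I += dt * dI
        trace[step] = E
    return trace

for drive in (0.0, 4.0):
    e = simulate_ei_pair(drive)[2000:]               # discard the first 0.2 s
    state = "rhythmic" if e.max() - e.min() > 0.05 else "steady"
    print(f"drive = {drive}: E stays in [{e.min():.2f}, {e.max():.2f}] -> {state}")
# With balanced drive the E-I loop oscillates; with strong extra excitation
# the excitatory population saturates and activity settles to a steady state.
```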
Parametric instabilities of rotor-support systems with application to industrial ventilators
NASA Technical Reports Server (NTRS)
Parszewski, Z.; Krodkiemski, T.; Marynowski, K.
1980-01-01
The interaction of rotor-support systems with parametric excitation is considered for both unequal principal shaft stiffnesses (generators) and offset-disc rotors (ventilators). Instability regions and types of instability are computed in the first case, and parametric resonances in the second case. Computed and experimental results are compared for laboratory machine models. A field case study of parametric vibrations in industrial ventilators is reported. Computed parametric resonances are confirmed by field measurements, and some industrial failures are explained. The dynamic influence and gyroscopic effect of supporting structures are also shown and computed.
Are some CEMP-s stars the daughters of spinstars?
NASA Astrophysics Data System (ADS)
Choplin, Arthur; Hirschi, Raphael; Meynet, Georges; Ekström, Sylvia
2017-11-01
Carbon-enhanced metal-poor (CEMP)-s stars are long-lived low-mass stars with a very low iron content as well as overabundances of carbon and s-elements. Their peculiar chemical pattern is often explained by pollution from an asymptotic giant branch (AGB) star companion. Recent observations have shown that most CEMP-s stars are in binary systems, providing support to the AGB companion scenario. A few CEMP-s stars, however, appear to be single. We inspect four apparently single CEMP-s stars and discuss the possibility that they formed from the ejecta of a previous-generation massive star, referred to as the "source" star. In order to investigate this scenario, we computed low-metallicity massive-star models with and without rotation and including complete s-process nucleosynthesis. We find that non-rotating source stars cannot explain the observed abundances of any of the four CEMP-s stars. Three out of the four CEMP-s stars can be explained by a 25 M⊙ source star with v_ini ≈ 500 km s-1 (a spinstar). The fourth CEMP-s star has a high Pb abundance that cannot be explained by any of the models we computed. Since spinstars and AGB stars predict different ranges of [O/Fe] and [ls/hs], these ratios could be an interesting way to further test the two scenarios.
Schmidt, James R; De Houwer, Jan; Rothermund, Klaus
2016-12-01
The current paper presents an extension of the Parallel Episodic Processing model. The model is developed for simulating behaviour in performance (i.e., speeded response time) tasks and learns to anticipate both how and when to respond based on retrieval of memories of previous trials. With one fixed parameter set, the model is shown to successfully simulate a wide range of different findings. These include: practice curves in the Stroop paradigm, contingency learning effects, learning acquisition curves, stimulus-response binding effects, mixing costs, and various findings from the attentional control domain. The results demonstrate several important points. First, the same retrieval mechanism parsimoniously explains stimulus-response binding, contingency learning, and practice effects. Second, as performance improves with practice, any effects will shrink with it. Third, a model of simple learning processes is sufficient to explain phenomena that are typically (but perhaps incorrectly) interpreted in terms of higher-order control processes. More generally, we argue that computational models with a fixed parameter set and wider breadth should be preferred over those that are restricted to a narrow set of phenomena. Copyright © 2016 Elsevier Inc. All rights reserved.
Tandem internal models execute motor learning in the cerebellum.
Honda, Takeru; Nagao, Soichi; Hashimoto, Yuji; Ishikawa, Kinya; Yokota, Takanori; Mizusawa, Hidehiro; Ito, Masao
2018-06-25
In performing skillful movement, humans use predictions from internal models formed by repetition learning. However, the computational organization of internal models in the brain remains unknown. Here, we demonstrate that a computational architecture employing a tandem configuration of forward and inverse internal models enables efficient motor learning in the cerebellum. The model predicted learning adaptations observed in hand-reaching experiments in humans wearing a prism lens and explained the kinetic components of these behavioral adaptations. The tandem system also predicted a form of subliminal motor learning that was experimentally validated after training intentional misses of hand targets. Patients with cerebellar degeneration disease showed behavioral impairments consistent with tandemly arranged internal models. These findings validate computational tandemization of internal models in motor control and its potential uses in more complex forms of learning and cognition. Copyright © 2018 the Author(s). Published by PNAS.
Pecevski, Dejan; Buesing, Lars; Maass, Wolfgang
2011-01-01
An important open problem of computational neuroscience is the generic organization of computations in networks of neurons in the brain. We show here through rigorous theoretical analysis that inherent stochastic features of spiking neurons, in combination with simple nonlinear computational operations in specific network motifs and dendritic arbors, enable networks of spiking neurons to carry out probabilistic inference through sampling in general graphical models. In particular, it enables them to carry out probabilistic inference in Bayesian networks with converging arrows (“explaining away”) and with undirected loops, that occur in many real-world tasks. Ubiquitous stochastic features of networks of spiking neurons, such as trial-to-trial variability and spontaneous activity, are necessary ingredients of the underlying computational organization. We demonstrate through computer simulations that this approach can be scaled up to neural emulations of probabilistic inference in fairly large graphical models, yielding some of the most complex computations that have been carried out so far in networks of spiking neurons. PMID:22219717
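The following sketch shows the kind of probabilistic inference the abstract refers to, in its simplest non-neural form: Gibbs sampling in a two-cause Bayesian network with converging arrows, where observing one cause explains away the other. The network, priors, and noisy-OR parameters are illustrative assumptions; the abstract's contribution concerns how spiking networks can implement such sampling, which this sketch does not model.

```python
import random

# Toy Gibbs sampler for a two-cause Bayesian network (burglary/earthquake ->
# alarm), a standard "explaining away" configuration. Priors and noisy-OR
# parameters are illustrative; no spiking implementation is attempted here.
random.seed(0)
P_B, P_E = 0.1, 0.1                      # prior probabilities of the two causes

def p_alarm(b, e):
    """Noisy-OR likelihood of the alarm given the two binary causes."""
    return 1.0 - (1.0 - 0.9 * b) * (1.0 - 0.8 * e) * (1.0 - 0.01)

def gibbs_p_burglary(evidence_e=None, n_samples=200000):
    """Estimate P(B = 1 | alarm = 1, optionally E) by Gibbs sampling over the
    unobserved causes."""
    b, e = 0, 0
    count_b, kept = 0, 0
    for step in range(n_samples):
        # Resample B from P(B | E, alarm = 1) ∝ P(B) * P(alarm = 1 | B, E)
        w1 = P_B * p_alarm(1, e)
        w0 = (1 - P_B) * p_alarm(0, e)
        b = 1 if random.random() < w1 / (w0 + w1) else 0
        if evidence_e is None:           # E unobserved: resample it too
            w1 = P_E * p_alarm(b, 1)
            w0 = (1 - P_E) * p_alarm(b, 0)
            e = 1 if random.random() < w1 / (w0 + w1) else 0
        else:
            e = evidence_e
        if step > 1000:                  # discard burn-in
            count_b += b
            kept += 1
    return count_b / kept

print("P(B=1 | alarm)             ≈", round(gibbs_p_burglary(), 3))
print("P(B=1 | alarm, earthquake) ≈", round(gibbs_p_burglary(evidence_e=1), 3))
# The burglary probability drops once the earthquake is also observed: the
# earthquake "explains away" the alarm, the converging-arrows case highlighted
# in the abstract.
```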
Jozwik, Kamila M.; Kriegeskorte, Nikolaus; Storrs, Katherine R.; Mur, Marieke
2017-01-01
Recent advances in Deep convolutional Neural Networks (DNNs) have enabled unprecedentedly accurate computational models of brain representations, and present an exciting opportunity to model diverse cognitive functions. State-of-the-art DNNs achieve human-level performance on object categorisation, but it is unclear how well they capture human behavior on complex cognitive tasks. Recent reports suggest that DNNs can explain significant variance in one such task, judging object similarity. Here, we extend these findings by replicating them for a rich set of object images, comparing performance across layers within two DNNs of different depths, and examining how the DNNs’ performance compares to that of non-computational “conceptual” models. Human observers performed similarity judgments for a set of 92 images of real-world objects. Representations of the same images were obtained in each of the layers of two DNNs of different depths (8-layer AlexNet and 16-layer VGG-16). To create conceptual models, other human observers generated visual-feature labels (e.g., “eye”) and category labels (e.g., “animal”) for the same image set. Feature labels were divided into parts, colors, textures and contours, while category labels were divided into subordinate, basic, and superordinate categories. We fitted models derived from the features, categories, and from each layer of each DNN to the similarity judgments, using representational similarity analysis to evaluate model performance. In both DNNs, similarity within the last layer explains most of the explainable variance in human similarity judgments. The last layer outperforms almost all feature-based models. Late and mid-level layers outperform some but not all feature-based models. Importantly, categorical models predict similarity judgments significantly better than any DNN layer. Our results provide further evidence for commonalities between DNNs and brain representations. Models derived from visual features other than object parts perform relatively poorly, perhaps because DNNs more comprehensively capture the colors, textures and contours which matter to human object perception. However, categorical models outperform DNNs, suggesting that further work may be needed to bring high-level semantic representations in DNNs closer to those extracted by humans. Modern DNNs explain similarity judgments remarkably well considering they were not trained on this task, and are promising models for many aspects of human cognition. PMID:29062291
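The layer-by-layer comparison described here rests on representational similarity analysis (RSA). The sketch below shows that step with random toy data standing in for DNN-layer activations and human similarity judgments; it is not the authors' code, and the dissimilarity measure (1 minus Pearson correlation) and Spearman comparison are common choices rather than a claim about their exact pipeline.

```python
import numpy as np
from scipy.stats import spearmanr
from scipy.spatial.distance import pdist

# Hedged sketch of the representational similarity analysis (RSA) step, with
# random toy data standing in for DNN-layer activations and human judgments.
rng = np.random.default_rng(0)
n_images, n_units = 92, 256

layer_activations = rng.normal(size=(n_images, n_units))    # stand-in DNN layer
# Model RDM: 1 - Pearson correlation between activation patterns (condensed form)
model_rdm = pdist(layer_activations, metric="correlation")
# Stand-in human dissimilarity judgments for the same image pairs
human_rdm = model_rdm + rng.normal(0, 0.2, size=model_rdm.shape)

rho, _ = spearmanr(model_rdm, human_rdm)
print(f"Spearman correlation between model and human RDMs: {rho:.2f}")
# In the study, each DNN layer and each feature/category model is scored this
# way against the judgment data, and the models are compared on that score.
```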
NASA Astrophysics Data System (ADS)
Nishida, R. T.; Beale, S. B.; Pharoah, J. G.; de Haart, L. G. J.; Blum, L.
2018-01-01
This work is among the first where the results of an extensive experimental research programme are compared to performance calculations of a comprehensive computational fluid dynamics model for a solid oxide fuel cell stack. The model, which combines electrochemical reactions with momentum, heat, and mass transport, is used to obtain results for an established industrial-scale fuel cell stack design with complex manifolds. To validate the model, comparisons with experimentally gathered voltage and temperature data are made for the Jülich Mark-F, 18-cell stack operating in a test furnace. Good agreement is obtained between the model and experiment results for cell voltages and temperature distributions, confirming the validity of the computational methodology for stack design. The transient effects during ramp up of current in the experiment may explain a lower average voltage than model predictions for the power curve.
A Neural Model of Visually Guided Steering, Obstacle Avoidance, and Route Selection
ERIC Educational Resources Information Center
Elder, David M.; Grossberg, Stephen; Mingolla, Ennio
2009-01-01
A neural model is developed to explain how humans can approach a goal object on foot while steering around obstacles to avoid collisions in a cluttered environment. The model uses optic flow from a 3-dimensional virtual reality environment to determine the position of objects on the basis of motion discontinuities and computes heading direction,…
ERIC Educational Resources Information Center
Simmering, Vanessa R.; Patterson, Rebecca
2012-01-01
Numerous studies have established that visual working memory has a limited capacity that increases during childhood. However, debate continues over the source of capacity limits and its developmental increase. Simmering (2008) adapted a computational model of spatial cognitive development, the Dynamic Field Theory, to explain not only the source…
Cognitive Modeling of Individual Variation in Reference Production and Comprehension
Hendriks, Petra
2016-01-01
A challenge for most theoretical and computational accounts of linguistic reference is the observation that language users vary considerably in their referential choices. Part of the variation observed among and within language users and across tasks may be explained from variation in the cognitive resources available to speakers and listeners. This paper presents a computational model of reference production and comprehension developed within the cognitive architecture ACT-R. Through simulations with this ACT-R model, it is investigated how cognitive constraints interact with linguistic constraints and features of the linguistic discourse in speakers’ production and listeners’ comprehension of referring expressions in specific tasks, and how this interaction may give rise to variation in referential choice. The ACT-R model of reference explains and predicts variation among language users in their referential choices as a result of individual and task-related differences in processing speed and working memory capacity. Because of limitations in their cognitive capacities, speakers sometimes underspecify or overspecify their referring expressions, and listeners sometimes choose incorrect referents or are overly liberal in their interpretation of referring expressions. PMID:27092101
NASA Astrophysics Data System (ADS)
Purwins, Hendrik; Herrera, Perfecto; Grachten, Maarten; Hazan, Amaury; Marxer, Ricard; Serra, Xavier
2008-09-01
We present a review on perception and cognition models designed for or applicable to music. An emphasis is put on computational implementations. We include findings from different disciplines: neuroscience, psychology, cognitive science, artificial intelligence, and musicology. The article summarizes the methodology that these disciplines use to approach the phenomena of music understanding, the localization of musical processes in the brain, and the flow of cognitive operations involved in turning physical signals into musical symbols, going from the transducers to the memory systems of the brain. We discuss formal models developed to emulate, explain and predict phenomena involved in early auditory processing, pitch processing, grouping, source separation, and music structure computation. We cover generic computational architectures of attention, memory, and expectation that can be instantiated and tuned to deal with specific musical phenomena. Criteria for the evaluation of such models are presented and discussed. Thereby, we lay out the general framework that provides the basis for the discussion of domain-specific music models in Part II.
Thiessen, Erik D
2017-01-05
Statistical learning has been studied in a variety of different tasks, including word segmentation, object identification, category learning, artificial grammar learning and serial reaction time tasks (e.g. Saffran et al. 1996 Science 274, 1926-1928; Orban et al. 2008 Proceedings of the National Academy of Sciences 105, 2745-2750; Thiessen & Yee 2010 Child Development 81, 1287-1303; Saffran 2002 Journal of Memory and Language 47, 172-196; Misyak & Christiansen 2012 Language Learning 62, 302-331). The difference among these tasks raises questions about whether they all depend on the same kinds of underlying processes and computations, or whether they are tapping into different underlying mechanisms. Prior theoretical approaches to statistical learning have often tried to explain or model learning in a single task. However, in many cases these approaches appear inadequate to explain performance in multiple tasks. For example, explaining word segmentation via the computation of sequential statistics (such as transitional probability) provides little insight into the nature of sensitivity to regularities among simultaneously presented features. In this article, we will present a formal computational approach that we believe is a good candidate to provide a unifying framework to explore and explain learning in a wide variety of statistical learning tasks. This framework suggests that statistical learning arises from a set of processes that are inherent in memory systems, including activation, interference, integration of information and forgetting (e.g. Perruchet & Vinter 1998 Journal of Memory and Language 39, 246-263; Thiessen et al. 2013 Psychological Bulletin 139, 792-814). From this perspective, statistical learning does not involve explicit computation of statistics, but rather the extraction of elements of the input into memory traces, and subsequent integration across those memory traces that emphasize consistent information (Thiessen and Pavlik 2013 Cognitive Science 37, 310-343). This article is part of the themed issue 'New frontiers for statistical learning in the cognitive sciences'. © 2016 The Author(s).
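As a concrete reference point for the "explicit computation of sequential statistics" that the article argues against as a mechanism, the sketch below computes forward transitional probabilities over a synthetic syllable stream built from Saffran-style nonsense words; the stream and word inventory are toy assumptions.

```python
import random
from collections import Counter

# Hedged illustration of the "explicit computation of sequential statistics"
# the article contrasts with memory-based accounts: forward transitional
# probabilities P(next syllable | current syllable) over a synthetic stream
# built from three Saffran-style nonsense words.
random.seed(0)
words = ["tupiro", "golabu", "bidaku"]
syllables = []
for _ in range(300):
    word = random.choice(words)
    syllables += [word[i:i + 2] for i in range(0, 6, 2)]

pair_counts = Counter(zip(syllables, syllables[1:]))
first_counts = Counter(syllables[:-1])

def transitional_probability(a, b):
    return pair_counts[(a, b)] / first_counts[a]

print("within a word, tu->pi :", round(transitional_probability("tu", "pi"), 2))
print("across words,  ro->go :", round(transitional_probability("ro", "go"), 2))
# Within-word transitions are near 1.0 while across-word transitions hover
# around 1/3; segmentation-by-statistics accounts treat the low-probability
# transitions as word boundaries.
```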
A survey of real face modeling methods
NASA Astrophysics Data System (ADS)
Liu, Xiaoyue; Dai, Yugang; He, Xiangzhen; Wan, Fucheng
2017-09-01
The face model has always been a research challenge in computer graphics because it involves the coordination of many organs and features of the face. This article explains two kinds of face modeling methods, one data-driven and one based on parameter control; analyzes their content and background; summarizes their advantages and disadvantages; and concludes that the muscle model, which is based on anatomical principles, offers higher fidelity and is easier to drive.
An ambient agent model for analyzing managers' performance during stress
NASA Astrophysics Data System (ADS)
ChePa, Noraziah; Aziz, Azizi Ab; Gratim, Haned
2016-08-01
Stress at work has been reported everywhere. Work-related performance during stress is a pattern of reactions that occurs when managers are presented with work demands that are not matched to their knowledge, skills, or abilities, and which challenge their ability to cope. Although there are many prior findings explaining the development of managers' performance during stress, less attention has been given to explaining the same concept through computational models. In this way, the descriptive nature of psychological theories about managers' performance during stress can be transformed into a causal-mechanistic account that explains the relationship between a series of observed phenomena. This paper proposes an ambient agent model for analyzing managers' performance during stress. A set of properties and variables is identified from the prior literature to construct the model, which is formalized using differential equations; the set of equations reflecting the relations involved in the proposed model is presented. The proposed model can be encapsulated within an intelligent agent or robot that can be used to support managers during stress.
ERIC Educational Resources Information Center
Hunt, Charles R.
A study developed a model to assist school administrators to estimate costs associated with the delivery of a metals cluster program at Norfolk State College, Virginia. It sought to construct the model so that costs could be explained as a function of enrollment levels. Data were collected through a literature review, computer searches of the…
Categorization-based stranger avoidance does not explain the uncanny valley effect.
MacDorman, Karl F; Chattopadhyay, Debaleena
2017-04-01
The uncanny valley hypothesis predicts that an entity appearing almost human risks eliciting cold, eerie feelings in viewers. Categorization-based stranger avoidance theory identifies the cause of this feeling as categorizing the entity into a novel category. This explanation is doubtful because stranger is not a novel category in adults; infants do not avoid strangers while the category stranger remains novel; infants old enough to fear strangers prefer photographs of strangers to those more closely resembling a familiar person; and the uncanny valley's characteristic eeriness is seldom felt when meeting strangers. We repeated our original experiment with a more realistic 3D computer model and found no support for categorization-based stranger avoidance theory. By contrast, realism inconsistency theory explains cold, eerie feelings elicited by transitions between instances of two different, mutually exclusive categories, given that at least one category is anthropomorphic: Cold, eerie feelings are caused by prediction error from perceiving some features as features of the first category and other features as features of the second category. In principle, realism inconsistency theory can explain not only negative evaluations of transitions between real and computer modeled humans but also between different vertebrate species. Copyright © 2017 Elsevier B.V. All rights reserved.
A First Approach to Filament Dynamics
ERIC Educational Resources Information Center
Silva, P. E. S.; de Abreu, F. Vistulo; Simoes, R.; Dias, R. G.
2010-01-01
Modelling elastic filament dynamics is a topic of high interest due to the wide range of applications. However, it has reached a high level of complexity in the literature, making it unaccessible to a beginner. In this paper we explain the main steps involved in the computational modelling of the dynamics of an elastic filament. We first derive…
Lobo, Daniel; Levin, Michael
2015-01-01
Transformative applications in biomedicine require the discovery of complex regulatory networks that explain the development and regeneration of anatomical structures, and reveal what external signals will trigger desired changes of large-scale pattern. Despite recent advances in bioinformatics, extracting mechanistic pathway models from experimental morphological data is a key open challenge that has resisted automation. The fundamental difficulty of manually predicting emergent behavior of even simple networks has limited the models invented by human scientists to pathway diagrams that show necessary subunit interactions but do not reveal the dynamics that are sufficient for complex, self-regulating pattern to emerge. To finally bridge the gap between high-resolution genetic data and the ability to understand and control patterning, it is critical to develop computational tools to efficiently extract regulatory pathways from the resultant experimental shape phenotypes. For example, planarian regeneration has been studied for over a century, but despite increasing insight into the pathways that control its stem cells, no constructive, mechanistic model has yet been found by human scientists that explains more than one or two key features of its remarkable ability to regenerate its correct anatomical pattern after drastic perturbations. We present a method to infer the molecular products, topology, and spatial and temporal non-linear dynamics of regulatory networks recapitulating in silico the rich dataset of morphological phenotypes resulting from genetic, surgical, and pharmacological experiments. We demonstrated our approach by inferring complete regulatory networks explaining the outcomes of the main functional regeneration experiments in the planarian literature. By analyzing all the datasets together, our system inferred the first comprehensive systems-biology dynamical model explaining patterning in planarian regeneration. This method provides an automated, highly generalizable framework for identifying the underlying control mechanisms responsible for the dynamic regulation of growth and form. PMID:26042810
The Viability of Distance Education Science Laboratories.
ERIC Educational Resources Information Center
Forinash, Kyle; Wisman, Raymond
2001-01-01
Discusses the effectiveness of offering science laboratories via distance education. Explains current delivery technologies, including computer simulations, videos, and laboratory kits sent to students; pros and cons of distance labs; the use of spreadsheets; and possibilities for new science education models. (LRW)
Exact computation of the maximum-entropy potential of spiking neural-network models.
Cofré, R; Cessac, B
2014-05-01
Understanding how stimuli and synaptic connectivity influence the statistics of spike patterns in neural networks is a central question in computational neuroscience. The maximum-entropy approach has been successfully used to characterize the statistical response of simultaneously recorded spiking neurons responding to stimuli. However, in spite of good performance in terms of prediction, the fitting parameters do not explain the underlying mechanistic causes of the observed correlations. On the other hand, mathematical models of spiking neurons (neuromimetic models) provide a probabilistic mapping between the stimulus, network architecture, and spike patterns in terms of conditional probabilities. In this paper we build an exact analytical mapping between neuromimetic and maximum-entropy models.
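The paper's exact analytical mapping is not reproduced in the abstract. As a rough illustration of the maximum-entropy side of the story only, the sketch below fits a standard pairwise (Ising-type) maximum-entropy model to synthetic binary spike words by matching first- and second-order moments; the small population size allows exact enumeration. The data, learning rate, and iteration count are arbitrary choices, not the authors'.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
N = 4                                                  # small population: exact enumeration is feasible
data = (rng.random((5000, N)) < 0.2).astype(float)     # synthetic binary spike words

patterns = np.array(list(itertools.product([0, 1], repeat=N)), dtype=float)
h = np.zeros(N)                                        # fields
J = np.zeros((N, N))                                   # pairwise couplings (upper triangle used)

emp_mean = data.mean(0)
emp_corr = data.T @ data / len(data)

for _ in range(2000):                                  # gradient ascent on the log-likelihood
    E = patterns @ h + np.einsum('pi,ij,pj->p', patterns, J, patterns)
    p = np.exp(E)
    p /= p.sum()
    model_mean = p @ patterns
    model_corr = patterns.T @ (patterns * p[:, None])
    h += 0.1 * (emp_mean - model_mean)
    J += 0.1 * np.triu(emp_corr - model_corr, k=1)

print(np.round(model_mean, 3), np.round(emp_mean, 3))  # fitted vs empirical firing rates
```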
Computing by physical interaction in neurons.
Aur, Dorian; Jog, Mandar; Poznanski, Roman R
2011-12-01
The electrodynamics of action potentials represents the fundamental level where information is integrated and processed in neurons. The Hodgkin-Huxley model cannot explain the non-stereotyped spatial charge density dynamics that occur during action potential propagation. Revealed in experiments as spike directivity, the non-uniform charge density dynamics within neurons carry meaningful information and suggest that fragments of information regarding our memories are endogenously stored in structural patterns at a molecular level and are revealed only during spiking activity. The main conceptual idea is that under the influence of electric fields, efficient computation by interaction occurs between charge densities embedded within molecular structures and the transient developed flow of electrical charges. This process of computation underlying electrical interactions and molecular mechanisms at the subcellular level is dissimilar from spiking neuron models that are completely devoid of physical interactions. Computation by interaction describes a more powerful continuous model of computation than the one that consists of discrete steps as represented in Turing machines.
A computational cognitive model of self-efficacy and daily adherence in mHealth.
Pirolli, Peter
2016-12-01
Mobile health (mHealth) applications provide an excellent opportunity for collecting rich, fine-grained data necessary for understanding and predicting day-to-day health behavior change dynamics. A computational predictive model (ACT-R-DStress) is presented and fit to individual daily adherence in 28-day mHealth exercise programs. The ACT-R-DStress model refines the psychological construct of self-efficacy. To explain and predict the dynamics of self-efficacy and predict individual performance of targeted behaviors, the self-efficacy construct is implemented as a theory-based neurocognitive simulation of the interaction of behavioral goals, memories of past experiences, and behavioral performance.
Computational studies of photoluminescence from disordered nanocrystalline systems
NASA Astrophysics Data System (ADS)
John, George
2000-03-01
The size (d) dependence of emission energies from semiconductor nanocrystallites has been shown to follow an effective exponent (d^-β) determined by the disorder in the system (V. Ranjan, V. A. Singh and G. C. John, Phys. Rev. B 58, 1158 (1998)). Our earlier calculation was based on a simple quantum confinement model assuming a normal distribution of crystallites. This model is now extended to study realistic systems with a lognormal distribution of particle sizes, accounting for carrier hopping and nonradiative transitions. Computer simulations of this model, performed using the Microcal Origin software, can explain several conflicting experimental results reported in the literature.
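As a purely illustrative sketch of the kind of calculation described, the snippet below averages a confinement-shifted emission energy over a lognormal particle-size distribution by Monte Carlo. The functional form E(d) = E_bulk + C/d^beta and every constant are placeholders rather than the paper's values, and carrier hopping and nonradiative transitions are not modeled.

```python
import numpy as np

rng = np.random.default_rng(1)

def mean_emission(mu=np.log(3.0), sigma=0.4, E_bulk=1.1, C=3.0, beta=1.5, n=100_000):
    """Average emission energy (eV) for a lognormal size distribution, using a
    confinement shift E(d) = E_bulk + C / d**beta (all constants illustrative)."""
    d = rng.lognormal(mean=mu, sigma=sigma, size=n)   # particle diameters, nm
    return np.mean(E_bulk + C / d**beta)

for sigma in (0.2, 0.4, 0.6):                         # broader disorder shifts the average emission
    print(sigma, round(mean_emission(sigma=sigma), 3))
```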
An Approach to Experimental Design for the Computer Analysis of Complex Phenomenon
NASA Technical Reports Server (NTRS)
Rutherford, Brian
2000-01-01
The ability to make credible system assessments, predictions and design decisions related to engineered systems and other complex phenomena is key to a successful program for many large-scale investigations in government and industry. Recently, many of these large-scale analyses have turned to computational simulation to provide much of the required information. Addressing specific goals in the computer analysis of these complex phenomena is often accomplished through the use of performance measures that are based on system response models. The response models are constructed using computer-generated responses together with physical test results where possible. They are often based on probabilistically defined inputs and generally require estimation of a set of response modeling parameters. As a consequence, the performance measures are themselves distributed quantities reflecting these variabilities and uncertainties. Uncertainty in the values of the performance measures leads to uncertainties in predicted performance and can cloud the decisions required of the analysis. A specific goal of this research has been to develop methodology that will reduce this uncertainty in an analysis environment where limited resources and system complexity together restrict the number of simulations that can be performed. An approach has been developed that is based on evaluation of the potential information provided for each "intelligently selected" candidate set of computer runs. Each candidate is evaluated by partitioning the performance measure uncertainty into two components - one component that could be explained through the additional computational simulation runs and a second that would remain uncertain. The portion explained is estimated using a probabilistic evaluation of likely results for the additional computational analyses based on what is currently known about the system. The set of runs indicating the largest potential reduction in uncertainty is then selected and the computational simulations are performed. Examples are provided to demonstrate this approach on small scale problems. These examples give encouraging results. Directions for further research are indicated.
NASA Astrophysics Data System (ADS)
Develaki, Maria
2017-11-01
Scientific reasoning is particularly pertinent to science education since it is closely related to the content and methodologies of science and contributes to scientific literacy. Much of the research in science education investigates the appropriate framework and teaching methods and tools needed to promote students' ability to reason and evaluate in a scientific way. This paper aims (a) to contribute to an extended understanding of the nature and pedagogical importance of model-based reasoning and (b) to exemplify how using computer simulations can support students' model-based reasoning. We provide first a background for both scientific reasoning and computer simulations, based on the relevant philosophical views and the related educational discussion. This background suggests that the model-based framework provides an epistemologically valid and pedagogically appropriate basis for teaching scientific reasoning and for helping students develop sounder reasoning and decision-taking abilities and explains how using computer simulations can foster these abilities. We then provide some examples illustrating the use of computer simulations to support model-based reasoning and evaluation activities in the classroom. The examples reflect the procedure and criteria for evaluating models in science and demonstrate the educational advantages of their application in classroom reasoning activities.
Computational Biochemistry-Enzyme Mechanisms Explored.
Culka, Martin; Gisdon, Florian J; Ullmann, G Matthias
2017-01-01
Understanding enzyme mechanisms is a major task to achieve in order to comprehend how living cells work. Recent advances in biomolecular research provide huge amount of data on enzyme kinetics and structure. The analysis of diverse experimental results and their combination into an overall picture is, however, often challenging. Microscopic details of the enzymatic processes are often anticipated based on several hints from macroscopic experimental data. Computational biochemistry aims at creation of a computational model of an enzyme in order to explain microscopic details of the catalytic process and reproduce or predict macroscopic experimental findings. Results of such computations are in part complementary to experimental data and provide an explanation of a biochemical process at the microscopic level. In order to evaluate the mechanism of an enzyme, a structural model is constructed which can be analyzed by several theoretical approaches. Several simulation methods can and should be combined to get a reliable picture of the process of interest. Furthermore, abstract models of biological systems can be constructed combining computational and experimental data. In this review, we discuss structural computational models of enzymatic systems. We first discuss various models to simulate enzyme catalysis. Furthermore, we review various approaches how to characterize the enzyme mechanism both qualitatively and quantitatively using different modeling approaches. © 2017 Elsevier Inc. All rights reserved.
Mesoscopic model of actin-based propulsion.
Zhu, Jie; Mogilner, Alex
2012-01-01
Two theoretical models dominate current understanding of actin-based propulsion: microscopic polymerization ratchet model predicts that growing and writhing actin filaments generate forces and movements, while macroscopic elastic propulsion model suggests that deformation and stress of growing actin gel are responsible for the propulsion. We examine both experimentally and computationally the 2D movement of ellipsoidal beads propelled by actin tails and show that neither of the two models can explain the observed bistability of the orientation of the beads. To explain the data, we develop a 2D hybrid mesoscopic model by reconciling these two models such that individual actin filaments undergoing nucleation, elongation, attachment, detachment and capping are embedded into the boundary of a node-spring viscoelastic network representing the macroscopic actin gel. Stochastic simulations of this 'in silico' actin network show that the combined effects of the macroscopic elastic deformation and microscopic ratchets can explain the observed bistable orientation of the actin-propelled ellipsoidal beads. To test the theory further, we analyze observed distribution of the curvatures of the trajectories and show that the hybrid model's predictions fit the data. Finally, we demonstrate that the model can explain both concave-up and concave-down force-velocity relations for growing actin networks depending on the characteristic time scale and network recoil. To summarize, we propose that both microscopic polymerization ratchets and macroscopic stresses of the deformable actin network are responsible for the force and movement generation.
Computational neurorehabilitation: modeling plasticity and learning to predict recovery.
Reinkensmeyer, David J; Burdet, Etienne; Casadio, Maura; Krakauer, John W; Kwakkel, Gert; Lang, Catherine E; Swinnen, Stephan P; Ward, Nick S; Schweighofer, Nicolas
2016-04-30
Despite progress in using computational approaches to inform medicine and neuroscience in the last 30 years, there have been few attempts to model the mechanisms underlying sensorimotor rehabilitation. We argue that a fundamental understanding of neurologic recovery, and as a result accurate predictions at the individual level, will be facilitated by developing computational models of the salient neural processes, including plasticity and learning systems of the brain, and integrating them into a context specific to rehabilitation. Here, we therefore discuss Computational Neurorehabilitation, a newly emerging field aimed at modeling plasticity and motor learning to understand and improve movement recovery of individuals with neurologic impairment. We first explain how the emergence of robotics and wearable sensors for rehabilitation is providing data that make development and testing of such models increasingly feasible. We then review key aspects of plasticity and motor learning that such models will incorporate. We proceed by discussing how computational neurorehabilitation models relate to the current benchmark in rehabilitation modeling - regression-based, prognostic modeling. We then critically discuss the first computational neurorehabilitation models, which have primarily focused on modeling rehabilitation of the upper extremity after stroke, and show how even simple models have produced novel ideas for future investigation. Finally, we conclude with key directions for future research, anticipating that soon we will see the emergence of mechanistic models of motor recovery that are informed by clinical imaging results and driven by the actual movement content of rehabilitation therapy as well as wearable sensor-based records of daily activity.
Modeling Visual, Vestibular and Oculomotor Interactions in Self-Motion Estimation
NASA Technical Reports Server (NTRS)
Perrone, John
1997-01-01
A computational model of human self-motion perception has been developed in collaboration with Dr. Leland S. Stone at NASA Ames Research Center. The research included in the grant proposal sought to extend the utility of this model so that it could be used for explaining and predicting human performance in a greater variety of aerospace applications. This extension has been achieved along with physiological validation of the basic operation of the model.
ERIC Educational Resources Information Center
Sesn, Burcin Acar
2013-01-01
The purpose of this study was to investigate pre-service science teachers' understanding of surface tension, cohesion and adhesion forces by using computer-mediated predict-observe-explain tasks. 22 third-year pre-service science teachers participated in this study. Three computer-mediated predict-observe-explain tasks were developed and applied…
Cooperation, Technology, and Performance: A Case Study.
ERIC Educational Resources Information Center
Cavanagh, Thomas; Dickenson, Sabrina; Brandt, Suzanne
1999-01-01
Describes the CTP (Cooperation, Technology, and Performance) model and explains how it is used by the Department of Veterans Affairs-Veteran's Benefit Administration (VBA) for training. Discusses task analysis; computer-based training; cooperative-based learning environments; technology-based learning; performance-assessment methods; courseware…
Jorge-Botana, Guillermo; Olmos, Ricardo; Luzón, José M
2018-01-01
The aim of this paper is to describe and explain one useful computational methodology to model the semantic development of word representation: Word maturity. In particular, the methodology is based on the longitudinal word monitoring created by Kirylev and Landauer using latent semantic analysis for the representation of lexical units. The paper is divided into two parts. First, the steps required to model the development of the meaning of words are explained in detail. We describe the technical and theoretical aspects of each step. Second, we provide a simple example of application of this methodology with some simple tools that can be used by applied researchers. This paper can serve as a user-friendly guide for researchers interested in modeling changes in the semantic representations of words. Some current aspects of the technique and future directions are also discussed. WIREs Cogn Sci 2018, 9:e1457. doi: 10.1002/wcs.1457. This article is categorized under: Computer Science > Natural Language Processing; Linguistics > Language Acquisition; Psychology > Development and Aging. © 2017 Wiley Periodicals, Inc.
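The sketch below is one possible minimal rendering of the word-maturity idea, not the authors' pipeline: it builds LSA-style spaces from a small cumulative "child" corpus and a larger "adult" corpus, then scores a word by how well its similarity profile to shared anchor words in the smaller space correlates with the corresponding profile in the adult space. The corpora, anchors, dimensionality, and the second-order comparison (used here to avoid aligning the two spaces) are simplifying assumptions.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

def lsa_space(corpus, dims=2):
    """Word vectors from a truncated SVD of the term-document matrix."""
    vec = TfidfVectorizer()
    X = vec.fit_transform(corpus)
    Z = TruncatedSVD(n_components=dims, random_state=0).fit_transform(X.T)
    return dict(zip(vec.get_feature_names_out(), Z))

def maturity(word, stage_space, adult_space, anchors):
    """Second-order proxy: correlate the word's similarity profile to shared
    anchor words in the stage space with its profile in the adult space."""
    def profile(space):
        w = space[word].reshape(1, -1)
        A = np.vstack([space[a] for a in anchors])
        return cosine_similarity(w, A).ravel()
    return float(np.corrcoef(profile(stage_space), profile(adult_space))[0, 1])

child_corpus = ["the dog runs", "the cat sleeps", "the dog and the cat play"]
adult_corpus = child_corpus + ["the dog guards the house", "the cat hunts at night"]
anchors = ["dog", "cat", "the"]
print(maturity("dog", lsa_space(child_corpus), lsa_space(adult_corpus), anchors))
```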
Yan, Mian; Or, Calvin
2017-08-01
This study tested a structural model examining the effects of perceived usefulness, perceived ease of use, attitude, subjective norm, perceived behavioral control, health consciousness, and application-specific self-efficacy on the acceptance (i.e. behavioral intention and actual usage) of a computer-based chronic disease self-monitoring system among patients with type 2 diabetes mellitus and/or hypertension. The model was tested using partial least squares structural equation modeling, with 119 observations that were obtained by pooling data across three time points over a 12-week period. The results indicate that all of the seven constructs examined had a significant total effect on behavioral intention and explained 74 percent of the variance. Also, application-specific self-efficacy and behavioral intention had a significant total effect on actual usage and explained 17 percent of the variance. This study demonstrates that technology acceptance is determined by patient characteristics, technology attributes, and social influences. Applying the findings may increase the likelihood of acceptance.
Computational Modeling of Morphological Effects in Bangla Visual Word Recognition.
Dasgupta, Tirthankar; Sinha, Manjira; Basu, Anupam
2015-10-01
In this paper we aim to model the organization and processing of Bangla polymorphemic words in the mental lexicon. Our objective is to determine whether the mental lexicon accesses a polymorphemic word as a whole or decomposes the word into its constituent morphemes and then recognizes them accordingly. To address this issue, we adopted two different strategies. First, we conducted a masked priming experiment with native speakers. Analysis of reaction times (RT) and error rates indicates that, in general, morphologically derived words are accessed via a decomposition process. Next, based on the collected RT data, we developed a computational model that can explain the processing phenomena underlying the access and representation of Bangla derivationally suffixed words. To do so, we first explored the individual roles of different linguistic features of a Bangla morphologically complex word and observed that the processing of Bangla morphologically complex words depends upon several factors, such as base and surface word frequency, suffix type/token ratio, suffix family size, and suffix productivity. Accordingly, we proposed different feature models. Finally, we combined these feature models and arrived at a new model that takes advantage of the individual feature models and successfully explains the processing phenomena of most Bangla morphologically derived words. Our proposed model shows an accuracy of around 80%, which outperforms the other related frequency models.
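The abstract names the predictors but not the exact model form. The sketch below shows the generic shape such a feature model could take, a linear regression from hypothetical frequency and suffix predictors to reaction time, fitted on synthetic data; none of the variables, coefficients, or data come from the paper.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
n = 200
# Hypothetical per-word predictors of the kind named in the abstract
X = np.column_stack([
    rng.normal(size=n),   # log base-word frequency
    rng.normal(size=n),   # log surface-word frequency
    rng.normal(size=n),   # suffix type/token ratio
    rng.normal(size=n),   # suffix family size
])
# Synthetic reaction times (ms) with invented weights plus noise
rt = 650 - 25 * X[:, 0] - 15 * X[:, 1] - 10 * X[:, 2] + rng.normal(scale=20, size=n)

model = LinearRegression().fit(X, rt)
print(model.coef_.round(1), round(model.score(X, rt), 2))  # fitted weights and R^2
```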
Language and Cognition Interaction Neural Mechanisms
Perlovsky, Leonid
2011-01-01
How do language and cognition interact in thinking? Is language just used for communication of completed thoughts, or is it fundamental for thinking? Existing approaches have not led to a computational theory. We develop a hypothesis that language and cognition are two separate but closely interacting mechanisms. Language accumulates cultural wisdom; cognition develops mental representations modeling the surrounding world and adapts cultural knowledge to concrete circumstances of life. Language is acquired “ready-made” from the surrounding language and therefore can be acquired early in life. This early acquisition of language in childhood encompasses the entire hierarchy from sounds to words, to phrases, and to the highest concepts existing in culture. Cognition is developed from experience. Yet cognition cannot be acquired from experience alone; language is a necessary intermediary, a “teacher.” A mathematical model is developed; it overcomes previous difficulties and leads to a computational theory. This model is consistent with Arbib's “language prewired brain” built on top of the mirror neuron system. It models recent neuroimaging data about cognition that have remained unnoticed by other theories. A number of previously mysterious properties of language and cognition are explained, including the influence of language grammar on cultural evolution, which may explain specifics of English and Arabic cultures. PMID:21876687
MetaboTools: A comprehensive toolbox for analysis of genome-scale metabolic models
Aurich, Maike K.; Fleming, Ronan M. T.; Thiele, Ines
2016-08-03
Metabolomic data sets provide a direct read-out of cellular phenotypes and are increasingly generated to study biological questions. Previous work, by us and others, revealed the potential of analyzing extracellular metabolomic data in the context of the metabolic model using constraint-based modeling. With the MetaboTools, we make our methods available to the broader scientific community. The MetaboTools consist of a protocol, a toolbox, and tutorials of two use cases. The protocol describes, in a step-wise manner, the workflow of data integration and computational analysis. The MetaboTools comprise the Matlab code required to complete the workflow described in the protocol. Tutorials explain the computational steps for integration of two different data sets and demonstrate a comprehensive set of methods for the computational analysis of metabolic models and stratification thereof into different phenotypes. The presented workflow supports integrative analysis of multiple omics data sets. Importantly, all analysis tools can be applied to metabolic models without performing the entire workflow. Taken together, the MetaboTools constitute a comprehensive guide to the intra-model analysis of extracellular metabolomic data from microbial, plant, or human cells. In conclusion, this computational modeling resource offers a broad set of computational analysis tools for a wide biomedical and non-biomedical research community.
A System for Natural Language Sentence Generation.
ERIC Educational Resources Information Center
Levison, Michael; Lessard, Gregory
1992-01-01
Describes the natural language computer program, "Vinci." Explains that using an attribute grammar formalism, Vinci can simulate components of several current linguistic theories. Considers the design of the system and its applications in linguistic modelling and second language acquisition research. Notes Vinci's uses in linguistics…
A Thermodynamic System Analysis Model of a Diesel Engine.
1985-10-16
The spectral reflectivities are included in the expression for the radiosity of surface 1, as explained above. Nomenclature: A, band absorptance; a, constant determining species distribution; B, radiosity; b, constant determining species distribution.
Computing chemical organizations in biological networks.
Centler, Florian; Kaleta, Christoph; di Fenizio, Pietro Speroni; Dittrich, Peter
2008-07-15
Novel techniques are required to analyze computational models of intracellular processes as they increase steadily in size and complexity. The theory of chemical organizations has recently been introduced as such a technique that links the topology of biochemical reaction network models to their dynamical repertoire. The network is decomposed into algebraically closed and self-maintaining subnetworks called organizations. They form a hierarchy representing all feasible system states including all steady states. We present three algorithms to compute the hierarchy of organizations for network models provided in SBML format. Two of them compute the complete organization hierarchy, while the third one uses heuristics to obtain a subset of all organizations for large models. While the constructive approach computes the hierarchy starting from the smallest organization in a bottom-up fashion, the flux-based approach employs self-maintaining flux distributions to determine organizations. A runtime comparison on 16 different network models of natural systems showed that neither of the two exhaustive algorithms is superior in all cases. Studying a 'genome-scale' network model with 762 species and 1193 reactions, we demonstrate how the organization hierarchy helps to uncover the model structure and allows the model's quality to be evaluated, for example by detecting components and subsystems of the model whose maintenance is not explained by the model. All data and a Java implementation that plugs into the Systems Biology Workbench are available from http://www.minet.uni-jena.de/csb/prj/ot/tools.
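As an illustration of the closure half of the organization concept (the self-maintenance half typically requires a linear-programming check and is omitted here), the sketch below computes the smallest closed species set containing a seed set for a toy reaction network. It is not the constructive or flux-based algorithm of the paper.

```python
def closure(seed, reactions):
    """Smallest closed superset of `seed`: whenever all reactants of a reaction
    are present, its products are added.  Organizations additionally require a
    self-maintenance check, usually posed as a linear program."""
    S = set(seed)
    changed = True
    while changed:
        changed = False
        for reactants, products in reactions:
            if set(reactants) <= S and not set(products) <= S:
                S |= set(products)
                changed = True
    return S

# Toy network: a + b -> c, c -> a + d
reactions = [(("a", "b"), ("c",)), (("c",), ("a", "d"))]
print(sorted(closure({"a", "b"}, reactions)))   # -> ['a', 'b', 'c', 'd']
```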
Flame-Vortex Studies to Quantify Markstein Numbers Needed to Model Flame Extinction Limits
NASA Technical Reports Server (NTRS)
Driscoll, James F.; Feikema, Douglas A.
2003-01-01
This work has quantified a database of Markstein numbers for unsteady flames; future work will quantify a database of flame extinction limits for unsteady conditions. Unsteady extinction limits have not been documented previously; both a stretch rate and a residence time must be measured, since extinction requires that the stretch rate be sufficiently large for a sufficiently long residence time. Ma was measured for an inwardly-propagating flame (IPF) that is negatively-stretched under microgravity conditions. Computations were also performed using RUN-1DL to explain the measurements. The Markstein number of an inwardly-propagating flame, for both the microgravity experiment and the computations, is significantly larger than that of an outwardly-propagating flame (OPF). The computed profiles of the various species within the flame suggest reasons: computed hydrogen concentrations build up ahead of the IPF but not the OPF. Understanding was gained by running the computations for both simplified and full-chemistry conditions. To explain the experimental findings, numerical simulations of both inwardly and outwardly propagating spherical flames (with complex chemistry) were generated using the RUN-1DL code, which includes 16 species and 46 reactions.
The Snowmelt-Runoff Model (SRM) user's manual
NASA Technical Reports Server (NTRS)
Martinec, J.; Rango, A.; Major, E.
1983-01-01
A manual is presented to provide a means by which a user may apply the snowmelt runoff model (SRM) unaided. Model structure, conditions of application, and data requirements, including remote sensing, are described. Guidance is given for determining the various model variables and parameters. Possible sources of error are discussed, and conversion of the snowmelt runoff model (SRM) from the simulation mode to the operational forecasting mode is explained. A computer program for running SRM is presented that is easily adaptable to most systems used by water resources agencies.
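For orientation, the snippet below implements one daily step of the SRM recursion in the form commonly quoted for the model: a degree-day melt term plus rainfall, routed with a recession coefficient. It is written from memory of that standard form, so the manual itself should be treated as authoritative; the numeric inputs are invented.

```python
def srm_step(Q_n, c_S, c_R, a_n, T_n, dT_n, S_n, P_n, A_km2, k_n1):
    """One daily step of the snowmelt-runoff recursion in its commonly cited form.
    Units: temperatures in degC, degree-day factor a_n in cm/(degC*day),
    precipitation P_n in cm, snow-covered fraction S_n, basin area A in km^2,
    discharge Q in m^3/s, recession coefficient k_n1 for the next day."""
    melt_and_rain = c_S * a_n * (T_n + dT_n) * S_n + c_R * P_n   # cm of water over the zone
    to_m3s = A_km2 * 10000.0 / 86400.0                           # cm*km^2/day -> m^3/s
    return melt_and_rain * to_m3s * (1.0 - k_n1) + Q_n * k_n1

print(round(srm_step(Q_n=20, c_S=0.8, c_R=0.7, a_n=0.45, T_n=6, dT_n=1,
                     S_n=0.6, P_n=0.2, A_km2=1200, k_n1=0.9), 1))
```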
Computer Bytes, Viruses and Vaccines.
ERIC Educational Resources Information Center
Palmore, Teddy B.
1989-01-01
Presents a history of computer viruses, explains various types of viruses and how they affect software or computer operating systems, and describes examples of specific viruses. Available vaccines are explained, and precautions for protecting programs and disks are given. (nine references) (LRW)
NASA Astrophysics Data System (ADS)
Panitkin, Sergey; Barreiro Megino, Fernando; Caballero Bejar, Jose; Benjamin, Doug; Di Girolamo, Alessandro; Gable, Ian; Hendrix, Val; Hover, John; Kucharczyk, Katarzyna; Medrano Llamas, Ramon; Love, Peter; Ohman, Henrik; Paterson, Michael; Sobie, Randall; Taylor, Ryan; Walker, Rodney; Zaytsev, Alexander; Atlas Collaboration
2014-06-01
The computing model of the ATLAS experiment was designed around the concept of grid computing and, since the start of data taking, this model has proven very successful. However, new cloud computing technologies bring attractive features to improve the operations and elasticity of scientific distributed computing. ATLAS sees grid and cloud computing as complementary technologies that will coexist at different levels of resource abstraction, and two years ago created an R&D working group to investigate the different integration scenarios. The ATLAS Cloud Computing R&D has been able to demonstrate the feasibility of offloading work from grid to cloud sites and, as of today, is able to integrate transparently various cloud resources into the PanDA workload management system. The ATLAS Cloud Computing R&D is operating various PanDA queues on private and public resources and has provided several hundred thousand CPU days to the experiment. As a result, the ATLAS Cloud Computing R&D group has gained a significant insight into the cloud computing landscape and has identified points that still need to be addressed in order to fully utilize this technology. This contribution will explain the cloud integration models that are being evaluated and will discuss ATLAS' learning during the collaboration with leading commercial and academic cloud providers.
A Diffusion Model for Two-sided Service Systems
NASA Astrophysics Data System (ADS)
Homma, Koichi; Yano, Koujin; Funabashi, Motohisa
A diffusion model is proposed for two-sided service systems. ‘Two-sided’ refers to the existence of an economic network effect between two different and interrelated groups, e.g., card holders and merchants in an electronic money service. The service benefit for a member of one side depends on the number and quality of the members on the other side. A mathematical model by J. H. Rohlfs explains the network (or bandwagon) effect of communications services. In Rohlfs' model, only the users' group exists and the model is one-sided. This paper extends Rohlfs' model to a two-sided model. We propose, first, a micro model that explains individual behavior in regard to service subscription of both sides and a computational method that drives the proposed model. Second, we develop macro models with two diffusion-rate variables by simplifying the micro model. As a case study, we apply the models to an electronic money service and discuss the simulation results and actual statistics.
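Rohlfs' equations and the paper's own formulation are not reproduced here. The sketch below only illustrates the qualitative point of a two-sided network effect: each side's adoption drifts toward the share of users for whom the service is worthwhile given the other side's current adoption, producing a critical-mass threshold. All functional forms and parameters are invented for illustration.

```python
import numpy as np

def simulate(seed=0.3, steps=400, dt=0.1, speed=0.3,
             benefit_x=lambda y: 2.0 * y,   # e.g. card holders' benefit grows with merchant adoption
             benefit_y=lambda x: 2.0 * x,   # merchants' benefit grows with card-holder adoption
             cost_x=0.4, cost_y=0.4):
    x = y = seed                            # initial adoption fraction on each side
    for _ in range(steps):
        x_target = np.clip(benefit_x(y) - cost_x, 0.0, 1.0)  # demand given the other side
        y_target = np.clip(benefit_y(x) - cost_y, 0.0, 1.0)
        x += speed * (x_target - x) * dt
        y += speed * (y_target - y) * dt
    return round(float(x), 2), round(float(y), 2)

print(simulate(seed=0.3))   # below critical mass: adoption decays toward zero
print(simulate(seed=0.5))   # above critical mass: both sides grow toward saturation
```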
Thrombosis in Cerebral Aneurysms and the Computational Modeling Thereof: A Review
Ngoepe, Malebogo N.; Frangi, Alejandro F.; Byrne, James V.; Ventikos, Yiannis
2018-01-01
Thrombosis is a condition closely related to cerebral aneurysms and controlled thrombosis is the main purpose of endovascular embolization treatment. The mechanisms governing thrombus initiation and evolution in cerebral aneurysms have not been fully elucidated and this presents challenges for interventional planning. Significant effort has been directed towards developing computational methods aimed at streamlining the interventional planning process for unruptured cerebral aneurysm treatment. Included in these methods are computational models of thrombus development following endovascular device placement. The main challenge with developing computational models for thrombosis in disease cases is that there exists a wide body of literature that addresses various aspects of the clotting process, but it may not be obvious what information is of direct consequence for what modeling purpose (e.g., for understanding the effect of endovascular therapies). The aim of this review is to present the information so it will be of benefit to the community attempting to model cerebral aneurysm thrombosis for interventional planning purposes, in a simplified yet appropriate manner. The paper begins by explaining current understanding of physiological coagulation and highlights the documented distinctions between the physiological process and cerebral aneurysm thrombosis. Clinical observations of thrombosis following endovascular device placement are then presented. This is followed by a section detailing the demands placed on computational models developed for interventional planning. Finally, existing computational models of thrombosis are presented. This last section begins with description and discussion of physiological computational clotting models, as they are of immense value in understanding how to construct a general computational model of clotting. This is then followed by a review of computational models of clotting in cerebral aneurysms, specifically. Even though some progress has been made towards computational predictions of thrombosis following device placement in cerebral aneurysms, many gaps still remain. Answering the key questions will require the combined efforts of the clinical, experimental and computational communities. PMID:29670533
Neural correlates of forward planning in a spatial decision task in humans
Simon, Dylan Alexander; Daw, Nathaniel D.
2011-01-01
Although reinforcement learning (RL) theories have been influential in characterizing the brain’s mechanisms for reward-guided choice, the predominant temporal difference (TD) algorithm cannot explain many flexible or goal-directed actions that have been demonstrated behaviorally. We investigate such actions by contrasting an RL algorithm that is model-based, in that it relies on learning a map or model of the task and planning within it, to traditional model-free TD learning. To distinguish these approaches in humans, we used fMRI in a continuous spatial navigation task, in which frequent changes to the layout of the maze forced subjects continually to relearn their favored routes, thereby exposing the RL mechanisms employed. We sought evidence for the neural substrates of such mechanisms by comparing choice behavior and BOLD signals to decision variables extracted from simulations of either algorithm. Both choices and value-related BOLD signals in striatum, though most often associated with TD learning, were better explained by the model-based theory. Further, predecessor quantities for the model-based value computation were correlated with BOLD signals in the medial temporal lobe and frontal cortex. These results point to a significant extension of both the computational and anatomical substrates for RL in the brain. PMID:21471389
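To make the contrast concrete, the sketch below compares a model-free temporal-difference learner with a model-based planner (value iteration over a learned transition map) on a toy chain task; both should approach similar state values here, but only the model-based learner can replan immediately if the layout changes. The task and all parameters are illustrative and unrelated to the fMRI study.

```python
import numpy as np

# A tiny deterministic chain: states 0..3, action 0 = left, 1 = right; reward 1 at state 3.
N_S, N_A, GOAL = 4, 2, 3
def step(s, a):
    s2 = min(N_S - 1, s + 1) if a == 1 else max(0, s - 1)
    return s2, float(s2 == GOAL)

rng = np.random.default_rng(0)
gamma = 0.9

# Model-free TD (Q-learning): values updated only from sampled transitions.
Q = np.zeros((N_S, N_A))
for _ in range(2000):
    s, a = rng.integers(N_S), rng.integers(N_A)
    s2, r = step(s, a)
    Q[s, a] += 0.1 * (r + gamma * Q[s2].max() - Q[s, a])

# Model-based: learn (here, enumerate) the transition map, then plan by value iteration.
T = {(s, a): step(s, a) for s in range(N_S) for a in range(N_A)}
V = np.zeros(N_S)
for _ in range(100):
    V = np.array([max(r + gamma * V[s2] for (ss, a), (s2, r) in T.items() if ss == s)
                  for s in range(N_S)])

print(np.round(Q.max(axis=1), 2))   # model-free state values
print(np.round(V, 2))               # model-based (planned) state values
```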
Computational Models of the Representation of Bangla Compound Words in the Mental Lexicon.
Dasgupta, Tirthankar; Sinha, Manjira; Basu, Anupam
2016-08-01
In this paper we aim to model the organization and processing of Bangla compound words in the mental lexicon. Our objective is to determine whether the mental lexicon accesses a Bangla compound word as a whole or decomposes the whole word into its constituent morphemes and then recognizes them accordingly. To address this issue, we adopted two different strategies. First, we conducted a cross-modal priming experiment with a number of native speakers. Analysis of reaction times (RT) and error rates indicates that, in general, Bangla compound words are accessed via a partial decomposition process; that is, some words follow a full-listing mode of representation and others follow the decomposition route. Next, based on the collected RT data, we developed a computational model that can explain the processing phenomena underlying the access and representation of Bangla compound words. To achieve this, we first explored the individual roles of head word position, morphological complexity, orthographic transparency, and semantic compositionality between the constituents and the whole compound word. Accordingly, we developed a complexity-based model by combining these features. To a large extent we have successfully explained the possible processing phenomena of most Bangla compound words. Our proposed model shows an accuracy of around 83%.
Intermittent control: a computational theory of human control.
Gawthrop, Peter; Loram, Ian; Lakie, Martin; Gollee, Henrik
2011-02-01
The paradigm of continuous control using internal models has advanced understanding of human motor control. However, this paradigm ignores some aspects of human control, including intermittent feedback, serial ballistic control, triggered responses and refractory periods. It is shown that event-driven intermittent control provides a framework to explain the behaviour of the human operator under a wider range of conditions than continuous control. Continuous control is included as a special case, but sampling, system matched hold, an intermittent predictor and an event trigger allow serial open-loop trajectories using intermittent feedback. The implementation here may be described as "continuous observation, intermittent action". Beyond explaining unimodal regulation distributions in common with continuous control, these features naturally explain refractoriness and bimodal stabilisation distributions observed in double stimulus tracking experiments and quiet standing, respectively. Moreover, given that human control systems contain significant time delays, a biological-cybernetic rationale favours intermittent over continuous control: intermittent predictive control is computationally less demanding than continuous predictive control. A standard continuous-time predictive control model of the human operator is used as the underlying design method for an event-driven intermittent controller. It is shown that when event thresholds are small and sampling is regular, the intermittent controller can masquerade as the underlying continuous-time controller and thus, under these conditions, the continuous-time and intermittent controller cannot be distinguished. This explains why the intermittent control hypothesis is consistent with the continuous control hypothesis for certain experimental conditions.
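The sketch below is a toy rendering of "continuous observation, intermittent action", not the authors' controller: the state is observed at every step, but the control value is recomputed only when the observed state departs from the controller's own open-loop prediction by more than a threshold, with a zero-order hold standing in for the system-matched hold. The plant, gain, noise level, and thresholds are arbitrary assumptions.

```python
import numpy as np

def run(threshold=0.05, dt=0.01, T=20.0, k=1.5, noise=0.2, seed=0):
    """Integrator plant dx/dt = u + disturbance, regulated toward 0.
    The controller observes continuously but recomputes u only when the state
    deviates from its own open-loop prediction by more than `threshold`."""
    rng = np.random.default_rng(seed)
    x, u, x_pred, events = 1.0, 0.0, 1.0, 0
    for _ in range(int(T / dt)):
        if abs(x - x_pred) > threshold:              # event trigger
            u, x_pred, events = -k * x, x, events + 1
        d = noise * rng.standard_normal() * np.sqrt(dt)
        x += u * dt + d                              # plant, with disturbance
        x_pred += u * dt                             # controller's internal prediction
    return round(x, 3), events

print(run(threshold=0.05))    # intermittent: control recomputed only at events
print(run(threshold=1e-6))    # near-zero threshold: updates every step (continuous limit)
```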
Virtual Control Systems Environment (VCSE)
Atkins, Will
2018-02-14
Will Atkins, a Sandia National Laboratories computer engineer, discusses cybersecurity research work for process control systems. Will explains his work on the Virtual Control Systems Environment project to develop a modeling and simulation framework of the U.S. electric grid in order to study and mitigate possible cyberattacks on infrastructure.
NASA Technical Reports Server (NTRS)
Lansing, F. L.
1979-01-01
A computer program that can distinguish between different receiver designs and predict transient performance under variable solar flux, ambient temperature, etc., has a basic structure that fits a general heat transfer problem, with specific features custom-made for solar receivers. The code is written in the MBASIC computer language. The methodology followed in solving the heat transfer problem is explained. A program flow chart, an explanation of the input and output tables, and an example of the simulation of a cavity-type solar receiver are included.
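The referenced program and its MBASIC structure are not reproduced here; the sketch below only illustrates the kind of lumped transient energy balance such a receiver simulation steps through, with absorbed flux, heat delivered to a working fluid, and ambient losses integrated by explicit Euler. All parameter values are invented placeholders.

```python
import math

def simulate_receiver(hours=8.0, dt=10.0, m_c=2.0e5, absorptance=0.9,
                      aperture=1.0, flux_peak=3.0e5, ua_fluid=500.0,
                      t_fluid=700.0, h_loss=30.0, t_amb=300.0):
    """Minimal lumped-capacitance sketch of a receiver's transient response:
    m*c*dT/dt = absorbed solar power - heat to working fluid - ambient losses.
    All parameters are illustrative, not from the code described above."""
    T = t_fluid                                     # start at the fluid temperature, K
    for i in range(int(hours * 3600 / dt)):
        t = i * dt
        flux = flux_peak * max(0.0, math.sin(math.pi * t / (hours * 3600)))  # W/m^2
        q_in = absorptance * flux * aperture        # absorbed concentrated flux, W
        q_fluid = ua_fluid * (T - t_fluid)          # useful heat to working fluid, W
        q_loss = h_loss * (T - t_amb)               # convective/radiative loss, W
        T += (q_in - q_fluid - q_loss) / m_c * dt   # explicit Euler step
    return T

print(round(simulate_receiver(), 1))                # receiver temperature (K) at sunset
```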
Parameter estimation and sensitivity analysis in an agent-based model of Leishmania major infection
Jones, Douglas E.; Dorman, Karin S.
2009-01-01
Computer models of disease take a systems biology approach toward understanding host-pathogen interactions. In particular, data driven computer model calibration is the basis for inference of immunological and pathogen parameters, assessment of model validity, and comparison between alternative models of immune or pathogen behavior. In this paper we describe the calibration and analysis of an agent-based model of Leishmania major infection. A model of macrophage loss following uptake of necrotic tissue is proposed to explain macrophage depletion following peak infection. Using Gaussian processes to approximate the computer code, we perform a sensitivity analysis to identify important parameters and to characterize their influence on the simulated infection. The analysis indicates that increasing growth rate can favor or suppress pathogen loads, depending on the infection stage and the pathogen’s ability to avoid detection. Subsequent calibration of the model against previously published biological observations suggests that L. major has a relatively slow growth rate and can replicate for an extended period of time before damaging the host cell. PMID:19837088
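As a minimal illustration of the surrogate-based sensitivity step (not the paper's Gaussian-process analysis or its Leishmania model), the sketch below fits a Gaussian-process surrogate to a cheap stand-in "simulator" and estimates crude first-order sensitivity indices for two hypothetical parameters.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(3)

def simulator(theta):
    """Stand-in for an expensive agent-based run: 'pathogen load' as a function of
    (growth rate, detection avoidance).  Purely illustrative."""
    growth, evasion = theta[..., 0], theta[..., 1]
    return growth * (0.2 + evasion) + 0.05 * np.sin(5 * growth)

X_train = rng.uniform(0, 1, size=(40, 2))           # a small design of simulator runs
y_train = simulator(X_train)

gp = GaussianProcessRegressor(kernel=RBF(length_scale=[0.3, 0.3]), alpha=1e-6)
gp.fit(X_train, y_train)

# Crude variance-based sensitivity from the cheap surrogate: bin one input,
# average over the other, and compare conditional-mean variance to total variance.
grid = rng.uniform(0, 1, size=(2000, 2))
pred = gp.predict(grid)
for name, col in (("growth rate", 0), ("evasion", 1)):
    bins = np.digitize(grid[:, col], np.linspace(0, 1, 11))
    cond_mean = np.array([pred[bins == b].mean() for b in range(1, 11)])
    print(name, round(cond_mean.var() / pred.var(), 2))   # ~ first-order effect
```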
Forward modelling requires intention recognition and non-impoverished predictions.
de Ruiter, Jan P; Cummins, Chris
2013-08-01
We encourage Pickering & Garrod (P&G) to implement this promising theory in a computational model. The proposed theory crucially relies on having an efficient and reliable mechanism for early intention recognition. Furthermore, the generation of impoverished predictions is incompatible with a number of key phenomena that motivated P&G's theory. Explaining these phenomena requires fully specified perceptual predictions in both comprehension and production.
Nonlinear computations shaping temporal processing of precortical vision.
Butts, Daniel A; Cui, Yuwei; Casti, Alexander R R
2016-09-01
Computations performed by the visual pathway are constructed by neural circuits distributed over multiple stages of processing, and thus it is challenging to determine how different stages contribute on the basis of recordings from single areas. In the current article, we address this problem in the lateral geniculate nucleus (LGN), using experiments combined with nonlinear modeling capable of isolating various circuit contributions. We recorded cat LGN neurons presented with temporally modulated spots of various sizes, which drove temporally precise LGN responses. We utilized simultaneously recorded S-potentials, corresponding to the primary retinal ganglion cell (RGC) input to each LGN cell, to distinguish the computations underlying temporal precision in the retina from those in the LGN. Nonlinear models with excitatory and delayed suppressive terms were sufficient to explain temporal precision in the LGN, and we found that models of the S-potentials were nearly identical, although with a lower threshold. To determine whether additional influences shaped the response at the level of the LGN, we extended this model to use the S-potential input in combination with stimulus-driven terms to predict the LGN response. We found that the S-potential input "explained away" the major excitatory and delayed suppressive terms responsible for temporal patterning of LGN spike trains but revealed additional contributions, largely PULL suppression, to the LGN response. Using this novel combination of recordings and modeling, we were thus able to dissect multiple circuit contributions to LGN temporal responses across retina and LGN, and set the foundation for targeted study of each stage. Copyright © 2016 the American Physiological Society.
A computational model of self-efficacy's various effects on performance: Moving the debate forward.
Vancouver, Jeffrey B; Purl, Justin D
2017-04-01
Self-efficacy, which is one's belief in one's capacity, has been found to both positively and negatively influence effort and performance. The reasons for these different effects have been a major topic of debate among social-cognitive and perceptual control theorists. In particular, the findings of various self-efficacy effects has been motivated by a perceptual control theory view of self-regulation that social-cognitive theorists' question. To provide more clarity to the theoretical arguments, a computational model of the multiple processes presumed to create the positive, negative, and null effects for self-efficacy is presented. Building on an existing computational model of goal choice that produces a positive effect for self-efficacy, the current article adds a symbolic processing structure used during goal striving that explains the negative self-efficacy effect observed in recent studies. Moreover, the multiple processes, operating together, allow the model to recreate the various effects found in a published study of feedback ambiguity's moderating role on the self-efficacy to performance relationship (Schmidt & DeShon, 2010). Discussion focuses on the implications of the model for the self-efficacy debate, alternative computational models, the overlap between control theory and social-cognitive theory explanations, the value of using computational models for resolving theoretical disputes, and future research and directions the model inspires. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Dollé, Laurent; Chavarriaga, Ricardo
2018-01-01
We present a computational model of spatial navigation comprising different learning mechanisms in mammals, i.e., associative, cognitive mapping and parallel systems. This model is able to reproduce a large number of experimental results in different variants of the Morris water maze task, including standard associative phenomena (spatial generalization gradient and blocking), as well as navigation based on cognitive mapping. Furthermore, we show that competitive and cooperative patterns between different navigation strategies in the model make it possible to explain previous, apparently contradictory results supporting either associative or cognitive mechanisms for spatial learning. The key computational mechanism to reconcile experimental results showing different influences of distal and proximal cues on the behavior, different learning times, and different abilities of individuals to alternatively perform spatial and response strategies, relies on the dynamic coordination of navigation strategies, whose performance is evaluated online with a common currency through a modular approach. We provide a set of concrete experimental predictions to further test the computational model. Overall, this computational work sheds new light on inter-individual differences in navigation learning, and provides a formal and mechanistic approach to test various theories of spatial cognition in mammals. PMID:29630600
Mental maps and travel behaviour: meanings and models
NASA Astrophysics Data System (ADS)
Hannes, Els; Kusumastuti, Diana; Espinosa, Maikel León; Janssens, Davy; Vanhoof, Koen; Wets, Geert
2012-04-01
In this paper, the "mental map" concept is positioned with regard to individual travel behaviour to start with. Based on Ogden and Richards' triangle of meaning (The meaning of meaning: a study of the influence of language upon thought and of the science of symbolism. International library of psychology, philosophy and scientific method. Routledge and Kegan Paul, London, 1966), distinct thoughts, referents and symbols originating from different scientific disciplines are identified and explained in order to clear up the notion's fuzziness. Next, the use of this concept in two major areas of research relevant to travel demand modelling is indicated and discussed in detail: spatial cognition and decision-making. The relevance of these constructs to understand and model individual travel behaviour is explained, and current research efforts to implement these concepts in travel demand models are addressed. Furthermore, these mental map notions are specified in two types of computational models, i.e. a Bayesian Inference Network (BIN) and a Fuzzy Cognitive Map (FCM). Both models are explained, and a numerical and a real-life example are provided. Both approaches yield a detailed quantitative representation of the mental map of decision-making problems in travel behaviour.
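As a small illustration of the FCM half of the comparison, the sketch below iterates one common Fuzzy Cognitive Map update rule, a sigmoid-squashed weighted sum, on an invented travel-related concept set; the concepts and weights are not taken from the paper, and the BIN counterpart is not shown.

```python
import numpy as np

def fcm_run(W, x0, steps=30):
    """Iterate a Fuzzy Cognitive Map: each concept's next activation is a
    squashed weighted sum of the others (W[i, j] = influence of concept j on i)."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x = 1.0 / (1.0 + np.exp(-(W @ x)))        # sigmoid squashing
    return x

# Hypothetical concepts: [congestion, perceived travel time, car use, bus use]
W = np.array([[0.0,  0.0,  0.7,  0.0],    # car use increases congestion
              [0.8,  0.0,  0.0, -0.3],    # congestion raises, bus use lowers, perceived time
              [0.0, -0.6,  0.0,  0.0],    # longer perceived time discourages car use
              [0.0,  0.5,  0.0,  0.0]])   # longer perceived time encourages bus use
print(np.round(fcm_run(W, [0.5, 0.5, 0.9, 0.1]), 2))
```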
Simulation and Modeling in High Entropy Alloys
NASA Astrophysics Data System (ADS)
Toda-Caraballo, I.; Wróbel, J. S.; Nguyen-Manh, D.; Pérez, P.; Rivera-Díaz-del-Castillo, P. E. J.
2017-11-01
High entropy alloys (HEAs) are a fascinating field of research, with an increasing number of new alloys discovered. This would hardly be conceivable without the aid of materials modeling and computational alloy design to investigate the immense compositional space. The simplicity of the microstructure achieved contrasts with the enormous complexity of its composition, which, in turn, increases the variety of property behavior observed. Simulation and modeling techniques are of paramount importance in the understanding of such material performance. There are numerous examples of how different models have explained observed experimental results; yet there are theories and approaches developed for conventional alloys, where one element is predominant, that need to be adapted or re-developed. In this paper, we review the current state of the art of the modeling techniques applied to explain HEA properties, identifying potential new areas of research to improve the predictability of these techniques.
Use of Technology in the Household: An Exploratory Study
ERIC Educational Resources Information Center
Jackson, Barcus C.
2010-01-01
Since the 1980s, personal computer ownership has become ubiquitous, and people are increasingly using household technologies for a wide variety of purposes. Extensive research has resulted in useful models to explain workplace technology acceptance and household technology adoption. Studies have also found that the determinants underlying…
A Dynamic, Stochastic, Computational Model of Preference Reversal Phenomena
ERIC Educational Resources Information Center
Johnson, Joseph G.; Busemeyer, Jerome R.
2005-01-01
Preference orderings among a set of options may depend on the elicitation method (e.g., choice or pricing); these preference reversals challenge traditional decision theories. Previous attempts to explain these reversals have relied on allowing utility of the options to change across elicitation methods by changing the decision weights, the…
Metaphor, computing systems, and active learning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carroll, J.M.; Mack, R.L.
1982-01-01
The authors discuss the learning process that is directed towards particular goals and is initiated by the learner, through which metaphors become relevant and effective in learning. This allows an analysis of metaphors that explains why metaphors are incomplete and open-ended, and how this stimulates the construction of mental models. 9 references.
NASA Astrophysics Data System (ADS)
Pécoul, S.; Heuraux, S.; Koch, R.; Leclert, G.; Bécoulet, A.; Colas, L.
1999-09-01
Self-consistent calculations of the 3D electric field patterns between the screen and the plasma have been made with the ICANT code for realistic antennas. Here we explain how the ICRH antennas of the Tore Supra tokamak are modelled.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pecoul, S.; Heuraux, S.; Koch, R.
1999-09-20
Self-consistent calculations of the 3D electric field patterns between the screen and the plasma have been made with the ICANT code for realistic antennas. Here we explain how the ICRH antennas of the Tore Supra tokamak are modelled.
Challenging Density Functional Theory Calculations with Hemes and Porphyrins.
de Visser, Sam P; Stillman, Martin J
2016-04-07
In this paper we review recent advances in computational chemistry, focusing specifically on the chemical description of heme proteins and synthetic porphyrins that act both as mimics of natural processes and in technological applications. These are challenging biochemical systems involved in electron transfer as well as biocatalysis processes. In recent years computational tools have improved considerably and can now reproduce experimental spectroscopic and reactivity studies within a reasonable error margin (several kcal·mol^-1). This paper gives recent examples from our groups, where we investigated heme and synthetic metal-porphyrin systems. The four case studies highlight how computational modelling can correctly reproduce experimental product distributions, predict reactivity trends, and guide the interpretation of electronic structures of complex systems. The case studies focus on the calculations of a variety of spectroscopic features of porphyrins and show how computational modelling gives important insight that explains the experimental spectra and can lead to the design of porphyrins with tuned properties.
Temporal-logic analysis of microglial phenotypic conversion with exposure to amyloid-β.
Anastasio, Thomas J
2015-02-01
Alzheimer Disease (AD) remains a leading killer with no adequate treatment. Ongoing research increasingly implicates the brain's immune system as a critical contributor to AD pathogenesis, but the complexity of the immune contribution poses a barrier to understanding. Here I use temporal logic to analyze a computational specification of the immune component of AD. Temporal logic is an extension of logic to propositions expressed in terms of time. It has traditionally been used to analyze computational specifications of complex engineered systems but applications to complex biological systems are now appearing. The inflammatory component of AD involves the responses of microglia to the peptide amyloid-β (Aβ), which is an inflammatory stimulus and a likely causative AD agent. Temporal-logic analysis of the model provides explanations for the puzzling findings that Aβ induces an anti-inflammatory as well as a pro-inflammatory response, and that Aβ is phagocytized by microglia in young but not in old animals. To potentially explain the first puzzle, the model suggests that interferon-γ acts as an "autocrine bridge" over which an Aβ-induced increase in pro-inflammatory cytokines leads to an increase in anti-inflammatory mediators also. To potentially explain the second puzzle, the model identifies a potential instability in signaling via insulin-like growth factor 1 that could explain the failure of old microglia to phagocytize Aβ. The model predicts that augmentation of insulin-like growth factor 1 signaling, and activation of protein kinase C in particular, could move old microglia from a neurotoxic back toward a more neuroprotective and phagocytic phenotype.
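To make the temporal-logic idea concrete, the sketch below (not taken from the paper; the trace contents and proposition names such as abeta_high are hypothetical) checks a "globally, eventually" property of the form G(p -> F q) over a finite simulated trace, the kind of claim a model checker would verify against the full specification.

```python
# Minimal sketch (not the paper's model): checking a temporal property
# G(abeta_high -> F proinflam_up) over a finite trace of system states.
from typing import Dict, List

def holds_g_implies_f(trace: List[Dict[str, bool]], p: str, q: str) -> bool:
    """True if every state where p holds is accompanied or followed by a state where q holds."""
    for i, state in enumerate(trace):
        if state[p] and not any(s[q] for s in trace[i:]):
            return False
    return True

# Hypothetical trace of microglial states over time.
trace = [
    {"abeta_high": False, "proinflam_up": False},
    {"abeta_high": True,  "proinflam_up": False},
    {"abeta_high": True,  "proinflam_up": True},   # response eventually appears
]
print(holds_g_implies_f(trace, "abeta_high", "proinflam_up"))  # True
```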
Prediction of Surface and pH-Specific Binding of Peptides to Metal and Oxide Nanoparticles
NASA Astrophysics Data System (ADS)
Heinz, Hendrik; Lin, Tzu-Jen; Emami, Fateme Sadat; Ramezani-Dakhel, Hadi; Naik, Rajesh; Knecht, Marc; Perry, Carole C.; Huang, Yu
2015-03-01
The mechanism of specific peptide adsorption onto metallic and oxidic nanostructures has been elucidated at atomic resolution using novel force fields and surface models, in comparison with measurements. As an example, variations in peptide adsorption on Pd and Pt nanoparticles depending on shape, size, and location of peptides on specific bounding facets are explained. Accurate computational predictions of reaction rates in C-C coupling reactions using particle models derived from HE-XRD and PDF data illustrate the utility of computational methods for the rational design of new catalysts. On oxidic nanoparticles such as silica and apatites, it is revealed how changes in pH lead to similarity scores of attracted peptides lower than 20%, supported by appropriate model surfaces and data from adsorption isotherms. The results demonstrate how new computational methods can support the design of nanoparticle carriers for drug release and the understanding of calcification mechanisms in the human body.
Monte Carlo Solution to Find Input Parameters in Systems Design Problems
NASA Astrophysics Data System (ADS)
Arsham, Hossein
2013-06-01
Most engineering system designs, such as product, process, and service design, involve a framework for arriving at a target value for a set of experiments. This paper considers a stochastic approximation algorithm for estimating the controllable input parameter within a desired accuracy, given a target value for the performance function. Two different problems, what-if and goal-seeking problems, are explained and defined in an auxiliary simulation model, which represents a local response surface model in terms of a polynomial. A method of constructing this polynomial by a single run simulation is explained. An algorithm is given to select the design parameter for the local response surface model. Finally, the mean time to failure (MTTF) of a reliability subsystem is computed and compared with its known analytical MTTF value for validation purposes.
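As a rough illustration of the goal-seeking idea, the sketch below (not the paper's algorithm; the simulation function and gain sequence are assumptions) uses a Robbins-Monro-style stochastic approximation to adjust a controllable input until a noisy simulated response reaches a target value.

```python
# Minimal Robbins-Monro-style sketch (not the paper's algorithm): adjust a
# controllable input x so that a noisy simulation output matches a target value.
import random

def simulate(x: float) -> float:
    """Hypothetical stochastic simulation: true response 2*x plus noise."""
    return 2.0 * x + random.gauss(0.0, 0.5)

def find_input(target: float, x0: float = 0.0, iters: int = 2000) -> float:
    x = x0
    for k in range(1, iters + 1):
        step = 1.0 / k                      # diminishing gain sequence
        x -= step * (simulate(x) - target)  # move against the observed error
    return x

random.seed(0)
print(find_input(target=10.0))  # should approach 5.0
```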
Random noise effects in pulse-mode digital multilayer neural networks.
Kim, Y C; Shanblatt, M A
1995-01-01
A pulse-mode digital multilayer neural network (DMNN) based on stochastic computing techniques is implemented with simple logic gates as basic computing elements. The pulse-mode signal representation and the use of simple logic gates for neural operations lead to a massively parallel yet compact and flexible network architecture, well suited for VLSI implementation. Algebraic neural operations are replaced by stochastic processes using pseudorandom pulse sequences. The distributions of the results from the stochastic processes are approximated using the hypergeometric distribution. Synaptic weights and neuron states are represented as probabilities and estimated as average pulse occurrence rates in corresponding pulse sequences. A statistical model of the noise (error) is developed to estimate the relative accuracy associated with stochastic computing in terms of mean and variance. Computational differences are then explained by comparison to deterministic neural computations. DMNN feedforward architectures are modeled in VHDL using character recognition problems as testbeds. Computational accuracy is analyzed, and the results of the statistical model are compared with the actual simulation results. Experiments show that the calculations performed in the DMNN are more accurate than those anticipated when Bernoulli sequences are assumed, as is common in the literature. Furthermore, the statistical model successfully predicts the accuracy of the operations performed in the DMNN.
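The following sketch illustrates the underlying stochastic-computing idea in software (it is not the DMNN hardware design): values in [0, 1] are encoded as pseudorandom Bernoulli pulse sequences, so a single AND gate approximates multiplication, with accuracy governed by the pulse-sequence length.

```python
# Illustrative sketch of stochastic computing (not the DMNN implementation):
# values in [0, 1] are encoded as Bernoulli pulse streams whose average
# occurrence rate is the value, and an AND gate multiplies two streams.
import random

def pulse_stream(p: float, n: int) -> list:
    """Pseudorandom pulse sequence whose average occurrence rate encodes p."""
    return [1 if random.random() < p else 0 for _ in range(n)]

random.seed(1)
n = 10_000
a, b = 0.6, 0.3
stream_a = pulse_stream(a, n)
stream_b = pulse_stream(b, n)
product_stream = [x & y for x, y in zip(stream_a, stream_b)]  # bitwise AND gate
estimate = sum(product_stream) / n
print(estimate, a * b)  # estimate fluctuates around 0.18
```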
Computational optimization and biological evolution.
Goryanin, Igor
2010-10-01
Modelling and optimization principles have become a key concept in many biological areas, especially in biochemistry. Definitions of objective function, fitness and co-evolution, although they differ between biology and mathematics, are similar in a general sense. Although successful in fitting models to experimental data and in making some biochemical predictions, optimization and evolutionary computation should be developed further to make more accurate real-life predictions and to deal not only with one organism in isolation, but also with communities of symbiotic and competing organisms. One of the future goals will be to explain and predict evolution not only for organisms in shake flasks or fermenters, but for real competitive multispecies environments.
Adjudicating between face-coding models with individual-face fMRI responses
Kriegeskorte, Nikolaus
2017-01-01
The perceptual representation of individual faces is often explained with reference to a norm-based face space. In such spaces, individuals are encoded as vectors where identity is primarily conveyed by direction and distinctiveness by eccentricity. Here we measured human fMRI responses and psychophysical similarity judgments of individual face exemplars, which were generated as realistic 3D animations using a computer-graphics model. We developed and evaluated multiple neurobiologically plausible computational models, each of which predicts a representational distance matrix and a regional-mean activation profile for 24 face stimuli. In the fusiform face area, a face-space coding model with sigmoidal ramp tuning provided a better account of the data than one based on exemplar tuning. However, an image-processing model with weighted banks of Gabor filters performed similarly. Accounting for the data required the inclusion of a measurement-level population averaging mechanism that approximates how fMRI voxels locally average distinct neuronal tunings. Our study demonstrates the importance of comparing multiple models and of modeling the measurement process in computational neuroimaging. PMID:28746335
Khaligh-Razavi, Seyed-Mahdi; Henriksson, Linda; Kay, Kendrick; Kriegeskorte, Nikolaus
2017-02-01
Studies of the primate visual system have begun to test a wide range of complex computational object-vision models. Realistic models have many parameters, which in practice cannot be fitted using the limited amounts of brain-activity data typically available. Task performance optimization (e.g. using backpropagation to train neural networks) provides major constraints for fitting parameters and discovering nonlinear representational features appropriate for the task (e.g. object classification). Model representations can be compared to brain representations in terms of the representational dissimilarities they predict for an image set. This method, called representational similarity analysis (RSA), enables us to test the representational feature space as is (fixed RSA) or to fit a linear transformation that mixes the nonlinear model features so as to best explain a cortical area's representational space (mixed RSA). Like voxel/population-receptive-field modelling, mixed RSA uses a training set (different stimuli) to fit one weight per model feature and response channel (voxels here), so as to best predict the response profile across images for each response channel. We analysed response patterns elicited by natural images, which were measured with functional magnetic resonance imaging (fMRI). We found that early visual areas were best accounted for by shallow models, such as a Gabor wavelet pyramid (GWP). The GWP model performed similarly with and without mixing, suggesting that the original features already approximated the representational space, obviating the need for mixing. However, a higher ventral-stream visual representation (lateral occipital region) was best explained by the higher layers of a deep convolutional network and mixing of its feature set was essential for this model to explain the representation. We suspect that mixing was essential because the convolutional network had been trained to discriminate a set of 1000 categories, whose frequencies in the training set did not match their frequencies in natural experience or their behavioural importance. The latter factors might determine the representational prominence of semantic dimensions in higher-level ventral-stream areas. Our results demonstrate the benefits of testing both the specific representational hypothesis expressed by a model's original feature space and the hypothesis space generated by linear transformations of that feature space.
Explaining evolution via constrained persistent perfect phylogeny
2014-01-01
Background The perfect phylogeny is an often used model in phylogenetics since it provides an efficient basic procedure for representing the evolution of genomic binary characters in several frameworks, such as for example in haplotype inference. The model, which is conceptually the simplest, is based on the infinite sites assumption, that is no character can mutate more than once in the whole tree. A main open problem regarding the model is finding generalizations that retain the computational tractability of the original model but are more flexible in modeling biological data when the infinite sites assumption is violated because of, e.g., back mutations. A special case of back mutations that has been considered in the study of the evolution of protein domains (where a domain is acquired and then lost) is persistency, that is the fact that a character is allowed to return back to the ancestral state. In this model characters can be gained and lost at most once. In this paper we consider the computational problem of explaining binary data by the Persistent Perfect Phylogeny model (referred to as PPP) and for this purpose we investigate the problem of reconstructing an evolution where some constraints are imposed on the paths of the tree. Results We define a natural generalization of the PPP problem obtained by requiring that for some pairs (character, species), neither the species nor any of its ancestors can have the character. In other words, some characters cannot be persistent for some species. This new problem is called Constrained PPP (CPPP). Based on a graph formulation of the CPPP problem, we are able to provide a polynomial time solution for the CPPP problem for matrices whose conflict graph has no edges. Using this result, we develop a parameterized algorithm for solving the CPPP problem where the parameter is the number of characters. Conclusions A preliminary experimental analysis shows that the constrained persistent perfect phylogeny model makes it possible to efficiently explain data that do not conform with the classical perfect phylogeny model. PMID:25572381
NASA Technical Reports Server (NTRS)
Leonard, J. I.
1974-01-01
Included in the report are: (1) review of the erythropoietic mechanisms; (2) an evaluation of existing models for the control of erythropoiesis; (3) a computer simulation of the model's response to hypoxia; (4) an hypothesis to explain observed decreases in red blood cell mass during weightlessness; (5) suggestions for further research; and (6) an assessment of the role that systems analysis can play in the Skylab hematological program.
Indonesia’s Electricity Demand Dynamic Modelling
NASA Astrophysics Data System (ADS)
Sulistio, J.; Wirabhuana, A.; Wiratama, M. G.
2017-06-01
Electricity system modelling is one of the emerging areas in global energy policy studies. The System Dynamics approach and computer simulation have become common methods used in energy systems planning and evaluation under many conditions. On the other hand, Indonesia is experiencing several major issues in its electricity system, such as fossil fuel domination, demand-supply imbalances, distribution inefficiency, and bio-devastation. This paper aims to explain the development of System Dynamics modelling approaches and computer simulation techniques in representing and predicting electricity demand in Indonesia. In addition, this paper describes the typical characteristics and relationships of the commercial business sector, the industrial sector, and the family/domestic sector as electricity subsystems in Indonesia. Moreover, it presents direct structure, behavioural, and statistical tests as a model validation approach and ends with conclusions.
Computational modeling of mediator oxidation by oxygen in an amperometric glucose biosensor.
Simelevičius, Dainius; Petrauskas, Karolis; Baronas, Romas; Razumienė, Julija
2014-02-07
In this paper, an amperometric glucose biosensor is modeled numerically. The model is based on non-stationary reaction-diffusion type equations. The model consists of four layers. An enzyme layer lies directly on a working electrode surface. The enzyme layer is attached to an electrode by a polyvinyl alcohol (PVA) coated terylene membrane. This membrane is modeled as a PVA layer and a terylene layer, which have different diffusivities. The fourth layer of the model is the diffusion layer, which is modeled using the Nernst approach. The system of partial differential equations is solved numerically using the finite difference technique. The operation of the biosensor was analyzed computationally with special emphasis on the biosensor response sensitivity to oxygen when the experiment was carried out in aerobic conditions. Particularly, numerical experiments show that the overall biosensor response sensitivity to oxygen is insignificant. The simulation results qualitatively explain and confirm the experimentally observed biosensor behavior.
Computational Modeling of Mediator Oxidation by Oxygen in an Amperometric Glucose Biosensor
Šimelevičius, Dainius; Petrauskas, Karolis; Baronas, Romas; Julija, Razumienė
2014-01-01
In this paper, an amperometric glucose biosensor is modeled numerically. The model is based on non-stationary reaction-diffusion type equations. The model consists of four layers. An enzyme layer lies directly on a working electrode surface. The enzyme layer is attached to an electrode by a polyvinyl alcohol (PVA) coated terylene membrane. This membrane is modeled as a PVA layer and a terylene layer, which have different diffusivities. The fourth layer of the model is the diffusion layer, which is modeled using the Nernst approach. The system of partial differential equations is solved numerically using the finite difference technique. The operation of the biosensor was analyzed computationally with special emphasis on the biosensor response sensitivity to oxygen when the experiment was carried out in aerobic conditions. Particularly, numerical experiments show that the overall biosensor response sensitivity to oxygen is insignificant. The simulation results qualitatively explain and confirm the experimentally observed biosensor behavior. PMID:24514882
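For readers unfamiliar with such models, the sketch below shows a minimal one-layer analogue (not the four-layer biosensor model itself; the geometry, diffusivity, and rate constant are made-up values) of an explicit finite-difference solution to a reaction-diffusion equation of the same type.

```python
# Minimal sketch (not the four-layer biosensor model): explicit finite-difference
# solution of a 1D reaction-diffusion equation dS/dt = D*d2S/dx2 - k*S.
import numpy as np

D, k = 1e-9, 0.5          # hypothetical diffusivity (m^2/s) and reaction rate (1/s)
L, nx = 1e-4, 101         # layer thickness (m) and number of grid points
dx = L / (nx - 1)
dt = 0.4 * dx**2 / D      # time step satisfying the explicit stability limit

S = np.zeros(nx)          # substrate concentration profile
S[-1] = 1.0               # bulk concentration held at the outer boundary

for _ in range(5000):
    lap = (S[2:] - 2 * S[1:-1] + S[:-2]) / dx**2
    S[1:-1] += dt * (D * lap - k * S[1:-1])
    S[0] = S[1]           # zero-flux condition at the electrode surface
    S[-1] = 1.0           # Dirichlet condition at the diffusion-layer edge

print(S[:5])              # concentration near the electrode after 5000 steps
```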
Analysis of a Multi-Fidelity Surrogate for Handling Real Gas Equations of State
NASA Astrophysics Data System (ADS)
Ouellet, Frederick; Park, Chanyoung; Rollin, Bertrand; Balachandar, S.
2017-06-01
The explosive dispersal of particles is a complex multiphase and multi-species fluid flow problem. In these flows, the detonation products of the explosive must be treated as real gas while the ideal gas equation of state is used for the surrounding air. As the products expand outward from the detonation point, they mix with ambient air and create a mixing region where both state equations must be satisfied. One of the most accurate, yet computationally expensive, methods to handle this problem is an algorithm that iterates between both equations of state until pressure and thermal equilibrium are achieved inside of each computational cell. This work aims to use a multi-fidelity surrogate model to replace this process. A Kriging model is used to produce a curve fit which interpolates selected data from the iterative algorithm using Bayesian statistics. We study the model performance with respect to the iterative method in simulations using a finite volume code. The model's (i) computational speed, (ii) memory requirements and (iii) computational accuracy are analyzed to show the benefits of this novel approach. Also, optimizing the combination of model accuracy and computational speed through the choice of sampling points is explained. This work was supported by the U.S. Department of Energy, National Nuclear Security Administration, Advanced Simulation and Computing Program as a Cooperative Agreement under the Predictive Science Academic Alliance Program under Contract No. DE-NA0002378.
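A minimal single-fidelity analogue of the idea is sketched below (this is not the paper's multi-fidelity surrogate; the kernel, length scale, and stand-in "expensive" function are assumptions): a small Gaussian-process/Kriging interpolant replaces repeated evaluations of a costly routine.

```python
# Minimal Gaussian-process (Kriging-style) surrogate sketch, not the paper's
# multi-fidelity model: interpolate a few samples of an expensive function.
import numpy as np

def rbf(x1, x2, length=0.3):
    """Squared-exponential covariance between two sets of 1D points."""
    return np.exp(-0.5 * (x1[:, None] - x2[None, :])**2 / length**2)

expensive = lambda x: np.sin(6 * x)      # stand-in for the costly iterative solve
x_train = np.linspace(0.0, 1.0, 8)
y_train = expensive(x_train)

K = rbf(x_train, x_train) + 1e-10 * np.eye(len(x_train))   # small nugget for stability
alpha = np.linalg.solve(K, y_train)

x_test = np.linspace(0.0, 1.0, 5)
y_pred = rbf(x_test, x_train) @ alpha    # surrogate prediction at new points
print(np.max(np.abs(y_pred - expensive(x_test))))  # interpolation error
```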
Computer Modeling and Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pronskikh, V. S.
2014-05-09
Verification and validation of computer codes and models used in simulation are two aspects of the scientific practice of high importance and have recently been discussed by philosophers of science. While verification is predominantly associated with the correctness of the way a model is represented by a computer code or algorithm, validation more often refers to a model's relation to the real world and its intended use. It has been argued that because complex simulations are generally not transparent to a practitioner, the Duhem problem can arise for verification and validation due to their entanglement; such an entanglement makes it impossible to distinguish whether a coding error or the model's general inadequacy to its target should be blamed in the case of model failure. I argue that in order to disentangle verification and validation, a clear distinction between computer modeling (construction of mathematical computer models of elementary processes) and simulation (construction of models of composite objects and processes by means of numerical experimenting with them) needs to be made. Holding on to that distinction, I propose to relate verification (based on theoretical strategies such as inferences) to modeling, and validation, which shares a common epistemology with experimentation, to simulation. To explain the reasons for their intermittent entanglement, I propose a Weberian ideal-typical model of modeling and simulation as roles in practice. I suggest an approach to alleviate the Duhem problem for verification and validation that is generally applicable in practice and based on differences in epistemic strategies and scopes.
NASA Astrophysics Data System (ADS)
Cao, Zhenwei
Over the years, people have found Quantum Mechanics to be extremely useful in explaining various physical phenomena from a microscopic point of view. Anderson localization, named after physicist P. W. Anderson, states that disorder in a crystal can cause non-spreading of wave packets, which is one possible mechanism (at single electron level) to explain metal-insulator transitions. The theory of quantum computation promises to bring greater computational power over classical computers by making use of some special features of Quantum Mechanics. The first part of this dissertation considers a 3D alloy-type model, where the Hamiltonian is the sum of the finite difference Laplacian corresponding to free motion of an electron and a random potential generated by a sign-indefinite single-site potential. The result shows that localization occurs in the weak disorder regime, i.e., when the coupling parameter λ is very small, for energies E ≤ −Cλ^2. The second part of this dissertation considers adiabatic quantum computing (AQC) algorithms for the unstructured search problem to the case when the number of marked items is unknown. In an ideal situation, an explicit quantum algorithm together with a counting subroutine are given that achieve the optimal Grover speedup over classical algorithms, i.e., roughly speaking, reduce O(2^n) to O(2^(n/2)), where n is the size of the problem. However, if one considers more realistic settings, the result shows this quantum speedup is achievable only under a very rigid control precision requirement (e.g., exponentially small control error).
Matched Index of Refraction Flow Facility
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mcllroy, Hugh
What's 27 feet long, 10 feet tall and full of mineral oil (3000 gallons' worth)? If you said INL's Matched Index of Refraction facility, give yourself a gold star. Scientists use computers to model the inner workings of nuclear reactors, and MIR helps validate those models. INL's Hugh McIlroy explains in this video. You can learn more about INL energy research at the lab's facebook site http://www.facebook.com/idahonationallaboratory.
Matched Index of Refraction Flow Facility
Mcllroy, Hugh
2018-01-08
What's 27 feet long, 10 feet tall and full of mineral oil (3000 gallons' worth)? If you said INL's Matched Index of Refraction facility, give yourself a gold star. Scientists use computers to model the inner workings of nuclear reactors, and MIR helps validate those models. INL's Hugh McIlroy explains in this video. You can learn more about INL energy research at the lab's facebook site http://www.facebook.com/idahonationallaboratory.
Multiscale Models of Melting Arctic Sea Ice
2014-09-30
from weakly to highly correlated, or Poissonian toward Wigner-Dyson, as a function of system connectedness. This provides a mechanism for explaining...eluded us. Court Strong found such a method. It creates an optimal fit of a hyperbolic tangent model for the fractal dimension as a function of log A...actual melt pond images, and have made significant advances in the underlying functional and numerical analysis needed for these computations
Quantum Engineering of Dynamical Gauge Fields on Optical Lattices
2016-07-08
opens the door for exciting new research directions, such as quantum simulation of the Schwinger model and of non-Abelian models. (a) Papers...exact blocking formulas from the TRG formulation of the transfer matrix. The second is a worm algorithm. The particle number distributions obtained...a fact that can be explained by an approximate particle-hole symmetry. We have also developed a computer code suite for simulating the Abelian
Natural and accelerated recovery from brain damage: experimental and theoretical approaches.
Andersen, Richard A; Schieber, Marc H; Thakor, Nitish; Loeb, Gerald E
2012-03-01
The goal of the Caltech group is to gain insight into the processes that occur within the primate nervous system during dexterous reaching and grasping and to see whether natural recovery from local brain damage can be accelerated by artificial means. We will create computational models of the nervous system embodying this insight and explain a variety of clinically observed neurological deficits in human subjects using these models.
A 3D visualization and simulation of the individual human jaw.
Muftić, Osman; Keros, Jadranka; Baksa, Sarajko; Carek, Vlado; Matković, Ivo
2003-01-01
A new biomechanical three-dimensional (3D) model of the human mandible, based on a computer-generated virtual model, is proposed. Using maps obtained from special photographs of the face of a real subject, it is possible to attribute personality to the virtual character, while computer animation provides movements and characteristics within the confines of the space and time of the virtual world. A simple two-dimensional model of the jaw cannot explain the biomechanics, where the muscular forces through occlusion and condylar surfaces are in a state of 3D equilibrium. In the model all forces are resolved into components according to a selected coordinate system. The muscular forces act on the jaw, along with the force level necessary for chewing, as a kind of mandible balance, preventing dislocation and loading of nonarticular tissues. The work uses a new approach to computer-generated animation of virtual 3D characters (called "Body SABA") in a single object package that is low-cost and easy to operate.
Charles Bonnet Syndrome: Evidence for a Generative Model in the Cortex?
Reichert, David P.; Seriès, Peggy; Storkey, Amos J.
2013-01-01
Several theories propose that the cortex implements an internal model to explain, predict, and learn about sensory data, but the nature of this model is unclear. One condition that could be highly informative here is Charles Bonnet syndrome (CBS), where loss of vision leads to complex, vivid visual hallucinations of objects, people, and whole scenes. CBS could be taken as indication that there is a generative model in the brain, specifically one that can synthesise rich, consistent visual representations even in the absence of actual visual input. The processes that lead to CBS are poorly understood. Here, we argue that a model recently introduced in machine learning, the deep Boltzmann machine (DBM), could capture the relevant aspects of (hypothetical) generative processing in the cortex. The DBM carries both the semantics of a probabilistic generative model and of a neural network. The latter allows us to model a concrete neural mechanism that could underlie CBS, namely, homeostatic regulation of neuronal activity. We show that homeostatic plasticity could serve to make the learnt internal model robust against e.g. degradation of sensory input, but overcompensate in the case of CBS, leading to hallucinations. We demonstrate how a wide range of features of CBS can be explained in the model and suggest a potential role for the neuromodulator acetylcholine. This work constitutes the first concrete computational model of CBS and the first application of the DBM as a model in computational neuroscience. Our results lend further credence to the hypothesis of a generative model in the brain. PMID:23874177
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wetter, Michael; Fuchs, Marcus; Nouidui, Thierry
This paper discusses design decisions for exporting Modelica thermofluid flow components as Functional Mockup Units. The purpose is to provide guidelines that will allow building energy simulation programs and HVAC equipment manufacturers to effectively use FMUs for modeling of HVAC components and systems. We provide an analysis for direct input-output dependencies of such components and discuss how these dependencies can lead to algebraic loops that are formed when connecting thermofluid flow components. Based on this analysis, we provide recommendations that increase the computing efficiency of such components and systems that are formed by connecting multiple components. We explain what code optimizations are lost when providing thermofluid flow components as FMUs rather than Modelica code. We present an implementation of a package for FMU export of such components, explain the rationale for selecting the connector variables of the FMUs and finally provide computing benchmarks for different design choices. It turns out that selecting temperature rather than specific enthalpy as input and output signals does not lead to a measurable increase in computing time, but selecting nine small FMUs rather than a large FMU increases computing time by 70%.
Modulation of the error-related negativity by response conflict.
Danielmeier, Claudia; Wessel, Jan R; Steinhauser, Marco; Ullsperger, Markus
2009-11-01
An arrow version of the Eriksen flanker task was employed to investigate the influence of conflict on the error-related negativity (ERN). The degree of conflict was modulated by varying the distance between flankers and the target arrow (CLOSE and FAR conditions). Error rates and reaction time data from a behavioral experiment were used to adapt a connectionist model of this task. This model was based on the conflict monitoring theory and simulated behavioral and event-related potential data. The computational model predicted an increased ERN amplitude in FAR incompatible (the low-conflict condition) compared to CLOSE incompatible errors (the high-conflict condition). A subsequent ERP experiment confirmed the model predictions. The computational model explains this finding with larger post-response conflict in far trials. In addition, data and model predictions of the N2 and the LRP support the conflict interpretation of the ERN.
Numerical modeling of the thin shallow solar dynamo
NASA Astrophysics Data System (ADS)
O'Bryan, J. B.; Jarboe, T. R.
2017-10-01
Nonlinear, numerical computation with the NIMROD code is used to explore and validate the thin shallow solar dynamo model [T.R. Jarboe et al. 2017], which explains the observed global temporal evolution (e.g. magnetic field reversal) and local surface structures (e.g. sunspots) of the sun. The key feature of this model is the presence and magnetic self-organization of global magnetic structures (GMS) lying just below the surface of the sun, which resemble 1D radial Taylor states of size comparable to the supergranule convection cells. First, we seek to validate the thin shallow solar dynamo model by reproducing the 11-year timescale for reversal of the solar magnetic field. Then, we seek to model formation of GMS from convection zone turbulence. Our computations simulate a slab covering a radial depth of 3 Mm and include differential rotation and gravity. Density, temperature, and resistivity profiles are taken from the Christensen-Dalsgaard model.
Venkataratamani, Prasanna Venkhatesh; Murthy, Aditya
2018-05-16
Previous studies have investigated the computational architecture underlying the voluntary control of reach movements that demands a change in position or direction of movement planning. Here we used a novel task, where subjects either had to increase or decrease the movement speed according to a change in target color that occurred randomly during a trial. The applicability of different race models to such a speed redirect task was assessed. We found that the predictions of an independent race model that instantiated an abort and re-plan strategy were consistent with all aspects of performance in the fast to slow speed condition. The results from modeling indicated a peculiar asymmetry, in that while the fast to slow speed change required inhibition, none of the standard race models were able to explain how movements changed from slow to fast speeds. Interestingly, a weighted averaging model that simulated the gradual merge of two kinematic plans explained behavior in the slow to fast speed task. In summary, our work shows how a race model framework can provide an understanding of how the brain controls different aspects of reach movement planning and can help distinguish an abort and re-plan strategy from a merging of plans.
Wolff, Phillip; Barbey, Aron K.
2015-01-01
Causal composition allows people to generate new causal relations by combining existing causal knowledge. We introduce a new computational model of such reasoning, the force theory, which holds that people compose causal relations by simulating the processes that join forces in the world, and compare this theory with the mental model theory (Khemlani et al., 2014) and the causal model theory (Sloman et al., 2009), which explain causal composition on the basis of mental models and structural equations, respectively. In one experiment, the force theory was uniquely able to account for people's ability to compose causal relationships from complex animations of real-world events. In three additional experiments, the force theory did as well as or better than the other two theories in explaining the causal compositions people generated from linguistically presented causal relations. Implications for causal learning and the hierarchical structure of causal knowledge are discussed. PMID:25653611
Bayesian Latent Class Analysis Tutorial.
Li, Yuelin; Lord-Bessen, Jennifer; Shiyko, Mariya; Loeb, Rebecca
2018-01-01
This article is a how-to guide on Bayesian computation using Gibbs sampling, demonstrated in the context of Latent Class Analysis (LCA). It is written for students in quantitative psychology or related fields who have a working knowledge of Bayes Theorem and conditional probability and have experience in writing computer programs in the statistical language R . The overall goals are to provide an accessible and self-contained tutorial, along with a practical computation tool. We begin with how Bayesian computation is typically described in academic articles. Technical difficulties are addressed by a hypothetical, worked-out example. We show how Bayesian computation can be broken down into a series of simpler calculations, which can then be assembled together to complete a computationally more complex model. The details are described much more explicitly than what is typically available in elementary introductions to Bayesian modeling so that readers are not overwhelmed by the mathematics. Moreover, the provided computer program shows how Bayesian LCA can be implemented with relative ease. The computer program is then applied in a large, real-world data set and explained line-by-line. We outline the general steps in how to extend these considerations to other methodological applications. We conclude with suggestions for further readings.
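The sketch below conveys the flavour of such a Gibbs sampler for a two-class latent class model with binary items (the article's tutorial is in R and far more complete; this Python version, its priors, and the simulated data are illustrative assumptions only).

```python
# Minimal sketch of Gibbs sampling for a two-class latent class model with
# binary items (a simplified illustration, not the article's R program).
import numpy as np

rng = np.random.default_rng(0)

# Simulate data: 200 respondents, 4 binary items, two latent classes.
true_theta = np.array([[0.9, 0.8, 0.2, 0.1],
                       [0.2, 0.1, 0.9, 0.8]])
z_true = rng.integers(0, 2, size=200)
y = rng.binomial(1, true_theta[z_true])

n, j = y.shape
pi = 0.5                                   # P(class 1)
theta = np.full((2, j), 0.5)               # item-response probabilities

for it in range(2000):
    # 1. Sample class memberships given pi and theta.
    logp1 = np.log(pi) + (y * np.log(theta[1]) + (1 - y) * np.log(1 - theta[1])).sum(1)
    logp0 = np.log(1 - pi) + (y * np.log(theta[0]) + (1 - y) * np.log(1 - theta[0])).sum(1)
    p1 = 1.0 / (1.0 + np.exp(logp0 - logp1))
    z = rng.binomial(1, p1)
    # 2. Sample the class proportion from its Beta full conditional.
    pi = rng.beta(1 + z.sum(), 1 + n - z.sum())
    # 3. Sample item probabilities from their Beta full conditionals.
    for c in (0, 1):
        yc = y[z == c]
        theta[c] = rng.beta(1 + yc.sum(0), 1 + len(yc) - yc.sum(0))

print(np.round(theta, 2))   # one posterior draw of the item-response probabilities
```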
Computational models of epilepsy.
Stefanescu, Roxana A; Shivakeshavan, R G; Talathi, Sachin S
2012-12-01
Approximately 30% of epilepsy patients suffer from medically refractory epilepsy, in which seizures cannot be controlled by the use of anti-epileptic drugs (AEDs). Understanding the mechanisms underlying these forms of drug-resistant epileptic seizures and the development of alternative effective treatment strategies are fundamental challenges for modern epilepsy research. In this context, computational modeling has gained prominence as an important tool for tackling the complexity of the epileptic phenomenon. In this review article, we present a survey of computational models of epilepsy from the point of view that epilepsy is a dynamical brain disease that is primarily characterized by unprovoked spontaneous epileptic seizures. We introduce key concepts from the mathematical theory of dynamical systems, such as multi-stability and bifurcations, and explain how these concepts aid in our understanding of the brain mechanisms involved in the emergence of epileptic seizures. We present a literature survey of the different computational modeling approaches that are used in the study of epilepsy. Special emphasis is placed on highlighting the fine balance between the degree of model simplification and the extent of biological realism that modelers seek in order to address relevant questions. In this context, we discuss three specific examples from published literature, which exemplify different approaches used for developing computational models of epilepsy. We further explore the potential of recently developed optogenetics tools to provide a novel avenue for seizure control. We conclude with a discussion on the utility of computational models for the development of new epilepsy treatment protocols. Copyright © 2012 British Epilepsy Association. Published by Elsevier Ltd. All rights reserved.
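As a toy illustration of the multi-stability and bifurcation concepts mentioned above (this is not any specific epilepsy model from the literature), the sketch below integrates a double-well system in which two stable states coexist, a brief perturbation can switch between them, and increasing a drive parameter makes one of the states disappear.

```python
# Toy illustration of bistability (not a specific epilepsy model): the system
# x' = x - x^3 + I has two stable states for small I; a brief perturbation can
# switch between them, and for large I one attractor disappears (a bifurcation).
import numpy as np

def simulate(I: float, x0: float, kick_at: int = None, dt: float = 0.01, steps: int = 4000):
    x = x0
    for t in range(steps):
        x += dt * (x - x**3 + I)
        if kick_at is not None and t == kick_at:
            x += 2.0                      # transient perturbation ("stimulus")
    return x

print(simulate(I=0.0, x0=-1.0))                # stays near the "normal" state x = -1
print(simulate(I=0.0, x0=-1.0, kick_at=2000))  # perturbation switches it to x = +1
print(simulate(I=1.0, x0=-1.0))                # with large drive only the upper state remains
```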
FOCUS: a fire management planning system -- final report
Frederick W. Bratten; James B. Davis; George T. Flatman; Jerold W. Keith; Stanley R. Rapp; Theodore G. Storey
1981-01-01
FOCUS (Fire Operational Characteristics Using Simulation) is a computer simulation model for evaluating alternative fire management plans. This final report provides a broad overview of the FOCUS system, describes its two major modules (fire suppression and cost), explains the role of gaming large fires in the system, and outlines the support programs and ways of...
Astroblaster--A Fascinating Game of Multi-Ball Collisions
ERIC Educational Resources Information Center
Kires, Marian
2009-01-01
Multi-ball collisions inside the Astroblaster toy are explained from the conservation of momentum point of view. The important role of the coefficient of restitution is demonstrated in ideal and real cases. Real experimental results with the simple toy can be compared with a computer model represented by an interactive Java applet. (Contains 1…
ERIC Educational Resources Information Center
Knobel, Mark; Caramazza, Alfonso
2007-01-01
Caramazza et al. [Caramazza, A., Chialant, D., Capasso, R., & Miceli, G. (2000). Separable processing of consonants and vowels. "Nature," 403(6768), 428-430.] report two patients who exhibit a double dissociation between consonants and vowels in speech production. The patterning of this double dissociation cannot be explained by appealing to…
Growth Dynamics of Information Search Services.
ERIC Educational Resources Information Center
Lindqvist, Mats
Computer based information search services, ISS's, of the type that provide on-line literature searches are analyzed from a system's viewpoint using a continuous simulation model. The analysis shows that the observed growth and stagnation of a typical ISS can be explained as a natural consequence of market responses to the service together with a…
Statistical Model for Predicting Roles and Effects in Learning Community
ERIC Educational Resources Information Center
Chang, Chih-Kai; Chen, Gwo-Dong; Wang, Chin-Yeh
2011-01-01
Functional roles may explain the learning performance of groups. Detecting a functional role is critical for promoting group learning performance in computer-supported collaborative learning environments. However, it is not easy for teachers to identify the functional roles played by students in a web-based learning group, or the relationship…
Bioinformatics, or in silico biology, is a rapidly growing field that encompasses the theory and application of computational approaches to model, predict, and explain biological function at the molecular level. This information rich field requires new ...
Why Computational Models Are Better than Verbal Theories: The Case of Nonword Repetition
ERIC Educational Resources Information Center
Jones, Gary; Gobet, Fernand; Freudenthal, Daniel; Watson, Sarah E.; Pine, Julian M.
2014-01-01
Tests of nonword repetition (NWR) have often been used to examine children's phonological knowledge and word learning abilities. However, theories of NWR primarily explain performance either in terms of phonological working memory or long-term knowledge, with little consideration of how these processes interact. One theoretical account that…
From Blickets to Synapses: Inferring Temporal Causal Networks by Observation
ERIC Educational Resources Information Center
Fernando, Chrisantha
2013-01-01
How do human infants learn the causal dependencies between events? Evidence suggests that this remarkable feat can be achieved by observation of only a handful of examples. Many computational models have been produced to explain how infants perform causal inference without explicit teaching about statistics or the scientific method. Here, we…
A Simulation of AI Programming Techniques in BASIC.
ERIC Educational Resources Information Center
Mandell, Alan
1986-01-01
Explains the functions of and the techniques employed in expert systems. Offers the program "The Periodic Table Expert," as a model for using artificial intelligence techniques in BASIC. Includes the program listing and directions for its use on: Tandy 1000, 1200, and 2000; IBM PC; PC Jr; TRS-80; and Apple computers. (ML)
Consolidation of Long-Term Memory: Evidence and Alternatives
ERIC Educational Resources Information Center
Meeter, Martijn; Murre, Jaap M. J.
2004-01-01
Memory loss in retrograde amnesia has long been held to be larger for recent periods than for remote periods, a pattern usually referred to as the Ribot gradient. One explanation for this gradient is consolidation of long-term memories. Several computational models of such a process have shown how consolidation can explain characteristics of…
Computational Motion Phantoms and Statistical Models of Respiratory Motion
NASA Astrophysics Data System (ADS)
Ehrhardt, Jan; Klinder, Tobias; Lorenz, Cristian
Breathing motion is not a robust and 100 % reproducible process, and inter- and intra-fractional motion variations form an important problem in radiotherapy of the thorax and upper abdomen. A widespread consensus nowadays exists that it would be useful to use prior knowledge about respiratory organ motion and its variability to improve radiotherapy planning and treatment delivery. This chapter discusses two different approaches to model the variability of respiratory motion. In the first part, we review computational motion phantoms, i.e. computerized anatomical and physiological models. Computational phantoms are excellent tools to simulate and investigate the effects of organ motion in radiation therapy and to gain insight into methods for motion management. The second part of this chapter discusses statistical modeling techniques to describe the breathing motion and its variability in a population of 4D images. Population-based models can be generated from repeatedly acquired 4D images of the same patient (intra-patient models) and from 4D images of different patients (inter-patient models). The generation of those models is explained and possible applications of those models for motion prediction in radiotherapy are exemplified. Computational models of respiratory motion and motion variability have numerous applications in radiation therapy, e.g. to understand motion effects in simulation studies, to develop and evaluate treatment strategies or to introduce prior knowledge into the patient-specific treatment planning.
Trusted measurement model based on multitenant behaviors.
Ning, Zhen-Hu; Shen, Chang-Xiang; Zhao, Yong; Liang, Peng
2014-01-01
With the fast growth of pervasive computing, and of cloud computing in particular, behaviour measurement is at the core and plays a vital role. A new behaviour measurement tailored to multitenants in cloud computing is urgently needed to fundamentally establish trust relationships. Based on our previous research, we propose an improved trust relationship scheme which captures the world of cloud computing where multitenants share the same physical computing platform. Here, we first present the related work on multitenant behaviour; secondly, we give the scheme of behaviour measurement where decoupling of multitenants is taken into account; thirdly, we explicitly explain our decoupling algorithm for multitenants; fourthly, we introduce a new way of similarity calculation for deviation control, which fits the coupled multitenants under study well; lastly, we design the experiments to test our scheme.
Trusted Measurement Model Based on Multitenant Behaviors
Ning, Zhen-Hu; Shen, Chang-Xiang; Zhao, Yong; Liang, Peng
2014-01-01
With the fast growth of pervasive computing, and of cloud computing in particular, behaviour measurement is at the core and plays a vital role. A new behaviour measurement tailored to multitenants in cloud computing is urgently needed to fundamentally establish trust relationships. Based on our previous research, we propose an improved trust relationship scheme which captures the world of cloud computing where multitenants share the same physical computing platform. Here, we first present the related work on multitenant behaviour; secondly, we give the scheme of behaviour measurement where decoupling of multitenants is taken into account; thirdly, we explicitly explain our decoupling algorithm for multitenants; fourthly, we introduce a new way of similarity calculation for deviation control, which fits the coupled multitenants under study well; lastly, we design the experiments to test our scheme. PMID:24987731
Erfanian Saeedi, Nafise; Blamey, Peter J; Burkitt, Anthony N; Grayden, David B
2016-04-01
Pitch perception is important for understanding speech prosody, music perception, recognizing tones in tonal languages, and perceiving speech in noisy environments. The two principal pitch perception theories consider the place of maximum neural excitation along the auditory nerve and the temporal pattern of the auditory neurons' action potentials (spikes) as pitch cues. This paper describes a biophysical mechanism by which fine-structure temporal information can be extracted from the spikes generated at the auditory periphery. Deriving meaningful pitch-related information from spike times requires neural structures specialized in capturing synchronous or correlated activity from amongst neural events. The emergence of such pitch-processing neural mechanisms is described through a computational model of auditory processing. Simulation results show that a correlation-based, unsupervised, spike-based form of Hebbian learning can explain the development of neural structures required for recognizing the pitch of simple and complex tones, with or without the fundamental frequency. The temporal code is robust to variations in the spectral shape of the signal and thus can explain the phenomenon of pitch constancy.
Erfanian Saeedi, Nafise; Blamey, Peter J.; Burkitt, Anthony N.; Grayden, David B.
2016-01-01
Pitch perception is important for understanding speech prosody, music perception, recognizing tones in tonal languages, and perceiving speech in noisy environments. The two principal pitch perception theories consider the place of maximum neural excitation along the auditory nerve and the temporal pattern of the auditory neurons’ action potentials (spikes) as pitch cues. This paper describes a biophysical mechanism by which fine-structure temporal information can be extracted from the spikes generated at the auditory periphery. Deriving meaningful pitch-related information from spike times requires neural structures specialized in capturing synchronous or correlated activity from amongst neural events. The emergence of such pitch-processing neural mechanisms is described through a computational model of auditory processing. Simulation results show that a correlation-based, unsupervised, spike-based form of Hebbian learning can explain the development of neural structures required for recognizing the pitch of simple and complex tones, with or without the fundamental frequency. The temporal code is robust to variations in the spectral shape of the signal and thus can explain the phenomenon of pitch constancy. PMID:27049657
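A minimal correlation-based Hebbian update of the kind referred to above is sketched below (it is not the paper's auditory model; the input statistics, threshold, and learning rate are arbitrary choices): inputs that fire in synchrony with the postsynaptic unit are strengthened, so the unit comes to respond selectively to the correlated group.

```python
# Minimal correlation-based Hebbian sketch (not the paper's auditory model):
# a group of inputs that spike together drives the postsynaptic unit, and the
# Hebbian rule strengthens exactly those synchronously active synapses.
import numpy as np

rng = np.random.default_rng(2)
n_inputs, T = 20, 5000
w = np.full(n_inputs, 0.05)      # initial synaptic weights
eta = 0.01                       # learning rate

for t in range(T):
    # First 10 inputs fire synchronously with probability 0.2; the rest fire independently.
    group = rng.random() < 0.2
    pre = np.concatenate([np.full(10, group, dtype=float),
                          (rng.random(10) < 0.2).astype(float)])
    post = float(pre @ w > 0.5)                 # simple threshold neuron
    w += eta * post * (pre - 0.2)               # strengthen active inputs, decay silent ones
    w = np.clip(w, 0.0, 1.0)

print(round(w[:10].mean(), 2), round(w[10:].mean(), 2))  # correlated group ends up stronger
```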
1986-12-31
synthesize synchronization skeletons," Science of Computer Programming 2, 1982, pp. 241-266 [Gel85] Gelernter, David, "Generative communication in...effective computation based on given primitives. An architecture is an abstract object-type, whose instances are computing systems. By a parallel computing...explaining the language primitives on this basis. We explain how such a basis can be "simpler" than a general-purpose manual-programming language such as
Gravitational Acceleration Effects on Macrosegregation: Experiment and Computational Modeling
NASA Technical Reports Server (NTRS)
Leon-Torres, J.; Curreri, P. A.; Stefanescu, D. M.; Sen, S.
1999-01-01
Experiments were performed under terrestrial gravity (1g) and during parabolic flights (10^-2 g) to study the solidification and macrosegregation patterns of Al-Cu alloys. Alloys having 2% and 5% Cu were solidified against a chill at two different cooling rates. Microscopic and Electron Microprobe characterization was used to produce microstructural and macrosegregation maps. In all cases, positive segregation occurred next to the chill because of shrinkage flow, as expected. This positive segregation was higher in the low-g samples, apparently because of the higher heat transfer coefficient. A 2-D computational model was used to explain the experimental results. The continuum formulation was employed to describe the macroscopic transport of mass, energy, and momentum associated with the solidification phenomena for a two-phase system. The model considers that liquid flow is driven by thermal and solutal buoyancy, and by solidification shrinkage. The solidification event was divided into two stages. In the first one, the liquid containing freely moving equiaxed grains was described through the relative viscosity concept. In the second stage, when a fixed dendritic network was formed after dendritic coherency, the mushy zone was treated as a porous medium. The macrosegregation maps and the cooling curves obtained during experiments were used for validation of the solidification and segregation model. The model can explain the solidification and macrosegregation patterns and the differences between low- and high-gravity results.
Karimi, Davood; Ward, Rabab K
2016-10-01
Image models are central to all image processing tasks. The great advancements in digital image processing would not have been made possible without powerful models which, themselves, have evolved over time. In the past decade, "patch-based" models have emerged as one of the most effective models for natural images. Patch-based methods have outperformed other competing methods in many image processing tasks. These developments have come at a time when greater availability of powerful computational resources and growing concerns over the health risks of ionizing radiation encourage research on image processing algorithms for computed tomography (CT). The goal of this paper is to explain the principles of patch-based methods and to review some of their recent applications in CT. We first review the central concepts in patch-based image processing and explain some of the state-of-the-art algorithms, with a focus on aspects that are more relevant to CT. Then, we review some of the recent applications of patch-based methods in CT. Patch-based methods have already transformed the field of image processing, leading to state-of-the-art results in many applications. More recently, several studies have proposed patch-based algorithms for various image processing tasks in CT, from denoising and restoration to iterative reconstruction. Although these studies have reported good results, the true potential of patch-based methods for CT has not yet been appreciated. Patch-based methods can play a central role in image reconstruction and processing for CT. They have the potential to lead to substantial improvements in the current state of the art.
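To illustrate the basic patch-based idea discussed in the review (this is a crude non-local-means-style toy, not one of the CT algorithms surveyed; the phantom, patch size, and filtering strength are arbitrary), the sketch below replaces each pixel by a weighted average of pixels whose surrounding patches look similar.

```python
# Crude patch-based denoising sketch (not one of the reviewed CT algorithms):
# each pixel is replaced by a weighted average of pixels whose 5x5 patches
# resemble its own 5x5 patch.
import numpy as np

rng = np.random.default_rng(3)
img = np.zeros((32, 32)); img[8:24, 8:24] = 1.0            # simple square phantom
noisy = img + 0.3 * rng.standard_normal(img.shape)

pad = np.pad(noisy, 2, mode="reflect")
patches = np.stack([pad[i:i + 5, j:j + 5].ravel()
                    for i in range(32) for j in range(32)])  # one patch per pixel

denoised = np.empty(32 * 32)
h = 0.5                                                      # filtering strength
for k, p in enumerate(patches):
    d2 = ((patches - p) ** 2).mean(axis=1)                   # patch dissimilarities
    w = np.exp(-d2 / h**2)                                   # similarity weights
    denoised[k] = (w * noisy.ravel()).sum() / w.sum()
denoised = denoised.reshape(32, 32)

print(np.abs(noisy - img).mean(), np.abs(denoised - img).mean())  # error is reduced
```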
Probabilistic Learning by Rodent Grid Cells
Cheung, Allen
2016-01-01
Mounting evidence shows mammalian brains are probabilistic computers, but the specific cells involved remain elusive. Parallel research suggests that grid cells of the mammalian hippocampal formation are fundamental to spatial cognition but their diverse response properties still defy explanation. No plausible model exists which explains stable grids in darkness for twenty minutes or longer, despite being one of the first results ever published on grid cells. Similarly, no current explanation can tie together grid fragmentation and grid rescaling, which show very different forms of flexibility in grid responses when the environment is varied. Other properties such as attractor dynamics and grid anisotropy seem to be at odds with one another unless additional properties are assumed such as a varying velocity gain. Modelling efforts have largely ignored the breadth of response patterns, while also failing to account for the disastrous effects of sensory noise during spatial learning and recall, especially in darkness. Here, published electrophysiological evidence from a range of experiments are reinterpreted using a novel probabilistic learning model, which shows that grid cell responses are accurately predicted by a probabilistic learning process. Diverse response properties of probabilistic grid cells are statistically indistinguishable from rat grid cells across key manipulations. A simple coherent set of probabilistic computations explains stable grid fields in darkness, partial grid rescaling in resized arenas, low-dimensional attractor grid cell dynamics, and grid fragmentation in hairpin mazes. The same computations also reconcile oscillatory dynamics at the single cell level with attractor dynamics at the cell ensemble level. Additionally, a clear functional role for boundary cells is proposed for spatial learning. These findings provide a parsimonious and unified explanation of grid cell function, and implicate grid cells as an accessible neuronal population readout of a set of probabilistic spatial computations. PMID:27792723
Gromov-Witten invariants and localization
NASA Astrophysics Data System (ADS)
Morrison, David R.
2017-11-01
We give a pedagogical review of the computation of Gromov-Witten invariants via localization in 2D gauged linear sigma models. We explain the relationship between the two-sphere partition function of the theory and the Kähler potential on the conformal manifold. We show how the Kähler potential can be assembled from classical, perturbative, and non-perturbative contributions, and explain how the non-perturbative contributions are related to the Gromov-Witten invariants of the corresponding Calabi-Yau manifold. We then explain how localization enables efficient calculation of the two-sphere partition function and, ultimately, the Gromov-Witten invariants themselves. This is a contribution to the review issue ‘Localization techniques in quantum field theories’ (ed V Pestun and M Zabzine) which contains 17 chapters, available at [1].
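Schematically, the relation between the two-sphere partition function and the Kähler potential described above can be written as follows (overall factors and sign conventions vary between references, so this is only an illustrative transcription, with c_0 a convention-dependent constant):

```latex
\begin{align}
  Z_{S^2}(t,\bar t) &= e^{-K(t,\bar t)}, \\
  e^{-K} &\sim
    \underbrace{\tfrac{1}{6}\,\kappa_{abc}\,(t-\bar t)^a (t-\bar t)^b (t-\bar t)^c}_{\text{classical}}
    \;+\; \underbrace{c_0\,\zeta(3)\,\chi(X)}_{\text{perturbative}}
    \;+\; \underbrace{\sum_{d>0} N_d \left(\operatorname{Li}_3(q^d) + \operatorname{Li}_3(\bar q^{\,d}) + \dots\right)}_{\text{non-perturbative}} .
\end{align}
```

Here κ_abc are the classical triple intersection numbers, χ(X) is the Euler characteristic, q^d = e^{2πi d·t}, and the N_d are the genus-zero Gromov-Witten invariants read off from the instanton sum.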
AMITIS: A 3D GPU-Based Hybrid-PIC Model for Space and Plasma Physics
NASA Astrophysics Data System (ADS)
Fatemi, Shahab; Poppe, Andrew R.; Delory, Gregory T.; Farrell, William M.
2017-05-01
We have developed, for the first time, an advanced modeling infrastructure in space simulations (AMITIS) with an embedded three-dimensional self-consistent grid-based hybrid model of plasma (kinetic ions and fluid electrons) that runs entirely on graphics processing units (GPUs). The model uses NVIDIA GPUs and their associated parallel computing platform, CUDA, developed for general purpose processing on GPUs. The model uses a single CPU-GPU pair, where the CPU transfers data between the system and GPU memory, executes CUDA kernels, and writes simulation outputs on the disk. All computations, including moving particles, calculating macroscopic properties of particles on a grid, and solving hybrid model equations are processed on a single GPU. We explain various computing kernels within AMITIS and compare their performance with an already existing well-tested hybrid model of plasma that runs in parallel using multi-CPU platforms. We show that AMITIS runs ∼10 times faster than the parallel CPU-based hybrid model. We also introduce an implicit solver for computation of Faraday’s Equation, resulting in an explicit-implicit scheme for the hybrid model equation. We show that the proposed scheme is stable and accurate. We examine the AMITIS energy conservation and show that the energy is conserved with an error < 0.2% after 500,000 timesteps, even when a very low number of particles per cell is used.
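The explicit-implicit treatment of Faraday's equation mentioned above can be illustrated by a generic theta-weighted time discretization (the exact AMITIS scheme may differ in its details):

```latex
\begin{equation}
  \frac{\partial \mathbf{B}}{\partial t} = -\nabla \times \mathbf{E}
  \quad\Longrightarrow\quad
  \mathbf{B}^{\,n+1} = \mathbf{B}^{\,n}
    - \Delta t\, \nabla \times \bigl[\theta\,\mathbf{E}^{\,n+1} + (1-\theta)\,\mathbf{E}^{\,n}\bigr],
  \qquad 0 \le \theta \le 1 .
\end{equation}
```

Setting θ = 0 recovers the fully explicit update, while θ > 0 adds the implicit contribution associated with improved numerical stability.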
Forest management and economics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Buongiorno, J.; Gilless, J.K.
1987-01-01
This volume provides a survey of quantitative methods, guiding the reader through formulation and analysis of models that address forest management problems. The authors use simple mathematics, graphics, and short computer programs to explain each method. Emphasizing applications, they discuss linear, integer, dynamic, and goal programming; simulation; network modeling; and econometrics, as these relate to problems of determining economic harvest schedules in even-aged and uneven-aged forests, the evaluation of forest policies, multiple-objective decision making, and more.
Sonar Performance Estimation Model with Seismo-Acoustic Effects on Underwater Sound Propagation
1989-06-27
properties of the bottom sediments. The ray theory is highly satisfactory to predict and explain some electromagnetic phenomena, and it is very useful in ... erroneous transmission loss computations where acoustic interference occurs. However, his transmission loss calculations are made using ray theory, which is ... developed which treat some of these properties. Each model has its virtues and limitations. For high-frequency sound propagation the ray theory can
The radiation environment of OSO missions from 1974 to 1978
NASA Technical Reports Server (NTRS)
Stassinopoulos, E. G.
1973-01-01
Trapped particle radiation levels on several OSO missions were calculated for nominal trajectories using improved computational methods and new electron environment models. Temporal variations of the electron fluxes were considered and partially accounted for. Magnetic field calculations were performed with a current field model and extrapolated to a later epoch with linear time terms. Orbital flux integration results, which are presented in graphical and tabular form, are analyzed, explained, and discussed.
From Bethe–Salpeter Wave functions to Generalised Parton Distributions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mezrag, C.; Moutarde, H.; Rodríguez-Quintero, J.
2016-06-06
We review recent works on the modelling of Generalised Parton Distributions within the Dyson-Schwinger formalism. We highlight how covariant computations, using the impulse approximation, allow one to fulfil most of the theoretical constraints of the GPDs. Specific attention is paid to chiral properties, especially the so-called soft pion theorem and its link with the axial-vector Ward-Takahashi identity. The limitations of the impulse approximation are also explained. Beyond-impulse-approximation computations are reviewed in the forward case. Finally, we stress the advantages of the overlap of lightcone wave functions, and possible ways to construct covariant GPD models within this framework, in a two-body approximation.
Computer studies of baroclinic flow. [Atmospheric General Circulation Experiment
NASA Technical Reports Server (NTRS)
Gall, R.
1985-01-01
Programs necessary for computing the transition curve on the regime diagram for the Atmospheric General Circulation Experiment (AGCE) were completed and used to determine the regime diagram for the rotating annulus and some axisymmetric flows for one possible AGCE configuration. The effect of geometrical constraints on the size of eddies developing from a basic state is being examined. In AGCE, the geometric constraint should be the width of the shear zone or the baroclinic zone. Linear and nonlinear models are to be used to examine both barotropic and baroclinic flows. The results should help explain the scale selection mechanism of baroclinic eddies in the atmosphere and in experimental models such as AGCE, as well as the multiple-vortex phenomenon in tornadoes.
Monitoring and decision making by people in man machine systems
NASA Technical Reports Server (NTRS)
Johannsen, G.
1979-01-01
The analysis of human monitoring and decision making behavior as well as its modeling are described. Classical and optimal-control-theoretic monitoring models are surveyed. The relationship between attention allocation and eye movements is discussed. As an example of applications, the evaluation of predictor displays by means of the optimal control model is explained. Fault detection involving continuous signals and decision making behavior of a human operator engaged in fault diagnosis during different operation and maintenance situations are illustrated. Computer-aided decision making is considered as a queueing problem. It is shown to what extent computer aids can be based on the state of human activity as measured by psychophysiological quantities. Finally, management information systems for different application areas are mentioned. The possibilities of mathematical modeling of human behavior in complex man-machine systems are also critically assessed.
Oku, Yoshitaka; Hülsmann, Swen
2017-04-07
The topology of the respiratory network in the brainstem has been addressed using different computational models, which help to understand the functional properties of the system. We tested a neural mass model by comparing the result of activation and inhibition of inhibitory neurons in silico with recently published results of optogenetic manipulation of glycinergic neurons [Sherman, et al. (2015) Nat Neurosci 18:408]. The comparison revealed that a five-cell-type model consisting of three classes of inhibitory neurons [I-DEC, E-AUG, E-DEC (PI)] and two excitatory populations (pre-I/I) and (I-AUG) neurons can be applied to explain experimental observations made by stimulating or inhibiting inhibitory neurons with light-sensitive ion channels. Copyright © 2017 IBRO. Published by Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Glaese, John R.; Tobbe, Patrick A.
1986-01-01
The Space Station Mechanism Test Bed consists of a hydraulically driven, computer-controlled six-degree-of-freedom (DOF) motion system with which docking, berthing, and other mechanisms can be evaluated. Measured contact forces and moments are provided to the simulation host computer to enable representation of orbital contact dynamics. This report describes the development of a generalized math model which represents the relative motion between two rigid orbiting vehicles. The model allows motion in six DOF for each body, with no vehicle size limitation. The rotational and translational equations of motion are derived. The method used to transform the forces and moments from the sensor location to the vehicles' centers of mass is also explained. Two math models of docking mechanisms, a simple translational spring and the Remote Manipulator System end effector, are presented along with simulation results. The translational spring model is used in an attempt to verify the simulation with compensated hardware-in-the-loop results.
Supèr, Hans; Romeo, August
2012-01-01
A visual stimulus can be made invisible, i.e. masked, by the presentation of a second stimulus. In the sensory cortex, neural responses to a masked stimulus are suppressed, yet how this suppression comes about is still debated. Inhibitory models explain masking by asserting that the mask exerts an inhibitory influence on the responses of a neuron evoked by the target. However, other models argue that the masking interferes with recurrent or reentrant processing. Using computer modeling, we show that surround inhibition evoked by ON and OFF responses to the mask suppresses the responses to a briefly presented stimulus in forward and backward masking paradigms. Our model results resemble several previously described psychophysical and neurophysiological findings in perceptual masking experiments and are in line with earlier theoretical descriptions of masking. We suggest that precise spatiotemporal influence of surround inhibition is relevant for visual detection. PMID:22393370
Myers, C E; Gluck, M A
1996-08-01
A previous model of hippocampal region function in classical conditioning is generalized to H. Eichenbaum, A. Fagan, P. Mathews, and N.J. Cohen's (1989) and H. Eichenbaum, A. Fagan, and N.J. Cohen's (1989) simultaneous odor discrimination studies in rats. The model assumes that the hippocampal region forms new stimulus representations that compress redundant information while differentiating predictive information; the piriform (olfactory) cortex meanwhile clusters similar and co-occurring odors. Hippocampal damage interrupts the ability to differentiate odor representations, while leaving piriform-mediated odor clustering unchecked. The result is a net tendency to overcompress in the lesioned model. Behavior in the model is very similar to that of the rats, including lesion deficits, facilitation of successively learned tasks, and transfer performance. The computational mechanisms underlying model performance are consistent with the qualitative interpretations suggested by Eichenbaum et al. to explain their empirical data.
A reinterpretation of transparency perception in terms of gamut relativity.
Vladusich, Tony
2013-03-01
Classical approaches to transparency perception assume that transparency constitutes a perceptual dimension corresponding to the physical dimension of transmittance. Here I present an alternative theory, termed gamut relativity, that naturally explains key aspects of transparency perception. Rather than being computed as values along a perceptual dimension corresponding to transmittance, gamut relativity postulates that transparency is built directly into the fabric of the visual system's representation of surface color. The theory, originally developed to explain properties of brightness and lightness perception, proposes how the relativity of the achromatic color gamut in a perceptual blackness-whiteness space underlies the representation of foreground and background surface layers. Whereas brightness and lightness perception were previously reanalyzed in terms of the relativity of the achromatic color gamut with respect to illumination level, transparency perception is here reinterpreted in terms of relativity with respect to physical transmittance. The relativity of the achromatic color gamut thus emerges as a fundamental computational principle underlying surface perception. A duality theorem relates the definition of transparency provided in gamut relativity with the classical definition underlying the physical blending models of computer graphics.
A new 3D maser code applied to flaring events
NASA Astrophysics Data System (ADS)
Gray, M. D.; Mason, L.; Etoka, S.
2018-06-01
We set out the theory and discretization scheme for a new finite-element computer code, written specifically for the simulation of maser sources. The code was used to compute fractional inversions at each node of a 3D domain for a range of optical thicknesses. Saturation behaviour of the nodes with regard to location and optical depth was broadly as expected. We have demonstrated via formal solutions of the radiative transfer equation that the apparent size of the model maser cloud decreases as expected with optical depth as viewed by a distant observer. Simulations of rotation of the cloud allowed the construction of light curves for a number of observable quantities. Rotation of the model cloud may be a reasonable model for quasi-periodic variability, but cannot explain periodic flaring.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Joshi, Y.; Dutta, P.; Schupp, P.E.
1995-12-31
Observations of surface flow patterns of steel and aluminum GTAW pools have been made using a pulsed laser visualization system. The weld pool convection is found to be three-dimensional, with the azimuthal circulation depending on the location of the clamp with respect to the torch. Oscillation of steel pools and undulating motion in aluminum weld pools are also observed even with steady process parameters. Current axisymmetric numerical models are unable to explain such phenomena. A three-dimensional computational study is therefore carried out here to explain the rotational flow in aluminum weld pools.
Imamizu, Hiroshi; Kuroda, Tomoe; Yoshioka, Toshinori; Kawato, Mitsuo
2004-02-04
An internal model is a neural mechanism that can mimic the input-output properties of a controlled object such as a tool. Recent research interests have moved on to how multiple internal models are learned and switched under a given context of behavior. Two representative computational models for task switching propose distinct neural mechanisms, thus predicting different brain activity patterns in the switching of internal models. In one model, called the mixture-of-experts architecture, switching is commanded by a single executive called a "gating network," which is different from the internal models. In the other model, called the MOSAIC (MOdular Selection And Identification for Control), the internal models themselves play crucial roles in switching. Consequently, the mixture-of-experts model predicts that neural activities related to switching and internal models can be temporally and spatially segregated, whereas the MOSAIC model predicts that they are closely intermingled. Here, we directly examined the two predictions by analyzing functional magnetic resonance imaging activities during the switching of one common tool (an ordinary computer mouse) and two novel tools: a rotated mouse, the cursor of which appears in a rotated position, and a velocity mouse, the cursor velocity of which is proportional to the mouse position. The switching and internal model activities temporally and spatially overlapped each other in the cerebellum and in the parietal cortex, whereas the overlap was very small in the frontal cortex. These results suggest that switching mechanisms in the frontal cortex can be explained by the mixture-of-experts architecture, whereas those in the cerebellum and the parietal cortex are explained by the MOSAIC model.
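For readers unfamiliar with the mixture-of-experts architecture mentioned above, a toy sketch follows: two hand-written "internal models" (a rotation expert and a velocity expert) are combined by a separate softmax gating network driven by context. The experts, the gating weights, and the context vector are all invented for illustration and are not taken from the study.

import numpy as np

rng = np.random.default_rng(0)

def expert_rotated(mouse_xy, angle=np.pi / 2):
    """Internal model for the rotated mouse: cursor appears rotated."""
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c, -s], [s, c]]) @ mouse_xy

def expert_velocity(mouse_xy, gain=1.5):
    """Internal model for the velocity mouse: simple gain on position."""
    return gain * mouse_xy

def gating(context):
    """Softmax gate: context features -> mixing weights over the experts."""
    logits = np.array([context @ np.array([1.0, -1.0]),
                       context @ np.array([-1.0, 1.0])])
    w = np.exp(logits - logits.max())
    return w / w.sum()

mouse = rng.normal(size=2)
context = np.array([1.0, 0.2])        # e.g., a cue indicating the "rotated" tool
w = gating(context)
prediction = w[0] * expert_rotated(mouse) + w[1] * expert_velocity(mouse)
print(w.round(2), prediction.round(2))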
Developing Open Source Software To Advance High End Computing. Report to the President.
ERIC Educational Resources Information Center
National Coordination Office for Information Technology Research and Development, Arlington, VA.
This is part of a series of reports to the President and Congress developed by the President's Information Technology Advisory Committee (PITAC) on key contemporary issues in information technology. This report defines open source software, explains PITAC's interest in this model, describes the process used to investigate issues in open source…
A data collection and processing procedure for evaluating a research program
Giuseppe Rensi; H. Dean Claxton
1972-01-01
A set of computer programs compiled for the information-processing requirements of a model for evaluating research proposals is described. The programs serve to assemble and store information, periodically update it, and convert it to a form usable for decision-making. Guides for collecting and coding data are explained. The data-processing options available and...
Donald E. Zimmerman; Carol Akerelrea; Jane Kapler Smith; Garrett J. O' Keefe
2006-01-01
Natural-resource managers have used a variety of computer-mediated presentation methods to communicate management practices to diverse publics. We explored the effects of visualizing and animating predictions from mathematical models in computerized presentations explaining forest succession (forest growth and change through time), fire behavior, and management options...
Beginning School Math Competence: Minority and Majority Comparisons. Report No. 34.
ERIC Educational Resources Information Center
Entwisle, Doris R.; Alexander, Karl L.
This paper uses a structural model with a large random sample of urban children to explain children's competence in math concepts and computation at the time they begin first grade. These two aspects of math ability respond differently to environmental resources, with math concepts much more responsive to family factors before formal schooling…
Challenging Density Functional Theory Calculations with Hemes and Porphyrins
de Visser, Sam P.; Stillman, Martin J.
2016-01-01
In this paper we review recent advances in computational chemistry and specifically focus on the chemical description of heme proteins and synthetic porphyrins that act both as mimics of natural processes and in technological applications. These are challenging biochemical systems involved in electron transfer as well as biocatalysis processes. In recent years computational tools have improved considerably and can now reproduce experimental spectroscopic and reactivity studies within a reasonable error margin (several kcal·mol−1). This paper gives recent examples from our groups, where we investigated heme and synthetic metal-porphyrin systems. The four case studies highlight how computational modelling can correctly reproduce experimental product distributions, predict reactivity trends and guide interpretation of electronic structures of complex systems. The case studies focus on the calculations of a variety of spectroscopic features of porphyrins and show how computational modelling gives important insight that explains the experimental spectra and can lead to the design of porphyrins with tuned properties. PMID:27070578
A stellar audit: the computation of encounter rates for 47 Tucanae and omega Centauri
NASA Astrophysics Data System (ADS)
Davies, Melvyn B.; Benz, Willy
1995-10-01
Using King-Michie models, we compute encounter rates between the various stellar species in the globular clusters omega Cen and 47 Tuc. We also compute event rates for encounters between single stars and a population of primordial binaries. Using these rates, and what we have learnt from hydrodynamical simulations of encounters performed earlier, we compute the production rates of objects such as low-mass X-ray binaries (LMXBs), smothered neutron stars and blue stragglers (massive main-sequence stars). If 10 per cent of the stars are contained in primordial binaries, the production rate of interesting objects from encounters involving these binaries is as large as that from encounters between single stars. For example, encounters involving binaries produce a significant number of blue stragglers in both globular cluster models. The number of smothered neutron stars may exceed the number of LMXBs by a factor of 5-20, which may help to explain why millisecond pulsars are observed to outnumber LMXBs in globular clusters.
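The kind of rate estimate underlying such calculations can be sketched as a rate density n1 n2 Sigma v_rel integrated over a volume, with a gravitationally focused cross-section. The function and numerical inputs below are simplified placeholders for illustration; they are not the King-Michie cluster models of the paper.

import numpy as np

G = 4.301e-3        # gravitational constant in pc (km/s)^2 / Msun

def encounter_rate(n1, n2, m1, m2, r_min, v_rel, volume):
    """Encounters per year between two stellar species in a given volume.

    n1, n2  : number densities [pc^-3]
    m1, m2  : stellar masses [Msun]
    r_min   : closest approach defining an "encounter" [pc]
    v_rel   : relative velocity dispersion [km/s]
    volume  : interaction volume [pc^3]
    """
    sigma_geo = np.pi * r_min**2
    focusing = 1.0 + 2.0 * G * (m1 + m2) / (r_min * v_rel**2)   # gravitational focusing
    sigma = sigma_geo * focusing
    kms_to_pc_per_yr = 1.0227e-6
    return n1 * n2 * sigma * v_rel * kms_to_pc_per_yr * volume

# Illustrative core-like values (not taken from the paper):
print(encounter_rate(n1=1e4, n2=1e2, m1=0.8, m2=1.4,
                     r_min=1e-5, v_rel=10.0, volume=0.1))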
Agent-based modeling: case study in cleavage furrow models
Mogilner, Alex; Manhart, Angelika
2016-01-01
The number of studies in cell biology in which quantitative models accompany experiments has been growing steadily. Roughly, mathematical and computational techniques of these models can be classified as “differential equation based” (DE) or “agent based” (AB). Recently AB models have started to outnumber DE models, but understanding of AB philosophy and methodology is much less widespread than familiarity with DE techniques. Here we use the history of modeling a fundamental biological problem—positioning of the cleavage furrow in dividing cells—to explain how and why DE and AB models are used. We discuss differences, advantages, and shortcomings of these two approaches. PMID:27811328
Computational neurobiology is a useful tool in translational neurology: the example of ataxia
Brown, Sherry-Ann; McCullough, Louise D.; Loew, Leslie M.
2014-01-01
Hereditary ataxia, or motor incoordination, affects approximately 150,000 Americans and hundreds of thousands of individuals worldwide with onset from as early as mid-childhood. Affected individuals exhibit dysarthria, dysmetria, action tremor, and dysdiadochokinesia. In this review, we consider an array of computational studies derived from experimental observations relevant to human neuropathology. A survey of related studies illustrates the impact of integrating clinical evidence with data from mouse models and computational simulations. Results from these studies may help explain findings in mice, and after extensive laboratory study, may ultimately be translated to ataxic individuals. This inquiry lays a foundation for using computation to understand neurobiochemical and electrophysiological pathophysiology of spinocerebellar ataxias and may contribute to development of therapeutics. The interdisciplinary analysis suggests that computational neurobiology can be an important tool for translational neurology. PMID:25653585
Rudd, Michael E.
2014-01-01
Previous work has demonstrated that perceived surface reflectance (lightness) can be modeled in simple contexts in a quantitatively exact way by assuming that the visual system first extracts information about local, directed steps in log luminance, then spatially integrates these steps along paths through the image to compute lightness (Rudd and Zemach, 2004, 2005, 2007). This method of computing lightness is called edge integration. Recent evidence (Rudd, 2013) suggests that human vision employs a default strategy to integrate luminance steps only along paths from a common background region to the targets whose lightness is computed. This implies a role for gestalt grouping in edge-based lightness computation. Rudd (2010) further showed the perceptual weights applied to edges in lightness computation can be influenced by the observer's interpretation of luminance steps as resulting from either spatial variation in surface reflectance or illumination. This implies a role for top-down factors in any edge-based model of lightness (Rudd and Zemach, 2005). Here, I show how the separate influences of grouping and attention on lightness can be modeled in tandem by a cortical mechanism that first employs top-down signals to spatially select regions of interest for lightness computation. An object-based network computation, involving neurons that code for border-ownership, then automatically sets the neural gains applied to edge signals surviving the earlier spatial selection stage. Only the borders that survive both processing stages are spatially integrated to compute lightness. The model assumptions are consistent with those of the cortical lightness model presented earlier by Rudd (2010, 2013), and with neurophysiological data indicating extraction of local edge information in V1, network computations to establish figure-ground relations and border ownership in V2, and edge integration to encode lightness and darkness signals in V4. PMID:25202253
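A minimal sketch of the edge-integration idea follows, assuming a single path of regions from the common background to the target and user-supplied edge weights; the grouping and attentional selection stages described above are not modelled here, and the luminance values are arbitrary.

import numpy as np

def edge_integration_lightness(luminances, weights=None):
    """Sum weighted log-luminance steps along a path of regions.

    luminances : luminance values of successive regions along the path,
                 starting at the common background and ending at the target.
    weights    : per-edge gains (in the full model these depend on grouping
                 and on the reflectance-vs-illumination interpretation);
                 defaults to unit weights.
    """
    steps = np.diff(np.log(np.asarray(luminances, dtype=float)))
    if weights is None:
        weights = np.ones_like(steps)
    return float(np.sum(weights * steps))   # relative lightness of the target

# Background -> intermediate ring -> target patch (arbitrary values):
print(edge_integration_lightness([100.0, 40.0, 60.0]))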
Chuderski, Adam; Andrelczyk, Krzysztof
2015-02-01
Several existing computational models of working memory (WM) have predicted a positive relationship (later confirmed empirically) between WM capacity and the individual ratio of theta to gamma oscillatory band lengths. These models assume that each gamma cycle represents one WM object (e.g., a binding of its features), whereas the theta cycle integrates such objects into the maintained list. As WM capacity strongly predicts reasoning, it might be expected that this ratio also predicts performance in reasoning tasks. However, no computational model has yet explained how the differences in the theta-to-gamma ratio found among adult individuals might contribute to their scores on a reasoning test. Here, we propose a novel model of how WM capacity constrains figural analogical reasoning, aimed at explaining inter-individual differences in reasoning scores in terms of the characteristics of oscillatory patterns in the brain. In the model, the gamma cycle encodes the bindings between objects/features and the roles they play in the relations processed. Asynchrony between consecutive gamma cycles results from lateral inhibition between oscillating bindings. Computer simulations showed that achieving the highest WM capacity required reaching the optimal level of inhibition. When too strong, this inhibition eliminated some bindings from WM, whereas, when inhibition was too weak, the bindings became unstable and fell apart or became improperly grouped. The model aptly replicated several empirical effects and the distribution of individual scores, as well as the patterns of correlations found in the 100-person sample attempting the same reasoning task. Most importantly, the model's reasoning performance strongly depended on its theta-to-gamma ratio in the same way as the performance of human participants depended on their WM capacity. The data suggest that proper regulation of oscillations in the theta and gamma bands may be crucial for both high WM capacity and effective complex cognition. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
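The basic capacity prediction of such oscillatory models, roughly one item per gamma cycle nested within a theta cycle, can be sketched as a ratio of frequencies. The frequencies below are illustrative values, not data from the study.

def wm_capacity(theta_hz, gamma_hz):
    """Rough capacity estimate: gamma cycles that fit into one theta cycle."""
    return int(gamma_hz // theta_hz)

# Illustrative frequency pairs (theta, gamma) in Hz:
for theta, gamma in [(5.0, 40.0), (7.0, 40.0), (5.0, 60.0)]:
    print(theta, gamma, "->", wm_capacity(theta, gamma))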
Gokhale, Tanmay A; Kim, Jong M; Kirkton, Robert D; Bursac, Nenad; Henriquez, Craig S
2017-01-01
To understand how excitable tissues give rise to arrhythmias, it is crucial to understand the electrical dynamics of cells in the context of their environment. Multicellular monolayer cultures have proven useful for investigating arrhythmias and other conduction anomalies, and because of their relatively simple structure, these constructs lend themselves to paired computational studies that often help elucidate mechanisms of the observed behavior. However, tissue cultures of cardiomyocyte monolayers currently require the use of neonatal cells with ionic properties that change rapidly during development and have thus been poorly characterized and modeled to date. Recently, Kirkton and Bursac demonstrated the ability to create biosynthetic excitable tissues from genetically engineered and immortalized HEK293 cells with well-characterized electrical properties and the ability to propagate action potentials. In this study, we developed and validated a computational model of these excitable HEK293 cells (called "Ex293" cells) using existing electrophysiological data and a genetic search algorithm. In order to reproduce not only the mean but also the variability of experimental observations, we examined what sources of variation were required in the computational model. Random cell-to-cell and inter-monolayer variation in both ionic conductances and tissue conductivity was necessary to explain the experimentally observed variability in action potential shape and macroscopic conduction, and the spatial organization of cell-to-cell conductance variation was found not to impact macroscopic behavior; the resulting model accurately reproduces both normal and drug-modified conduction behavior. The development of a computational Ex293 cell and tissue model provides a novel framework to perform paired computational-experimental studies to study normal and abnormal conduction in multidimensional excitable tissue, and the methodology of modeling variation can be applied to models of any excitable cell.
Buesing, Lars; Bill, Johannes; Nessler, Bernhard; Maass, Wolfgang
2011-01-01
The organization of computations in networks of spiking neurons in the brain is still largely unknown, in particular in view of the inherently stochastic features of their firing activity and the experimentally observed trial-to-trial variability of neural systems in the brain. In principle there exists a powerful computational framework for stochastic computations, probabilistic inference by sampling, which can explain a large number of macroscopic experimental data in neuroscience and cognitive science. But it has turned out to be surprisingly difficult to create a link between these abstract models for stochastic computations and more detailed models of the dynamics of networks of spiking neurons. Here we create such a link and show that under some conditions the stochastic firing activity of networks of spiking neurons can be interpreted as probabilistic inference via Markov chain Monte Carlo (MCMC) sampling. Since common methods for MCMC sampling in distributed systems, such as Gibbs sampling, are inconsistent with the dynamics of spiking neurons, we introduce a different approach based on non-reversible Markov chains that is able to reflect inherent temporal processes of spiking neuronal activity through a suitable choice of random variables. We propose a neural network model and show by a rigorous theoretical analysis that its neural activity implements MCMC sampling of a given distribution, both for the case of discrete and continuous time. This provides a step towards closing the gap between abstract functional models of cortical computation and more detailed models of networks of spiking neurons. PMID:22096452
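For intuition only, the sketch below samples a small Boltzmann distribution with Gibbs updates in which each binary unit "fires" with its conditional probability. Note that the paper's point is precisely that plain Gibbs sampling is inconsistent with spiking dynamics, and that a non-reversible chain with refractory-like auxiliary variables is needed; the weights, biases and network size here are arbitrary stand-ins.

import numpy as np

rng = np.random.default_rng(1)

# Boltzmann distribution p(z) proportional to exp(0.5 z^T W z + b^T z), z in {0,1}^n.
n = 5
W = rng.normal(scale=0.5, size=(n, n))
W = (W + W.T) / 2
np.fill_diagonal(W, 0.0)
b = rng.normal(scale=0.5, size=n)
z = rng.integers(0, 2, size=n).astype(float)

def gibbs_sweep(z):
    """One sweep: each unit switches on with its conditional probability."""
    for k in range(n):
        u = W[k] @ z + b[k]                  # net input to unit k
        p_on = 1.0 / (1.0 + np.exp(-u))      # p(z_k = 1 | all other units)
        z[k] = float(rng.random() < p_on)
    return z

samples = np.array([gibbs_sweep(z).copy() for _ in range(2000)])
print(samples.mean(axis=0))                  # empirical marginal "firing" rates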
Acoustic and Perceptual Effects of Left–Right Laryngeal Asymmetries Based on Computational Modeling
Samlan, Robin A.; Story, Brad H.; Lotto, Andrew J.; Bunton, Kate
2015-01-01
Purpose Computational modeling was used to examine the consequences of 5 different laryngeal asymmetries on acoustic and perceptual measures of vocal function. Method A kinematic vocal fold model was used to impose 5 laryngeal asymmetries: adduction, edge bulging, nodal point ratio, amplitude of vibration, and starting phase. Thirty /a/ and /I/ vowels were generated for each asymmetry and analyzed acoustically using cepstral peak prominence (CPP), harmonics-to-noise ratio (HNR), and 3 measures of spectral slope (H1*-H2*, B0-B1, and B0-B2). Twenty listeners rated voice quality for a subset of the productions. Results Increasingly asymmetric adduction, bulging, and nodal point ratio explained significant variance in perceptual rating (R2 = .05, p < .001). The same factors resulted in generally decreasing CPP, HNR, and B0-B2 and in increasing B0-B1. Of the acoustic measures, only CPP explained significant variance in perceived quality (R2 = .14, p < .001). Increasingly asymmetric amplitude of vibration or starting phase minimally altered vocal function or voice quality. Conclusion Asymmetries of adduction, bulging, and nodal point ratio drove acoustic measures and perception in the current study, whereas asymmetric amplitude of vibration and starting phase demonstrated minimal influence on the acoustic signal or voice quality. PMID:24845730
Using computer agents to explain medical documents to patients with low health literacy.
Bickmore, Timothy W; Pfeifer, Laura M; Paasche-Orlow, Michael K
2009-06-01
Patients are commonly presented with complex documents that they have difficulty understanding. The objective of this study was to design and evaluate an animated computer agent to explain research consent forms to potential research participants. Subjects were invited to participate in a simulated consent process for a study involving a genetic repository. Explanation of the research consent form by the computer agent was compared to explanation by a human and a self-study condition in a randomized trial. Responses were compared according to level of health literacy. Participants were most satisfied with the consent process and most likely to sign the consent form when it was explained by the computer agent, regardless of health literacy level. Participants with adequate health literacy demonstrated the highest level of comprehension with the computer agent-based explanation compared to the other two conditions. However, participants with limited health literacy showed poor comprehension levels in all three conditions. Participants with limited health literacy reported several reasons, such as lack of time constraints, ability to re-ask questions, and lack of bias, for preferring the computer agent-based explanation over a human-based one. Animated computer agents can perform as well as or better than humans in the administration of informed consent. Animated computer agents represent a viable method for explaining health documents to patients.
A primer for biomedical scientists on how to execute model II linear regression analysis.
Ludbrook, John
2012-04-01
1. There are two very different ways of executing linear regression analysis. One is Model I, when the x-values are fixed by the experimenter. The other is Model II, in which the x-values are free to vary and are subject to error. 2. I have received numerous complaints from biomedical scientists that they have great difficulty in executing Model II linear regression analysis. This may explain the results of a Google Scholar search, which showed that the authors of articles in journals of physiology, pharmacology and biochemistry rarely use Model II regression analysis. 3. I repeat my previous arguments in favour of using least products linear regression analysis for Model II regressions. I review three methods for executing ordinary least products (OLP) and weighted least products (WLP) regression analysis: (i) scientific calculator and/or computer spreadsheet; (ii) specific purpose computer programs; and (iii) general purpose computer programs. 4. Using a scientific calculator and/or computer spreadsheet, it is easy to obtain correct values for OLP slope and intercept, but the corresponding 95% confidence intervals (CI) are inaccurate. 5. Using specific purpose computer programs, the freeware computer program smatr gives the correct OLP regression coefficients and obtains 95% CI by bootstrapping. In addition, smatr can be used to compare the slopes of OLP lines. 6. When using general purpose computer programs, I recommend the commercial programs systat and Statistica for those who regularly undertake linear regression analysis and I give step-by-step instructions in the Supplementary Information as to how to use loss functions. © 2011 The Author. Clinical and Experimental Pharmacology and Physiology. © 2011 Blackwell Publishing Asia Pty Ltd.
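A minimal sketch of the ordinary least products (geometric-mean, reduced-major-axis) fit described in point 4, assuming simple simulated data in which both variables carry error; the bootstrapped confidence intervals and the smatr, systat and Statistica workflows mentioned above are not reproduced here.

import numpy as np

def olp_regression(x, y):
    """Ordinary least products (reduced major axis) slope and intercept."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    r = np.corrcoef(x, y)[0, 1]
    slope = np.sign(r) * np.std(y, ddof=1) / np.std(x, ddof=1)
    intercept = y.mean() - slope * x.mean()
    return slope, intercept

# Toy Model II data: both x and y are measured with error.
rng = np.random.default_rng(2)
t = rng.normal(size=50)
x = t + rng.normal(scale=0.3, size=50)
y = 2.0 * t + 1.0 + rng.normal(scale=0.3, size=50)
print(olp_regression(x, y))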
Hierarchical competitions subserving multi-attribute choice
Hunt, Laurence T; Dolan, Raymond J; Behrens, Timothy EJ
2015-01-01
Valuation is a key tenet of decision neuroscience, where it is generally assumed that different attributes of competing options are assimilated into unitary values. Such values are central to current neural models of choice. By contrast, psychological studies emphasize complex interactions between choice and valuation. Principles of neuronal selection also suggest competitive inhibition may occur in early valuation stages, before option selection. Here, we show behavior in multi-attribute choice is best explained by a model involving competition at multiple levels of representation. This hierarchical model also explains neural signals in human brain regions previously linked to valuation, including striatum, parietal and prefrontal cortex, where activity represents within-attribute competition, between-attribute competition, and option selection. This multi-layered inhibition framework challenges the assumption that option values are computed before choice. Instead our results indicate a canonical competition mechanism throughout all stages of a processing hierarchy, not simply at a final choice stage. PMID:25306549
Contributions of Dynamic Systems Theory to Cognitive Development
Spencer, John P.; Austin, Andrew; Schutte, Anne R.
2015-01-01
This paper examines the contributions of dynamic systems theory to the field of cognitive development, focusing on modeling using dynamic neural fields. A brief overview highlights the contributions of dynamic systems theory and the central concepts of dynamic field theory (DFT). We then probe empirical predictions and findings generated by DFT around two examples: the DFT of infant perseverative reaching that explains the Piagetian A-not-B error, and the DFT of spatial memory that explains changes in spatial cognition in early development. A systematic review of the literature around these examples reveals that computational modeling is having an impact on empirical research in cognitive development; however, this impact does not extend to neural and clinical research. Moreover, there is a tendency for researchers to interpret models narrowly, anchoring them to specific tasks. We conclude on an optimistic note, encouraging both theoreticians and experimentalists to work toward a more theory-driven future. PMID:26052181
Complex segregation analysis of craniomandibular osteopathy in Deutsch Drahthaar dogs.
Vagt, J; Distl, O
2018-01-01
This study investigated familial relationships among Deutsch Drahthaar dogs with craniomandibular osteopathy and examined the most likely mode of inheritance. Sixteen Deutsch Drahthaar dogs with craniomandibular osteopathy were diagnosed using clinical findings, radiography or computed tomography. All 16 dogs with craniomandibular osteopathy had one common ancestor. Complex segregation analyses rejected models explaining the segregation of craniomandibular osteopathy through random environmental variation, monogenic inheritance or an additive sex effect. Polygenic and mixed major gene models sufficiently explained the segregation of craniomandibular osteopathy in the pedigree analysis and offered the most likely hypotheses. The SLC37A2:c.1332C>T variant was not found in a sample of Deutsch Drahthaar dogs with craniomandibular osteopathy, nor in healthy controls. Craniomandibular osteopathy is an inherited condition in Deutsch Drahthaar dogs and the inheritance seems to be more complex than a simple Mendelian model. Copyright © 2017 Elsevier Ltd. All rights reserved.
Fukunaga, Tsukasa; Iwasaki, Wataru
2017-01-19
With rapid advances in genome sequencing and editing technologies, systematic and quantitative analysis of animal behavior is expected to be another key to facilitating data-driven behavioral genetics. The nematode Caenorhabditis elegans is a model organism in this field. Several video-tracking systems are available for automatically recording behavioral data for the nematode, but computational methods for analyzing these data are still under development. In this study, we applied the Gaussian mixture model-based binning method to time-series postural data for 322 C. elegans strains. We revealed that the occurrence patterns of the postural states and the transition patterns among these states are related, as expected, and that such a relationship must be taken into account to identify strains with atypical behaviors that differ from those of wild type. Based on this observation, we identified several strains that exhibit atypical transition patterns that cannot be fully explained by their occurrence patterns of postural states. Surprisingly, we found that two simple factors, overall acceleration of postural movement and elimination of inactivity periods, explained the behavioral characteristics of strains with very atypical transition patterns; therefore, computational analysis of animal behavior must be accompanied by evaluation of the effects of these simple factors. Finally, we found that the npr-1 and npr-3 mutants have similar behavioral patterns that were not predictable by sequence homology, proving that our data-driven approach can reveal the functions of genes that have not yet been characterized. We propose that elimination of inactivity periods and overall acceleration of postural change speed can explain behavioral phenotypes of strains with very atypical postural transition patterns. Our methods and results constitute guidelines for effectively finding strains that show "truly" interesting behaviors and systematically uncovering novel gene functions by bioimage-informatic approaches.
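A rough sketch of the binning-plus-transition analysis described above, assuming postural feature vectors have already been extracted per video frame (random numbers stand in for them here) and using scikit-learn's GaussianMixture; the component count and all data are placeholders.

import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
frames = rng.normal(size=(5000, 4))        # stand-in postural features per frame

gmm = GaussianMixture(n_components=8, random_state=0).fit(frames)
states = gmm.predict(frames)               # discrete postural state per frame

n_states = gmm.n_components
occurrence = np.bincount(states, minlength=n_states) / len(states)

# Empirical transition-probability matrix between consecutive frames.
transitions = np.zeros((n_states, n_states))
for a, b in zip(states[:-1], states[1:]):
    transitions[a, b] += 1
transitions /= transitions.sum(axis=1, keepdims=True)

print(occurrence.round(3))
print(transitions.round(3))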
Equation-based languages – A new paradigm for building energy modeling, simulation and optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wetter, Michael; Bonvini, Marco; Nouidui, Thierry S.
2016-04-01
Most of the state-of-the-art building simulation programs implement models in imperative programming languages. This complicates modeling and excludes the use of certain efficient methods for simulation and optimization. In contrast, equation-based modeling languages declare relations among variables, thereby allowing the use of computer algebra to enable much simpler schematic modeling and to generate efficient code for simulation and optimization. We contrast the two approaches in this paper. We explain how such manipulations support new use cases. In the first of two examples, we couple models of the electrical grid, multiple buildings, HVAC systems and controllers to test a controller that adjusts building room temperatures and PV inverter reactive power to maintain power quality. In the second example, we contrast the computing time for solving an optimal control problem for a room-level model predictive controller with and without symbolic manipulations. As a result, exploiting the equation-based language led to a 2,200 times faster solution.
Particle-Size-Grouping Model of Precipitation Kinetics in Microalloyed Steels
NASA Astrophysics Data System (ADS)
Xu, Kun; Thomas, Brian G.
2012-03-01
The formation, growth, and size distribution of precipitates greatly affect the microstructure and properties of microalloyed steels. Computational particle-size-grouping (PSG) kinetic models based on population balances are developed to simulate precipitate particle growth resulting from collision and diffusion mechanisms. First, the generalized PSG method for collision is explained clearly and verified. Then, a new PSG method is proposed to model diffusion-controlled precipitate nucleation, growth, and coarsening with complete mass conservation and no fitting parameters. Compared with the original population-balance models, this PSG method saves significant computation and preserves enough accuracy to model a realistic range of particle sizes. Finally, the new PSG method is combined with an equilibrium phase fraction model for plain carbon steels and is applied to simulate the precipitated fraction of aluminum nitride and the size distribution of niobium carbide during isothermal aging processes. Good matches are found with experimental measurements, suggesting that the new PSG method offers a promising framework for the future development of realistic models of precipitation.
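As background to the collision mechanism, a minimal, ungrouped discrete population-balance (Smoluchowski) step is sketched below; this is the expensive size-by-size formulation that the PSG method is designed to accelerate. The constant kernel, step size and truncation at a maximum size are illustrative assumptions, and mass leaving the largest tracked size is simply lost here.

import numpy as np

def smoluchowski_step(N, K, dt):
    """One explicit Euler step of the discrete collision population balance.

    N[k] is the number density of particles containing k+1 monomers,
    K[i, j] is the collision kernel between sizes i+1 and j+1.
    """
    nmax = len(N)
    dN = np.zeros_like(N)
    for k in range(nmax):
        # birth: (i+1) + (j+1) = k+1 monomers, i.e. j = k - 1 - i
        birth = 0.5 * sum(K[i, k - 1 - i] * N[i] * N[k - 1 - i]
                          for i in range(k))
        death = N[k] * sum(K[k, j] * N[j] for j in range(nmax))
        dN[k] = birth - death
    return N + dt * dN

nmax = 50
N = np.zeros(nmax)
N[0] = 1.0                                  # start from monomers only
K = np.ones((nmax, nmax))                   # constant kernel for illustration
for _ in range(200):
    N = smoluchowski_step(N, K, dt=0.01)
print(N[:5].round(4))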
NASA Technical Reports Server (NTRS)
Raju, I. S.
1992-01-01
A computer program that generates three-dimensional (3D) finite element models for cracked 3D solids was written. This computer program, gensurf, uses minimal input data to generate 3D finite element models for isotropic solids with elliptic or part-elliptic cracks. These models can be used with a 3D finite element program called surf3d. This report documents this mesh generator. In this manual the capabilities, limitations, and organization of gensurf are described. The procedures used to develop 3D finite element models and the input for and the output of gensurf are explained. Several examples are included to illustrate the use of this program. Several input data files are included with this manual so that the users can edit these files to conform to their crack configuration and use them with gensurf.
AGIS: Integration of new technologies used in ATLAS Distributed Computing
NASA Astrophysics Data System (ADS)
Anisenkov, Alexey; Di Girolamo, Alessandro; Alandes Pradillo, Maria
2017-10-01
The variety of the ATLAS Distributed Computing infrastructure requires a central information system to define the topology of computing resources and to store different parameters and configuration data which are needed by various ATLAS software components. The ATLAS Grid Information System (AGIS) is the system designed to integrate configuration and status information about resources, services and topology of the computing infrastructure used by ATLAS Distributed Computing applications and services. Being an intermediate middleware system between clients and external information sources (like central BDII, GOCDB, MyOSG), AGIS defines the relations between experiment-specific used resources and physical distributed computing capabilities. Being in production during LHC Run 1, AGIS became the central information system for Distributed Computing in ATLAS and it is continuously evolving to fulfil new user requests, enable enhanced operations and follow the extension of the ATLAS Computing model. The ATLAS Computing model and the data structures used by Distributed Computing applications and services are continuously evolving to fit newer requirements from the ADC community. In this note, we describe the evolution and the recent developments of AGIS functionalities, related to the integration of new technologies that have recently become widely used in ATLAS Computing, like flexible computing utilization of opportunistic Cloud and HPC resources, ObjectStore services integration for the Distributed Data Management (Rucio) and ATLAS workload management (PanDA) systems, unified storage protocols declaration required for PanDA Pilot site movers, and others. The improvements of the information model and general updates are also shown; in particular we explain how other collaborations outside ATLAS could benefit from the system as a computing resources information catalogue. AGIS is evolving towards a common information system, not coupled to a specific experiment.
NASA Astrophysics Data System (ADS)
Aharonov, Dorit
In the last few years, theoretical study of quantum systems serving as computational devices has achieved tremendous progress. We now have strong theoretical evidence that quantum computers, if built, might be used as a dramatically powerful computational tool, capable of performing tasks which seem intractable for classical computers. This review is about to tell the story of theoretical quantum computation. I left out the developing topic of experimental realizations of the model, and neglected other closely related topics, namely quantum information and quantum communication. As a result of narrowing the scope of this paper, I hope it has gained the benefit of being an almost self-contained introduction to the exciting field of quantum computation. The review begins with background on theoretical computer science, Turing machines and Boolean circuits. In light of these models, I define quantum computers, and discuss the issue of universal quantum gates. Quantum algorithms, including Shor's factorization algorithm and Grover's algorithm for searching databases, are explained. I will devote much attention to understanding what the origins of the quantum computational power are, and what the limits of this power are. Finally, I describe the recent theoretical results which show that quantum computers maintain their complexity power even in the presence of noise, inaccuracies and finite precision. This question cannot be separated from that of quantum complexity because any realistic model will inevitably be subjected to such inaccuracies. I tried to put all results in their context, asking what the implications to other issues in computer science and physics are. At the end of this review, I make these connections explicit by discussing the possible implications of quantum computation for fundamental physical questions such as the transition from quantum to classical physics.
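As a concrete illustration of Grover's search algorithm mentioned above, a tiny statevector simulation is sketched below (the oracle as a phase flip, the diffusion operator as inversion about the mean). It is a didactic toy, not material from the review; the register size and marked item are arbitrary choices.

import numpy as np

def grover(n_qubits, marked, iterations=None):
    """Statevector simulation of Grover search for a single marked item."""
    N = 2 ** n_qubits
    if iterations is None:
        iterations = int(np.floor(np.pi / 4 * np.sqrt(N)))   # ~ optimal count
    psi = np.full(N, 1.0 / np.sqrt(N))        # uniform superposition
    for _ in range(iterations):
        psi[marked] *= -1.0                    # oracle: phase-flip the marked item
        psi = 2.0 * psi.mean() - psi           # diffusion: inversion about the mean
    return np.abs(psi) ** 2                    # measurement probabilities

probs = grover(n_qubits=4, marked=5)
print(round(probs[5], 3), int(np.argmax(probs)))   # marked item dominates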
Modeling Chagas Disease at Population Level to Explain Venezuela's Real Data
González-Parra, Gilberto; Chen-Charpentier, Benito M.; Bermúdez, Moises
2015-01-01
Objectives In this paper we present an age-structured epidemiological model for Chagas disease. This model includes the interactions between human and vector populations that transmit Chagas disease. Methods The human population is divided into age groups since the proportion of infected individuals in this population changes with age as shown by real prevalence data. Moreover, the age-structured model allows more accurate information regarding the prevalence, which can help to design more specific control programs. We apply this proposed model to data from the country of Venezuela for two periods, 1961–1971, and 1961–1991 taking into account real demographic data for these periods. Results Numerical computer simulations are presented to show the suitability of the age-structured model to explain the real data regarding prevalence of Chagas disease in each of the age groups. In addition, a numerical simulation varying the death rate of the vector is done to illustrate prevention and control strategies against Chagas disease. Conclusion The proposed model can be used to determine the effect of control strategies in different age groups. PMID:26929912
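A heavily simplified host-vector sketch with two host age classes is given below, purely to illustrate why prevalence can differ between age groups when infection accumulates with age. The equations, parameters and time span are assumptions made for illustration and bear no relation to the paper's fitted model or to the Venezuelan data.

import numpy as np

# Minimal host-vector SI sketch with young/adult host classes (illustrative only).
beta_hv, beta_vh = 0.05, 0.5        # vector->human and human->vector rates per year
aging, mu_v = 1.0 / 15.0, 0.5       # aging out of the young class; vector turnover
dt, years = 0.01, 30

S = np.array([0.5, 0.5])            # susceptible humans: young, adult
I = np.array([0.0, 0.0])            # infected humans: young, adult
Sv, Iv = 0.95, 0.05                 # vector fractions (Sv + Iv = 1)

for _ in range(int(years / dt)):
    new_inf = beta_hv * Iv * S                        # force of infection on humans
    dS = -new_inf + aging * np.array([-S[0], S[0]])   # aging moves young -> adult
    dI = new_inf + aging * np.array([-I[0], I[0]])
    dIv = beta_vh * I.sum() * Sv - mu_v * Iv          # vector infection and turnover
    S, I = S + dt * dS, I + dt * dI
    Iv = Iv + dt * dIv
    Sv = 1.0 - Iv                                     # vector population held constant

print("prevalence by age group:", (I / (S + I)).round(3))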
Implementing the SU(2) Symmetry for the DMRG
NASA Astrophysics Data System (ADS)
Alvarez, Gonzalo
2010-03-01
In the Density Matrix Renormalization Group (DMRG) algorithm (White, 1992), Hamiltonian symmetries play an important role. Using symmetries, the matrix representation of the Hamiltonian can be blocked. Diagonalizing each matrix block is more efficient than diagonalizing the original matrix. This talk will explain how the DMRG++ code (arXiv:0902.3185; Computer Physics Communications 180 (2009) 1572-1578) has been extended to handle the non-local SU(2) symmetry in a model independent way. Improvements in CPU times compared to runs with only local symmetries will be discussed for typical tight-binding models of strongly correlated electronic systems. The computational bottleneck of the algorithm, and the use of shared memory parallelization will also be addressed. Finally, a roadmap for future work on DMRG++ will be presented.
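The payoff of exploiting a symmetry can be illustrated with a toy example that conserves only total Sz (a local U(1) label, much simpler than the non-local SU(2) machinery in DMRG++): the Hamiltonian is split into quantum-number sectors and each block is diagonalized separately. The chain length and model below are arbitrary choices for illustration.

import numpy as np
from itertools import product

# Toy: open spin-1/2 Heisenberg chain; conserve total Sz, diagonalize per sector.
L, J = 8, 1.0
states = list(product([0, 1], repeat=L))                  # 0 = down, 1 = up

def total_sz(s):
    return sum(s) - L / 2

def heisenberg_matrix(basis):
    """Hamiltonian restricted to one total-Sz sector."""
    idx = {s: i for i, s in enumerate(basis)}
    H = np.zeros((len(basis), len(basis)))
    for s in basis:
        for b in range(L - 1):
            H[idx[s], idx[s]] += J * (s[b] - 0.5) * (s[b + 1] - 0.5)
            if s[b] != s[b + 1]:                          # spin-flip (exchange) term
                t = list(s)
                t[b], t[b + 1] = t[b + 1], t[b]
                H[idx[s], idx[tuple(t)]] += 0.5 * J
    return H

ground = np.inf
for sz in set(total_sz(s) for s in states):
    sector = [s for s in states if total_sz(s) == sz]     # one symmetry block
    evals = np.linalg.eigvalsh(heisenberg_matrix(sector)) # small block diagonalization
    ground = min(ground, evals[0])
print("ground-state energy:", round(float(ground), 6))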
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
This journal contains 7 articles pertaining to astrophysics. The first article is an overview of the other 6 articles and also a tribute to Jim Wilson and his work in the fields of general relativity and numerical astrophysics. The six articles are on the following subjects: (1) computer simulations of black hole accretion; (2) calculations on the collapse of the iron core of a massive star; (3) stellar-collapse models which reveal a possible site for nucleosynthesis of elements heavier than iron; (4) modeling sources of gravitational radiation; (5) the development of a computer program for finite-difference mesh calculations and its applications to astrophysics; (6) the use of neutrinos with nonzero rest mass to explain the universe. Abstracts of each of the articles were prepared separately. (SC)
Bootstrapping in a language of thought: a formal model of numerical concept learning.
Piantadosi, Steven T; Tenenbaum, Joshua B; Goodman, Noah D
2012-05-01
In acquiring number words, children exhibit a qualitative leap in which they transition from understanding a few number words, to possessing a rich system of interrelated numerical concepts. We present a computational framework for understanding this inductive leap as the consequence of statistical inference over a sufficiently powerful representational system. We provide an implemented model that is powerful enough to learn number word meanings and other related conceptual systems from naturalistic data. The model shows that bootstrapping can be made computationally and philosophically well-founded as a theory of number learning. Our approach demonstrates how learners may combine core cognitive operations to build sophisticated representations during the course of development, and how this process explains observed developmental patterns in number word learning. Copyright © 2011 Elsevier B.V. All rights reserved.
On agent-based modeling and computational social science
Conte, Rosaria; Paolucci, Mario
2014-01-01
In the first part of the paper, the field of agent-based modeling (ABM) is discussed focusing on the role of generative theories, aiming at explaining phenomena by growing them. After a brief analysis of the major strengths of the field some crucial weaknesses are analyzed. In particular, the generative power of ABM is found to have been underexploited, as the pressure for simple recipes has prevailed and shadowed the application of rich cognitive models. In the second part of the paper, the renewal of interest for Computational Social Science (CSS) is focused upon, and several of its variants, such as deductive, generative, and complex CSS, are identified and described. In the concluding remarks, an interdisciplinary variant, which takes after ABM, reconciling it with the quantitative one, is proposed as a fundamental requirement for a new program of the CSS. PMID:25071642
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rogers, L.A.
1980-06-01
In the Department of Energy test of the Edna Delcambre No. 1 well for recovery of natural gas from geopressured-geothermal brine, part of the test produced gas in excess of the amount that could be dissolved in the brine. Where this excess gas originated was unknown and several theories were proposed to explain the source. This annual report describes IGT's work to match the observed gas/water production with computer simulation. Two different theoretical models were calculated in detail using available reservoir simulators. One model considered the excess gas to be dispersed as small bubbles in pores. The other model considered the excess gas as a nearby free gas cap above the aquifer. Reservoir engineering analysis of the flow test data was used to determine the basic reservoir characteristics. The computer studies revealed that the dispersed gas model gave characteristically the wrong shape for plots of gas/water ratio, and no reasonable match of the calculated values could be made to the experimental results. The free gas cap model gave characteristically better shapes to the gas/water ratio plots if the initial edge of the free gas was only about 400 feet from the well. Because there were two other wells at approximately this distance (Delcambre No. 4 and No. 4A wells) which had a history of down-hole blowouts and mechanical problems, it appears that the source of the excess free gas is a separate horizon which is connected to the Delcambre No. 1 sand via these nearby wells. This conclusion is corroborated by the changes in gas composition when the excess gas occurs and the geological studies which indicate the nearest free gas cap to be several thousand feet away. The occurrence of this excess free gas can thus be explained by known reservoir characteristics, and no new model for gas entrapment or production is needed.
Learning by Explaining Examples to Oneself: A Computational Model
1992-02-01
rules, of which 28 represented common sense physics (e.g., a taut rope tied to an object pulls on it) and 17 represented over-generalizations such as... the learner did not refer to an example to achieve the goal, thus we classified the goal as being resolved by...
Combinatorial solutions to integrable hierarchies
NASA Astrophysics Data System (ADS)
Kazarian, M. E.; Lando, S. K.
2015-06-01
This paper reviews modern approaches to the construction of formal solutions to integrable hierarchies of mathematical physics whose coefficients are answers to various enumerative problems. The relationship between these approaches and the combinatorics of symmetric groups and their representations is explained. Applications of the results to the construction of efficient computations in problems related to models of quantum field theories are described. Bibliography: 34 titles.
Linking Working Memory and Long-Term Memory: A Computational Model of the Learning of New Words
ERIC Educational Resources Information Center
Jones, Gary; Gobet, Fernand; Pine, Julian M.
2007-01-01
The nonword repetition (NWR) test has been shown to be a good predictor of children's vocabulary size. NWR performance has been explained using phonological working memory, which is seen as a critical component in the learning of new words. However, no detailed specification of the link between phonological working memory and long-term memory…
Fatigue in isometric contraction in a single muscle fibre: a compartmental calcium ion flow model.
Kothiyal, K P; Ibramsha, M
1986-01-01
Fatigue in muscle is a complex biological phenomenon which has so far eluded a definite explanation. Many biochemical and physiological models have been suggested in the literature to account for the decrement in the ability of muscle to sustain a given level of force for a long time. Some of these models have been critically analysed in this paper and are shown to be unable to explain all the experimental observations. A new compartmental model based on the intracellular calcium ion movement in muscle is proposed to study the mechanical responses of a muscle fibre. Computer simulation is performed to obtain model responses in isometric contraction to an impulse and a train of stimuli of long duration. The simulated curves have been compared with experimentally observed mechanical responses of the semitendinosus muscle fibre of Rana pipiens. The comparison of computed and observed responses indicates that the proposed calcium ion model indeed accounts very well for muscle fatigue.
Modeling listeners' emotional response to music.
Eerola, Tuomas
2012-10-01
An overview of the computational prediction of emotional responses to music is presented. Communication of emotions by music has received a great deal of attention in recent years, and a large number of empirical studies have described the role of individual features (tempo, mode, articulation, timbre) in predicting the emotions suggested or invoked by the music. However, unlike the present work, relatively few studies have attempted to model continua of expressed emotions using a variety of musical features from audio-based representations in a correlation design. The construction of the computational model is divided into four separate phases, with a different focus for evaluation. These phases include the theoretical selection of relevant features, empirical assessment of feature validity, actual feature selection, and overall evaluation of the model. Existing research on music and emotions and extraction of musical features is reviewed in terms of these criteria. Examples drawn from recent studies of emotions within the context of film soundtracks are used to demonstrate each phase in the construction of the model. These models are able to explain most of the variation in listeners' self-reports of the emotions expressed by music, and the models show potential to generalize over different genres within Western music. Possible applications of the computational models of emotions are discussed. Copyright © 2012 Cognitive Science Society, Inc.
Modeling Cross-Situational Word–Referent Learning: Prior Questions
Yu, Chen; Smith, Linda B.
2013-01-01
Both adults and young children possess powerful statistical computation capabilities—they can infer the referent of a word from highly ambiguous contexts involving many words and many referents by aggregating cross-situational statistical information across contexts. This ability has been explained by models of hypothesis testing and by models of associative learning. This article describes a series of simulation studies and analyses designed to understand the different learning mechanisms posited by the 2 classes of models and their relation to each other. Variants of a hypothesis-testing model and a simple or dumb associative mechanism were examined under different specifications of information selection, computation, and decision. Critically, these 3 components of the models interact in complex ways. The models illustrate a fundamental tradeoff between amount of data input and powerful computations: With the selection of more information, dumb associative models can mimic the powerful learning that is accomplished by hypothesis-testing models with fewer data. However, because of the interactions among the component parts of the models, the associative model can mimic various hypothesis-testing models, producing the same learning patterns but through different internal components. The simulations argue for the importance of a compositional approach to human statistical learning: the experimental decomposition of the processes that contribute to statistical learning in human learners and models with the internal components that can be evaluated independently and together. PMID:22229490
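As an illustration of the simpler of the two model classes discussed above, the following sketch implements a bare co-occurrence (associative) cross-situational learner; the toy vocabulary, the situations, and the winner-take-all decision rule are assumptions for illustration, not the specification analyzed by the authors.

```python
# Minimal associative (co-occurrence counting) cross-situational learner.
# Each situation pairs a set of heard words with a set of visible referents;
# the learner accumulates word-referent co-occurrence counts and, at test,
# maps each word to its most strongly associated referent.
from collections import defaultdict

def train(situations):
    counts = defaultdict(lambda: defaultdict(float))
    for words, referents in situations:
        for w in words:
            for r in referents:
                counts[w][r] += 1.0
    return counts

def decode(counts, word):
    # Winner-take-all decision: pick the referent with the highest count.
    return max(counts[word].items(), key=lambda kv: kv[1])[0]

if __name__ == "__main__":
    # Toy corpus: every situation is ambiguous, but the statistics disambiguate.
    situations = [
        ({"ball", "dog"}, {"BALL", "DOG"}),
        ({"ball", "cup"}, {"BALL", "CUP"}),
        ({"dog", "cup"}, {"DOG", "CUP"}),
        ({"ball", "dog"}, {"BALL", "DOG"}),
    ]
    counts = train(situations)
    for w in ["ball", "dog", "cup"]:
        print(w, "->", decode(counts, w))
```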
NASA Technical Reports Server (NTRS)
Gurrola, Eric M.; Eshleman, Von R.
1990-01-01
This paper reports new developments in the buried crater model that has proved successful in explaining the anomalous strengths and polarizations of the radar echoes from the icy Galilean moons of Jupiter (Europa, Ganymede, and Callisto). The theory is extended to make predictions of the radar cross sections at all points on the surface of the moon, to compute the shape and strength of the power spectra, and to model a wavelength dependence that has been observed.
Computational Conceptual Change: An Explanation-Based Approach
2012-06-01
case for students in the control group of Chi et al. (1994a) who (1) explained blood flow in terms of the heart on a pretest, (2) read a textbook... Chi et al. (1994a) who complete a pretest about the circulatory system, read a textbook passage on the topic, and then complete a posttest to assess... model on the posttest. In total, 33% of the control group and 66% of the prompted group reached the correct mental model at the posttest. Results are
Seal, John B; Alverdy, John C; Zaborina, Olga; An, Gary
2011-09-19
There is a growing realization that alterations in host-pathogen interactions (HPI) can generate disease phenotypes without pathogen invasion. The gut represents a prime region where such HPI can arise and manifest. Under normal conditions intestinal microbial communities maintain a stable, mutually beneficial ecosystem. However, host stress can lead to changes in environmental conditions that shift the nature of the host-microbe dialogue, resulting in escalation of virulence expression, immune activation and ultimately systemic disease. Effective modulation of these dynamics requires the ability to characterize the complexity of the HPI, and dynamic computational modeling can aid in this task. Agent-based modeling is a computational method that is suited to representing spatially diverse, dynamical systems. We propose that dynamic knowledge representation of gut HPI with agent-based modeling will aid in the investigation of the pathogenesis of gut-derived sepsis. An agent-based model (ABM) of virulence regulation in Pseudomonas aeruginosa was developed by translating bacterial and host cell sense-and-response mechanisms into behavioral rules for computational agents and integrated into a virtual environment representing the host-microbe interface in the gut. The resulting gut milieu ABM (GMABM) was used to: 1) investigate a potential clinically relevant laboratory experimental condition not yet developed--i.e. non-lethal transient segmental intestinal ischemia, 2) examine the sufficiency of existing hypotheses to explain experimental data--i.e. lethality in a model of major surgical insult and stress, and 3) produce behavior to potentially guide future experimental design--i.e. suggested sample points for a potential laboratory model of non-lethal transient intestinal ischemia. Furthermore, hypotheses were generated to explain certain discrepancies between the behaviors of the GMABM and biological experiments, and new investigatory avenues proposed to test those hypotheses. Agent-based modeling can account for the spatio-temporal dynamics of an HPI, and, even when carried out with a relatively high degree of abstraction, can be useful in the investigation of system-level consequences of putative mechanisms operating at the individual agent level. We suggest that an integrated and iterative heuristic relationship between computational modeling and more traditional laboratory and clinical investigations, with a focus on identifying useful and sufficient degrees of abstraction, will enhance the efficiency and translational productivity of biomedical research.
Marrotte, R R; Gonzalez, A; Millien, V
2014-08-01
We evaluated the effect of habitat and landscape characteristics on the population genetic structure of the white-footed mouse. We developed a new approach that uses numerical optimization to define a model that combines site differences and landscape resistance to explain the genetic differentiation between mouse populations inhabiting forest patches in southern Québec. We used ecological distance computed from resistance surfaces with Circuitscape to infer the effect of the landscape matrix on gene flow. We calculated site differences using a site index of habitat characteristics. A model that combined site differences and resistance distances explained a high proportion of the variance in genetic differentiation and outperformed models that used geographical distance alone. Urban and agriculture-related land uses were, respectively, the most and the least resistant landscape features influencing gene flow. Our method detected the effect of rivers and highways as highly resistant linear barriers. The density of grass and shrubs on the ground best explained the variation in the site index of habitat characteristics. Our model indicates that movement of the white-footed mouse in this region is constrained along routes of low resistance. Our approach can generate models that may improve predictions of future northward range expansion of this small mammal. © 2014 John Wiley & Sons Ltd.
Uncertain behaviours of integrated circuits improve computational performance.
Yoshimura, Chihiro; Yamaoka, Masanao; Hayashi, Masato; Okuyama, Takuya; Aoki, Hidetaka; Kawarabayashi, Ken-ichi; Mizuno, Hiroyuki
2015-11-20
Improvements to the performance of conventional computers have mainly been achieved through semiconductor scaling; however, scaling is reaching its limitations. Natural phenomena, such as quantum superposition and stochastic resonance, have been introduced into new computing paradigms to improve performance beyond these limitations. Here, we explain that the uncertain behaviours of devices due to semiconductor scaling can improve the performance of computers. We prototyped an integrated circuit that performs a ground-state search of the Ising model. Bit errors in the memory cell devices holding the current state of the search occur probabilistically when fluctuations are inserted into the dynamic device characteristics, anticipating behaviour that future chips will exhibit. As a result, we observed greater improvement in solution accuracy than without fluctuations. Although uncertain behaviours had been something to be eliminated in conventional devices, we demonstrate that such behaviours have become the key to improving computational performance.
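The central claim, that injected bit errors can help a hardware Ising solver, can be illustrated in software with a toy greedy ground-state search in which each memory cell occasionally flips at random; the random couplings, error probability, and greedy update rule below are illustrative assumptions, not the circuit described by the authors.

```python
# Toy Ising ground-state search: greedy single-spin updates, optionally
# perturbed by probabilistic bit errors that can let the search escape local minima.
import random

def energy(J, s):
    n = len(s)
    return -sum(J[i][j] * s[i] * s[j] for i in range(n) for j in range(i + 1, n))

def search(J, sweeps=200, p_error=0.0, seed=0):
    rng = random.Random(seed)
    n = len(J)
    s = [rng.choice((-1, 1)) for _ in range(n)]
    best = energy(J, s)
    for _ in range(sweeps):
        for i in range(n):
            field = sum(J[i][j] * s[j] for j in range(n) if j != i)
            s[i] = 1 if field > 0 else -1          # greedy (deterministic) update
            if rng.random() < p_error:             # injected "bit error"
                s[i] = -s[i]
        best = min(best, energy(J, s))
    return best

if __name__ == "__main__":
    rng = random.Random(42)
    n = 24
    J = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            J[i][j] = J[j][i] = rng.choice((-1.0, 1.0))
    print("best energy without fluctuations:", search(J, p_error=0.0))
    print("best energy with fluctuations:   ", search(J, p_error=0.05))
```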
Bunker, Alex; Magarkar, Aniket; Viitala, Tapani
2016-10-01
Combined experimental and computational studies of lipid membranes and liposomes, with the aim to attain mechanistic understanding, result in a synergy that makes possible the rational design of liposomal drug delivery system (LDS) based therapies. The LDS is the leading form of nanoscale drug delivery platform, an avenue in drug research, known as "nanomedicine", that holds the promise to transcend the current paradigm of drug development that has led to diminishing returns. Unfortunately this field of research has, so far, been far more successful in generating publications than new drug therapies. This partly results from the trial and error based methodologies used. We discuss experimental techniques capable of obtaining mechanistic insight into LDS structure and behavior. Insight obtained purely experimentally is, however, limited; computational modeling using molecular dynamics simulation can provide insight not otherwise available. We review computational research, that makes use of the multiscale modeling paradigm, simulating the phospholipid membrane with all atom resolution and the entire liposome with coarse grained models. We discuss in greater detail the computational modeling of liposome PEGylation. Overall, we wish to convey the power that lies in the combined use of experimental and computational methodologies; we hope to provide a roadmap for the rational design of LDS based therapies. Computational modeling is able to provide mechanistic insight that explains the context of experimental results and can also take the lead and inspire new directions for experimental research into LDS development. This article is part of a Special Issue entitled: Biosimulations edited by Ilpo Vattulainen and Tomasz Róg. Copyright © 2016 Elsevier B.V. All rights reserved.
On the Floating Point Performance of the i860 Microprocessor
NASA Technical Reports Server (NTRS)
Lee, King; Kutler, Paul (Technical Monitor)
1997-01-01
The i860 microprocessor is a pipelined processor that can deliver two double precision floating point results every clock. It is being used in the Touchstone project to develop a teraflop computer by the year 2000. With such high computational capabilities it was expected that memory bandwidth would limit performance on many kernels. Measured performance of three kernels showed performance is less than what memory bandwidth limitations would predict. This paper develops a model that explains the discrepancy in terms of memory latencies and points to some problems involved in moving data from memory to the arithmetic pipelines.
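A back-of-the-envelope version of the kind of model the paper describes is sketched below: predicted kernel time is the larger of compute time and data-transfer time, plus a per-miss latency term. The peak rate, bandwidth, latency, and kernel figures are placeholders, not the i860 measurements from the paper.

```python
# Simple bandwidth-plus-latency performance model: the kernel is limited either
# by arithmetic throughput or by memory traffic, and each memory access group
# additionally pays a fixed latency that bandwidth-only models ignore.
def predicted_time(flops, bytes_moved, accesses,
                   peak_flops=60e6, bandwidth=160e6, latency=0.5e-6):
    compute_time = flops / peak_flops
    transfer_time = bytes_moved / bandwidth
    latency_time = accesses * latency
    return max(compute_time, transfer_time) + latency_time

if __name__ == "__main__":
    # Hypothetical daxpy-like kernel: 2 flops and 24 bytes per element.
    n = 100_000
    t_bw_only = predicted_time(2 * n, 24 * n, accesses=0)
    t_with_latency = predicted_time(2 * n, 24 * n, accesses=n // 8)  # one miss per cache line
    print(f"bandwidth-limited estimate: {t_bw_only * 1e3:.2f} ms")
    print(f"with latency term:          {t_with_latency * 1e3:.2f} ms")
```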
NASA Technical Reports Server (NTRS)
Goldstein, M. L.
1977-01-01
In a study of cosmic ray propagation in interstellar and interplanetary space, a perturbed orbit resonant scattering theory for pitch angle diffusion in a slab model of magnetostatic turbulence is slightly generalized and used to compute the diffusion coefficient for spatial propagation parallel to the mean magnetic field. This diffusion coefficient has been useful for describing the solar modulation of the galactic cosmic rays, and for explaining the diffusive phase in solar flares in which the initial anisotropy of the particle distribution decays to isotropy.
Wei, Feng; Hunley, Stanley C; Powell, John W; Haut, Roger C
2011-02-01
Recent studies, using two different manners of foot constraint, potted and taped, document altered failure characteristics in the human cadaver ankle under controlled external rotation of the foot. The posterior talofibular ligament (PTaFL) was commonly injured when the foot was constrained in potting material, while the frequency of deltoid ligament injury was higher for the taped foot. In this study an existing multibody computational modeling approach was validated to include the influence of foot constraint, determine the kinematics of the joint under external foot rotation, and consequently obtain strains in various ligaments. It was hypothesized that the location of ankle injury due to excessive levels of external foot rotation is a function of foot constraint. The results from this model simulation supported this hypothesis and helped to explain the mechanisms of injury in the cadaver experiments. An excessive external foot rotation might generate a PTaFL injury for a rigid foot constraint, and an anterior deltoid ligament injury for a pliant foot constraint. The computational models may be further developed and modified to simulate the human response for different shoe designs, as well as on various athletic shoe-surface interfaces, so as to provide a computational basis for optimizing athletic performance with minimal injury risk.
NASA Astrophysics Data System (ADS)
Tanioka, Y.; Miranda, G. J. A.; Gusman, A. R.
2017-12-01
Recently, tsunami early warning techniques have been improved using tsunami waveforms observed at ocean-bottom pressure gauges such as the NOAA DART system or the DONET and S-NET systems in Japan. However, for early warning of near-field tsunamis, it is essential to determine appropriate source models using seismological analysis before large tsunamis hit the coast, especially for tsunami earthquakes, which generate disproportionately large tsunamis. In this paper, we develop a technique to determine appropriate source models from which tsunami inundation along the coast can be numerically computed. The technique is tested for four large earthquakes, the 1992 Nicaragua tsunami earthquake (Mw 7.7), the 2001 El Salvador earthquake (Mw 7.7), the 2004 El Astillero earthquake (Mw 7.0), and the 2012 El Salvador-Nicaragua earthquake (Mw 7.3), which occurred off Central America. In this study, fault parameters were estimated from the W-phase inversion, then the fault length and width were determined from scaling relationships. At first, the slip amount was calculated from the seismic moment with a constant rigidity of 3.5 x 10^10 N/m^2. The tsunami numerical simulation was carried out and compared with the observed tsunami. For the 1992 Nicaragua tsunami earthquake, the computed tsunami was much smaller than the observed one. For the 2004 El Astillero earthquake, the computed tsunami was overestimated. In order to solve this problem, we constructed a depth-dependent rigidity curve, similar to that suggested by Bilek and Lay (1999). The rigidity at the central depth estimated by the W-phase inversion was used to calculate the slip amount of the fault model. Using those new slip amounts, the tsunami numerical simulation was carried out again. Then, the observed tsunami heights, run-up heights, and inundation areas for the 1992 Nicaragua tsunami earthquake were well explained by the computed ones. The tsunamis from the other three earthquakes were also reasonably well explained. Therefore, our technique using a depth-dependent rigidity curve succeeds in estimating an appropriate fault model that reproduces tsunami heights near the coast in Central America. The technique may also work in other subduction zones, provided a depth-dependent rigidity curve is found for that particular subduction zone.
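The slip computation described above can be sketched as follows. The moment-magnitude relation and the uniform-rigidity slip formula are standard, while the fault dimensions and the depth-dependent rigidity curve used here are illustrative placeholders rather than the values or the curve derived in the study.

```python
# Average fault slip from seismic moment: slip = M0 / (mu * L * W).
# Replacing a constant rigidity mu with a depth-dependent value lowers mu
# for shallow tsunami earthquakes and therefore raises the inferred slip.
import math

def seismic_moment(mw):
    # Standard moment-magnitude relation, M0 in N*m.
    return 10 ** (1.5 * mw + 9.1)

def rigidity(depth_km):
    # Placeholder monotonic depth-dependent rigidity (Pa), loosely in the
    # spirit of Bilek and Lay (1999); NOT the curve constructed in the paper.
    return 1.0e10 + (3.4e10 - 1.0e10) * min(depth_km, 40.0) / 40.0

def average_slip(mw, length_m, width_m, mu):
    return seismic_moment(mw) / (mu * length_m * width_m)

if __name__ == "__main__":
    mw, L, W = 7.7, 200e3, 80e3          # hypothetical fault dimensions
    slip_const = average_slip(mw, L, W, 3.5e10)
    slip_shallow = average_slip(mw, L, W, rigidity(depth_km=8.0))
    print(f"slip with constant rigidity:        {slip_const:.2f} m")
    print(f"slip with depth-dependent rigidity: {slip_shallow:.2f} m")
```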
Schnall, Rebecca; Bakken, Suzanne
2011-09-01
To assess the applicability of the Technology Acceptance Model (TAM) constructs in explaining HIV case managers' behavioural intention to use a continuity of care record (CCR) with context-specific links designed to meet their information needs. Data were collected from 94 case managers who provide care to persons living with HIV (PLWH) using an online survey comprising three components: (1) demographic information: age, gender, ethnicity, race, Internet usage and computer experience; (2) a mock-up of the CCR with context-specific links; and (3) items related to TAM constructs. Data analysis included: principal components factor analysis (PCA), assessment of internal consistency reliability, and univariate and multivariate analyses. PCA extracted three factors (Perceived Ease of Use, Perceived Usefulness and Perceived Barriers to Use), explained variance = 84.9%, Cronbach's α = 0.69-0.91. In a linear regression model, Perceived Ease of Use, Perceived Usefulness and Perceived Barriers to Use explained 43.6% (p < 0.001) of the variance in Behavioural Intention to use a CCR with context-specific links. Our study contributes to the evidence base regarding TAM in health care through expanding the type of professional surveyed, study setting and Health Information Technology assessed.
Testing alternative ground water models using cross-validation and other methods
Foglia, L.; Mehl, S.W.; Hill, M.C.; Perona, P.; Burlando, P.
2007-01-01
Many methods can be used to test alternative ground water models. Of concern in this work are methods able to (1) rank alternative models (also called model discrimination) and (2) identify observations important to parameter estimates and predictions (equivalent to the purpose served by some types of sensitivity analysis). Some of the measures investigated are computationally efficient; others are computationally demanding. The latter are generally needed to account for model nonlinearity. The efficient model discrimination methods investigated include the information criteria: the corrected Akaike information criterion, Bayesian information criterion, and generalized cross-validation. The efficient sensitivity analysis measures used are dimensionless scaled sensitivity (DSS), composite scaled sensitivity, and parameter correlation coefficient (PCC); the other statistics are DFBETAS, Cook's D, and observation-prediction statistic. Acronyms are explained in the introduction. Cross-validation (CV) is a computationally intensive nonlinear method that is used for both model discrimination and sensitivity analysis. The methods are tested using up to five alternative parsimoniously constructed models of the ground water system of the Maggia Valley in southern Switzerland. The alternative models differ in their representation of hydraulic conductivity. A new method for graphically representing CV and sensitivity analysis results for complex models is presented and used to evaluate the utility of the efficient statistics. The results indicate that for model selection, the information criteria produce similar results at much smaller computational cost than CV. For identifying important observations, the only obviously inferior linear measure is DSS; the poor performance was expected because DSS does not include the effects of parameter correlation and PCC reveals large parameter correlations. © 2007 National Ground Water Association.
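For the computationally cheap discrimination measures mentioned above, the least-squares forms of the information criteria can be computed as in the sketch below; the example sums of squared residuals and parameter counts are invented to show the mechanics, and the exact formulations used by the authors may differ in constants.

```python
# Least-squares forms of model-discrimination criteria: models with smaller
# AICc / BIC values are preferred; AICc adds a small-sample correction to AIC.
import math

def aic(sse, n, k):
    return n * math.log(sse / n) + 2 * k

def aicc(sse, n, k):
    return aic(sse, n, k) + 2 * k * (k + 1) / (n - k - 1)

def bic(sse, n, k):
    return n * math.log(sse / n) + k * math.log(n)

if __name__ == "__main__":
    n = 50  # number of observations
    # (name, sum of squared weighted residuals, number of estimated parameters)
    candidates = [("uniform K", 130.0, 2), ("two-zone K", 95.0, 4), ("five-zone K", 90.0, 8)]
    for name, sse, k in candidates:
        print(f"{name:12s}  AICc = {aicc(sse, n, k):7.2f}  BIC = {bic(sse, n, k):7.2f}")
```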
Coarse-Grained Models for Protein-Cell Membrane Interactions
Bradley, Ryan; Radhakrishnan, Ravi
2015-01-01
The physiological properties of biological soft matter are the product of collective interactions, which span many time and length scales. Recent computational modeling efforts have helped illuminate experiments that characterize the ways in which proteins modulate membrane physics. Linking these models across time and length scales in a multiscale model explains how atomistic information propagates to larger scales. This paper reviews continuum modeling and coarse-grained molecular dynamics methods, which connect atomistic simulations and single-molecule experiments with the observed microscopic or mesoscale properties of soft-matter systems essential to our understanding of cells, particularly those involved in sculpting and remodeling cell membranes. PMID:26613047
Quantitative Diagnosis of Continuous-Valued, Steady-State Systems
NASA Technical Reports Server (NTRS)
Rouquette, N.
1995-01-01
Quantitative diagnosis involves numerically estimating the values of unobservable parameters that best explain the observed parameter values. We consider quantitative diagnosis for continuous, lumped-parameter, steady-state physical systems because such models are easy to construct and the diagnosis problem is considerably simpler than that for corresponding dynamic models. To further tackle the difficulties of numerically inverting a simulation model to compute a diagnosis, we propose to decompose a physical system model in terms of feedback loops. This decomposition reduces the dimension of the problem and consequently decreases the diagnosis search space. We illustrate this approach on a model of a thermal control system studied in earlier research.
NASA Astrophysics Data System (ADS)
Hartin, C.; Lynch, C.; Kravitz, B.; Link, R. P.; Bond-Lamberty, B. P.
2017-12-01
Typically, uncertainty quantification of internal variability relies on large ensembles of climate model runs under multiple forcing scenarios or perturbations in a parameter space. Computationally efficient, standard pattern scaling techniques only generate one realization and do not capture the complicated dynamics of the climate system (i.e., stochastic variations with a frequency-domain structure). In this study, we generate large ensembles of climate data with spatially and temporally coherent variability across a subselection of Coupled Model Intercomparison Project Phase 5 (CMIP5) models. First, for each CMIP5 model we apply a pattern emulation approach to derive the model response to external forcing. We take all the spatial and temporal variability that isn't explained by the emulator and decompose it into non-physically based structures through use of empirical orthogonal functions (EOFs). Then, we perform a Fourier decomposition of the EOF projection coefficients to capture the input fields' temporal autocorrelation so that our new emulated patterns reproduce the proper timescales of climate response and "memory" in the climate system. Through this 3-step process, we derive computationally efficient climate projections consistent with CMIP5 model trends and modes of variability, which address a number of deficiencies inherent in the ability of pattern scaling to reproduce complex climate model behavior.
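A minimal numerical sketch of the three-step recipe (emulate the forced response, EOF-decompose the residual, resynthesize it with Fourier phase randomization) is given below; the synthetic data, the truncation to a few EOFs, and the phase-randomization step are simplifying assumptions and not the authors' implementation.

```python
# Sketch: decompose residual variability into EOFs, then generate a new
# realization by randomizing the Fourier phases of each principal-component
# time series (preserving its power spectrum, i.e. temporal autocorrelation).
import numpy as np

def eof_decompose(residual, n_modes):
    # residual: (time, space) anomalies after removing the forced response.
    u, s, vt = np.linalg.svd(residual, full_matrices=False)
    pcs = u[:, :n_modes] * s[:n_modes]        # time series of each mode
    eofs = vt[:n_modes]                       # spatial patterns
    return pcs, eofs

def phase_randomize(series, rng):
    spec = np.fft.rfft(series)
    phases = rng.uniform(0.0, 2.0 * np.pi, size=spec.shape)
    phases[0] = 0.0                           # keep the mean component
    new_spec = np.abs(spec) * np.exp(1j * phases)
    return np.fft.irfft(new_spec, n=len(series))

def emulate_realization(residual, n_modes, seed=0):
    rng = np.random.default_rng(seed)
    pcs, eofs = eof_decompose(residual, n_modes)
    new_pcs = np.column_stack([phase_randomize(pcs[:, k], rng) for k in range(pcs.shape[1])])
    return new_pcs @ eofs

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    residual = rng.standard_normal((240, 50))   # 240 months x 50 grid cells (synthetic)
    new_field = emulate_realization(residual, n_modes=5)
    print("emulated field shape:", new_field.shape)
    print("variance ratio (emulated/original):", new_field.var() / residual.var())
```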
Modeling the Cerebellar Microcircuit: New Strategies for a Long-Standing Issue.
D'Angelo, Egidio; Antonietti, Alberto; Casali, Stefano; Casellato, Claudia; Garrido, Jesus A; Luque, Niceto Rafael; Mapelli, Lisa; Masoli, Stefano; Pedrocchi, Alessandra; Prestori, Francesca; Rizza, Martina Francesca; Ros, Eduardo
2016-01-01
The cerebellar microcircuit has been the work bench for theoretical and computational modeling since the beginning of neuroscientific research. The regular neural architecture of the cerebellum inspired different solutions to the long-standing issue of how its circuitry could control motor learning and coordination. Originally, the cerebellar network was modeled using a statistical-topological approach that was later extended by considering the geometrical organization of local microcircuits. However, with the advancement in anatomical and physiological investigations, new discoveries have revealed an unexpected richness of connections, neuronal dynamics and plasticity, calling for a change in modeling strategies, so as to include the multitude of elementary aspects of the network into an integrated and easily updatable computational framework. Recently, biophysically accurate "realistic" models using a bottom-up strategy accounted for both detailed connectivity and neuronal non-linear membrane dynamics. In this perspective review, we will consider the state of the art and discuss how these initial efforts could be further improved. Moreover, we will consider how embodied neurorobotic models including spiking cerebellar networks could help explaining the role and interplay of distributed forms of plasticity. We envisage that realistic modeling, combined with closed-loop simulations, will help to capture the essence of cerebellar computations and could eventually be applied to neurological diseases and neurorobotic control systems.
Rational approximations to rational models: alternative algorithms for category learning.
Sanborn, Adam N; Griffiths, Thomas L; Navarro, Daniel J
2010-10-01
Rational models of cognition typically consider the abstract computational problems posed by the environment, assuming that people are capable of optimally solving those problems. This differs from more traditional formal models of cognition, which focus on the psychological processes responsible for behavior. A basic challenge for rational models is thus explaining how optimal solutions can be approximated by psychological processes. We outline a general strategy for answering this question, namely to explore the psychological plausibility of approximation algorithms developed in computer science and statistics. In particular, we argue that Monte Carlo methods provide a source of rational process models that connect optimal solutions to psychological processes. We support this argument through a detailed example, applying this approach to Anderson's (1990, 1991) rational model of categorization (RMC), which involves a particularly challenging computational problem. Drawing on a connection between the RMC and ideas from nonparametric Bayesian statistics, we propose 2 alternative algorithms for approximate inference in this model. The algorithms we consider include Gibbs sampling, a procedure appropriate when all stimuli are presented simultaneously, and particle filters, which sequentially approximate the posterior distribution with a small number of samples that are updated as new data become available. Applying these algorithms to several existing datasets shows that a particle filter with a single particle provides a good description of human inferences.
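The flavor of the single-particle algorithm can be conveyed with a short sketch: a Chinese-restaurant-process prior over cluster assignments combined with Beta-Bernoulli predictive likelihoods for binary features, with each stimulus assigned by sampling once and never revisited. The coupling parameter, priors, and toy stimuli are assumptions for illustration, not the settings analyzed in the paper.

```python
# One-particle sequential approximation to a rational categorization model:
# each incoming binary stimulus joins an existing cluster or a new one by
# sampling from CRP prior x Beta-Bernoulli posterior predictive likelihood,
# and the assignment is never revised (a single particle, no resampling).
import random

def predictive(cluster, stim, a=1.0, b=1.0):
    # Posterior predictive probability of the stimulus features given a cluster.
    p = 1.0
    n = cluster["n"]
    for d, x in enumerate(stim):
        p1 = (cluster["ones"][d] + a) / (n + a + b)
        p *= p1 if x == 1 else (1.0 - p1)
    return p

def assign(clusters, stim, alpha=1.0, rng=random):
    weights = [c["n"] * predictive(c, stim) for c in clusters]
    weights.append(alpha * predictive({"n": 0, "ones": [0] * len(stim)}, stim))
    total = sum(weights)
    r, acc = rng.random() * total, 0.0
    for k, w in enumerate(weights):
        acc += w
        if r <= acc:
            break
    if k == len(clusters):                      # open a new cluster
        clusters.append({"n": 0, "ones": [0] * len(stim)})
    clusters[k]["n"] += 1
    clusters[k]["ones"] = [o + x for o, x in zip(clusters[k]["ones"], stim)]
    return k

if __name__ == "__main__":
    random.seed(3)
    stimuli = [(1, 1, 0), (1, 1, 1), (0, 0, 1), (0, 0, 0), (1, 1, 0), (0, 0, 1)]
    clusters = []
    for s in stimuli:
        print(s, "-> cluster", assign(clusters, s))
```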
Mid-infrared interferometry of Seyfert galaxies: Challenging the Standard Model
NASA Astrophysics Data System (ADS)
López-Gonzaga, N.; Jaffe, W.
2016-06-01
Aims: We aim to find torus models that explain the observed high-resolution mid-infrared (MIR) measurements of active galactic nuclei (AGN). Our goal is to determine the general properties of the circumnuclear dusty environments. Methods: We used the MIR interferometric data of a sample of AGNs provided by the instrument MIDI/VLTI and followed a statistical approach to compare the observed distribution of the interferometric measurements with the distributions computed from clumpy torus models. We mainly tested whether the diversity of Seyfert galaxies can be described using the Standard Model idea, where differences are solely due to a line-of-sight (LOS) effect. In addition to the LOS effects, we performed different realizations of the same model to include possible variations that are caused by the stochastic nature of the dusty models. Results: We find that our entire sample of AGNs, which contains both Seyfert types, cannot be explained merely by an inclination effect and by including random variations of the clouds. Instead, we find that each subset of Seyfert type can be explained by different models, where the filling factor at the inner radius seems to be the largest difference. For the type 1 objects we find that about two thirds of our objects could also be described using a dusty torus similar to the type 2 objects. For the remaining third, it was not possible to find a good description using models with high filling factors, while we found good fits with models with low filling factors. Conclusions: Within our model assumptions, we did not find one single set of model parameters that could simultaneously explain the MIR data of all 21 AGN with LOS effects and random variations alone. We conclude that at least two distinct cloud configurations are required to model the differences in Seyfert galaxies, with volume-filling factors differing by a factor of about 5-10. A continuous transition between the two types cannot be excluded.
Representational geometry: integrating cognition, computation, and the brain
Kriegeskorte, Nikolaus; Kievit, Rogier A.
2013-01-01
The cognitive concept of representation plays a key role in theories of brain information processing. However, linking neuronal activity to representational content and cognitive theory remains challenging. Recent studies have characterized the representational geometry of neural population codes by means of representational distance matrices, enabling researchers to compare representations across stages of processing and to test cognitive and computational theories. Representational geometry provides a useful intermediate level of description, capturing both the information represented in a neuronal population code and the format in which it is represented. We review recent insights gained with this approach in perception, memory, cognition, and action. Analyses of representational geometry can compare representations between models and the brain, and promise to explain brain computation as transformation of representational similarity structure. PMID:23876494
The Lagrangian Ensemble metamodel for simulating plankton ecosystems
NASA Astrophysics Data System (ADS)
Woods, J. D.
2005-10-01
This paper presents a detailed account of the Lagrangian Ensemble (LE) metamodel for simulating plankton ecosystems. It uses agent-based modelling to describe the life histories of many thousands of individual plankters. The demography of each plankton population is computed from those life histories. So too is bio-optical and biochemical feedback to the environment. The resulting “virtual ecosystem” is a comprehensive simulation of the plankton ecosystem. It is based on phenotypic equations for individual micro-organisms. LE modelling differs significantly from population-based modelling. The latter uses prognostic equations to compute demography and biofeedback directly. LE modelling diagnoses them from the properties of individual micro-organisms, whose behaviour is computed from prognostic equations. That indirect approach permits the ecosystem to adjust gracefully to changes in exogenous forcing. The paper starts with theory: it defines the Lagrangian Ensemble metamodel and explains how LE code performs a number of computations “behind the curtain”. They include budgeting chemicals, and deriving biofeedback and demography from individuals. The next section describes the practice of LE modelling. It starts with designing a model that complies with the LE metamodel. Then it describes the scenario for exogenous properties that provide the computation with initial and boundary conditions. These procedures differ significantly from those used in population-based modelling. The next section shows how LE modelling is used in research, teaching and planning. The practice depends largely on hindcasting to overcome the limits to predictability of weather forecasting. The scientific method explains observable ecosystem phenomena in terms of finer-grained processes that cannot be observed, but which are controlled by the basic laws of physics, chemistry and biology. What-If? Prediction (WIP), used for planning, extends hindcasting by adding events that describe natural or man-made hazards and remedial actions. Verification is based on the Ecological Turing Test, which takes account of uncertainties in the observed and simulated versions of a target ecological phenomenon. The rest of the paper is devoted to a case study designed to show what LE modelling offers the biological oceanographer. The case study is presented in two parts. The first documents the WB model (Woods & Barkmann, 1994) and scenario used to simulate the ecosystem in a mesocosm moored in deep water off the Azores. The second part illustrates the emergent properties of that virtual ecosystem. The behaviour and development of an individual plankton lineage are revealed by an audit trail of the agent used in the computation. The fields of environmental properties reveal the impact of biofeedback. The fields of demographic properties show how changes in individuals cumulatively affect the birth and death rates of their population. This case study documents the virtual ecosystem used by Woods, Perilli and Barkmann (2005; hereafter WPB) to investigate the stability of simulations created by the Lagrangian Ensemble metamodel. The Azores virtual ecosystem was created and analysed on the Virtual Ecology Workbench (VEW), which is described briefly in the Appendix.
Scale Space for Camera Invariant Features.
Puig, Luis; Guerrero, José J; Daniilidis, Kostas
2014-09-01
In this paper we propose a new approach to compute the scale space of any central projection system, such as catadioptric, fisheye or conventional cameras. Since these systems can be explained using a unified model, the single parameter that defines each type of system is used to automatically compute the corresponding Riemannian metric. This metric, combined with the partial differential equations framework on manifolds, allows us to compute the Laplace-Beltrami (LB) operator, enabling the computation of the scale space of any central projection system. Scale space is essential for the intrinsic scale selection and neighborhood description in features like SIFT. We perform experiments with synthetic and real images to validate the generalization of our approach to any central projection system. We compare our approach with the best existing methods, showing competitive results for all types of cameras: catadioptric, fisheye, and perspective.
The anomalous demagnetization behaviour of chondritic meteorites
NASA Astrophysics Data System (ADS)
Morden, S. J.
1992-06-01
Alternating field (AF) demagnetization of chondritic samples often shows anomalous results such as large directional and intensity changes; 'saw-tooth' intensity vs. demagnetizing field curves are also prevalent. An attempt to explain this behaviour is presented, using a computer model in which individual 'mineral grains' can be 'magnetized' in a variety of different ways. A simulated demagnetization can then be carried out to examine the results. It was found that the experimental behaviour of chondrites can be successfully mimicked by loading the computer model with a series of randomly orientated and sized vectors. The parameters of the model can be changed to reflect different trends seen in experimental data. Many published results can be modelled using this method. A known magnetic mineralogy can be modelled, and an unknown mineralogy deduced from AF demagnetization curves. Only by comparing data from mutually orientated samples can true stable regions for palaeointensity measurements be identified, calling into question some previous estimates of field strength from meteorites.
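The modelling idea, that a random assembly of grain moments with random coercivities can produce non-monotonic 'saw-tooth' demagnetization curves, can be reproduced with the toy simulation below; the grain count, coercivity distribution, and the choice to simply remove demagnetized grains are illustrative assumptions rather than the parameters of the published model.

```python
# Toy AF demagnetization: grains carry randomly oriented moments and random
# coercivities; at each AF step, grains with coercivity below the peak field
# are demagnetized (their moments removed), and the remaining vector sum can
# rise as well as fall, producing saw-tooth intensity curves.
import math
import random

def random_unit_vector(rng):
    z = rng.uniform(-1.0, 1.0)
    phi = rng.uniform(0.0, 2.0 * math.pi)
    r = math.sqrt(1.0 - z * z)
    return (r * math.cos(phi), r * math.sin(phi), z)

def simulate(n_grains=200, fields=range(0, 101, 10), seed=7):
    rng = random.Random(seed)
    grains = [(random_unit_vector(rng), rng.uniform(0.0, 100.0)) for _ in range(n_grains)]
    curve = []
    for h in fields:
        mx = my = mz = 0.0
        for (x, y, z), coercivity in grains:
            if coercivity > h:                  # grain survives this AF step
                mx, my, mz = mx + x, my + y, mz + z
        curve.append((h, math.sqrt(mx * mx + my * my + mz * mz)))
    return curve

if __name__ == "__main__":
    for h, intensity in simulate():
        print(f"AF field {h:3d}: intensity {intensity:6.2f}")
```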
An SSH key management system: easing the pain of managing key/user/account associations
NASA Astrophysics Data System (ADS)
Arkhipkin, D.; Betts, W.; Lauret, J.; Shiryaev, A.
2008-07-01
Cyber security requirements for secure access to computing facilities often call for access controls via gatekeepers and the use of two-factor authentication. Using SSH keys to satisfy the two factor authentication requirement has introduced a potentially challenging task of managing the keys and their associations with individual users and user accounts. Approaches for a facility with the simple model of one remote user corresponding to one local user would not work at facilities that require a many-to-many mapping between users and accounts on multiple systems. We will present an SSH key management system we developed, tested and deployed to address the many-to-many dilemma in the environment of the STAR experiment. We will explain its use in an online computing context and explain how it makes possible the management and tracing of group account access spread over many sub-system components (data acquisition, slow controls, trigger, detector instrumentation, etc.) without the use of shared passwords for remote logins.
NASA Technical Reports Server (NTRS)
Holman, Gordon; Dennis, Brian R.; Tolbert, Anne K.; Schwartz, Richard
2010-01-01
Solar nonthermal hard X-ray (HXR) flare spectra often cannot be fitted by a single power law, but rather require a downward break in the photon spectrum. A possible explanation for this spectral break is nonuniform ionization in the emission region. We have developed a computer code to calculate the photon spectrum from electrons with a power-law distribution injected into a thick target in which the ionization decreases linearly from 100% to zero. We use the bremsstrahlung cross-section from Haug (1997), which closely approximates the full relativistic Bethe-Heitler cross-section, and compare photon spectra computed from this model with those obtained by Kontar, Brown and McArthur (2002), who used a step-function ionization model and the Kramers approximation to the cross-section. We find that for HXR spectra from a target with nonuniform ionization, the difference (Delta-gamma) between the power-law indexes above and below the break has an upper limit between approximately 0.2 and 0.7 that depends on the power-law index delta of the injected electron distribution. A broken power-law spectrum with a higher value of Delta-gamma cannot result from nonuniform ionization alone. The model is applied to spectra obtained around the peak times of 20 flares observed by the Ramaty High Energy Solar Spectroscopic Imager (RHESSI) from 2002 to 2004 to determine whether thick-target nonuniform ionization can explain the measured spectral breaks. A Monte Carlo method is used to determine the uncertainties of the best-fit parameters, especially on Delta-gamma. We find that 15 of the 20 flare spectra require a downward spectral break and that at least 6 of these could not be explained by nonuniform ionization alone because they had values of Delta-gamma with less than a 2.5% probability of being consistent with the computed upper limits from the model. The remaining 9 flare spectra, based on this criterion, are consistent with the nonuniform ionization model.
ERIC Educational Resources Information Center
Wilensky, Uri; Reisman, Kenneth
2006-01-01
Biological phenomena can be investigated at multiple levels, from the molecular to the cellular to the organismic to the ecological. In typical biology instruction, these levels have been segregated. Yet, it is by examining the connections between such levels that many phenomena in biology, and complex systems in general, are best explained. We…
A computational developmental model for specificity and transfer in perceptual learning.
Solgi, Mojtaba; Liu, Taosheng; Weng, Juyang
2013-01-04
How and under what circumstances the training effects of perceptual learning (PL) transfer to novel situations is critical to our understanding of generalization and abstraction in learning. Although PL is generally believed to be highly specific to the trained stimulus, a series of psychophysical studies have recently shown that training effects can transfer to untrained conditions under certain experimental protocols. In this article, we present a brain-inspired, neuromorphic computational model of the Where-What visuomotor pathways which successfully explains both the specificity and transfer of perceptual learning. The major architectural novelty is that each feature neuron has both sensory and motor inputs. The network of neurons is autonomously developed from experience, using a refined Hebbian-learning rule and lateral competition, which altogether result in neuronal recruitment. Our hypothesis is that certain paradigms of experiments trigger two-way (descending and ascending) off-task processes about the untrained condition which lead to recruitment of more neurons in lower feature representation areas as well as higher concept representation areas for the untrained condition, hence the transfer. We put forward a novel proposition that gated self-organization of the connections during the off-task processes accounts for the observed transfer effects. Simulation results showed transfer of learning across retinal locations in a Vernier discrimination task in a double-training procedure, comparable to previous psychophysical data (Xiao et al., 2008). To the best of our knowledge, this model is the first neurally-plausible model to explain both transfer and specificity in a PL setting.
NASA Astrophysics Data System (ADS)
Mathieu, Jean-Philippe; Inal, Karim; Berveiller, Sophie; Diard, Olivier
2010-11-01
The local approach to brittle fracture for low-alloyed steels is discussed in this paper. A bibliographical introduction highlights general trends and consensual points of the topic and evokes debatable aspects. French RPV steel 16MND5 (equivalent to ASTM A508 Cl.3) is then used as a model material to study the influence of temperature on brittle fracture. A micromechanical model of brittle fracture at the elementary volume scale, already used in previous work, is then recalled. It involves multiscale modelling of microstructural plasticity that has been tuned on experimental measurements of inter-phase and inter-granular stress heterogeneities. The fracture probability of the elementary volume can then be computed using a randomly attributed defect size distribution based on a realistic carbide distribution. This defect distribution is then deterministically correlated to stress heterogeneities simulated within the microstructure using a weakest-link hypothesis on the elementary volume, which results in a deterministic stress to fracture. Repeating the process allows Weibull parameters to be computed for the elementary volume. This tool is then used to investigate the physical mechanisms that could explain the experimentally observed temperature dependence of Beremin's parameter for 16MND5 steel. It is shown that, assuming the hypotheses made in this work about cleavage micro-mechanisms are correct, the effective equivalent surface energy (i.e. surface energy plus energy plastically dissipated in blunting the crack tip) for propagating a crack has to be temperature dependent to explain the temperature evolution of Beremin's parameters.
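For orientation, the weakest-link bookkeeping that underlies such Beremin-type treatments can be written in a few lines: a Weibull stress is accumulated over the plastically strained volume elements and converted into a fracture probability. The element stresses, reference volume, and Weibull parameters below are invented numbers, not the values identified for 16MND5.

```python
# Beremin-style weakest-link estimate: the Weibull stress aggregates the
# maximum principal stress over plastified volume elements, and the fracture
# probability follows a two-parameter Weibull law in that stress.
import math

def weibull_stress(elements, m, v0):
    # elements: iterable of (max principal stress in MPa, element volume).
    return sum((sigma ** m) * (vol / v0) for sigma, vol in elements) ** (1.0 / m)

def fracture_probability(sigma_w, sigma_u, m):
    return 1.0 - math.exp(-((sigma_w / sigma_u) ** m))

if __name__ == "__main__":
    m, sigma_u, v0 = 22.0, 2600.0, 1.0e-4      # illustrative Weibull parameters
    elements = [(1800.0, 2e-5), (2000.0, 1e-5), (2200.0, 5e-6), (1500.0, 4e-5)]
    sw = weibull_stress(elements, m, v0)
    print(f"Weibull stress: {sw:.0f} MPa, P_f = {fracture_probability(sw, sigma_u, m):.3f}")
```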
Toward synthesizing executable models in biology.
Fisher, Jasmin; Piterman, Nir; Bodik, Rastislav
2014-01-01
Over the last decade, executable models of biological behaviors have repeatedly provided new scientific discoveries, uncovered novel insights, and directed new experimental avenues. These models are computer programs whose execution mechanistically simulates aspects of the cell's behaviors. If the observed behavior of the program agrees with the observed biological behavior, then the program explains the phenomena. This approach has proven beneficial for gaining new biological insights and directing new experimental avenues. One advantage of this approach is that techniques for analysis of computer programs can be applied to the analysis of executable models. For example, one can confirm that a model agrees with experiments for all possible executions of the model (corresponding to all environmental conditions), even if there are a huge number of executions. Various formal methods have been adapted for this context, for example, model checking or symbolic analysis of state spaces. To avoid manual construction of executable models, one can apply synthesis, a method to produce programs automatically from high-level specifications. In the context of biological modeling, synthesis would correspond to extracting executable models from experimental data. We survey recent results about the usage of the techniques underlying synthesis of computer programs for the inference of biological models from experimental data. We describe synthesis of biological models from curated mutation experiment data, inferring network connectivity models from phosphoproteomic data, and synthesis of Boolean networks from gene expression data. While much work has been done on automated analysis of similar datasets using machine learning and artificial intelligence, using synthesis techniques provides new opportunities such as efficient computation of disambiguating experiments, as well as the ability to produce different kinds of models automatically from biological data.
NASA Astrophysics Data System (ADS)
Ming, Mei-Jun; Xu, Long-Kun; Wang, Fan; Bi, Ting-Jun; Li, Xiang-Yuan
2017-07-01
In this work, a matrix form of numerical algorithm for spectral shift is presented based on the novel nonequilibrium solvation model that is established by introducing the constrained equilibrium manipulation. This form is convenient for the development of codes for numerical solution. By means of the integral equation formulation polarizable continuum model (IEF-PCM), a subroutine has been implemented to compute spectral shift numerically. Here, the spectral shifts of absorption spectra for several popular chromophores, N,N-diethyl-p-nitroaniline (DEPNA), methylenecyclopropene (MCP), acrolein (ACL) and p-nitroaniline (PNA) were investigated in different solvents with various polarities. The computed spectral shifts can explain the available experimental findings reasonably. Discussions were made on the contributions of solute geometry distortion, electrostatic polarization and other non-electrostatic interactions to spectral shift.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gaggero, Daniele; Urbano, Alfredo; Valli, Mauro
We compute the γ-ray and neutrino diffuse emission of the Galaxy on the basis of a recently proposed phenomenological model characterized by radially dependent cosmic-ray (CR) transport properties. We show how this model, designed to reproduce both Fermi-LAT γ-ray data and local CR observables, naturally reproduces the anomalous TeV diffuse emission observed by Milagro in the inner Galactic plane. Above 100 TeV our picture predicts a neutrino flux that is about five (two) times larger than the neutrino flux computed with conventional models in the Galactic Center region (full-sky). Explaining in that way up to ∼25% of the flux measured by IceCube, we reproduce the full-sky IceCube spectrum adding an extra-Galactic component derived from the muonic neutrino flux in the northern hemisphere. We also present precise predictions for the Galactic plane region where the flux is dominated by the Galactic emission.
Critical branching neural networks.
Kello, Christopher T
2013-01-01
It is now well-established that intrinsic variations in human neural and behavioral activity tend to exhibit scaling laws in their fluctuations and distributions. The meaning of these scaling laws is an ongoing matter of debate between isolable causes versus pervasive causes. A spiking neural network model is presented that self-tunes to critical branching and, in doing so, simulates observed scaling laws as pervasive to neural and behavioral activity. These scaling laws are related to neural and cognitive functions, in that critical branching is shown to yield spiking activity with maximal memory and encoding capacities when analyzed using reservoir computing techniques. The model is also shown to account for findings of pervasive 1/f scaling in speech and cued response behaviors that are difficult to explain by isolable causes. Issues and questions raised by the model and its results are discussed from the perspectives of physics, neuroscience, computer and information sciences, and psychological and cognitive sciences.
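The key quantity in such models, the branching parameter (the expected number of descendant spikes per ancestor spike), can be estimated from binned activity as in the sketch below; the synthetic spike counts and the simple ratio estimator are assumptions made for illustration and not the reservoir-computing analyses reported in the paper.

```python
# Estimate the branching parameter sigma from consecutive time bins of spike
# counts: sigma ~ 1 indicates critical branching (activity neither dies out
# nor explodes on average).
def branching_parameter(counts):
    ancestors = sum(counts[:-1])
    descendants = sum(counts[1:])
    return descendants / ancestors if ancestors else float("nan")

if __name__ == "__main__":
    subcritical = [64, 40, 22, 15, 9, 6, 3, 2, 1, 0]
    near_critical = [30, 28, 33, 29, 31, 30, 27, 32, 30, 29]
    print("sub-critical estimate :", round(branching_parameter(subcritical), 2))
    print("near-critical estimate:", round(branching_parameter(near_critical), 2))
```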
Computer Analysis of Spectrum Anomaly in 32-GHz Traveling-Wave Tube for Cassini Mission
NASA Technical Reports Server (NTRS)
Dayton, James A., Jr.; Wilson, Jeffrey D.; Kory, Carol L.
1999-01-01
Computer modeling of the 32-GHz traveling-wave tube (TWT) for the Cassini Mission was conducted to explain the anomaly observed in the spectrum analysis of one of the flight-model tubes. The analysis indicated that the effect, manifested as a weak signal in the neighborhood of 35 GHz, was an intermodulation product of the 32-GHz drive signal with a 66.9-GHz oscillation induced by coupling to the second-harmonic signal. The oscillation occurred only at low radio-frequency (RF) drive power levels that are not expected during the Cassini Mission. The conclusion was that the anomaly was caused by a generic defect inadvertently incorporated in the geometric design of the slow-wave circuit and that it would not change as the TWT aged. The most probable effect of aging on tube performance would be a reduction in the electron beam current. The computer modeling indicated that although not likely to occur within the mission lifetime, a reduction in beam current would reduce or eliminate the anomaly but would do so at the cost of reduced RF output power.
The biomechanical effect of clavicular shortening on shoulder muscle function, a simulation study.
Hillen, Robert J; Bolsterlee, Bart; Veeger, Dirkjan H E J
2016-08-01
Malunion of the clavicle with shortening after mid-shaft fractures can give rise to long-term residual complaints. The cause of these complaints is as yet unclear. In this study we analysed data from an earlier experimental cadaveric study on changes in shoulder biomechanics with progressive shortening of the clavicle. The data were used in a musculoskeletal computer model to examine the effect of clavicle shortening on muscle function, expressed as maximal muscle moments for abduction and internal rotation. Clavicle shortening results in changes in maximal muscle moments around the shoulder girdle. At 3.6 cm of shortening, the mean changes in maximal muscle moment are a 16% decrease around the sterno-clavicular joint for both ab- and adduction, a 37% increase around the acromio-clavicular joint for adduction, and a 32% decrease for internal rotation around the gleno-humeral joint in the resting position. Shortening of the clavicle thus affects shoulder muscle function in a computer model, which may explain the residual complaints after malunion with shortening. Basic Science Study. Biomechanics. Cadaveric data and computer model. Copyright © 2016 Elsevier Ltd. All rights reserved.
Bramley, Neil R; Lagnado, David A; Speekenbrink, Maarten
2015-05-01
Interacting with a system is key to uncovering its causal structure. A computational framework for interventional causal learning has been developed over the last decade, but how real causal learners might achieve or approximate the computations entailed by this framework is still poorly understood. Here we describe an interactive computer task in which participants were incentivized to learn the structure of probabilistic causal systems through free selection of multiple interventions. We develop models of participants' intervention choices and online structure judgments, using expected utility gain, probability gain, and information gain, and introducing plausible memory and processing constraints. We find that successful participants are best described by a model that acts to maximize information (rather than expected score or probability of being correct); that forgets much of the evidence received in earlier trials; but that mitigates this by being conservative, preferring structures consistent with earlier stated beliefs. We explore 2 heuristics that partly explain how participants might be approximating these models without explicitly representing or updating a hypothesis space. (c) 2015 APA, all rights reserved.
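A minimal sketch of the information-gain computation that the best-fitting model maximizes, for a hypothetical two-hypothesis causal system. The hypothesis space, likelihoods and intervention names below are illustrative, not the task's actual structures.

```python
import math

def entropy(posterior):
    return -sum(p * math.log2(p) for p in posterior.values() if p > 0)

def expected_information_gain(prior, likelihood, intervention, outcomes):
    """prior: {hypothesis: P(h)}; likelihood(o, h, i) -> P(outcome o | h, do(i))."""
    h_prior = entropy(prior)
    eig = 0.0
    for o in outcomes:
        # marginal probability of observing outcome o under this intervention
        p_o = sum(prior[h] * likelihood(o, h, intervention) for h in prior)
        if p_o == 0:
            continue
        posterior = {h: prior[h] * likelihood(o, h, intervention) / p_o for h in prior}
        eig += p_o * (h_prior - entropy(posterior))
    return eig

# Toy example: two hypotheses about a two-variable system, A -> B or B -> A.
# Intervening on A and watching B is diagnostic only if A actually causes B.
def lik(outcome, hypothesis, intervention):
    if intervention == "do(A=1)":
        p_b_on = 0.9 if hypothesis == "A->B" else 0.1
        return p_b_on if outcome == "B=1" else 1 - p_b_on
    return 0.5  # uninformative intervention

prior = {"A->B": 0.5, "B->A": 0.5}
print(expected_information_gain(prior, lik, "do(A=1)", ["B=1", "B=0"]))  # about 0.53 bits
```

An information-maximizing learner chooses, on each trial, the intervention with the largest expected gain; the memory-limited variants described in the abstract apply the same computation to a degraded record of past evidence.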
The role of mechanics during brain development
NASA Astrophysics Data System (ADS)
Budday, Silvia; Steinmann, Paul; Kuhl, Ellen
2014-12-01
Convolutions are a classical hallmark of most mammalian brains. Brain surface morphology is often associated with intelligence and closely correlated with neurological dysfunction. Yet, we know surprisingly little about the underlying mechanisms of cortical folding. Here we identify the role of the key anatomic players during the folding process: cortical thickness, stiffness, and growth. To establish estimates for the critical time, pressure, and the wavelength at the onset of folding, we derive an analytical model using the Föppl-von Kármán theory. Analytical modeling provides a quick first insight into the critical conditions at the onset of folding, yet it fails to predict the evolution of complex instability patterns in the post-critical regime. To predict realistic surface morphologies, we establish a computational model using the continuum theory of finite growth. Computational modeling not only confirms our analytical estimates, but is also capable of predicting the formation of complex surface morphologies with asymmetric patterns and secondary folds. Taken together, our analytical and computational models explain why larger mammalian brains tend to be more convoluted than smaller brains. Both models provide mechanistic interpretations of the classical malformations of lissencephaly and polymicrogyria. Understanding the process of cortical folding in the mammalian brain has direct implications on the diagnostics of neurological disorders including severe retardation, epilepsy, schizophrenia, and autism.
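The flavor of such analytical estimates can be conveyed with the classical stiff-film-on-compliant-substrate wrinkling result, a standard consequence of Föppl-von Kármán plate theory; the authors' exact expressions for critical time, pressure and wavelength may differ from this textbook form.

```latex
% Critical wrinkling wavelength and strain for a stiff cortical layer of thickness t
% and stiffness mu_c growing on a compliant subcortical foundation of stiffness mu_s:
\lambda_{\mathrm{crit}} = 2\pi t \left(\frac{\mu_c}{3\mu_s}\right)^{1/3},
\qquad
\varepsilon_{\mathrm{crit}} = \frac{1}{4}\left(\frac{3\mu_s}{\mu_c}\right)^{2/3}
```

A thicker or relatively stiffer cortex therefore folds at a longer wavelength, which is the kind of scaling argument that connects cortical thickness and brain size to the degree of convolution discussed above.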
A class of all digital phase locked loops - Modeling and analysis
NASA Technical Reports Server (NTRS)
Reddy, C. P.; Gupta, S. C.
1973-01-01
An all digital phase locked loop which tracks the phase of the incoming signal once per carrier cycle is proposed. The different elements and their functions, and the phase lock operation, are explained in detail. The general digital loop operation is governed by a nonlinear difference equation, from which a suitable model is developed. The lock range for the general model is derived. The performance of the digital loop for phase step and frequency step inputs, at different levels of quantization and without a loop filter, is studied. The analytical results are checked by simulating the actual system on a digital computer.
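A minimal discrete-time sketch of a first-order loop of this general kind. It is illustrative only: the quantizer, loop gain and once-per-cycle update below are assumptions, not the paper's specific nonlinear difference equation.

```python
import math

def first_order_dpll(phase_in, gain=0.3, levels=16):
    """Track an input phase sequence with a quantized, first-order digital loop.
    The phase error is quantized to `levels` steps over one cycle before the update."""
    step = 2 * math.pi / levels
    est = 0.0
    history = []
    for theta in phase_in:
        err = math.remainder(theta - est, 2 * math.pi)   # wrap error to [-pi, pi]
        err_q = round(err / step) * step                 # quantizer
        est += gain * err_q                              # loop update (no loop filter)
        history.append(est)
    return history

# Phase-step input: the loop converges to the new phase within a few carrier cycles,
# then sits in a small limit cycle set by the quantization step.
phase_step = [0.0] * 5 + [1.0] * 45
track = first_order_dpll(phase_step)
print(f"final phase estimate ≈ {track[-1]:.3f} rad")
```

Running the same loop with a coarser quantizer (fewer levels) widens the residual limit cycle, which is the kind of quantization effect studied in the paper.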
NASA Technical Reports Server (NTRS)
Wu, S. T.
1974-01-01
The responses of the solar atmosphere to an outward-propagating shock are examined by employing the Lax-Wendroff method to solve the set of nonlinear partial differential equations in the model of the solar atmosphere. It is found that this theoretical model can be used to explain the solar phenomena of surge and spray. A criterion to discriminate between surge and spray is established, and detailed information concerning the density, velocity, and temperature distribution with respect to height and time is presented. The complete computer program is also included.
Cuevas Rivera, Dario; Bitzer, Sebastian; Kiebel, Stefan J.
2015-01-01
The olfactory information that is received by the insect brain is encoded in the form of spatiotemporal patterns in the projection neurons of the antennal lobe. These dense and overlapping patterns are transformed into a sparse code in Kenyon cells in the mushroom body. Although it is clear that this sparse code is the basis for rapid categorization of odors, it is yet unclear how the sparse code in Kenyon cells is computed and what information it represents. Here we show that this computation can be modeled by sequential firing rate patterns using Lotka-Volterra equations and Bayesian online inference. This new model can be understood as an ‘intelligent coincidence detector’, which robustly and dynamically encodes the presence of specific odor features. We found that the model is able to qualitatively reproduce experimentally observed activity in both the projection neurons and the Kenyon cells. In particular, the model explains mechanistically how sparse activity in the Kenyon cells arises from the dense code in the projection neurons. The odor classification performance of the model proved to be robust against noise and time jitter in the observed input sequences. As in recent experimental results, we found that recognition of an odor happened very early during stimulus presentation in the model. Critically, by using the model, we found surprising but simple computational explanations for several experimental phenomena. PMID:26451888
Is realistic neuronal modeling realistic?
Almog, Mara
2016-01-01
Scientific models are abstractions that aim to explain natural phenomena. A successful model shows how a complex phenomenon arises from relatively simple principles while preserving major physical or biological rules and predicting novel experiments. A model should not be a facsimile of reality; it is an aid for understanding it. Contrary to this basic premise, the 21st century has brought a surge in computational efforts to model biological processes in great detail. Here we discuss the oxymoronic, realistic modeling of single neurons. This rapidly advancing field is driven by the discovery that some neurons don't merely sum their inputs and fire if the sum exceeds some threshold. Researchers have therefore asked what the computational abilities of single neurons are and have attempted to give answers using realistic models. We briefly review the state of the art of compartmental modeling, highlighting recent progress and intrinsic flaws. We then attempt to address two fundamental questions. Practically, can we realistically model single neurons? Philosophically, should we realistically model single neurons? We use layer 5 neocortical pyramidal neurons as a test case to examine these issues. We subject three publicly available models of layer 5 pyramidal neurons to three simple computational challenges. Based on their performance and a partial survey of published models, we conclude that current compartmental models are ad hoc, unrealistic models functioning poorly once they are stretched beyond the specific problems for which they were designed. We then attempt to plot possible paths for generating realistic single neuron models. PMID:27535372
Pre-supernova models at low metallicities
NASA Astrophysics Data System (ADS)
Hirschi, Raphael
A series of fast-rotating models at very low metallicity (Z ≈ 10⁻⁸) was computed in order to explain the abundances observed at the surface of CEMP stars, in particular for nitrogen. The main results are the following: strong mixing occurs during He-burning and leads to important primary nitrogen production; important mass loss takes place in the RSG stage for the most massive models, such that the 85 M☉ model loses about three quarters of its initial mass, becomes a WO star and could produce a GRB; and the CNO elements of HE1327-2326 could have been produced in massive rotating stars and ejected by their stellar winds.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Foster, David; Sutcliffe, Paul
Recent results suggest that multi-Skyrmions stabilized by {omega} mesons have very similar properties to those stabilized by the Skyrme term. In this paper we present the results of a detailed numerical investigation of a (2+1)-dimensional analogue of this situation. Namely, we compute solitons in an O(3) {sigma} model coupled to a massive vector meson and compare the results to baby Skyrmions, which are solitons in an O(3) {sigma} model including a Skyrme term. We find that multisolitons in the vector meson model are surprisingly similar to those in the baby Skyrme model, and we explain this correspondence using a simple derivative expansion.
The rise of machine consciousness: studying consciousness with computational models.
Reggia, James A
2013-08-01
Efforts to create computational models of consciousness have accelerated over the last two decades, creating a field that has become known as artificial consciousness. There have been two main motivations for this controversial work: to develop a better scientific understanding of the nature of human/animal consciousness and to produce machines that genuinely exhibit conscious awareness. This review begins by briefly explaining some of the concepts and terminology used by investigators working on machine consciousness, and summarizes key neurobiological correlates of human consciousness that are particularly relevant to past computational studies. Models of consciousness developed over the last twenty years are then surveyed. These models are largely found to fall into five categories based on the fundamental issue that their developers have selected as being most central to consciousness: a global workspace, information integration, an internal self-model, higher-level representations, or attention mechanisms. For each of these five categories, an overview of past work is given, a representative example is presented in some detail to illustrate the approach, and comments are provided on the contributions and limitations of the methodology. Three conclusions are offered about the state of the field based on this review: (1) computational modeling has become an effective and accepted methodology for the scientific study of consciousness, (2) existing computational models have successfully captured a number of neurobiological, cognitive, and behavioral correlates of conscious information processing as machine simulations, and (3) no existing approach to artificial consciousness has presented a compelling demonstration of phenomenal machine consciousness, or even clear evidence that artificial phenomenal consciousness will eventually be possible. The paper concludes by discussing the importance of continuing work in this area, considering the ethical issues it raises, and making predictions concerning future developments. Copyright © 2013 Elsevier Ltd. All rights reserved.
A framework for analyzing the cognitive complexity of computer-assisted clinical ordering.
Horsky, Jan; Kaufman, David R; Oppenheim, Michael I; Patel, Vimla L
2003-01-01
Computer-assisted provider order entry is a technology that is designed to expedite medical ordering and to reduce the frequency of preventable errors. This paper presents a multifaceted cognitive methodology for the characterization of cognitive demands of a medical information system. Our investigation was informed by the distributed resources (DR) model, a novel approach designed to describe the dimensions of user interfaces that introduce unnecessary cognitive complexity. This method evaluates the relative distribution of external (system) and internal (user) representations embodied in system interaction. We conducted an expert walkthrough evaluation of a commercial order entry system, followed by a simulated clinical ordering task performed by seven clinicians. The DR model was employed to explain variation in user performance and to characterize the relationship of resource distribution and ordering errors. The analysis revealed that the configuration of resources in this ordering application placed unnecessarily heavy cognitive demands on the user, especially on those who lacked a robust conceptual model of the system. The resources model also provided some insight into clinicians' interactive strategies and patterns of associated errors. Implications for user training and interface design based on the principles of human-computer interaction in the medical domain are discussed.
Healthy and pathological cerebellar Spiking Neural Networks in Vestibulo-Ocular Reflex.
Antonietti, Alberto; Casellato, Claudia; Geminiani, Alice; D'Angelo, Egidio; Pedrocchi, Alessandra
2015-01-01
Since the Marr-Albus model, computational neuroscientists have been developing a variety of models of the cerebellum, with different approaches and features. In this work, we developed and tested realistic artificial Spiking Neural Networks inspired by this brain region. In computational simulations of the Vestibulo-Ocular Reflex protocol we tested three different models: a network equipped with a single plasticity site, at the cortical level; a network equipped with a distributed plasticity, at both cortical and nuclear levels; and a network with a pathological plasticity mechanism at the cortical level. We analyzed the learning performance of the three different models, highlighting the behavioral differences among them. We show that the model with a distributed plasticity produces a faster and more accurate cerebellar response, especially during a second session of acquisition, compared with the single-plasticity model. Furthermore, the pathological model shows an impaired learning capability in Vestibulo-Ocular Reflex acquisition, as found in neurophysiological studies. The effect of the different plasticity conditions, which change the fast and slow dynamics, memory consolidation and, in general, the learning capabilities of the cerebellar network, explains the differences in behavioral outcome.
In defence of model-based inference in phylogeography
Beaumont, Mark A.; Nielsen, Rasmus; Robert, Christian; Hey, Jody; Gaggiotti, Oscar; Knowles, Lacey; Estoup, Arnaud; Panchal, Mahesh; Corander, Jukka; Hickerson, Mike; Sisson, Scott A.; Fagundes, Nelson; Chikhi, Lounès; Beerli, Peter; Vitalis, Renaud; Cornuet, Jean-Marie; Huelsenbeck, John; Foll, Matthieu; Yang, Ziheng; Rousset, Francois; Balding, David; Excoffier, Laurent
2017-01-01
Recent papers have promoted the view that model-based methods in general, and those based on Approximate Bayesian Computation (ABC) in particular, are flawed in a number of ways, and are therefore inappropriate for the analysis of phylogeographic data. These papers further argue that Nested Clade Phylogeographic Analysis (NCPA) offers the best approach in statistical phylogeography. In order to remove the confusion and misconceptions introduced by these papers, we justify and explain the reasoning behind model-based inference. We argue that ABC is a statistically valid approach, alongside other computational statistical techniques that have been successfully used to infer parameters and compare models in population genetics. We also examine the NCPA method and highlight numerous deficiencies, whether it is used with single or multiple loci. We further show that the ages of clades are carelessly used to infer the ages of demographic events, and that these ages are estimated under a simple model of panmixia and population stationarity but are then used under different and unspecified models to test hypotheses, a usage that invalidates these testing procedures. We conclude by encouraging researchers to study and use model-based inference in population genetics. PMID:29284924
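The ABC idea defended here fits in a few lines. The sketch below is a generic rejection sampler with a made-up toy inference problem, not any specific phylogeographic implementation or software.

```python
import numpy as np

rng = np.random.default_rng(0)

def abc_rejection(observed_summary, prior_sampler, simulate, distance,
                  tolerance=0.1, n_draws=20_000):
    """Generic ABC rejection sampler: keep parameter draws whose simulated
    summary statistic falls within `tolerance` of the observed one."""
    accepted = []
    for _ in range(n_draws):
        theta = prior_sampler()
        if distance(simulate(theta), observed_summary) <= tolerance:
            accepted.append(theta)
    return np.array(accepted)       # approximate sample from P(theta | data)

# Toy example: recover a per-locus mutation-like rate from the mean of 50 Poisson counts.
observed_mean = 3.2
post = abc_rejection(
    observed_mean,
    prior_sampler=lambda: rng.uniform(0.0, 10.0),
    simulate=lambda rate: rng.poisson(rate, size=50).mean(),
    distance=lambda a, b: abs(a - b),
    tolerance=0.2,
)
print(f"posterior mean ≈ {post.mean():.2f} from {post.size} accepted draws")
```

Model comparison proceeds the same way: simulate from each candidate model and compare acceptance rates (or posterior model probabilities), which is the statistically explicit machinery the authors contrast with NCPA.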
Gary H. Elsner
1979-01-01
Computers can analyze and help to plan the visual aspects of large wildland landscapes. This paper categorizes and explains current computer methods available. It also contains a futuristic dialogue between a landscape architect and a computer.
Biomimetic design processes in architecture: morphogenetic and evolutionary computational design.
Menges, Achim
2012-03-01
Design computation has profound impact on architectural design methods. This paper explains how computational design enables the development of biomimetic design processes specific to architecture, and how they need to be significantly different from established biomimetic processes in engineering disciplines. The paper first explains the fundamental difference between computer-aided and computational design in architecture, as the understanding of this distinction is of critical importance for the research presented. Thereafter, the conceptual relation and possible transfer of principles from natural morphogenesis to design computation are introduced and the related developments of generative, feature-based, constraint-based, process-based and feedback-based computational design methods are presented. This morphogenetic design research is then related to exploratory evolutionary computation, followed by the presentation of two case studies focusing on the exemplary development of spatial envelope morphologies and urban block morphologies.
Multitasking as a choice: a perspective.
Broeker, Laura; Liepelt, Roman; Poljac, Edita; Künzell, Stefan; Ewolds, Harald; de Oliveira, Rita F; Raab, Markus
2018-01-01
Performance decrements in multitasking have been explained by limitations in cognitive capacity, either modelled as static structural bottlenecks or as the scarcity of overall cognitive resources that prevent humans, or at least restrict them, from processing two tasks at the same time. However, recent research has shown that individual differences, flexible resource allocation, and prioritization of tasks cannot be fully explained by these accounts. We argue that understanding human multitasking as a choice and examining multitasking performance from the perspective of judgment and decision-making (JDM), may complement current dual-task theories. We outline two prominent theories from the area of JDM, namely Simple Heuristics and the Decision Field Theory, and adapt these theories to multitasking research. Here, we explain how computational modelling techniques and decision-making parameters used in JDM may provide a benefit to understanding multitasking costs and argue that these techniques and parameters have the potential to predict multitasking behavior in general, and also individual differences in behavior. Finally, we present the one-reason choice metaphor to explain a flexible use of limited capacity as well as changes in serial and parallel task processing. Based on this newly combined approach, we outline a concrete interdisciplinary future research program that we think will help to further develop multitasking research.
Computational model of lightness perception in high dynamic range imaging
NASA Astrophysics Data System (ADS)
Krawczyk, Grzegorz; Myszkowski, Karol; Seidel, Hans-Peter
2006-02-01
An anchoring theory of lightness perception by Gilchrist et al. [1999] explains many characteristics of the human visual system, such as lightness constancy and its spectacular failures, which are important in the perception of images. The principal concept of this theory is the perception of complex scenes in terms of groups of consistent areas (frameworks). Such areas, following the gestalt theorists, are defined by the regions of common illumination. The key aspect of the image perception is the estimation of lightness within each framework through anchoring to the luminance perceived as white, followed by the computation of the global lightness. In this paper we provide a computational model for automatic decomposition of HDR images into frameworks. We derive a tone mapping operator which predicts lightness perception of real-world scenes and aims at its accurate reproduction on low dynamic range displays. Furthermore, such a decomposition into frameworks opens new grounds for local image analysis in view of human perception.
NASA Technical Reports Server (NTRS)
Jansen, B. J., Jr.
1998-01-01
The features of the data acquisition and control systems of the NASA Langley Research Center's Jet Noise Laboratory are presented. The Jet Noise Laboratory is a facility that simulates realistic mixed flow turbofan jet engine nozzle exhaust systems in simulated flight. The system is capable of acquiring data for a complete take-off assessment of noise and nozzle performance. This paper describes the development of an integrated system to control and measure the behavior of model jet nozzles featuring dual independent high pressure combusting air streams with wind tunnel flow. The acquisition and control system is capable of simultaneous measurement of forces, moments, static and dynamic model pressures and temperatures, and jet noise. The design concepts for the coordination of the control computers and multiple data acquisition computers and instruments are discussed. The control system design and implementation are explained, describing the features, equipment, and the experiences of using a primarily Personal Computer based system. Areas for future development are examined.
Energy and time determine scaling in biological and computer designs.
Moses, Melanie; Bezerra, George; Edwards, Benjamin; Brown, James; Forrest, Stephanie
2016-08-19
Metabolic rate in animals and power consumption in computers are analogous quantities that scale similarly with size. We analyse vascular systems of mammals and on-chip networks of microprocessors, where natural selection and human engineering, respectively, have produced systems that minimize both energy dissipation and delivery times. Using a simple network model that simultaneously minimizes energy and time, our analysis explains empirically observed trends in the scaling of metabolic rate in mammals and power consumption and performance in microprocessors across several orders of magnitude in size. Just as the evolutionary transitions from unicellular to multicellular animals in biology are associated with shifts in metabolic scaling, our model suggests that the scaling of power and performance will change as computer designs transition to decentralized multi-core and distributed cyber-physical systems. More generally, a single energy-time minimization principle may govern the design of many complex systems that process energy, materials and information. This article is part of the themed issue 'The major synthetic evolutionary transitions'. © 2016 The Author(s).
Dynamic Divisive Normalization Predicts Time-Varying Value Coding in Decision-Related Circuits
LoFaro, Thomas; Webb, Ryan; Glimcher, Paul W.
2014-01-01
Normalization is a widespread neural computation, mediating divisive gain control in sensory processing and implementing a context-dependent value code in decision-related frontal and parietal cortices. Although decision-making is a dynamic process with complex temporal characteristics, most models of normalization are time-independent and little is known about the dynamic interaction of normalization and choice. Here, we show that a simple differential equation model of normalization explains the characteristic phasic-sustained pattern of cortical decision activity and predicts specific normalization dynamics: value coding during initial transients, time-varying value modulation, and delayed onset of contextual information. Empirically, we observe these predicted dynamics in saccade-related neurons in monkey lateral intraparietal cortex. Furthermore, such models naturally incorporate a time-weighted average of past activity, implementing an intrinsic reference-dependence in value coding. These results suggest that a single network mechanism can explain both transient and sustained decision activity, emphasizing the importance of a dynamic view of normalization in neural coding. PMID:25429145
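A minimal sketch of a dynamic divisive-normalization circuit in the spirit described above. The equations and constants are generic illustrations, not the authors' fitted model or parameter values.

```python
import numpy as np

def dynamic_normalization(values, t_max=2.0, dt=0.001,
                          tau_r=0.05, tau_g=0.2, sigma=0.1):
    """Integrate a generic dynamic divisive-normalization circuit:
        tau_r * dR_i/dt = -R_i + V_i / (sigma + G)
        tau_g * dG/dt   = -G   + sum_j R_j
    R shows a phasic transient followed by a sustained, normalized level."""
    values = np.asarray(values, dtype=float)
    r = np.zeros_like(values)
    g = 0.0
    trace = []
    for _ in range(int(t_max / dt)):
        dr = (-r + values / (sigma + g)) / tau_r
        dg = (-g + r.sum()) / tau_g
        r, g = r + dt * dr, g + dt * dg
        trace.append(r.copy())
    return np.array(trace)

# Two targets with unequal values: early responses track value alone, later responses
# are divided by the pooled activity, so contextual modulation arrives with a delay.
trace = dynamic_normalization([1.0, 0.5])
print("transient:", trace[50], " steady state:", trace[-1])
```

Because the normalization pool G lags the value-driven response R, the sketch reproduces the qualitative phasic-sustained pattern and the delayed onset of contextual information described in the abstract.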
Sensitivity to the Sampling Process Emerges From the Principle of Efficiency.
Jara-Ettinger, Julian; Sun, Felix; Schulz, Laura; Tenenbaum, Joshua B
2018-05-01
Humans can seamlessly infer other people's preferences, based on what they do. Broadly, two types of accounts have been proposed to explain different aspects of this ability. The first account focuses on spatial information: agents' efficient navigation in space reveals what they like. The second account focuses on statistical information: uncommon choices reveal stronger preferences. Together, these two lines of research suggest that we have two distinct capacities for inferring preferences. Here we propose that this is not the case, and that both spatially based and statistically based preference inferences can be explained by the single assumption that agents act efficiently. We show that people's sensitivity to spatial and statistical information when they infer preferences is best predicted by a computational model of the principle of efficiency, and that this model outperforms dual-system models, even when the latter are fit to participant judgments. Our results suggest that, in adults, a unified understanding of agency under the principle of efficiency underlies our ability to infer preferences. Copyright © 2018 Cognitive Science Society, Inc.
Sutherland, Clare A M; Liu, Xizi; Zhang, Lingshan; Chu, Yingtung; Oldmeadow, Julian A; Young, Andrew W
2018-04-01
People form first impressions from facial appearance rapidly, and these impressions can have considerable social and economic consequences. Three dimensions can explain Western perceivers' impressions of Caucasian faces: approachability, youthful-attractiveness, and dominance. Impressions along these dimensions are theorized to be based on adaptive cues to threat detection or sexual selection, making it likely that they are universal. We tested whether the same dimensions of facial impressions emerge across culture by building data-driven models of first impressions of Asian and Caucasian faces derived from Chinese and British perceivers' unconstrained judgments. We then cross-validated the dimensions with computer-generated average images. We found strong evidence for common approachability and youthful-attractiveness dimensions across perceiver and face race, with some evidence of a third dimension akin to capability. The models explained ~75% of the variance in facial impressions. In general, the findings demonstrate substantial cross-cultural agreement in facial impressions, especially on the most salient dimensions.
Emergent neutrality drives phytoplankton species coexistence
Segura, Angel M.; Calliari, Danilo; Kruk, Carla; Conde, Daniel; Bonilla, Sylvia; Fort, Hugo
2011-01-01
The mechanisms that drive species coexistence and community dynamics have long puzzled ecologists. Here, we explain species coexistence, size structure and diversity patterns in a phytoplankton community using a combination of four fundamental factors: organism traits, size-based constraints, hydrology and species competition. Using a ‘microscopic’ Lotka–Volterra competition (MLVC) model (i.e. with explicit recipes to compute its parameters), we provide a mechanistic explanation of species coexistence along a niche axis (i.e. organismic volume). We based our model on empirically measured quantities, minimal ecological assumptions and stochastic processes. In nature, we found aggregated patterns of species biovolume (i.e. clumps) along the volume axis and a peak in species richness. Both patterns were reproduced by the MLVC model. Observed clumps corresponded to niche zones (volumes) where species fitness was highest, or where fitness was equal among competing species. The latter implies the action of equalizing processes, which would suggest emergent neutrality as a plausible mechanism to explain community patterns. PMID:21177680
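A minimal sketch of niche-axis Lotka-Volterra competition of the kind that produces such clumps. The Gaussian kernel, carrying-capacity profile and all parameter values are illustrative assumptions, not the explicit MLVC recipes of the paper.

```python
import numpy as np

def lv_niche_competition(n_species=60, t_max=2000, dt=0.1,
                         kernel_width=0.15, seed=0):
    """Lotka-Volterra competition with a Gaussian kernel on a (rescaled) log-volume
    niche axis: species of similar size compete most strongly, so abundances tend to
    self-organize into clumps of similar species separated by thinly occupied gaps."""
    rng = np.random.default_rng(seed)
    niche = np.sort(rng.uniform(0.0, 1.0, n_species))     # niche position of each species
    k = 1.0 - 0.3 * (niche - 0.5) ** 2                     # carrying capacity along the axis
    alpha = np.exp(-((niche[:, None] - niche[None, :]) ** 2) / (2 * kernel_width ** 2))
    x = np.full(n_species, 0.01)                           # initial biovolumes
    for _ in range(int(t_max / dt)):
        growth = x * (1.0 - (alpha @ x) / k)               # growth rates set to 1 for simplicity
        x = np.clip(x + dt * growth, 0.0, None)
    return niche, x

niche, x = lv_niche_competition()
survivors = niche[x > 1e-3]
print(f"{survivors.size} of {niche.size} species persist; niche positions: {np.round(survivors, 2)}")
```

Within a clump the surviving species are nearly equivalent competitors, which is the equalizing, emergent-neutrality mechanism the abstract invokes to explain coexistence.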
Targeted intervention: Computational approaches to elucidate and predict relapse in alcoholism.
Heinz, Andreas; Deserno, Lorenz; Zimmermann, Ulrich S; Smolka, Michael N; Beck, Anne; Schlagenhauf, Florian
2017-05-01
Alcohol use disorder (AUD), and addiction in general, is characterized by failures of choice resulting in repeated drug intake despite severe negative consequences. Behavioral change is hard to accomplish, and relapse after detoxification is common and can be promoted by consumption of small amounts of alcohol as well as by exposure to alcohol-associated cues or stress. While the environmental factors contributing to relapse have long been identified, the underlying psychological and neurobiological mechanisms on which those factors act are to date incompletely understood. Based on the reinforcing effects of drugs of abuse, animal experiments showed that drug, cue and stress exposure affect Pavlovian and instrumental learning processes, which can increase the salience of drug cues and promote habitual drug intake. In humans, computational approaches can help to quantify changes in key learning mechanisms during the development and maintenance of alcohol dependence, e.g. by using sequential decision making in combination with computational modeling to elucidate individual differences in model-free versus more complex, model-based learning strategies and their neurobiological correlates such as prediction error signaling in fronto-striatal circuits. Computational models can also help to explain how alcohol-associated cues trigger relapse: mechanisms such as Pavlovian-to-Instrumental Transfer can quantify to what degree Pavlovian conditioned stimuli facilitate approach behavior, including alcohol seeking and intake. By using generative models of behavioral and neural data, computational approaches can help to quantify individual differences in the psychophysiological mechanisms that underlie the development and maintenance of AUD and thus promote targeted intervention. Copyright © 2016 Elsevier Inc. All rights reserved.
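The model-free learning signal referred to above can be illustrated with a standard temporal-difference update; the reward-prediction error delta is the quantity whose fronto-striatal correlates are discussed, while the states, rewards and parameter values below are purely illustrative.

```python
def td_update(q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """One model-free Q-learning step; delta is the reward-prediction error."""
    best_next = max(q[next_state].values()) if q[next_state] else 0.0
    delta = reward + gamma * best_next - q[state][action]
    q[state][action] += alpha * delta
    return delta

# Toy example: a drug-associated cue state followed by an approach choice and reward.
q = {"cue": {"approach": 0.0, "avoid": 0.0}, "outcome": {}}
for _ in range(20):
    delta = td_update(q, "cue", "approach", reward=1.0, next_state="outcome")
print(q["cue"], f"last prediction error = {delta:.3f}")
```

Fitting the learning rate and the balance between such model-free updates and a model-based planner to individual choice data is the kind of quantification of individual differences the abstract describes.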
Are Quantum Models for Order Effects Quantum?
NASA Astrophysics Data System (ADS)
Moreira, Catarina; Wichert, Andreas
2017-12-01
The application of principles of Quantum Mechanics in areas outside of physics has been receiving increasing attention in the scientific community in an emergent discipline called Quantum Cognition. These principles have been applied to explain paradoxical situations that cannot be easily explained through classical theory. In quantum probability, events are characterised by a superposition state, which is represented by a state vector in an N-dimensional vector space. The probability of an event is given by the squared magnitude of the projection of this superposition state onto the desired subspace. This geometric approach is very useful for explaining paradoxical findings that involve order effects, but do we really need quantum principles for models that only involve projections? This work has two main goals. First, it is still not clear in the literature whether a quantum projection model has any advantage over a classical projection. We compared both models and concluded that the quantum projection model achieves the same results as its classical counterpart, because the quantum interference effects play no role in the computation of the probabilities. Second, we propose an alternative relativistic interpretation for the rotation parameters that are involved in both the classical and quantum models. In the end, instead of interpreting these parameters as a similarity measure between questions, we propose that they emerge due to the lack of knowledge concerning a personal basis state and due to uncertainties about the state of the world and the context of the questions.
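The projection computation at issue can be written out directly. The sketch below uses a two-dimensional belief state and a hypothetical rotation angle between question bases; the order effect arises purely from non-commuting projections, so the same numbers are obtained whether the rotation is called classical or quantum, which is the point the paper makes.

```python
import numpy as np

def order_probability(psi, proj_first, proj_second):
    """P('yes' to question 1, then 'yes' to question 2) = ||P2 P1 psi||^2."""
    return float(np.linalg.norm(proj_second @ (proj_first @ psi)) ** 2)

theta = np.pi / 5                                  # hypothetical angle between question bases
psi = np.array([1.0, 0.0])                         # initial belief state
P_A = np.array([[1.0, 0.0], [0.0, 0.0]])           # 'yes' subspace of question A
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
P_B = R @ P_A @ R.T                                # 'yes' subspace of question B, rotated by theta

p_ab = order_probability(psi, P_A, P_B)
p_ba = order_probability(psi, P_B, P_A)
print(f"P(A then B) = {p_ab:.3f}, P(B then A) = {p_ba:.3f}  (order effect = {p_ab - p_ba:+.3f})")
```

No complex amplitudes or interference terms appear anywhere in the calculation; only the relative rotation of the two projectors matters, which is what motivates the alternative interpretation of those rotation parameters proposed in the paper.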
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sanfilippo, Antonio P.; McGrath, Liam R.; Whitney, Paul D.
2011-11-17
We present a computational approach to radical rhetoric that leverages the co-expression of rhetoric and action features in discourse to identify violent intent. The approach combines text mining and machine learning techniques with insights from Frame Analysis and theories that explain the emergence of violence in terms of moral disengagement, the violation of sacred values and social isolation in order to build computational models that identify messages from terrorist sources and estimate their proximity to an attack. We discuss a specific application of this approach to a body of documents from and about radical and terrorist groups in the Middle East and present the results achieved.
Quantum processing by remote quantum control
NASA Astrophysics Data System (ADS)
Qiang, Xiaogang; Zhou, Xiaoqi; Aungskunsiri, Kanin; Cable, Hugo; O'Brien, Jeremy L.
2017-12-01
Client-server models enable computations to be hosted remotely on quantum servers. We present a novel protocol for realizing this task, with practical advantages when using technology feasible in the near term. Client tasks are realized as linear combinations of operations implemented by the server, where the linear coefficients are hidden from the server. We report on an experimental demonstration of our protocol using linear optics, which realizes linear combination of two single-qubit operations by a remote single-qubit control. In addition, we explain when our protocol can remain efficient for larger computations, as well as some ways in which privacy can be maintained using our protocol.
ERIC Educational Resources Information Center
Hung, Y.-C.
2012-01-01
This paper investigates the impact of combining self-explaining (SE) with computer architecture diagrams to help novice students learn assembly language programming. Pre- and post-test scores for the experimental and control groups were compared and subjected to analysis of covariance (ANCOVA). Results indicate that the SE-plus-diagram…
Mapping nonlinear receptive field structure in primate retina at single cone resolution
Li, Peter H; Greschner, Martin; Gunning, Deborah E; Mathieson, Keith; Sher, Alexander; Litke, Alan M; Paninski, Liam
2015-01-01
The function of a neural circuit is shaped by the computations performed by its interneurons, which in many cases are not easily accessible to experimental investigation. Here, we elucidate the transformation of visual signals flowing from the input to the output of the primate retina, using a combination of large-scale multi-electrode recordings from an identified ganglion cell type, visual stimulation targeted at individual cone photoreceptors, and a hierarchical computational model. The results reveal nonlinear subunits in the circuitry of OFF midget ganglion cells, which subserve high-resolution vision. The model explains light responses to a variety of stimuli more accurately than a linear model, including stimuli targeted to cones within and across subunits. The recovered model components are consistent with the known anatomical organization of midget bipolar interneurons. These results reveal the spatial structure of linear and nonlinear encoding, at the resolution of single cells and at the scale of complete circuits. DOI: http://dx.doi.org/10.7554/eLife.05241.001 PMID:26517879
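The hierarchical (subunit) model class can be sketched as a two-stage linear-nonlinear cascade. The grouping, weights and paired-cone stimuli below are illustrative, not the fitted values or actual stimuli from the recordings.

```python
import numpy as np

def subunit_model(cone_contrasts, groups, w_out):
    """Two-stage LN-LN sketch: cones are summed within each subunit (bipolar-like),
    half-wave rectified, then pooled by the ganglion cell."""
    drives = np.array([cone_contrasts[idx].sum() for idx in groups])
    return float(np.dot(w_out, np.maximum(drives, 0.0)))   # rectified subunit outputs, pooled

# Toy circuit: 6 cones pooled in pairs into 3 subunits with equal weights.
groups = [np.array([0, 1]), np.array([2, 3]), np.array([4, 5])]
w_out = np.array([0.5, 0.5, 0.5])

within = np.array([+1.0, -1.0, 0.0, 0.0, 0.0, 0.0])   # bright/dark cones in the SAME subunit
across = np.array([+1.0, 0.0, -1.0, 0.0, 0.0, 0.0])   # bright/dark cones in DIFFERENT subunits

# A purely linear receptive field with equal weights predicts identical responses to the
# two stimuli; rectified subunits respond only when the dark cone cannot cancel the bright one.
print("within-subunit pair :", subunit_model(within, groups, w_out))   # 0.0
print("across-subunit pair :", subunit_model(across, groups, w_out))   # 0.5
```

Targeting single cones within versus across candidate subunits, as in the experiments summarized above, is exactly the manipulation that separates this cascade from a linear model.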
Bayesian analysis of caustic-crossing microlensing events
NASA Astrophysics Data System (ADS)
Cassan, A.; Horne, K.; Kains, N.; Tsapras, Y.; Browne, P.
2010-06-01
Aims: Caustic-crossing binary-lens microlensing events are important anomalous events because they are capable of detecting an extrasolar planet companion orbiting the lens star. Fast and robust modelling methods are thus of prime interest in helping to decide whether a planet is detected by an event. Cassan introduced a new set of parameters to model binary-lens events, which are closely related to properties of the light curve. In this work, we explain how Bayesian priors can be added to this framework, and investigate some interesting options. Methods: We develop a mathematical formulation that allows us to compute analytically the priors on the new parameters, given some previous knowledge about other physical quantities. We explicitly compute the priors for a number of interesting cases, and show how this can be implemented in a fully Bayesian, Markov chain Monte Carlo algorithm. Results: Using Bayesian priors can accelerate microlens-fitting codes by reducing the time spent considering physically implausible models, and helps us to discriminate between alternative models based on the physical plausibility of their parameters.
Substorm injection boundaries. [magnetospheric electric field model
NASA Technical Reports Server (NTRS)
Mcilwain, C. E.
1974-01-01
An improved magnetospheric electric field model is used to compute the initial locations of particles injected by several substorms. Trajectories are traced from the time of their encounter with the ATS-5 satellite backwards to the onset time given by ground-based magnetometers. A spiral-shaped inner boundary of injection is found which is quite similar to that found by a statistical analysis. This injection boundary is shown to move in an energy-dependent fashion which can explain the soft energy spectra observed at the inner edge of the electron plasma sheet.
Shear viscosity coefficient of liquid lanthanides
NASA Astrophysics Data System (ADS)
Patel, H. P.; Sonvane, Y. A.; Thakor, P. B.; Prajapati, A. V.
2015-05-01
The present paper deals with the computation of the shear viscosity coefficient (η) of liquid lanthanides. The effective pair potential v(r) is calculated through our newly constructed model potential. The pair distribution function g(r) is calculated from the PYHS reference system. To see the influence of the local field correction function, the Hartree (H), Taylor (T) and Sarkar et al. (S) local field correction functions are used. The present results are compared with available experimental as well as theoretical data. Lastly, we find that our newly constructed model potential successfully explains the shear viscosity coefficient (η) of liquid lanthanides.
A new computational growth model for sea urchin skeletons.
Zachos, Louis G
2009-08-07
A new computational model has been developed to simulate growth of regular sea urchin skeletons. The model incorporates the processes of plate addition and individual plate growth into a composite model of whole-body (somatic) growth. A simple developmental model based on hypothetical morphogens underlies the assumptions used to define the simulated growth processes. The data model is based on a Delaunay triangulation of plate growth center points, using the dual Voronoi polygons to define plate topologies. A spherical frame of reference is used for growth calculations, with affine deformation of the sphere (based on a Young-Laplace membrane model) to result in an urchin-like three-dimensional form. The model verifies that the patterns of coronal plates in general meet the criteria of Voronoi polygonalization, that a morphogen/threshold inhibition model for plate addition results in the alternating plate addition pattern characteristic of sea urchins, and that application of the Bertalanffy growth model to individual plates results in simulated somatic growth that approximates that seen in living urchins. The model suggests avenues of research that could explain some of the distinctions between modern sea urchins and the much more disparate groups of forms that characterized the Paleozoic Era.
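The individual-plate growth component mentioned above can be sketched with the von Bertalanffy form; the parameter values below are illustrative rather than those calibrated in the model.

```python
import math

def bertalanffy_size(t, s_max=1.0, k=0.15, t0=0.0):
    """von Bertalanffy growth: plate size approaches the asymptote s_max with rate constant k."""
    return s_max * (1.0 - math.exp(-k * (t - t0)))

# Older plates (added earlier, larger t - t0) approach their asymptotic size, while newly
# inserted plates near the apical system are still small, giving the familiar coronal gradient.
for age in (1, 5, 10, 30):
    print(f"plate age {age:>2}: relative size {bertalanffy_size(age):.2f}")
```

Summing such per-plate curves over plates added at successive times is what lets the composite model approximate whole-body (somatic) growth.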
Magretta, Joan
2002-05-01
"Business model" was one of the great buzz-words of the Internet boom. A company didn't need a strategy, a special competence, or even any customers--all it needed was a Web-based business model that promised wild profits in some distant, ill-defined future. Many people--investors, entrepreneurs, and executives alike--fell for the fantasy and got burned. And as the inevitable counterreaction played out, the concept of the business model fell out of fashion nearly as quickly as the .com appendage itself. That's a shame. As Joan Magretta explains, a good business model remains essential to every successful organization, whether it's a new venture or an established player. To help managers apply the concept successfully, she defines what a business model is and how it complements a smart competitive strategy. Business models are, at heart, stories that explain how enterprises work. Like a good story, a robust business model contains precisely delineated characters, plausible motivations, and a plot that turns on an insight about value. It answers certain questions: Who is the customer? How do we make money? What underlying economic logic explains how we can deliver value to customers at an appropriate cost? Every viable organization is built on a sound business model, but a business model isn't a strategy, even though many people use the terms interchangeably. Business models describe, as a system, how the pieces of a business fit together. But they don't factor in one critical dimension of performance: competition. That's the job of strategy. Illustrated with examples from companies like American Express, EuroDisney, WalMart, and Dell Computer, this article clarifies the concepts of business models and strategy, which are fundamental to every company's performance.
Coding conventions and principles for a National Land-Change Modeling Framework
Donato, David I.
2017-07-14
This report establishes specific rules for writing computer source code for use with the National Land-Change Modeling Framework (NLCMF). These specific rules consist of conventions and principles for writing code primarily in the C and C++ programming languages. Collectively, these coding conventions and coding principles create an NLCMF programming style. In addition to detailed naming conventions, this report provides general coding conventions and principles intended to facilitate the development of high-performance software implemented with code that is extensible, flexible, and interoperable. Conventions for developing modular code are explained in general terms and also enabled and demonstrated through the appended templates for C++ base source-code and header files. The NLCMF limited-extern approach to module structure, code inclusion, and cross-module access to data is both explained in the text and then illustrated through the module templates. Advice on the use of global variables is provided.
Model of climate evolution based on continental drift and polar wandering
NASA Technical Reports Server (NTRS)
Donn, W. L.; Shaw, D. M.
1977-01-01
The thermodynamic meteorologic model of Adem is used to trace the evolution of climate from Triassic to present time by applying it to changing geography as described by continental drift and polar wandering. Results show that the gross changes of climate in the Northern Hemisphere can be fully explained by the strong cooling in high latitudes as continents moved poleward. High-latitude mean temperatures in the Northern Hemisphere dropped below the freezing point 10 to 15 m.y. ago, thereby accounting for the late Cenozoic glacial age. Computed meridional temperature gradients for the Northern Hemisphere steepened from 20 to 40 C over the 200-m.y. period, an effect caused primarily by the high-latitude temperature decrease. The primary result of the work is that the cooling that has occurred since the warm Mesozoic period and has culminated in glaciation is explainable wholly by terrestrial processes.
Introduction to Concepts in Artificial Neural Networks
NASA Technical Reports Server (NTRS)
Niebur, Dagmar
1995-01-01
This introduction to artificial neural networks summarizes some basic concepts of computational neuroscience and the resulting models of artificial neurons. The terminology of biological and artificial neurons, biological and machine learning and neural processing is introduced. The concepts of supervised and unsupervised learning are explained with examples from the power system area. Finally, a taxonomy of different types of neurons and different classes of artificial neural networks is presented.
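The basic artificial neuron summarized in such an introduction is a weighted sum passed through an activation function, trained by a supervised error-correction rule. The sketch below uses made-up numbers; the power-system flavour of the example is only a nod to the application area mentioned above.

```python
import math

def neuron(inputs, weights, bias):
    """Single artificial neuron: weighted sum plus bias through a sigmoid activation."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

def supervised_step(inputs, weights, bias, target, lr=0.5):
    """One gradient step of supervised learning (delta rule for a single sigmoid unit)."""
    y = neuron(inputs, weights, bias)
    err = (y - target) * y * (1.0 - y)                 # d(squared error)/dz
    weights = [w - lr * err * x for w, x in zip(weights, inputs)]
    return weights, bias - lr * err, y

# Illustrative example: two normalized load measurements mapped to an alarm-like output.
w, b = [0.1, -0.2], 0.0
for _ in range(200):
    w, b, y = supervised_step([0.9, 0.8], w, b, target=1.0)
print(f"output after training ≈ {y:.2f}")   # moves toward the supervised target
```

Unsupervised schemes, by contrast, adjust the same weights from input statistics alone (for example by clustering or Hebbian rules) without any target signal, which is the distinction the tutorial draws.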
Representational geometry: integrating cognition, computation, and the brain.
Kriegeskorte, Nikolaus; Kievit, Rogier A
2013-08-01
The cognitive concept of representation plays a key role in theories of brain information processing. However, linking neuronal activity to representational content and cognitive theory remains challenging. Recent studies have characterized the representational geometry of neural population codes by means of representational distance matrices, enabling researchers to compare representations across stages of processing and to test cognitive and computational theories. Representational geometry provides a useful intermediate level of description, capturing both the information represented in a neuronal population code and the format in which it is represented. We review recent insights gained with this approach in perception, memory, cognition, and action. Analyses of representational geometry can compare representations between models and the brain, and promise to explain brain computation as transformation of representational similarity structure. Copyright © 2013 Elsevier Ltd. All rights reserved.
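Representational distance (dissimilarity) matrices of the kind described here are straightforward to compute. The sketch below uses correlation distance and random patterns with made-up dimensions, purely for illustration.

```python
import numpy as np

def rdm(patterns):
    """Representational dissimilarity matrix: 1 - Pearson correlation between
    response patterns (rows = stimuli, columns = measurement channels)."""
    return 1.0 - np.corrcoef(patterns)

def compare_rdms(rdm_a, rdm_b):
    """Compare two representations by correlating the upper triangles of their RDMs."""
    iu = np.triu_indices_from(rdm_a, k=1)
    return np.corrcoef(rdm_a[iu], rdm_b[iu])[0, 1]

rng = np.random.default_rng(0)
brain = rng.normal(size=(12, 100))                 # 12 stimuli x 100 channels (e.g. voxels)
model = brain @ rng.normal(size=(100, 40)) + 0.5 * rng.normal(size=(12, 40))  # different feature space
print(f"RDM correlation = {compare_rdms(rdm(brain), rdm(model)):.2f}")
```

Because only the stimulus-by-stimulus geometry is compared, the two representations may live in spaces of different dimensionality and format, which is what makes the approach a useful intermediate level of description between models and measured brain activity.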
A computer program for analyzing the energy consumption of automatically controlled lighting systems
NASA Astrophysics Data System (ADS)
1982-01-01
A computer code to predict the performance of controlled lighting systems with respect to their energy saving capabilities is presented. The computer program provides a mathematical model from which comparisons of control schemes can be made on an economic basis only. The program does not calculate daylighting, but uses daylighting values as input. The program can analyze any of three power input versus light output relationships, continuous dimming with a linear response, continuous dimming with a nonlinear response, or discrete stepped response. Any of these options can be used with or without daylighting, making six distinct modes of control system operation. These relationships are described in detail. The major components of the program are discussed and examples are included to explain how to run the program.
Extracting transient Rayleigh wave and its application in detecting quality of highway roadbed
Liu, J.; Xia, J.; Luo, Y.; Li, X.; Xu, S.
2004-01-01
This paper first explains the tau-p mapping method of extracting Rayleigh waves (LR waves) from field shot gathers. It also explains a mathematical model relating physical character parameters to the quality of high-grade roads. The paper then discusses an algorithm for computing dispersion curves using adjacent channels. Shear velocity and physical character parameters are obtained by inversion of the dispersion curves. The algorithm that uses adjacent channels to calculate dispersion curves eliminates the averaging effects that arise when many channels are used to obtain dispersion curves, so it improves the longitudinal and transverse resolution of LR waves and the precision of non-invasive detection, and also broadens the fields of application. By analysis of modeling results of detached computation of the ground roll and real examples of detecting the density and pressure strength of a high-grade roadbed, and by comparison of the shallow seismic image method with borehole cores, we conclude that: (1) the abnormal scale and configuration obtained from LR waves are mostly the same as the results of the shallow seismic image method; (2) the average relative error of density obtained from LR-wave inversion is 1.6% compared with borehole coring; and (3) using transient LR waves to detect the density and pressure strength of a high-grade roadbed is feasible and effective.
Sale, Mark; Sherer, Eric A
2015-01-01
The current algorithm for selecting a population pharmacokinetic/pharmacodynamic model is based on the well-established forward addition/backward elimination method. A central strength of this approach is the opportunity for a modeller to continuously examine the data and postulate new hypotheses to explain observed biases. This algorithm has served the modelling community well, but the model selection process has essentially remained unchanged for the last 30 years. During this time, more robust approaches to model selection have been made feasible by new technology and dramatic increases in computation speed. We review these methods, with emphasis on genetic algorithm approaches and discuss the role these methods may play in population pharmacokinetic/pharmacodynamic model selection. PMID:23772792
SPSS and SAS programming for the testing of mediation models.
Dudley, William N; Benuzillo, Jose G; Carrico, Mineh S
2004-01-01
Mediation modeling can explain the nature of the relation among three or more variables. In addition, it can be used to show how a variable mediates the relation between levels of intervention and outcome. The Sobel test, developed in 1990, provides a statistical method for determining the influence of a mediator on an intervention or outcome. Although interactive Web-based and stand-alone methods exist for computing the Sobel test, SPSS and SAS programs that automatically run the required regression analyses and computations increase the accessibility of mediation modeling to nursing researchers. The purpose of this article is to illustrate the utility of the Sobel test and to make this programming available to the Nursing Research audience in both SAS and SPSS. The history, logic, and technical aspects of mediation testing are introduced. The syntax files sobel.sps and sobel.sas, created to automate the computation of the regression analysis and test statistic, are available from the corresponding author. The reported programming allows the user to complete mediation testing with the user's own data in a single-step fashion. A technical manual included with the programming provides instruction on program use and interpretation of the output. Mediation modeling is a useful tool for describing the relation between three or more variables. Programming and manuals for using this model are made available.
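The Sobel statistic itself is a one-line computation once the two regression coefficients and their standard errors are available. The sketch below uses made-up coefficients and plain Python rather than the SPSS/SAS syntax files distributed with the article.

```python
import math
from statistics import NormalDist

def sobel_test(a, se_a, b, se_b):
    """Sobel z for the indirect (mediated) effect a*b.
    a: effect of the intervention on the mediator (standard error se_a);
    b: effect of the mediator on the outcome, controlling for the intervention (se_b)."""
    indirect = a * b
    se_indirect = math.sqrt(b**2 * se_a**2 + a**2 * se_b**2)
    z = indirect / se_indirect
    p = 2 * (1 - NormalDist().cdf(abs(z)))
    return indirect, z, p

# Illustrative coefficients from two hypothetical regressions:
indirect, z, p = sobel_test(a=0.48, se_a=0.12, b=0.35, se_b=0.10)
print(f"indirect effect = {indirect:.3f}, z = {z:.2f}, p = {p:.4f}")
```

The two coefficients come from the regressions the distributed syntax files automate: the mediator regressed on the intervention, and the outcome regressed on both the mediator and the intervention.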
Stienen, Bernard M C; Schindler, Konrad; de Gelder, Beatrice
2012-07-01
Given the presence of massive feedback loops in brain networks, it is difficult to disentangle the contribution of feedforward and feedback processing to the recognition of visual stimuli, in this case, of emotional body expressions. The aim of the work presented in this letter is to shed light on how well feedforward processing explains rapid categorization of this important class of stimuli. By means of parametric masking, it may be possible to control the contribution of feedback activity in human participants. A close comparison is presented between human recognition performance and the performance of a computational neural model that exclusively modeled feedforward processing and was engineered to fulfill the computational requirements of recognition. Results show that the longer the stimulus onset asynchrony (SOA), the closer the performance of the human participants was to the values predicted by the model, with an optimum at an SOA of 100 ms. At short SOA latencies, human performance deteriorated, but the categorization of the emotional expressions was still above baseline. The data suggest that, although theoretically, feedback arising from inferotemporal cortex is likely to be blocked when the SOA is 100 ms, human participants still seem to rely on more local visual feedback processing to equal the model's performance.
Annual Rainfall Forecasting by Using Mamdani Fuzzy Inference System
NASA Astrophysics Data System (ADS)
Fallah-Ghalhary, G.-A.; Habibi Nokhandan, M.; Mousavi Baygi, M.
2009-04-01
Long-term rainfall prediction is very important to countries thriving on an agro-based economy. In general, climate and rainfall are highly non-linear natural phenomena, giving rise to what is known as the "butterfly effect". The parameters required to predict rainfall are enormous, even for a short period. Soft computing is an innovative approach to constructing computationally intelligent systems that are supposed to possess human-like expertise within a specific domain, adapt themselves and learn to do better in changing environments, and explain how they make decisions. Unlike conventional artificial intelligence techniques, the guiding principle of soft computing is to exploit tolerance for imprecision, uncertainty, robustness, and partial truth to achieve tractability and better rapport with reality. In this paper, 33 years of rainfall data from Khorasan province, the northeastern part of Iran, situated at latitude-longitude pairs (31°-38°N, 74°-80°E), are analyzed. This research attempted to train Fuzzy Inference System (FIS)-based prediction models with 33 years of rainfall data. For performance evaluation, the model-predicted outputs were compared with the actual rainfall data. Simulation results reveal that soft computing techniques are promising and efficient. The test results of the FIS model showed an RMSE of 52 millimeters.
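The paper's rule base and membership functions are not given in the abstract, so the sketch below only illustrates the generic Mamdani machinery it relies on — triangular membership functions, min implication, max aggregation, and centroid defuzzification — on an invented one-input, two-rule system (all ranges, rules and the 'humidity' input are assumptions for illustration).
    import numpy as np
    def tri(x, a, b, c):
        # Triangular membership function with feet at a and c and peak at b
        return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)
    # Output universe: annual rainfall in millimeters (invented range and fuzzy sets)
    rain = np.linspace(0.0, 600.0, 601)
    rain_low = tri(rain, 0.0, 150.0, 300.0)
    rain_high = tri(rain, 300.0, 450.0, 600.0)
    def predict_rainfall(humidity):
        # Two invented rules on a single 'humidity' input in [0, 100]:
        #   IF humidity is low  THEN rainfall is low
        #   IF humidity is high THEN rainfall is high
        mu_low = tri(humidity, 0.0, 20.0, 60.0)
        mu_high = tri(humidity, 40.0, 80.0, 100.0)
        # Min implication per rule, max aggregation across rules
        agg = np.maximum(np.minimum(mu_low, rain_low), np.minimum(mu_high, rain_high))
        # Centroid defuzzification
        return np.sum(agg * rain) / np.sum(agg)
    print(predict_rainfall(70.0))   # forecast leans toward the 'high rainfall' set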
Allen Newell's Program of Research: The Video-Game Test.
Gobet, Fernand
2017-04-01
Newell (1973) argued that progress in psychology was slow because research focused on experiments trying to answer binary questions, such as serial versus parallel processing. In addition, not enough attention was paid to the strategies used by participants, and there was a lack of theories implemented as computer models offering sufficient precision for being tested rigorously. He proposed a three-headed research program: to develop computational models able to carry out the task they aimed to explain; to study one complex task in detail, such as chess; and to build computational models that can account for multiple tasks. This article assesses the extent to which the papers in this issue advance Newell's program. While half of the papers devote much attention to strategies, several papers still average across them, a capital sin according to Newell. The three courses of action he proposed were not popular in these papers: Only two papers used computational models, with no model being both able to carry out the task and to account for human data; there was no systematic analysis of a specific video game; and no paper proposed a computational model accounting for human data in several tasks. It is concluded that, while they use sophisticated methods of analysis and discuss interesting results, overall these papers contribute only little to Newell's program of research. In this respect, they reflect the current state of psychology and cognitive science. This is a shame, as Newell's ideas might help address the current crisis of lack of replication and fraud in psychology. Copyright © 2017 The Author. Topics in Cognitive Science published by Wiley Periodicals, Inc. on behalf of Cognitive Science Society.
48 CFR 52.227-14 - Rights in Data-General.
Code of Federal Regulations, 2010 CFR
2010-10-01
... software. Computer software—(1) Means (i) Computer programs that comprise a series of instructions, rules... or computer software documentation. Computer software documentation means owner's manuals, user's... medium, that explain the capabilities of the computer software or provide instructions for using the...
Conn, Paul B.; Johnson, Devin S.; Ver Hoef, Jay M.; Hooten, Mevin B.; London, Joshua M.; Boveng, Peter L.
2015-01-01
Ecologists often fit models to survey data to estimate and explain variation in animal abundance. Such models typically require that animal density remains constant across the landscape where sampling is being conducted, a potentially problematic assumption for animals inhabiting dynamic landscapes or otherwise exhibiting considerable spatiotemporal variation in density. We review several concepts from the burgeoning literature on spatiotemporal statistical models, including the nature of the temporal structure (i.e., descriptive or dynamical) and strategies for dimension reduction to promote computational tractability. We also review several features as they specifically relate to abundance estimation, including boundary conditions, population closure, choice of link function, and extrapolation of predicted relationships to unsampled areas. We then compare a suite of novel and existing spatiotemporal hierarchical models for animal count data that permit animal density to vary over space and time, including formulations motivated by resource selection and allowing for closed populations. We gauge the relative performance (bias, precision, computational demands) of alternative spatiotemporal models when confronted with simulated and real data sets from dynamic animal populations. For the latter, we analyze spotted seal (Phoca largha) counts from an aerial survey of the Bering Sea where the quantity and quality of suitable habitat (sea ice) changed dramatically while surveys were being conducted. Simulation analyses suggested that multiple types of spatiotemporal models provide reasonable inference (low positive bias, high precision) about animal abundance, but have potential for overestimating precision. Analysis of spotted seal data indicated that several model formulations, including those based on a log-Gaussian Cox process, had a tendency to overestimate abundance. By contrast, a model that included a population closure assumption and a scale prior on total abundance produced estimates that largely conformed to our a priori expectation. Although care must be taken to tailor models to match the study population and survey data available, we argue that hierarchical spatiotemporal statistical models represent a powerful way forward for estimating abundance and explaining variation in the distribution of dynamical populations.
PERSONAL COMPUTERS AND ENVIRONMENTAL ENGINEERING
This article discusses how personal computers can be applied to environmental engineering. After explaining some of the differences between mainframe and personal computers, we review the development of personal computers and describe the areas of data management, interactive...
Computational analysis of the regulation of Ca2+ dynamics in rat ventricular myocytes
NASA Astrophysics Data System (ADS)
Bugenhagen, Scott M.; Beard, Daniel A.
2015-10-01
Force-frequency relationships of isolated cardiac myocytes show complex behaviors that are thought to be specific to both the species and the conditions associated with the experimental preparation. Ca2+ signaling plays an important role in shaping the force-frequency relationship, and understanding the properties of the force-frequency relationship in vivo requires an understanding of Ca2+ dynamics under physiologically relevant conditions. Ca2+ signaling is itself a complicated process that is best understood on a quantitative level via biophysically based computational simulation. Although a large number of models are available in the literature, the models are often a conglomeration of components parameterized to data of incompatible species and/or experimental conditions. In addition, few models account for modulation of Ca2+ dynamics via β-adrenergic and calmodulin-dependent protein kinase II (CaMKII) signaling pathways even though they are hypothesized to play an important regulatory role in vivo. Both protein-kinase-A and CaMKII are known to phosphorylate a variety of targets known to be involved in Ca2+ signaling, but the effects of these pathways on the frequency- and inotrope-dependence of Ca2+ dynamics are not currently well understood. In order to better understand Ca2+ dynamics under physiological conditions relevant to rat, a previous computational model is adapted and re-parameterized to a self-consistent dataset obtained under physiological temperature and pacing frequency and updated to include β-adrenergic and CaMKII regulatory pathways. The necessity of specific effector mechanisms of these pathways in capturing inotrope- and frequency-dependence of the data is tested by attempting to fit the data while including and/or excluding those effector components. We find that: (1) β-adrenergic-mediated phosphorylation of the L-type calcium channel (LCC) (and not of phospholamban (PLB)) is sufficient to explain the inotrope-dependence; and (2) that CaMKII-mediated regulation of neither the LCC nor of PLB is required to explain the frequency-dependence of the data.
A computational and neural model of momentary subjective well-being
Rutledge, Robb B.; Skandali, Nikolina; Dayan, Peter; Dolan, Raymond J.
2014-01-01
The subjective well-being or happiness of individuals is an important metric for societies. Although happiness is influenced by life circumstances and population demographics such as wealth, we know little about how the cumulative influence of daily life events is aggregated into subjective feelings. Using computational modeling, we show that emotional reactivity in the form of momentary happiness in response to outcomes of a probabilistic reward task is explained not by current task earnings, but by the combined influence of recent reward expectations and prediction errors arising from those expectations. The robustness of this account was evident in a large-scale replication involving 18,420 participants. Using functional MRI, we show that the very same influences account for task-dependent striatal activity in a manner akin to the influences underpinning changes in happiness. PMID:25092308
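The reported model takes the general form of a weighted sum of exponentially discounted certain rewards, expected values, and reward prediction errors; the sketch below implements a model of that general shape with made-up weights and trial data, purely to illustrate the structure rather than the fitted parameters from the study.
    import numpy as np
    def momentary_happiness(cr, ev, rpe, w0=0.0, w1=0.5, w2=0.5, w3=0.7, gamma=0.6):
        # Happiness after trial t as a weighted sum of exponentially discounted
        # certain rewards (cr), expected values (ev) and reward prediction errors (rpe).
        # The weights and forgetting factor gamma are illustrative, not fitted values.
        t = len(cr)
        discount = gamma ** np.arange(t - 1, -1, -1)   # most recent trial weighted most
        return (w0 + w1 * np.sum(discount * cr)
                   + w2 * np.sum(discount * ev)
                   + w3 * np.sum(discount * rpe))
    # Toy gamble history: certain rewards, chosen expected values, and outcomes
    cr = np.array([1.0, 0.0, 0.0, 0.0])                # trial 1 was a certain option
    ev = np.array([0.0, 1.5, 2.0, 1.0])
    out = np.array([0.0, 3.0, 0.0, 2.0])
    rpe = out - ev                                      # prediction error on gamble trials
    print(momentary_happiness(cr, ev, rpe))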
Learning sorting algorithms through visualization construction
NASA Astrophysics Data System (ADS)
Cetin, Ibrahim; Andrews-Larson, Christine
2016-01-01
Recent increased interest in computational thinking poses an important question to researchers: What are the best ways to teach fundamental computing concepts to students? Visualization is suggested as one way of supporting student learning. This mixed-method study aimed to (i) examine the effect of instruction in which students constructed visualizations on students' programming achievement and students' attitudes toward computer programming, and (ii) explore how this kind of instruction supports students' learning according to their self-reported experiences in the course. The study was conducted with 58 pre-service teachers who were enrolled in their second programming class. They expect to teach information technology and computing-related courses at the primary and secondary levels. An embedded experimental model was utilized as a research design. Students in the experimental group were given instruction that required students to construct visualizations related to sorting, whereas students in the control group viewed pre-made visualizations. After the instructional intervention, eight students from each group were selected for semi-structured interviews. The results showed that the intervention based on visualization construction resulted in significantly better acquisition of sorting concepts. However, there was no significant difference between the groups with respect to students' attitudes toward computer programming. Qualitative data analysis indicated that students in the experimental group constructed necessary abstractions through their engagement in visualization construction activities. The authors of this study argue that the students' active engagement in the visualization construction activities explains only one side of students' success. The other side can be explained through the instructional approach, constructionism in this case, used to design instruction. The conclusions and implications of this study can be used by researchers and instructors dealing with computational thinking.
Gyrofluid Modeling of Turbulent, Kinetic Physics
NASA Astrophysics Data System (ADS)
Despain, Kate Marie
2011-12-01
Gyrofluid models to describe plasma turbulence combine the advantages of fluid models, such as lower dimensionality and well-developed intuition, with those of gyrokinetics models, such as finite Larmor radius (FLR) effects. This allows gyrofluid models to be more tractable computationally while still capturing much of the physics related to the FLR of the particles. We present a gyrofluid model derived to capture the behavior of slow solar wind turbulence and describe the computer code developed to implement the model. In addition, we describe the modifications we made to a gyrofluid model and code that simulate plasma turbulence in tokamak geometries. Specifically, we describe a nonlinear phase mixing phenomenon, part of the E x B term, that was previously missing from the model. An inherently FLR effect, it plays an important role in predicting turbulent heat flux and diffusivity levels for the plasma. We demonstrate this importance by comparing results from the updated code to studies done previously by gyrofluid and gyrokinetic codes. We further explain what would be necessary to couple the updated gyrofluid code, gryffin, to a turbulent transport code, thus allowing gryffin to play a role in predicting profiles for fusion devices such as ITER and to explore novel fusion configurations. Such a coupling would require the use of Graphical Processing Units (GPUs) to make the modeling process fast enough to be viable. Consequently, we also describe our experience with GPU computing and demonstrate that we are poised to complete a gryffin port to this innovative architecture.
32 CFR 310.53 - Computer matching agreements (CMAs).
Code of Federal Regulations, 2013 CFR
2013-07-01
... 32 National Defense 2 2013-07-01 2013-07-01 false Computer matching agreements (CMAs). 310.53... (CONTINUED) PRIVACY PROGRAM DOD PRIVACY PROGRAM Computer Matching Program Procedures § 310.53 Computer.... (3) Justification and expected results. Explain why computer matching as opposed to some other...
A Computational Model of a Descending Mechanosensory Pathway Involved in Active Tactile Sensing
Ache, Jan M.; Dürr, Volker
2015-01-01
Many animals, including humans, rely on active tactile sensing to explore the environment and negotiate obstacles, especially in the dark. Here, we model a descending neural pathway that mediates short-latency proprioceptive information from a tactile sensor on the head to thoracic neural networks. We studied the nocturnal stick insect Carausius morosus, a model organism for the study of adaptive locomotion, including tactually mediated reaching movements. Like mammals, insects need to move their tactile sensors for probing the environment. Cues about sensor position and motion are therefore crucial for the spatial localization of tactile contacts and the coordination of fast, adaptive motor responses. Our model explains how proprioceptive information about motion and position of the antennae, the main tactile sensors in insects, can be encoded by a single type of mechanosensory afferents. Moreover, it explains how this information is integrated and mediated to thoracic neural networks by a diverse population of descending interneurons (DINs). First, we quantified responses of a DIN population to changes in antennal position, motion and direction of movement. Using principal component (PC) analysis, we find that only two PCs account for a large fraction of the variance in the DIN response properties. We call the two-dimensional space spanned by these PCs ‘coding-space’ because it captures essential features of the entire DIN population. Second, we model the mechanoreceptive input elements of this descending pathway, a population of proprioceptive mechanosensory hairs monitoring deflection of the antennal joints. Finally, we propose a computational framework that can model the response properties of all important DIN types, using the hair field model as its only input. This DIN model is validated by comparison of tuning characteristics, and by mapping the modelled neurons into the two-dimensional coding-space of the real DIN population. This reveals the versatility of the framework for modelling a complete descending neural pathway. PMID:26158851
Recrystallization and Grain Growth Kinetics in Binary Alpha Titanium-Aluminum Alloys
NASA Astrophysics Data System (ADS)
Trump, Anna Marie
Titanium alloys are used in a variety of important naval and aerospace applications and often undergo thermomechanical processing, which leads to recrystallization and grain growth. Both of these processes have a significant impact on the mechanical properties of the material. Therefore, understanding the kinetics of these processes is crucial to being able to predict the final properties. Three alloys with varying concentrations of aluminum are studied, which allows for the direct quantification of the effect of aluminum content on the kinetics of recrystallization and grain growth. Aluminum is the most common alpha-stabilizing alloying element used in titanium alloys; however, the effect of aluminum on these processes has not been previously studied. This work is also part of a larger Integrated Computational Materials Engineering (ICME) effort whose goal is to combine computational and experimental efforts to develop computationally efficient models that predict materials microstructure and properties based on processing history. The static recrystallization kinetics are measured using an electron backscatter diffraction (EBSD) technique, and a significant retardation in the kinetics is observed with increasing aluminum concentration. An analytical model is then used to capture these results and is able to successfully predict the effect of solute concentration on the time to 50% recrystallization. The model reveals that this solute effect is due to a combination of a decrease in grain boundary mobility and a decrease in driving force with increasing aluminum concentration. The effect of microstructural inhomogeneities is also experimentally quantified, and the results are validated with a phase field model for recrystallization. These microstructural inhomogeneities explain the experimentally measured Avrami exponent, which is lower than the theoretical value calculated by the JMAK model. Similar to the effect seen in recrystallization, the addition of aluminum also significantly slows down the grain growth kinetics. This is generally attributed to the solute drag effect due to segregation of solute atoms at the grain boundaries; however, aluminum segregation is not observed in these alloys. The mechanism for this result is explained and is used to validate the prediction of an existing model for solute drag.
A class of all digital phase locked loops - Modelling and analysis.
NASA Technical Reports Server (NTRS)
Reddy, C. P.; Gupta, S. C.
1972-01-01
An all-digital phase-locked loop which tracks the phase of the incoming signal once per carrier cycle is proposed. The different elements and their functions, and the phase-lock operation, are explained in detail. The general digital loop operation is governed by a non-linear difference equation from which a suitable model is developed. The lock range for the general model is derived. The performance of the digital loop for phase-step and frequency-step inputs, at different levels of quantization and without a loop filter, is studied. The analytical results are checked by simulating the actual system on a digital computer.
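The loop's governing difference equation is not reproduced in the abstract; as a generic illustration of the class of loop described, the sketch below simulates a first-order digital phase-locked loop that samples and quantizes the phase error once per carrier cycle and responds to a phase-step input (the gain, step size and quantizer resolution are invented, not the authors' parameters).
    import numpy as np
    def simulate_dpll(phase_step=0.8, n_cycles=60, gain=0.5, n_bits=4):
        # First-order digital PLL updated once per carrier cycle.
        # The phase error is quantized to n_bits levels over (-pi, pi).
        q = 2 * np.pi / (2 ** n_bits)               # quantization step
        est = 0.0                                   # loop's phase estimate
        history = []
        for _ in range(n_cycles):
            error = phase_step - est                # phase detector output
            error = np.round(error / q) * q         # quantizer
            est += gain * error                     # loop update (no loop filter)
            history.append(est)
        return np.array(history)
    print(simulate_dpll()[-5:])                     # estimate settles near the phase step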
The Student-Teacher-Computer Team: Focus on the Computer.
ERIC Educational Resources Information Center
Ontario Inst. for Studies in Education, Toronto.
Descriptions of essential computer elements, logic and programing techniques, and computer applications are provided in an introductory handbook for use by educators and students. Following a brief historical perspective, the organization of a computer system is schematically illustrated, functions of components are explained in non-technical…
Superior model for fault tolerance computation in designing nano-sized circuit systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Singh, N. S. S., E-mail: narinderjit@petronas.com.my; Muthuvalu, M. S., E-mail: msmuthuvalu@gmail.com; Asirvadam, V. S., E-mail: vijanth-sagayan@petronas.com.my
2014-10-24
As CMOS technology scales nano-metrically, reliability turns out to be a decisive subject in the design methodology of nano-sized circuit systems. As a result, several computational approaches have been developed to compute and evaluate the reliability of desired nano-electronic circuits. The process of computing reliability becomes very troublesome and time consuming as the computational complexity builds up with the desired circuit size. Therefore, being able to measure reliability instantly and accurately is fast becoming necessary in designing modern logic integrated circuits. For this purpose, the paper firstly looks into the development of an automated reliability evaluation tool based on the generalization of the Probabilistic Gate Model (PGM) and Boolean Difference-based Error Calculator (BDEC) models. The Matlab-based tool allows users to significantly speed up the task of reliability analysis for a very large number of nano-electronic circuits. Secondly, using the developed automated tool, the paper presents a comparative study involving reliability computation and evaluation by the PGM and BDEC models for different implementations of same-functionality circuits. Based on the reliability analysis, BDEC gives exact and transparent reliability measures, but as the complexity of the same-functionality circuits with respect to gate error increases, the reliability measure by BDEC tends to be lower than the reliability measure by PGM. The lower reliability measure by BDEC is explained in this paper using the distribution of different signal input patterns over time for same-functionality circuits. Simulation results conclude that the reliability measure by BDEC depends not only on faulty gates but also on circuit topology, the probability of input signals being one or zero, and the probability of error on signal lines.
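The PGM and BDEC formulations themselves are not reproduced here; as a generic point of reference, the sketch below estimates the reliability of a small gate-level circuit by Monte Carlo simulation, flipping each gate output with probability eps — the kind of brute-force estimate against which analytical reliability models are commonly checked (the circuit and error probability are invented for illustration).
    import numpy as np
    rng = np.random.default_rng(1)
    def noisy_circuit(a, b, c, eps):
        # Example circuit: out = NAND(NAND(a, b), c); each gate fails with probability eps
        def nand(x, y):
            ideal = 1 - (x & y)
            return ideal ^ (rng.random() < eps)     # flip the gate output with probability eps
        return nand(nand(a, b), c)
    def reliability(eps, trials=50_000):
        # Probability that the noisy circuit output equals the fault-free output,
        # averaged over uniformly random inputs
        correct = 0
        for _ in range(trials):
            a, b, c = rng.integers(0, 2, size=3)
            ideal = 1 - ((1 - (a & b)) & c)
            correct += noisy_circuit(a, b, c, eps) == ideal
        return correct / trials
    print(reliability(0.05))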
Kodera, Sachiko; Gomez-Tames, Jose; Hirata, Akimasa; Masuda, Hiroshi; Arima, Takuji; Watanabe, Soichi
2017-01-01
The rapid development of wireless technology has led to widespread concerns regarding adverse human health effects caused by exposure to electromagnetic fields. Temperature elevation in biological bodies is an important factor that can adversely affect health. A thermophysiological model is desired to quantify microwave (MW) induced temperature elevations. In this study, parameters related to thermophysiological responses for MW exposures were estimated using an electromagnetic-thermodynamics simulation technique. To the authors’ knowledge, this is the first study in which parameters related to regional cerebral blood flow in a rat model were extracted at a high degree of accuracy through experimental measurements for localized MW exposure at frequencies exceeding 6 GHz. The findings indicate that the improved modeling parameters yield computed results that match well with the measured quantities during and after exposure in rats. It is expected that the computational model will be helpful in estimating the temperature elevation in the rat brain at multiple observation points (that are difficult to measure simultaneously) and in explaining the physiological changes in the local cortex region. PMID:28358345
Viscoacoustic model for near-field ultrasonic levitation.
Melikhov, Ivan; Chivilikhin, Sergey; Amosov, Alexey; Jeanson, Romain
2016-11-01
Ultrasonic near-field levitation allows for contactless support and transportation of an object over a vibrating surface. We developed an accurate model predicting the pressure distribution in the gap between the surface and the levitating object. The formulation covers a wide range of air flow regimes: from viscous squeezed flow, which dominates in small gaps, to acoustic wave propagation in larger gaps. The paper explains the derivation of the governing equations from basic fluid dynamics. Nonreflective boundary conditions were developed to properly define the air flow at the outlet. Compared to direct computational fluid dynamics modeling, our approach achieves good accuracy while keeping the computational cost low. Using the model, we studied the levitation force as a function of gap distance. It was shown that there are three distinct flow regimes: purely viscous, viscoacoustic, and acoustic. The regimes are defined by the balance of viscous and inertial forces. In the viscous regime the pressure in the gap is close to uniform, while in the intermediate viscoacoustic and the acoustic regimes the pressure profile is wavy. The model was validated by a dedicated levitation experiment and compared to similar published results.
Hierarchical Processing of Auditory Objects in Humans
Kumar, Sukhbinder; Stephan, Klaas E; Warren, Jason D; Friston, Karl J; Griffiths, Timothy D
2007-01-01
This work examines the computational architecture used by the brain during the analysis of the spectral envelope of sounds, an important acoustic feature for defining auditory objects. Dynamic causal modelling and Bayesian model selection were used to evaluate a family of 16 network models explaining functional magnetic resonance imaging responses in the right temporal lobe during spectral envelope analysis. The models encode different hypotheses about the effective connectivity between Heschl's Gyrus (HG), containing the primary auditory cortex, planum temporale (PT), and superior temporal sulcus (STS), and the modulation of that coupling during spectral envelope analysis. In particular, we aimed to determine whether information processing during spectral envelope analysis takes place in a serial or parallel fashion. The analysis provides strong support for a serial architecture with connections from HG to PT and from PT to STS and an increase of the HG to PT connection during spectral envelope analysis. The work supports a computational model of auditory object processing, based on the abstraction of spectro-temporal “templates” in the PT before further analysis of the abstracted form in anterior temporal lobe areas. PMID:17542641
Implicit Value Updating Explains Transitive Inference Performance: The Betasort Model
Jensen, Greg; Muñoz, Fabian; Alkan, Yelda; Ferrera, Vincent P.; Terrace, Herbert S.
2015-01-01
Transitive inference (the ability to infer that B > D given that B > C and C > D) is a widespread characteristic of serial learning, observed in dozens of species. Despite these robust behavioral effects, reinforcement learning models reliant on reward prediction error or associative strength routinely fail to perform these inferences. We propose an algorithm called betasort, inspired by cognitive processes, which performs transitive inference at low computational cost. This is accomplished by (1) representing stimulus positions along a unit span using beta distributions, (2) treating positive and negative feedback asymmetrically, and (3) updating the position of every stimulus during every trial, whether that stimulus was visible or not. Performance was compared for rhesus macaques, humans, and the betasort algorithm, as well as Q-learning, an established reward-prediction error (RPE) model. Of these, only Q-learning failed to respond above chance during critical test trials. Betasort’s success (when compared to RPE models) and its computational efficiency (when compared to full Markov decision process implementations) suggests that the study of reinforcement learning in organisms will be best served by a feature-driven approach to comparing formal models. PMID:26407227
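The full betasort update rules are specified in the paper and are not reproduced here; the sketch below is only a loose illustration of the three ingredients listed in the abstract — beta-distributed position estimates, asymmetric treatment of feedback, and updating of every stimulus on every trial — and the particular learning steps and rates are invented, not the authors' algorithm.
    import numpy as np
    rng = np.random.default_rng(0)
    n_items = 5                          # implicit order: item 0 > 1 > 2 > 3 > 4
    U = np.ones(n_items)                 # 'upper' evidence for each stimulus
    L = np.ones(n_items)                 # 'lower' evidence for each stimulus
    def position(k):
        return U[k] / (U[k] + L[k])
    def choose(i, j):
        # Pick whichever item's sampled position (from its beta distribution) is higher
        return i if rng.beta(U[i], L[i]) > rng.beta(U[j], L[j]) else j
    for _ in range(2000):
        i = rng.integers(0, n_items - 1)             # train on adjacent pairs only;
        j = i + 1                                    # item i is the 'higher' (correct) choice
        wrong = choose(i, j) == j
        for k in range(n_items):
            if wrong and k == i:
                U[k] += 1.0                          # error feedback: shift the correct item up...
            elif wrong and k == j:
                L[k] += 1.0                          # ...and the chosen item down (asymmetric feedback)
            else:
                U[k] += position(k)                  # every other stimulus is consolidated at its
                L[k] += 1.0 - position(k)            # current estimate, visible or not (simplified)
    print(np.round(U / (U + L), 2))                  # learned positions, highest to lowest
    # Transitive test pair (1 vs 3), never trained together: item 1 should be preferred
    print(sum(choose(1, 3) == 1 for _ in range(1000)) / 1000)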
Beuter, Anne
2017-05-01
Recent publications call for more animal models to be used and more experiments to be performed, in order to better understand the mechanisms of neurodegenerative disorders, to improve human health, and to develop new brain stimulation treatments. In response to these calls, some limitations of the current animal models are examined by using Deep Brain Stimulation (DBS) in Parkinson's disease as an illustrative example. Without focusing on the arguments for or against animal experimentation, or on the history of DBS, the present paper argues that given recent technological and theoretical advances, the time has come to consider bioinspired computational modelling as a valid alternative to animal models, in order to design the next generation of human brain stimulation treatments. However, before computational neuroscience is fully integrated in the translational process and used as a substitute for animal models, several obstacles need to be overcome. These obstacles are examined in the context of institutional, financial, technological and behavioural lock-in. Recommendations include encouraging agreement to change long-term habitual practices, explaining what alternative models can achieve, considering economic stakes, simplifying administrative and regulatory constraints, and carefully examining possible conflicts of interest. 2017 FRAME.
Takemura, Naohiro; Fukui, Takao; Inui, Toshio
2015-01-01
In human reach-to-grasp movement, visual occlusion of a target object leads to a larger peak grip aperture compared to conditions where online vision is available. However, no previous computational and neural network models for reach-to-grasp movement explain the mechanism of this effect. We simulated the effect of online vision on the reach-to-grasp movement by proposing a computational control model based on the hypothesis that the grip aperture is controlled to compensate for both motor variability and sensory uncertainty. In this model, the aperture is formed to achieve a target aperture size that is sufficiently large to accommodate the actual target; it also includes a margin to ensure proper grasping despite sensory and motor variability. To this end, the model considers: (i) the variability of the grip aperture, which is predicted by the Kalman filter, and (ii) the uncertainty of the object size, which is affected by visual noise. Using this model, we simulated experiments in which the effect of the duration of visual occlusion was investigated. The simulation replicated the experimental result wherein the peak grip aperture increased when the target object was occluded, especially in the early phase of the movement. Both predicted motor variability and sensory uncertainty play important roles in the online visuomotor process responsible for grip aperture control. PMID:26696874
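As a schematic of the stated control principle — a target aperture equal to the estimated object size plus a margin covering predicted motor variability and sensory uncertainty — the sketch below shows how occlusion-inflated uncertainty widens the planned aperture; the variances, margin gain and the toy Kalman-style variance prediction are invented for illustration and are not the authors' model.
    import numpy as np
    def planned_aperture(object_size, visual_var, motor_var, k=2.0):
        # Target grip aperture = size estimate + safety margin proportional to the
        # combined standard deviation of sensory and motor uncertainty (illustrative)
        return object_size + k * np.sqrt(visual_var + motor_var)
    def predicted_motor_var(t, process_noise=4.0, obs_noise=1.0, online_vision=True):
        # Toy Kalman-style variance of the aperture prediction over movement time t (s);
        # without online vision, no measurement updates shrink the variance
        var = 0.0
        for _ in range(int(t / 0.05)):                     # 50-ms steps
            var += process_noise * 0.05                    # prediction step inflates variance
            if online_vision:
                var = var * obs_noise / (var + obs_noise)  # measurement update shrinks it
        return var
    size = 60.0                                            # object size in mm
    for occluded in (False, True):
        mv = predicted_motor_var(0.6, online_vision=not occluded)
        vv = 25.0 if occluded else 4.0                     # size uncertainty (mm^2)
        print("occluded" if occluded else "online vision",
              round(planned_aperture(size, vv, mv), 1), "mm")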
Yildiz, Izzet B.; von Kriegstein, Katharina; Kiebel, Stefan J.
2013-01-01
Our knowledge about the computational mechanisms underlying human learning and recognition of sound sequences, especially speech, is still very limited. One difficulty in deciphering the exact means by which humans recognize speech is that there are scarce experimental findings at a neuronal, microscopic level. Here, we show that our neuronal-computational understanding of speech learning and recognition may be vastly improved by looking at an animal model, i.e., the songbird, which faces the same challenge as humans: to learn and decode complex auditory input, in an online fashion. Motivated by striking similarities between the human and songbird neural recognition systems at the macroscopic level, we assumed that the human brain uses the same computational principles at a microscopic level and translated a birdsong model into a novel human sound learning and recognition model with an emphasis on speech. We show that the resulting Bayesian model with a hierarchy of nonlinear dynamical systems can learn speech samples such as words rapidly and recognize them robustly, even in adverse conditions. In addition, we show that recognition can be performed even when words are spoken by different speakers and with different accents—an everyday situation in which current state-of-the-art speech recognition models often fail. The model can also be used to qualitatively explain behavioral data on human speech learning and derive predictions for future experiments. PMID:24068902
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pete Beckman and Ian Foster
Chicago Matters: Beyond Burnham (WTTW). Chicago has become a world center of "cloud computing." Argonne experts Pete Beckman and Ian Foster explain what "cloud computing" is and how you probably already use it on a daily basis.
Toward a unified account of comprehension and production in language development.
McCauley, Stewart M; Christiansen, Morten H
2013-08-01
Although Pickering & Garrod (P&G) argue convincingly for a unified system for language comprehension and production, they fail to explain how such a system might develop. Using a recent computational model of language acquisition as an example, we sketch a developmental perspective on the integration of comprehension and production. We conclude that only through development can we fully understand the intertwined nature of comprehension and production in adult processing.
Dynamics of convulsive seizure termination and postictal generalized EEG suppression
Bauer, Prisca R.; Thijs, Roland D.; Lamberts, Robert J.; Velis, Demetrios N.; Visser, Gerhard H.; Tolner, Else A.; Sander, Josemir W.; Lopes da Silva, Fernando H.; Kalitzin, Stiliyan N.
2017-01-01
It is not fully understood how seizures terminate and why some seizures are followed by a period of complete brain activity suppression, postictal generalized EEG suppression. This is clinically relevant as there is a potential association between postictal generalized EEG suppression, cardiorespiratory arrest and sudden death following a seizure. We combined human encephalographic seizure data with data of a computational model of seizures to elucidate the neuronal network dynamics underlying seizure termination and the postictal generalized EEG suppression state. A multi-unit computational neural mass model of epileptic seizure termination and postictal recovery was developed. The model provided three predictions that were validated in EEG recordings of 48 convulsive seizures from 48 subjects with refractory focal epilepsy (20 females, age range 15–61 years). The duration of ictal and postictal generalized EEG suppression periods in human EEG followed a gamma probability distribution indicative of a deterministic process (shape parameter 2.6 and 1.5, respectively) as predicted by the model. In the model and in humans, the time between two clonic bursts increased exponentially from the start of the clonic phase of the seizure. The terminal interclonic interval, calculated using the projected terminal value of the log-linear fit of the clonic frequency decrease, was correlated with the presence and duration of postictal suppression. The projected terminal interclonic interval explained 41% of the variation in postictal generalized EEG suppression duration (P < 0.02). Conversely, postictal generalized EEG suppression duration explained 34% of the variation in the last interclonic interval duration. Our findings suggest that postictal generalized EEG suppression is a separate brain state and that seizure termination is a plastic and autonomous process, reflected in increased duration of interclonic intervals that determine the duration of postictal generalized EEG suppression. PMID:28073789
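Two of the analysis steps described — fitting a gamma distribution to suppression durations and extrapolating the log-linear increase of interclonic intervals to a projected terminal interval — can be illustrated on synthetic numbers as in the Python sketch below (all data values are invented, not the study's recordings).
    import numpy as np
    from scipy import stats
    rng = np.random.default_rng(2)
    # (1) Fit a gamma distribution to synthetic postictal suppression durations (s)
    durations = rng.gamma(shape=1.5, scale=30.0, size=48)
    shape, loc, scale = stats.gamma.fit(durations, floc=0)
    print("fitted gamma shape:", round(shape, 2))
    # (2) Log-linear fit of interclonic intervals (ICIs) over the clonic phase,
    #     extrapolated to the last clonic burst -> projected terminal ICI
    t_bursts = np.cumsum(0.3 * np.exp(0.08 * np.arange(25)))   # synthetic burst times (s)
    icis = np.diff(t_bursts)
    t_end = t_bursts[1:]                                       # time at which each interval ends
    slope, intercept, *_ = stats.linregress(t_end, np.log(icis))
    terminal_ici = np.exp(intercept + slope * t_bursts[-1])
    print("projected terminal ICI (s):", round(terminal_ici, 2))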
Performance of the Heavy Flavor Tracker (HFT) detector in the STAR experiment at RHIC
NASA Astrophysics Data System (ADS)
Alruwaili, Manal
With the growing technology, the number of processors is becoming massive. Current supercomputer processing will be available on desktops in the next decade. For mass-scale application software development on the massively parallel computing available on desktops, existing popular languages with large libraries have to be augmented with new constructs and paradigms that exploit massively parallel computing and distributed memory models while retaining user-friendliness. Currently available object-oriented languages for massively parallel computing, such as Chapel, X10 and UPC++, exploit distributed computing, data-parallel computing and thread-level parallelism at the process level in the PGAS (Partitioned Global Address Space) memory model. However, they do not incorporate: 1) any extension for object distribution to exploit the PGAS model; 2) the flexibility of migrating or cloning an object between places to exploit load balancing; and 3) the programming paradigms that would result from integrating data- and thread-level parallelism with object distribution. In the proposed thesis, I compare different languages in the PGAS model; propose new constructs that extend C++ with object distribution, object migration and object cloning; and integrate PGAS-based process constructs with these extensions on distributed objects. A new paradigm, MIDD (Multiple Invocation Distributed Data), is also presented, in which different copies of the same class can be invoked and work on different elements of distributed data concurrently using remote method invocations. I present the new constructs, their grammar and their behavior. The new constructs are explained using simple programs that utilize them.
Answering Schrödinger's question: A free-energy formulation
NASA Astrophysics Data System (ADS)
Ramstead, Maxwell James Désormeau; Badcock, Paul Benjamin; Friston, Karl John
2018-03-01
The free-energy principle (FEP) is a formal model of neuronal processes that is widely recognised in neuroscience as a unifying theory of the brain and biobehaviour. More recently, however, it has been extended beyond the brain to explain the dynamics of living systems, and their unique capacity to avoid decay. The aim of this review is to synthesise these advances with a meta-theoretical ontology of biological systems called variational neuroethology, which integrates the FEP with Tinbergen's four research questions to explain biological systems across spatial and temporal scales. We exemplify this framework by applying it to Homo sapiens, before translating variational neuroethology into a systematic research heuristic that supplies the biological, cognitive, and social sciences with a computationally tractable guide to discovery.
Inviscid Flow Computations of the Orbital Sciences X-34 Over a Mach Number Range of 1.25 to 6.0
NASA Technical Reports Server (NTRS)
Prabhu, Ramadas K.
2001-01-01
This report documents the results of an inviscid computational study conducted on the Orbital Sciences X-34 vehicle to compute its inviscid longitudinal aerodynamic characteristics over a Mach number range of 1.25 to 6.0. The unstructured grid software FELISA was used, and the aerodynamic characteristics were computed at Mach numbers 1.25, 1.6, 2.5, 4.0, 4.63, and 6.0, and an angle of attack range of -4 to 32 degrees. These results were compared with available aerodynamic data from wind tunnel tests on X-34 models. The comparison showed excellent agreement in C(sub N). The computed pitching moment compared well at Mach numbers 2.5 and higher, and at angles of attack of up to 12 deg. The agreement was not good at higher angles of attack, possibly due to viscous effects. At lower Mach numbers there were significant differences between computed and measured C(sub m) values. This could not be explained. Since the present computations are inviscid, the computed C(sub A) was consistently lower than the measured values, as expected.
Barnes, Richard; Clark, Adam Thomas
2017-07-01
For many taxa and systems, species richness peaks at midelevations. One potential explanation for this pattern is that large-scale changes in climate and geography have, over evolutionary time, selected for traits that are favored under conditions found in contemporary midelevation regions. To test this hypothesis, we use records of historical temperature and topographic changes over the past 65 Myr to construct a general simulation model of plethodontid salamander evolution in eastern North America. We then explore possible mechanisms constraining species to midelevation bands by using the model to predict plethodontid evolutionary history and contemporary geographic distributions. Our results show that models that incorporate both temperature and topographic changes are better able to predict these patterns, suggesting that both processes may have played an important role in driving plethodontid evolution in the region. Additionally, our model (whose annotated source code is included as a supplement) represents a proof of concept to encourage future work that takes advantage of recent advances in computing power to combine models of ecology, evolution, and earth history to better explain the abundance and distribution of species over time.
Probability Theory Plus Noise: Descriptive Estimation and Inferential Judgment.
Costello, Fintan; Watts, Paul
2018-01-01
We describe a computational model of two central aspects of people's probabilistic reasoning: descriptive probability estimation and inferential probability judgment. This model assumes that people's reasoning follows standard frequentist probability theory, but it is subject to random noise. This random noise has a regressive effect in descriptive probability estimation, moving probability estimates away from normative probabilities and toward the center of the probability scale. This random noise has an anti-regressive effect in inferential judgment, however. These regressive and anti-regressive effects explain various reliable and systematic biases seen in people's descriptive probability estimation and inferential probability judgment. This model predicts that these contrary effects will tend to cancel out in tasks that involve both descriptive estimation and inferential judgment, leading to unbiased responses in those tasks. We test this model by applying it to one such task, described by Gallistel et al. Participants' median responses in this task were unbiased, agreeing with normative probability theory over the full range of responses. Our model captures the pattern of unbiased responses in this task, while simultaneously explaining systematic biases away from normatively correct probabilities seen in other tasks. Copyright © 2018 Cognitive Science Society, Inc.
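The descriptive-estimation side of the model — frequency counting corrupted by random read errors, which regresses estimates toward 0.5 — can be illustrated with the sketch below; the noise rate d and event probabilities are invented, and the anti-regressive inferential effect is not simulated here.
    import numpy as np
    rng = np.random.default_rng(3)
    def noisy_estimate(p_true, d, n_samples=200):
        # Descriptive estimate under the 'probability theory plus noise' idea: each
        # remembered instance is misread with probability d, so the expected estimate
        # is (1 - 2d) * p + d, regressed toward 0.5 (d is an illustrative value)
        events = rng.random(n_samples) < p_true          # true occurrences
        flipped = rng.random(n_samples) < d              # read errors
        return np.mean(events ^ flipped)                 # proportion counted as occurrences
    for p in (0.1, 0.5, 0.9):
        est = np.mean([noisy_estimate(p, d=0.15) for _ in range(500)])
        print(p, "->", round(est, 3), "(predicted", round((1 - 2 * 0.15) * p + 0.15, 3), ")")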
How is visual salience computed in the brain? Insights from behaviour, neurobiology and modelling
Veale, Richard; Hafed, Ziad M.
2017-01-01
Inherent in visual scene analysis is a bottleneck associated with the need to sequentially sample locations with foveating eye movements. The concept of a ‘saliency map’ topographically encoding stimulus conspicuity over the visual scene has proven to be an efficient predictor of eye movements. Our work reviews insights into the neurobiological implementation of visual salience computation. We start by summarizing the role that different visual brain areas play in salience computation, whether at the level of feature analysis for bottom-up salience or at the level of goal-directed priority maps for output behaviour. We then delve into how a subcortical structure, the superior colliculus (SC), participates in salience computation. The SC represents a visual saliency map via a centre-surround inhibition mechanism in the superficial layers, which feeds into priority selection mechanisms in the deeper layers, thereby affecting saccadic and microsaccadic eye movements. Lateral interactions in the local SC circuit are particularly important for controlling active populations of neurons. This, in turn, might help explain long-range effects, such as those of peripheral cues on tiny microsaccades. Finally, we show how a combination of in vitro neurophysiology and large-scale computational modelling is able to clarify how salience computation is implemented in the local circuit of the SC. This article is part of the themed issue ‘Auditory and visual scene analysis’. PMID:28044023
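As a toy illustration of the centre-surround mechanism the review attributes to the superficial SC layers, the sketch below computes a difference-of-Gaussians 'salience' map over a synthetic image; the filter scales and the image are arbitrary, and this is not the circuit model discussed in the article.
    import numpy as np
    from scipy.ndimage import gaussian_filter
    rng = np.random.default_rng(4)
    # Synthetic scene: faint background texture plus one high-contrast patch
    image = 0.1 * rng.random((128, 128))
    image[60:70, 80:90] += 1.0
    # Centre-surround (difference-of-Gaussians) salience: narrow excitation
    # minus broad inhibition, rectified
    center = gaussian_filter(image, sigma=2)
    surround = gaussian_filter(image, sigma=10)
    salience = np.clip(center - surround, 0, None)
    peak = np.unravel_index(np.argmax(salience), salience.shape)
    print("most salient location:", peak)    # should fall inside the bright patch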
Get the Whole Story before You Plug into a Computer Network.
ERIC Educational Resources Information Center
Vernot, David
1989-01-01
Explains the myths and marvels of computer networks; cites how several schools are utilizing networking; and summarizes where the major computer companies stand today when it comes to networking. (MLF)
Theoretical modeling of magnesium ion imprints in the Raman scattering of water.
Kapitán, Josef; Dracínský, Martin; Kaminský, Jakub; Benda, Ladislav; Bour, Petr
2010-03-18
Hydration envelopes of metallic ions significantly influence their chemical properties and biological functioning. Previous computational studies, nuclear magnetic resonance (NMR), and vibrational spectra indicated a strong affinity of the Mg(2+) cation to water. We find it interesting that, although monatomic ions do not vibrate themselves, they cause notable changes in the water Raman signal. Therefore, in this study, we used a combination of Raman spectroscopy and computer modeling to analyze the magnesium hydration shell and origin of the signal. In the measured spectra of several salts (LiCl, NaCl, KCl, MgCl(2), CaCl(2), MgBr(2), and MgI(2) water solutions), only the spectroscopic imprint of the hydrated Mg(2+) cation could clearly be identified as an exceptionally distinct peak at approximately 355 cm(-1). The assignment of this band to the Mg-O stretching motion could be confirmed on the basis of several models involving quantum chemical computations on metal/water clusters. Minor Raman spectral features could also be explained. Ab initio and Fourier transform (FT) techniques coupled with the Car-Parrinello molecular dynamics were adapted to provide the spectra from dynamical trajectories. The results suggest that even in concentrated solutions magnesium preferentially forms a [Mg(H(2)O)(6)](2+) complex of a nearly octahedral symmetry; nevertheless, the Raman signal is primarily associated with the relatively strong metal-H(2)O bond. Partially covalent character of the Mg-O bond was confirmed by a natural bond orbital analysis. Computations on hydrated chlorine anion did not provide a specific signal. The FT techniques gave good spectral profiles in the high-frequency region, whereas the lowest-wavenumber vibrations were better reproduced by the cluster models. Both dynamical and cluster computational models provided a useful link between spectral shapes and specific ion-water interactions.
Introduction to This Special Issue on Context-Aware Computing.
ERIC Educational Resources Information Center
Moran, Thomas P.; Dourish, Paul
2001-01-01
Discusses pervasive, or ubiquitous, computing; explains the notion of context; and defines context-aware computing as the key to disperse and enmesh computation into our lives. Considers context awareness in human-computer interaction and describes the broad topic areas of the essays included in this special issue. (LRW)
Mallon, Dermot H; Bradley, J Andrew; Winn, Peter J; Taylor, Craig J; Kosmoliaptsis, Vasilis
2015-02-01
We have previously shown that qualitative assessment of surface electrostatic potential of HLA class I molecules helps explain serological patterns of alloantibody binding. We have now used a novel computational approach to quantitate differences in surface electrostatic potential of HLA B-cell epitopes and applied this to explain HLA Bw4 and Bw6 antigenicity. Protein structure models of HLA class I alleles expressing either the Bw4 or Bw6 epitope (defined by sequence motifs at positions 77 to 83) were generated using comparative structure prediction. The electrostatic potential in 3-dimensional space encompassing the Bw4/Bw6 epitope was computed by solving the Poisson-Boltzmann equation and quantitatively compared in a pairwise, all-versus-all fashion to produce distance matrices that cluster epitopes with similar electrostatics properties. Quantitative comparison of surface electrostatic potential at the carboxyl terminal of the α1-helix of HLA class I alleles, corresponding to amino acid sequence motif 77 to 83, produced clustering of HLA molecules in 3 principal groups according to Bw4 or Bw6 epitope expression. Remarkably, quantitative differences in electrostatic potential reflected known patterns of serological reactivity better than Bw4/Bw6 amino acid sequence motifs. Quantitative assessment of epitope electrostatic potential allowed the impact of known amino acid substitutions (HLA-B*07:02 R79G, R82L, G83R) that are critical for antibody binding to be predicted. We describe a novel approach for quantitating differences in HLA B-cell epitope electrostatic potential. Proof of principle is provided that this approach enables better assessment of HLA epitope antigenicity than amino acid sequence data alone, and it may allow prediction of HLA immunogenicity.
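Solving the Poisson-Boltzmann equation requires dedicated software and is not sketched here; the downstream step described — pairwise, all-versus-all comparison of potential maps followed by clustering — can be illustrated as below, using randomly generated vectors as stand-ins for real epitope electrostatic potentials.
    import numpy as np
    from scipy.spatial.distance import pdist, squareform
    from scipy.cluster.hierarchy import linkage, fcluster
    rng = np.random.default_rng(5)
    # Stand-in electrostatic potential grids for six alleles: two underlying epitope
    # types ('Bw4-like' and 'Bw6-like') plus per-allele noise
    bw4_like = rng.normal(size=1000)
    bw6_like = rng.normal(size=1000)
    grids = np.array([base + 0.2 * rng.normal(size=1000)
                      for base in (bw4_like, bw4_like, bw4_like, bw6_like, bw6_like, bw6_like)])
    # All-versus-all distance matrix and hierarchical clustering
    dist = pdist(grids, metric="euclidean")
    clusters = fcluster(linkage(dist, method="average"), t=2, criterion="maxclust")
    print(squareform(dist).round(1))
    print("cluster assignment:", clusters)   # first three vs last three alleles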
NSR&D FY15 Final Report. Modeling Mechanical, Thermal, and Chemical Effects of Impact
DOE Office of Scientific and Technical Information (OSTI.GOV)
Long, Christopher Curtis; Ma, Xia; Zhang, Duan Zhong
2015-11-02
The main goal of this project is to develop a computer model that explains and predicts the coupled mechanical, thermal and chemical responses of HE under impact and friction insults. The modeling effort is based on the LANL-developed CartaBlanca code, which is implemented with the dual domain material point (DDMP) method to calculate complex and coupled thermal, chemical and mechanical effects among fluids, solids and the transitions between the states. In FY15, we implemented the TEPLA material model for metal, performed preliminary can-penetration simulations, and began to link with experiment. Currently, we are working on implementing a shock-to-detonation transition (SDT) model (SURF) and a JWL equation of state.
Mishra, Bud; Daruwala, Raoul-Sam; Zhou, Yi; Ugel, Nadia; Policriti, Alberto; Antoniotti, Marco; Paxia, Salvatore; Rejali, Marc; Rudra, Archisman; Cherepinsky, Vera; Silver, Naomi; Casey, William; Piazza, Carla; Simeoni, Marta; Barbano, Paolo; Spivak, Marina; Feng, Jiawu; Gill, Ofer; Venkatesh, Mysore; Cheng, Fang; Sun, Bing; Ioniata, Iuliana; Anantharaman, Thomas; Hubbard, E Jane Albert; Pnueli, Amir; Harel, David; Chandru, Vijay; Hariharan, Ramesh; Wigler, Michael; Park, Frank; Lin, Shih-Chieh; Lazebnik, Yuri; Winkler, Franz; Cantor, Charles R; Carbone, Alessandra; Gromov, Mikhael
2003-01-01
We collaborate in a research program aimed at creating a rigorous framework, experimental infrastructure, and computational environment for understanding, experimenting with, manipulating, and modifying a diverse set of fundamental biological processes at multiple scales and spatio-temporal modes. The novelty of our research is based on an approach that (i) requires coevolution of experimental science and theoretical techniques and (ii) exploits a certain universality in biology guided by a parsimonious model of evolutionary mechanisms operating at the genomic level and manifesting at the proteomic, transcriptomic, phylogenic, and other higher levels. Our current program in "systems biology" endeavors to marry large-scale biological experiments with the tools to ponder and reason about large, complex, and subtle natural systems. To achieve this ambitious goal, ideas and concepts are combined from many different fields: biological experimentation, applied mathematical modeling, computational reasoning schemes, and large-scale numerical and symbolic simulations. From a biological viewpoint, the basic issues are many: (i) understanding common and shared structural motifs among biological processes; (ii) modeling biological noise due to interactions among a small number of key molecules or loss of synchrony; (iii) explaining the robustness of these systems in spite of such noise; and (iv) cataloging multistatic behavior and adaptation exhibited by many biological processes.
Mulas, Marcello; Waniek, Nicolai; Conradt, Jörg
2016-01-01
After the discovery of grid cells, which are an essential component to understand how the mammalian brain encodes spatial information, three main classes of computational models were proposed in order to explain their working principles. Amongst them, the one based on continuous attractor networks (CAN) is promising in terms of biological plausibility and suitable for robotic applications. However, in its current formulation, it is unable to reproduce important electrophysiological findings and cannot be used to perform path integration for long periods of time. In fact, in the absence of an appropriate resetting mechanism, the accumulation of errors over time, due to the noise intrinsic to velocity estimation and neural computation, prevents CAN models from reproducing stable spatial grid patterns. In this paper, we propose an extension of the CAN model using Hebbian plasticity to anchor grid cell activity to environmental landmarks. To validate our approach we used as input to the neural simulations both artificial data and real data recorded from a robotic setup. The additional neural mechanism can not only anchor grid patterns to external sensory cues but also recall grid patterns generated in previously explored environments. These results might be instrumental for next generation bio-inspired robotic navigation algorithms that take advantage of neural computation in order to cope with complex and dynamic environments. PMID:26924979
The DYNAMO Simulation Language--An Alternate Approach to Computer Science Education.
ERIC Educational Resources Information Center
Bronson, Richard
1986-01-01
Suggests the use of computer simulation of continuous systems as a problem solving approach to computer languages. Outlines the procedures that the system dynamics approach employs in computer simulations. Explains the advantages of the special purpose language, DYNAMO. (ML)
Ernst, Udo A.; Schiffer, Alina; Persike, Malte; Meinhardt, Günter
2016-01-01
Processing natural scenes requires the visual system to integrate local features into global object descriptions. To achieve coherent representations, the human brain uses statistical dependencies to guide weighting of local feature conjunctions. Pairwise interactions among feature detectors in early visual areas may form the early substrate of these local feature bindings. To investigate local interaction structures in visual cortex, we combined psychophysical experiments with computational modeling and natural scene analysis. We first measured contrast thresholds for 2 × 2 grating patch arrangements (plaids), which differed in spatial frequency composition (low, high, or mixed), number of grating patch co-alignments (0, 1, or 2), and inter-patch distances (1° and 2° of visual angle). Contrast thresholds for the different configurations were compared to the prediction of probability summation (PS) among detector families tuned to the four retinal positions. For 1° distance the thresholds for all configurations were larger than predicted by PS, indicating inhibitory interactions. For 2° distance, thresholds were significantly lower compared to PS when the plaids were homogeneous in spatial frequency and orientation, but not when spatial frequencies were mixed or there was at least one misalignment. Next, we constructed a neural population model with horizontal laminar structure, which reproduced the detection thresholds after adaptation of connection weights. Consistent with prior work, contextual interactions were medium-range inhibition and long-range, orientation-specific excitation. However, inclusion of orientation-specific, inhibitory interactions between populations with different spatial frequency preferences was crucial for explaining detection thresholds. Finally, for all plaid configurations we computed their likelihood of occurrence in natural images. The likelihoods turned out to be inversely related to the detection thresholds obtained at larger inter-patch distances. However, likelihoods were almost independent of inter-patch distance, implying that natural image statistics could not explain the crowding-like results at short distances. This failure of natural image statistics to resolve the patch distance modulation of plaid visibility remains a challenge to the approach. PMID:27757076
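A minimal sketch of the probability-summation (PS) benchmark referred to above, assuming Weibull psychometric functions for the detector families at the four retinal positions; the slope, threshold parameter and criterion level are illustrative, not values from the study. Observed thresholds below the PS prediction would indicate facilitation, thresholds above it inhibition.

```python
import numpy as np

def weibull_detect(contrast, alpha, beta):
    """Detection probability of a single detector family (Weibull psychometric function)."""
    return 1.0 - np.exp(-(contrast / alpha) ** beta)

def ps_detect(contrast, alphas, beta):
    """Probability summation: detection by at least one of several independent detectors."""
    miss = np.prod([1.0 - weibull_detect(contrast, a, beta) for a in alphas], axis=0)
    return 1.0 - miss

def threshold(detect_fn, criterion=0.75):
    """Contrast at which detection probability reaches the criterion (simple grid search)."""
    contrasts = np.linspace(1e-4, 1.0, 20000)
    return contrasts[np.searchsorted(detect_fn(contrasts), criterion)]

beta = 3.5                    # illustrative psychometric slope
single_alpha = 0.05           # illustrative single-patch threshold parameter
t_single = threshold(lambda c: weibull_detect(c, single_alpha, beta))
t_ps = threshold(lambda c: ps_detect(c, [single_alpha] * 4, beta))
print(f"single-patch threshold     : {t_single:.4f}")
print(f"PS prediction for 2x2 plaid: {t_ps:.4f}  (lower by roughly 4**(-1/beta))")
```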
Two Dimensional Mechanism for Insect Hovering
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jane Wang, Z.
2000-09-04
Resolved computation of two dimensional insect hovering shows for the first time that a two dimensional hovering motion can generate enough lift to support a typical insect weight. The computation reveals a two dimensional mechanism of creating a downward dipole jet of counterrotating vortices, which are formed from leading and trailing edge vortices. The vortex dynamics further elucidates the role of the phase relation between the wing translation and rotation in lift generation and explains why the instantaneous forces can reach a periodic state after only a few strokes. The model predicts the lower limits in Reynolds number and amplitude above which the averaged forces are sufficient. (c) 2000 The American Physical Society.
On explicit algebraic stress models for complex turbulent flows
NASA Technical Reports Server (NTRS)
Gatski, T. B.; Speziale, C. G.
1992-01-01
Explicit algebraic stress models that are valid for three-dimensional turbulent flows in noninertial frames are systematically derived from a hierarchy of second-order closure models. This represents a generalization of the model derived by Pope who based his analysis on the Launder, Reece, and Rodi model restricted to two-dimensional turbulent flows in an inertial frame. The relationship between the new models and traditional algebraic stress models -- as well as anisotropic eddy viscosity models -- is theoretically established. The need for regularization is demonstrated in an effort to explain why traditional algebraic stress models have failed in complex flows. It is also shown that these explicit algebraic stress models can shed new light on what second-order closure models predict for the equilibrium states of homogeneous turbulent flows and can serve as a useful alternative in practical computations.
Use of the Computer for Research on Instruction and Student Understanding in Physics.
NASA Astrophysics Data System (ADS)
Grayson, Diane Jeanette
This dissertation describes an investigation of how the computer may be utilized to perform research on instruction and on student understanding in physics. The research was conducted within three content areas: kinematics, waves and dynamics. The main focus of the research on instruction was the determination of factors needed for a computer program to be instructionally effective. The emphasis in the research on student understanding was the identification of specific conceptual and reasoning difficulties students encounter with the subject matter. Most of the research was conducted using the computer-based interview, a technique developed during the early part of the work, conducted within the domain of kinematics. In a computer-based interview, a student makes a prediction about how a particular system will behave under given circumstances, observes a simulation of the event on a computer screen, and then is asked by an interviewer to explain any discrepancy between prediction and observation. In the course of the research, a model was developed for producing educational software. The model has three important components: (i) research on student difficulties in the content area to be addressed, (ii) observations of students using the computer program, and (iii) consequent program modification. This model was used to guide the development of an instructional computer program dealing with graphical representations of transverse pulses. Another facet of the research involved the design of a computer program explicitly for the purposes of research. A computer program was written that simulates a modified Atwood's machine. The program was then used in computer-based interviews and proved to be an effective means of probing student understanding of dynamics concepts. In order to ascertain whether or not the student difficulties identified were peculiar to the computer, laboratory-based interviews with real equipment were also conducted. The laboratory-based interviews were designed to parallel the computer-based interviews as closely as possible. The results of both types of interviews are discussed in detail. The dissertation concludes with a discussion of some of the benefits of using the computer in physics instruction and physics education research. Attention is also drawn to some of the limitations of the computer as a research instrument or instructional device.
Kinetic modeling of cell metabolism for microbial production.
Costa, Rafael S; Hartmann, Andras; Vinga, Susana
2016-02-10
Kinetic models of cellular metabolism are important tools for the rational design of metabolic engineering strategies and to explain properties of complex biological systems. The recent developments in high-throughput experimental data are leading to new computational approaches for building kinetic models of metabolism. Herein, we briefly survey the available databases, standards and software tools that can be applied for kinetic models of metabolism. In addition, we give an overview about recently developed ordinary differential equations (ODE)-based kinetic models of metabolism and some of the main applications of such models are illustrated in guiding metabolic engineering design. Finally, we review the kinetic modeling approaches of large-scale networks that are emerging, discussing their main advantages, challenges and limitations. Copyright © 2015 Elsevier B.V. All rights reserved.
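A minimal sketch of the kind of ODE-based kinetic model surveyed above, assuming a toy two-step pathway with irreversible Michaelis-Menten kinetics; parameter values are illustrative, and real models would typically be encoded in a standard such as SBML and simulated with dedicated tools.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy pathway: S -> I -> P, each step with irreversible Michaelis-Menten kinetics.
Vmax1, Km1 = 1.0, 0.5   # illustrative parameters (mM/min, mM)
Vmax2, Km2 = 0.6, 0.3

def rhs(t, y):
    s, i, p = y
    v1 = Vmax1 * s / (Km1 + s)
    v2 = Vmax2 * i / (Km2 + i)
    return [-v1, v1 - v2, v2]

sol = solve_ivp(rhs, (0.0, 30.0), y0=[2.0, 0.0, 0.0], dense_output=True)
t = np.linspace(0, 30, 7)
for ti, (s, i, p) in zip(t, sol.sol(t).T):
    print(f"t={ti:5.1f}  S={s:.3f}  I={i:.3f}  P={p:.3f}")
```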
Computational Psychiatry: towards a mathematically informed understanding of mental illness
Huys, Quentin J M; Roiser, Jonathan P
2016-01-01
Computational Psychiatry aims to describe the relationship between the brain's neurobiology, its environment and mental symptoms in computational terms. In so doing, it may improve psychiatric classification and the diagnosis and treatment of mental illness. It can unite many levels of description in a mechanistic and rigorous fashion, while avoiding biological reductionism and artificial categorisation. We describe how computational models of cognition can infer the current state of the environment and weigh up future actions, and how these models provide new perspectives on two example disorders, depression and schizophrenia. Reinforcement learning describes how the brain can choose and value courses of actions according to their long-term future value. Some depressive symptoms may result from aberrant valuations, which could arise from prior beliefs about the loss of agency (‘helplessness’), or from an inability to inhibit the mental exploration of aversive events. Predictive coding explains how the brain might perform Bayesian inference about the state of its environment by combining sensory data with prior beliefs, each weighted according to their certainty (or precision). Several cortical abnormalities in schizophrenia might reduce precision at higher levels of the inferential hierarchy, biasing inference towards sensory data and away from prior beliefs. We discuss whether striatal hyperdopaminergia might have an adaptive function in this context, and also how reinforcement learning and incentive salience models may shed light on the disorder. Finally, we review some of Computational Psychiatry's applications to neurological disorders, such as Parkinson's disease, and some pitfalls to avoid when applying its methods. PMID:26157034
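A minimal sketch of the reinforcement-learning idea invoked above, assuming a standard temporal-difference (Q-learning) update on a two-armed bandit with softmax choice; the "reward sensitivity" parameter is an illustrative stand-in for how aberrant valuation might depress learned action values, not a model from the review.

```python
import numpy as np

def run_bandit(reward_sensitivity, n_trials=2000, alpha=0.1, seed=0):
    """Q-learning on a two-armed bandit with softmax action selection."""
    rng = np.random.default_rng(seed)
    p_reward = np.array([0.8, 0.2])   # true reward probabilities of the two actions
    q = np.zeros(2)                   # learned action values
    for _ in range(n_trials):
        probs = np.exp(3.0 * q) / np.exp(3.0 * q).sum()   # softmax policy
        a = rng.choice(2, p=probs)
        r = reward_sensitivity * (rng.random() < p_reward[a])
        q[a] += alpha * (r - q[a])    # temporal-difference update
    return q

print("intact reward sensitivity :", run_bandit(1.0))
print("blunted reward sensitivity:", run_bandit(0.3))
```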
Garrido, Marta I; Rowe, Elise G; Halász, Veronika; Mattingley, Jason B
2018-05-01
Predictive coding posits that the human brain continually monitors the environment for regularities and detects inconsistencies. It is unclear, however, what effect attention has on expectation processes, as there have been relatively few studies and the results of these have yielded contradictory findings. Here, we employed Bayesian model comparison to adjudicate between 2 alternative computational models. The "Opposition" model states that attention boosts neural responses equally to predicted and unpredicted stimuli, whereas the "Interaction" model assumes that attentional boosting of neural signals depends on the level of predictability. We designed a novel, audiospatial attention task that orthogonally manipulated attention and prediction by playing oddball sequences in either the attended or unattended ear. We observed sensory prediction error responses, with electroencephalography, across all attentional manipulations. Crucially, posterior probability maps revealed that, overall, the Opposition model better explained scalp and source data, suggesting that attention boosts responses to predicted and unpredicted stimuli equally. Furthermore, Dynamic Causal Modeling showed that these Opposition effects were expressed in plastic changes within the mismatch negativity network. Our findings provide empirical evidence for a computational model of the opposing interplay of attention and expectation in the brain.
Building Blocks for Reliable Complex Nonlinear Numerical Simulations
NASA Technical Reports Server (NTRS)
Yee, H. C.; Mansour, Nagi N. (Technical Monitor)
2002-01-01
This talk describes some of the building blocks to ensure a higher level of confidence in the predictability and reliability (PAR) of numerical simulation of multiscale complex nonlinear problems. The focus is on relating PAR of numerical simulations with complex nonlinear phenomena of numerics. To isolate sources of numerical uncertainties, the possible discrepancy between the chosen partial differential equation (PDE) model and the real physics and/or experimental data is set aside. The discussion is restricted to how well numerical schemes can mimic the solution behavior of the underlying PDE model for finite time steps and grid spacings. The situation is complicated by the fact that the available theory for the understanding of nonlinear behavior of numerics is not at a stage to fully analyze the nonlinear Euler and Navier-Stokes equations. The discussion is based on the knowledge gained for nonlinear model problems with known analytical solutions to identify and explain the possible sources and remedies of numerical uncertainties in practical computations. Examples relevant to turbulent flow computations are included.
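A minimal sketch of the kind of model-problem study described above, assuming the logistic ODE u' = u(1 - u), whose exact solution relaxes monotonically to u = 1; integrating it with explicit Euler at increasing time steps shows how a scheme can produce oscillatory, spurious, or chaotic behavior that belongs to the discretization, not to the underlying equation.

```python
def euler_logistic(dt, u0=0.1, t_end=30.0):
    """Explicit Euler on u' = u(1 - u); the exact solution tends monotonically to 1."""
    u = u0
    for _ in range(int(t_end / dt)):
        u = u + dt * u * (1.0 - u)
    return u

for dt in (0.1, 1.5, 2.2, 2.7):
    print(f"dt={dt:4.1f}  u(t_end)={euler_logistic(dt):+.4f}")
# Small dt converges to the true fixed point u = 1; larger dt gives an oscillatory
# approach, a spurious period-2 cycle, or bounded chaotic wandering, all artifacts
# of the scheme rather than features of the continuous problem.
```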
Building Blocks for Reliable Complex Nonlinear Numerical Simulations. Chapter 2
NASA Technical Reports Server (NTRS)
Yee, H. C.; Mansour, Nagi N. (Technical Monitor)
2001-01-01
This chapter describes some of the building blocks to ensure a higher level of confidence in the predictability and reliability (PAR) of numerical simulation of multiscale complex nonlinear problems. The focus is on relating PAR of numerical simulations with complex nonlinear phenomena of numerics. To isolate sources of numerical uncertainties, the possible discrepancy between the chosen partial differential equation (PDE) model and the real physics and/or experimental data is set aside. The discussion is restricted to how well numerical schemes can mimic the solution behavior of the underlying PDE model for finite time steps and grid spacings. The situation is complicated by the fact that the available theory for the understanding of nonlinear behavior of numerics is not at a stage to fully analyze the nonlinear Euler and Navier-Stokes equations. The discussion is based on the knowledge gained for nonlinear model problems with known analytical solutions to identify and explain the possible sources and remedies of numerical uncertainties in practical computations. Examples relevant to turbulent flow computations are included.
Computational models of spatial updating in peri-saccadic perception
Hamker, Fred H.; Zirnsak, Marc; Ziesche, Arnold; Lappe, Markus
2011-01-01
Perceptual phenomena that occur around the time of a saccade, such as peri-saccadic mislocalization or saccadic suppression of displacement, have often been linked to mechanisms of spatial stability. These phenomena are usually regarded as errors in processes of trans-saccadic spatial transformations and they provide important tools to study these processes. However, a true understanding of the underlying brain processes that participate in the preparation for a saccade and in the transfer of information across it requires a closer, more quantitative approach that links different perceptual phenomena with each other and with the functional requirements of ensuring spatial stability. We review a number of computational models of peri-saccadic spatial perception that provide steps in that direction. Although most models are concerned with only specific phenomena, some generalization and interconnection between them can be obtained from a comparison. Our analysis shows how different perceptual effects can coherently be brought together and linked back to neuronal mechanisms on the way to explaining vision across saccades. PMID:21242143
Crisp, Kevin M
2009-03-01
Sensitization of the defensive shortening reflex in the leech has been linked to a segmentally repeated tri-synaptic positive feedback loop. Serotonin from the R-cell enhances S-cell excitability, S-cell impulses cross an electrical synapse into the C-interneuron, and the C-interneuron excites the R-cell via a glutamatergic synapse. The C-interneuron has two unusual characteristics. First, impulses take longer to propagate from the S soma to the C soma than in the reverse direction. Second, impulses recorded from the electrically unexcitable C soma vary in amplitude when extracellular divalent cation concentrations are elevated, with smaller impulses failing to induce synaptic potentials in the R-cell. A compartmental, computational model was developed to test the sufficiency of multiple, independent spike initiation zones in the C-interneuron to explain these observations. The model displays asymmetric delays in impulse propagation across the S-C electrical synapse and graded impulse amplitudes in the C-interneuron in simulated high divalent cation concentrations.
NASA Technical Reports Server (NTRS)
Butler, C. F.
1979-01-01
A computer sensitivity analysis was performed to determine the uncertainties involved in the calculation of volcanic aerosol dispersion in the stratosphere using a 2-dimensional model. The Fuego volcanic event of 1974 was used. Aerosol dispersion processes that were included are: transport, sedimentation, gas phase sulfur chemistry, and aerosol growth. Calculated uncertainties are established from variations in the stratospheric aerosol layer decay times at 37° latitude for each dispersion process. Model profiles are also compared with lidar measurements. Results of the computer study are quite sensitive (factor of 2) to the assumed volcanic aerosol source function and the large variations in the parameterized transport between 15 and 20 km at subtropical latitudes. Sedimentation effects are uncertain by up to a factor of 1.5 because of the lack of aerosol size distribution data. The aerosol chemistry and growth, assuming that the stated mechanisms are correct, are essentially complete in several months after the eruption and cannot explain the differences between measured and modeled results.
Navarrete, Jairo A; Dartnell, Pablo
2017-08-01
Category Theory, a branch of mathematics, has shown promise as a modeling framework for higher-level cognition. We introduce an algebraic model for analogy that uses the language of category theory to explore analogy-related cognitive phenomena. To illustrate the potential of this approach, we use this model to explore three objects of study in cognitive literature. First, (a) we use commutative diagrams to analyze an effect of playing particular educational board games on the learning of numbers. Second, (b) we employ a notion called coequalizer as a formal model of re-representation that explains a property of computational models of analogy called "flexibility" whereby non-similar representational elements are considered matches and placed in structural correspondence. Finally, (c) we build a formal learning model which shows that re-representation, language processing and analogy making can explain the acquisition of knowledge of rational numbers. These objects of study provide a picture of acquisition of numerical knowledge that is compatible with empirical evidence and offers insights on possible connections between notions such as relational knowledge, analogy, learning, conceptual knowledge, re-representation and procedural knowledge. This suggests that the approach presented here facilitates mathematical modeling of cognition and provides novel ways to think about analogy-related cognitive phenomena.
Carbonate aquifer of the Central Roswell Basin: recharge estimation by numerical modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rehfeldt, K.R.; Gross, G.W.
The flow of ground water in the Roswell, New Mexico, Artesian Basin, has been studied since the early 1900s and varied ideas have been proposed to explain different aspects of the ground water flow system. The purpose of the present study was to delineate the spatial distribution and source, or sources, of recharge to the carbonate aquifer of the central Roswell Basin. A computer model was used to simulate ground water flow in the carbonate aquifer, beneath and west of Roswell and in the Glorieta Sandstone and Yeso Formation west of the carbonate aquifer.
Qualitative mechanism models and the rationalization of procedures
NASA Technical Reports Server (NTRS)
Farley, Arthur M.
1989-01-01
A qualitative, cluster-based approach to the representation of hydraulic systems is described and its potential for generating and explaining procedures is demonstrated. Many ideas are formalized and implemented as part of an interactive, computer-based system. The system allows for designing, displaying, and reasoning about hydraulic systems. The interactive system has an interface consisting of three windows: a design/control window, a cluster window, and a diagnosis/plan window. A qualitative mechanism model for the ORS (Orbital Refueling System) is presented to coordinate with ongoing research on this system being conducted at NASA Ames Research Center.
The effect of nonequilibrium ionization on ultraviolet line shifts in the solar transition region
NASA Technical Reports Server (NTRS)
Spadaro, D.; Noci, G.; Zappala, R. A.; Antiochos, S. K.
1990-01-01
The line profiles and wavelength positions of all the important emission lines due to carbon were computed for a variety of steady state siphon flow loop models. For the lines from the lower ionization states (C II-C IV) a preponderance of blueshifts was found, contrary to the observations. The lines from the higher ionization states can show either a net red- or blueshift, depending on the position of the loop on the solar disk. Similar results are expected for oxygen. It is concluded that the observed redshifts cannot be explained by the models proposed here.
On Taylor-Series Approximations of Residual Stress
NASA Technical Reports Server (NTRS)
Pruett, C. David
1999-01-01
Although subgrid-scale models of similarity type are insufficiently dissipative for practical applications to large-eddy simulation, in recently published a priori analyses, they perform remarkably well in the sense of correlating highly against exact residual stresses. Here, Taylor-series expansions of residual stress are exploited to explain the observed behavior and "success" of similarity models. Until very recently, little attention has been given to issues related to the convergence of such expansions. Here, we re-express the convergence criterion of Vasilyev [J. Comput. Phys., 146 (1998)] in terms of the transfer function and the wavenumber cutoff of the grid filter.
Why Hart found narrow ecospheres--a minor science mystery solved.
Levenson, Barton Paul
2015-05-01
To explain why two NASA computer simulation studies in the 1970s (Hart, 1978, 1979) briefly rocked the subfield of astrobiology and SETI studies by showing very narrow habitable zones (HZs) for solar-type stars. Although other studies later supported wider HZs, it was never clear why the Hart simulations found the narrow limits they did. Investigation of the state of climate studies and radiative transfer models in the period 1960-1970 provides a likely explanation. Hart's findings were in line with earlier results, preventing him from noticing that his radiation model was inadequate.
Teaching Engineering Design Using Computer Workstations.
ERIC Educational Resources Information Center
Hodgson, J. M.
1988-01-01
Explains the use of computer workstations in Electronic Engineering and in Control and Computer Engineering. Provides an introduction; initial teaching exercises at the first year, second, and third year design, research and development; and conclusions. (YP)
Why computational models are better than verbal theories: the case of nonword repetition.
Jones, Gary; Gobet, Fernand; Freudenthal, Daniel; Watson, Sarah E; Pine, Julian M
2014-03-01
Tests of nonword repetition (NWR) have often been used to examine children's phonological knowledge and word learning abilities. However, theories of NWR primarily explain performance either in terms of phonological working memory or long-term knowledge, with little consideration of how these processes interact. One theoretical account that focuses specifically on the interaction between short-term and long-term memory is the chunking hypothesis. Chunking occurs because of repeated exposure to meaningful stimulus items, resulting in the items becoming grouped (or chunked); once chunked, the items can be represented in short-term memory using one chunk rather than one chunk per item. We tested several predictions of the chunking hypothesis by presenting 5-6-year-old children with three tests of NWR that were either high, medium, or low in wordlikeness. The results did not show strong support for the chunking hypothesis, suggesting that chunking fails to fully explain children's NWR behavior. However, simulations using a computational implementation of chunking (namely CLASSIC, or Chunking Lexical And Sub-lexical Sequences In Children) show that, when the linguistic input to 5-6-year-old children is estimated in a reasonable way, the children's data are matched across all three NWR tests. These results have three implications for the field: (a) a chunking account can explain key NWR phenomena in 5-6-year-old children; (b) tests of chunking accounts require a detailed specification both of the chunking mechanism itself and of the input on which the chunking mechanism operates; and (c) verbal theories emphasizing the role of long-term knowledge (such as chunking) are not precise enough to make detailed predictions about experimental data, but computational implementations of the theories can bridge the gap. © 2013 John Wiley & Sons Ltd.
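This is not the CLASSIC model itself, but a minimal sketch of the chunking idea it implements, under the assumption that repeatedly co-occurring adjacent items are merged into single chunks, so that familiar (wordlike) syllable sequences occupy fewer short-term-memory slots than unfamiliar ones; the toy corpus and thresholds are illustrative.

```python
from collections import Counter

def parse(seq, chunks):
    """Parse a syllable sequence left-to-right using the longest known chunk at each point."""
    out, i = [], 0
    while i < len(seq):
        for j in range(len(seq), i, -1):
            piece = "".join(seq[i:j])
            if j - i == 1 or piece in chunks:
                out.append(piece)
                i = j
                break
    return out

def learn_chunks(corpus, threshold=3, passes=5):
    """Greedily merge adjacent pairs seen at least `threshold` times into chunks."""
    chunks = set()
    for _ in range(passes):
        pair_counts = Counter()
        for seq in corpus:
            parsed = parse(seq, chunks)
            pair_counts.update(zip(parsed, parsed[1:]))
        new = {a + b for (a, b), n in pair_counts.items() if n >= threshold}
        if new <= chunks:
            break
        chunks |= new
    return chunks

# Toy corpus of syllable sequences; real input would be child-directed speech.
corpus = [["ba", "na", "na"], ["ba", "na", "na"], ["ba", "na", "na"], ["ti", "ba", "na"]] * 3
chunks = learn_chunks(corpus)
print("learned chunks:", chunks)
print("high-wordlike nonword uses", len(parse(["ba", "na", "na"], chunks)), "slot(s)")
print("low-wordlike nonword uses ", len(parse(["na", "ba", "ti"], chunks)), "slot(s)")
```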
Java Performance for Scientific Applications on LLNL Computer Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kapfer, C; Wissink, A
2002-05-10
Languages in use for high performance computing at the laboratory--Fortran (f77 and f90), C, and C++--have many years of development behind them and are generally considered the fastest available. However, Fortran and C do not readily extend to object-oriented programming models, limiting their capability for very complex simulation software. C++ facilitates object-oriented programming but is a very complex and error-prone language. Java offers a number of capabilities that these other languages do not. For instance it implements cleaner (i.e., easier to use and less prone to errors) object-oriented models than C++. It also offers networking and security as part of the language standard, and cross-platform executables that make it architecture neutral, to name a few. These features have made Java very popular for industrial computing applications. The aim of this paper is to explain the trade-offs in using Java for large-scale scientific applications at LLNL. Despite its advantages, the computational science community has been reluctant to write large-scale computationally intensive applications in Java due to concerns over its poor performance. However, considerable progress has been made over the last several years. The Java Grande Forum [1] has been promoting the use of Java for large-scale computing. Members have introduced efficient array libraries, developed fast just-in-time (JIT) compilers, and built links to existing packages used in high performance parallel computing.
Chalmers, Eric; Luczak, Artur; Gruber, Aaron J.
2016-01-01
The mammalian brain is thought to use a version of Model-based Reinforcement Learning (MBRL) to guide “goal-directed” behavior, wherein animals consider goals and make plans to acquire desired outcomes. However, conventional MBRL algorithms do not fully explain animals' ability to rapidly adapt to environmental changes, or learn multiple complex tasks. They also require extensive computation, suggesting that goal-directed behavior is cognitively expensive. We propose here that key features of processing in the hippocampus support a flexible MBRL mechanism for spatial navigation that is computationally efficient and can adapt quickly to change. We investigate this idea by implementing a computational MBRL framework that incorporates features inspired by computational properties of the hippocampus: a hierarchical representation of space, “forward sweeps” through future spatial trajectories, and context-driven remapping of place cells. We find that a hierarchical abstraction of space greatly reduces the computational load (mental effort) required for adaptation to changing environmental conditions, and allows efficient scaling to large problems. It also allows abstract knowledge gained at high levels to guide adaptation to new obstacles. Moreover, a context-driven remapping mechanism allows learning and memory of multiple tasks. Simulating dorsal or ventral hippocampal lesions in our computational framework qualitatively reproduces behavioral deficits observed in rodents with analogous lesions. The framework may thus embody key features of how the brain organizes model-based RL to efficiently solve navigation and other difficult tasks. PMID:28018203
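A minimal sketch of the model-based planning component described above, assuming a small grid world and plain value iteration as a stand-in for forward sweeps; the hierarchical abstraction and place-cell remapping of the actual framework are not reproduced here.

```python
import numpy as np

def value_iteration(grid, goal, gamma=0.95, iters=200):
    """Model-based planning on a grid: propagate value backwards from the goal."""
    rows, cols = grid.shape
    V = np.zeros_like(grid, dtype=float)
    moves = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    for _ in range(iters):
        for r in range(rows):
            for c in range(cols):
                if grid[r, c] or (r, c) == goal:      # wall or goal: value fixed
                    continue
                best = -np.inf
                for dr, dc in moves:
                    nr, nc = r + dr, c + dc
                    if 0 <= nr < rows and 0 <= nc < cols and not grid[nr, nc]:
                        reward = 1.0 if (nr, nc) == goal else 0.0
                        best = max(best, reward + gamma * V[nr, nc])
                V[r, c] = best if best > -np.inf else 0.0
    return V

grid = np.zeros((5, 5), dtype=bool)
grid[2, 1:4] = True                        # a wall the agent must plan around
print(np.round(value_iteration(grid, goal=(4, 4)), 2))
# Adding a new obstacle only requires re-planning (re-running the sweep), not re-learning,
# which is the flexibility of model-based control that the hippocampal account builds on.
```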
Computer Viruses: Pathology and Detection.
ERIC Educational Resources Information Center
Maxwell, John R.; Lamon, William E.
1992-01-01
Explains how computer viruses were originally created, how a computer can become infected by a virus, how viruses operate, symptoms that indicate a computer is infected, how to detect and remove viruses, and how to prevent a reinfection. A sidebar lists eight antivirus resources. (four references) (LRW)
Bethge, Anja; Schumacher, Udo
2017-01-01
Background: Tumor vasculature is critical for tumor growth, formation of distant metastases and efficiency of radio- and chemotherapy treatments. However, how the vasculature itself is affected during cancer treatment with regard to metastatic behavior has not been thoroughly investigated. Therefore, the aim of this study was to analyze the influence of hypofractionated radiotherapy and cisplatin chemotherapy on vessel tree geometry and metastasis formation in a small cell lung cancer xenograft mouse tumor model to investigate the spread of malignant cells during different treatment modalities. Methods: The biological data gained during these experiments were fed into our previously developed computer model “Cancer and Treatment Simulation Tool” (CaTSiT) to model the growth of the primary tumor, its metastatic deposit and also the influence of different therapies. Furthermore, we performed quantitative histology analyses to verify our predictions in the xenograft mouse tumor model. Results: According to the computer simulation, the number of cells engrafting must vary considerably to explain the different weights of the primary tumor at the end of the experiment. Once a primary tumor is established, the fractal dimension of its vasculature correlates with the tumor size. Furthermore, the fractal dimension of the tumor vasculature changes during treatment, indicating that the therapy affects the blood vessels’ geometry. We corroborated these findings with a quantitative histological analysis showing that the blood vessel density is depleted during radiotherapy and cisplatin chemotherapy. The CaTSiT computer model reveals that chemotherapy influences the tumor’s therapeutic susceptibility and its metastatic spreading behavior. Conclusion: Using a systems biology approach in combination with xenograft models and computer simulations revealed that the use of chemotherapy and radiation therapy determines the spreading behavior by changing the blood vessel geometry of the primary tumor. PMID:29107953
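A minimal sketch of the box-counting estimate of fractal dimension referred to above, assuming the vessel tree is available as a binary 2-D mask; a simple recursive branching pattern stands in for real image data.

```python
import numpy as np

def box_count_dimension(mask, box_sizes=(2, 4, 8, 16, 32)):
    """Estimate the fractal dimension of a binary mask by box counting."""
    counts = []
    for s in box_sizes:
        n = 0
        for r in range(0, mask.shape[0], s):
            for c in range(0, mask.shape[1], s):
                if mask[r:r + s, c:c + s].any():
                    n += 1
        counts.append(n)
    # Slope of log(count) versus log(1/box size) gives the dimension estimate.
    slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)), np.log(counts), 1)
    return slope

# Placeholder "vessel tree": a recursive branching pattern on a 256x256 grid.
mask = np.zeros((256, 256), dtype=bool)

def branch(r, c, dr, dc, length, depth):
    for i in range(length):
        rr, cc = int(r + i * dr), int(c + i * dc)
        if 0 <= rr < 256 and 0 <= cc < 256:
            mask[rr, cc] = True
    if depth > 0:
        er, ec = r + length * dr, c + length * dc
        branch(er, ec, dr * 0.7 - dc * 0.5, dc * 0.7 + dr * 0.5, int(length * 0.7), depth - 1)
        branch(er, ec, dr * 0.7 + dc * 0.5, dc * 0.7 - dr * 0.5, int(length * 0.7), depth - 1)

branch(255, 128, -1.0, 0.0, 100, 6)
print(f"estimated fractal dimension: {box_count_dimension(mask):.2f}")
```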
NASA Astrophysics Data System (ADS)
Balog, Ivan; Tarjus, Gilles; Tissier, Matthieu
2018-03-01
We show that, contrary to previous suggestions based on computer simulations or erroneous theoretical treatments, the critical points of the random-field Ising model out of equilibrium, when quasistatically changing the applied source at zero temperature, and in equilibrium are not in the same universality class below some critical dimension d_DR ≈ 5.1. We demonstrate this by implementing a nonperturbative functional renormalization group for the associated dynamical field theory. Above d_DR, the avalanches, which characterize the evolution of the system at zero temperature, become irrelevant at large distance, and hysteresis and equilibrium critical points are then controlled by the same fixed point. We explain how to use computer simulation and finite-size scaling to check the correspondence between in and out of equilibrium criticality in a far less ambiguous way than done so far.
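A minimal sketch of the zero-temperature, quasistatic dynamics referred to above, assuming the standard single-spin-flip rule for the random-field Ising model on a small 2-D lattice (the paper's physics concerns dimensions near d ≈ 5, which this toy does not reach); spins flip when their local field changes sign, and each flip can trigger an avalanche as the applied source is ramped.

```python
import numpy as np

rng = np.random.default_rng(1)
L, J, sigma = 32, 1.0, 2.0
h = rng.normal(0.0, sigma, size=(L, L))   # quenched random fields
s = -np.ones((L, L), dtype=int)           # start fully magnetised down

def local_field(i, j, H):
    nn = s[(i - 1) % L, j] + s[(i + 1) % L, j] + s[i, (j - 1) % L] + s[i, (j + 1) % L]
    return J * nn + h[i, j] + H

def relax(H):
    """Flip every unstable spin until none remain; return the avalanche size."""
    flipped, unstable = 0, True
    while unstable:
        unstable = False
        for i in range(L):
            for j in range(L):
                if s[i, j] * local_field(i, j, H) < 0:
                    s[i, j] *= -1
                    flipped += 1
                    unstable = True
    return flipped

# Quasistatically ramp the applied source H upward and record avalanches.
for H in np.linspace(-4.0, 4.0, 33):
    size = relax(H)
    if size:
        print(f"H={H:+.2f}  avalanche of {size:4d} spins  magnetisation={s.mean():+.3f}")
```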
Combined multifrequency EPR and DFT study of dangling bonds in a-Si:H
NASA Astrophysics Data System (ADS)
Fehr, M.; Schnegg, A.; Rech, B.; Lips, K.; Astakhov, O.; Finger, F.; Pfanner, G.; Freysoldt, C.; Neugebauer, J.; Bittl, R.; Teutloff, C.
2011-12-01
Multifrequency pulsed electron paramagnetic resonance (EPR) spectroscopy using S-, X-, Q-, and W-band frequencies (3.6, 9.7, 34, and 94 GHz, respectively) was employed to study paramagnetic coordination defects in undoped hydrogenated amorphous silicon (a-Si:H). The improved spectral resolution at high magnetic field reveals a rhombic splitting of the g tensor with the following principal values: g_x = 2.0079, g_y = 2.0061, and g_z = 2.0034, and shows pronounced g strain, i.e., the principal values are widely distributed. The multifrequency approach furthermore yields precise ^29Si hyperfine data. Density functional theory (DFT) calculations on 26 computer-generated a-Si:H dangling-bond models yielded g values close to the experimental data but deviating hyperfine interaction values. We show that paramagnetic coordination defects in a-Si:H are more delocalized than computer-generated dangling-bond defects and discuss models to explain this discrepancy.
SUPAR: Smartphone as a ubiquitous physical activity recognizer for u-healthcare services.
Fahim, Muhammad; Lee, Sungyoung; Yoon, Yongik
2014-01-01
The current-generation smartphone can be seen as one of the most ubiquitous devices for physical activity recognition. In this paper we propose a physical activity recognizer to provide u-healthcare services in a cost-effective manner by utilizing cloud computing infrastructure. Our model comprises the smartphone's embedded triaxial accelerometer to sense body movements and a cloud server to store and process the sensory data for numerous kinds of services. We compute time- and frequency-domain features over the raw signals and evaluate different machine learning algorithms to identify an accurate activity recognition model for four kinds of physical activities (i.e., walking, running, cycling and hopping). During our experiments we found that the Support Vector Machine (SVM) algorithm outperforms its counterparts for the aforementioned physical activities. Furthermore, we also explain how the smartphone application and cloud server communicate with each other.
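A minimal sketch of the feature-extraction and classification pipeline described above, using scikit-learn's SVM; the synthetic tri-axial accelerometer windows, sampling rate, and feature choices (per-axis mean, standard deviation, and dominant FFT magnitude) are illustrative assumptions, not the published feature set.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
FS, WIN = 50, 128                     # assumed sampling rate (Hz) and window length (samples)
ACTIVITIES = ["walking", "running", "cycling", "hopping"]

def synthetic_window(activity):
    """Crude stand-in for a tri-axial accelerometer window for one activity."""
    freq = {"walking": 1.8, "running": 3.0, "cycling": 1.2, "hopping": 2.4}[activity]
    amp = {"walking": 1.0, "running": 2.5, "cycling": 0.8, "hopping": 3.0}[activity]
    t = np.arange(WIN) / FS
    sig = amp * np.sin(2 * np.pi * freq * t)
    return np.stack([sig, 0.5 * sig, 0.2 * sig]) + rng.normal(0, 0.3, size=(3, WIN))

def features(window):
    """Time- and frequency-domain features per axis: mean, std, dominant FFT magnitude."""
    spec = np.abs(np.fft.rfft(window, axis=1))
    return np.concatenate([window.mean(axis=1), window.std(axis=1), spec[:, 1:].max(axis=1)])

X = np.array([features(synthetic_window(a)) for a in ACTIVITIES for _ in range(60)])
y = np.array([a for a in ACTIVITIES for _ in range(60)])
clf = SVC(kernel="rbf", C=10.0, gamma="scale")
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```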
A Cerebellar-model Associative Memory as a Generalized Random-access Memory
NASA Technical Reports Server (NTRS)
Kanerva, Pentti
1989-01-01
A versatile neural-net model is explained in terms familiar to computer scientists and engineers. It is called the sparse distributed memory, and it is a random-access memory for very long words (for patterns with thousands of bits). Its potential utility is the result of several factors: (1) a large pattern representing an object or a scene or a moment can encode a large amount of information about what it represents; (2) this information can serve as an address to the memory, and it can also serve as data; (3) the memory is noise tolerant--the information need not be exact; (4) the memory can be made arbitrarily large and hence an arbitrary amount of information can be stored in it; and (5) the architecture is inherently parallel, allowing large memories to be fast. Such memories can become important components of future computers.
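A minimal sketch of a sparse distributed memory of the kind described above, assuming binary address/data vectors, randomly placed hard locations, and a Hamming-radius activation rule; the word length, location count, and radius are illustrative parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, RADIUS = 256, 2000, 111          # word length, number of hard locations, activation radius

addresses = rng.integers(0, 2, size=(M, N))   # fixed random hard-location addresses
counters = np.zeros((M, N), dtype=int)        # data counters at each location

def active(addr):
    """Hard locations whose address lies within the Hamming radius of the cue."""
    return np.count_nonzero(addresses != addr, axis=1) <= RADIUS

def write(addr, data):
    sel = active(addr)
    counters[sel] += np.where(data == 1, 1, -1)   # increment for 1-bits, decrement for 0-bits

def read(addr):
    sel = active(addr)
    return (counters[sel].sum(axis=0) > 0).astype(int)

pattern = rng.integers(0, 2, size=N)
write(pattern, pattern)                           # autoassociative store

noisy = pattern.copy()
flip = rng.choice(N, size=30, replace=False)      # corrupt ~12% of the bits
noisy[flip] ^= 1
recalled = read(noisy)
print("bits recovered correctly:", int((recalled == pattern).sum()), "of", N)
```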
Programmable energy landscapes for kinetic control of DNA strand displacement.
Machinek, Robert R F; Ouldridge, Thomas E; Haley, Natalie E C; Bath, Jonathan; Turberfield, Andrew J
2014-11-10
DNA is used to construct synthetic systems that sense, actuate, move and compute. The operation of many dynamic DNA devices depends on toehold-mediated strand displacement, by which one DNA strand displaces another from a duplex. Kinetic control of strand displacement is particularly important in autonomous molecular machinery and molecular computation, in which non-equilibrium systems are controlled through rates of competing processes. Here, we introduce a new method based on the creation of mismatched base pairs as kinetic barriers to strand displacement. Reaction rate constants can be tuned across three orders of magnitude by altering the position of such a defect without significantly changing the stabilities of reactants or products. By modelling reaction free-energy landscapes, we explore the mechanistic basis of this control mechanism. We also demonstrate that oxDNA, a coarse-grained model of DNA, is capable of accurately predicting and explaining the impact of mismatches on displacement kinetics.
ERIC Educational Resources Information Center
Hofstetter, Fred T.
Dealing exclusively with instructional computing, this paper describes how computers are delivering instruction in a wide variety of subjects to students of all ages and explains why computer-based education is currently having a profound impact on education. After a discussion of roots and origins, computer applications are described for…
Parent's Guide to Computers in Education.
ERIC Educational Resources Information Center
Moursund, David
Addressed to the parents of children taking computer courses in school, this booklet outlines the rationales for computer use in schools and explains for a lay audience the features and functions of computers. A look at the school of the future shows computers aiding the study of reading, writing, arithmetic, geography, and history. The features…
Modelling Trial-by-Trial Changes in the Mismatch Negativity
Lieder, Falk; Daunizeau, Jean; Garrido, Marta I.; Friston, Karl J.; Stephan, Klaas E.
2013-01-01
The mismatch negativity (MMN) is a differential brain response to violations of learned regularities. It has been used to demonstrate that the brain learns the statistical structure of its environment and predicts future sensory inputs. However, the algorithmic nature of these computations and the underlying neurobiological implementation remain controversial. This article introduces a mathematical framework with which competing ideas about the computational quantities indexed by MMN responses can be formalized and tested against single-trial EEG data. This framework was applied to five major theories of the MMN, comparing their ability to explain trial-by-trial changes in MMN amplitude. Three of these theories (predictive coding, model adjustment, and novelty detection) were formalized by linking the MMN to different manifestations of the same computational mechanism: approximate Bayesian inference according to the free-energy principle. We thereby propose a unifying view on three distinct theories of the MMN. The relative plausibility of each theory was assessed against empirical single-trial MMN amplitudes acquired from eight healthy volunteers in a roving oddball experiment. Models based on the free-energy principle provided more plausible explanations of trial-by-trial changes in MMN amplitude than models representing the two more traditional theories (change detection and adaptation). Our results suggest that the MMN reflects approximate Bayesian learning of sensory regularities, and that the MMN-generating process adjusts a probabilistic model of the environment according to prediction errors. PMID:23436989
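A minimal sketch of one kind of computational quantity such a framework can formalize, assuming a simple Bayesian (Dirichlet-count) learner of tone probabilities in a roving oddball sequence with exponential forgetting; the per-tone surprise it computes is the sort of trial-by-trial regressor that could be fitted against single-trial MMN amplitudes, and all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Roving oddball: trains of repeated tones; the first tone of each train is a "deviant".
tones = []
for _ in range(40):
    tone = rng.integers(0, 5)               # one of five possible tone frequencies
    tones.extend([tone] * rng.integers(3, 9))

counts = np.ones(5)                          # Dirichlet pseudo-counts over the five tones
decay = 0.95                                 # exponential forgetting of old evidence
surprise = []
for tone in tones:
    p = counts / counts.sum()                # current predictive distribution
    surprise.append(-np.log(p[tone]))        # Shannon surprise of the observed tone
    counts = decay * counts
    counts[tone] += 1.0                      # Bayesian update with the new observation

surprise = np.array(surprise)
is_deviant = np.array([i == 0 or tones[i] != tones[i - 1] for i in range(len(tones))])
print("mean surprise, deviants :", surprise[is_deviant].mean())
print("mean surprise, standards:", surprise[~is_deviant].mean())
```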
Self-Associations Influence Task-Performance through Bayesian Inference
Bengtsson, Sara L.; Penny, Will D.
2013-01-01
The way we think about ourselves impacts greatly on our behavior. This paper describes a behavioral study and a computational model that shed new light on this important area. Participants were primed “clever” and “stupid” using a scrambled sentence task, and we measured the effect on response time and error-rate on a rule-association task. First, we observed a confirmation bias effect in that associations to being “stupid” led to a gradual decrease in performance, whereas associations to being “clever” did not. Second, we observed that the activated self-concepts selectively modified attention toward one’s performance. There was an early to late double dissociation in RTs in that primed “clever” resulted in RT increase following error responses, whereas primed “stupid” resulted in RT increase following correct responses. We propose a computational model of subjects’ behavior based on the logic of the experimental task that involves two processes; memory for rules and the integration of rules with subsequent visual cues. The model incorporates an adaptive decision threshold based on Bayes rule, whereby decision thresholds are increased if integration was inferred to be faulty. Fitting the computational model to experimental data confirmed our hypothesis that priming affects the memory process. This model explains both the confirmation bias and double dissociation effects and demonstrates that Bayesian inferential principles can be used to study the effect of self-concepts on behavior. PMID:23966937
Antonietti, Alberto; Casellato, Claudia; Garrido, Jesús A; Luque, Niceto R; Naveros, Francisco; Ros, Eduardo; D' Angelo, Egidio; Pedrocchi, Alessandra
2016-01-01
In this study, we defined a realistic cerebellar model through the use of artificial spiking neural networks, testing it in computational simulations that reproduce associative motor tasks in multiple sessions of acquisition and extinction. By evolutionary algorithms, we tuned the cerebellar microcircuit to find the near-optimal plasticity mechanism parameters that best reproduced human-like behavior in eye blink classical conditioning, one of the most extensively studied paradigms related to the cerebellum. We used two models: one with only the cortical plasticity and another including two additional plasticity sites at the nuclear level. First, both spiking cerebellar models were able to reproduce the real human behaviors well, in terms of both "timing" and "amplitude", expressing rapid acquisition, stable late acquisition, rapid extinction, and faster reacquisition of an associative motor task. Even though the model with only the cortical plasticity site showed good learning capabilities, the model with distributed plasticity produced faster and more stable acquisition of conditioned responses in the reacquisition phase. This behavior is explained by the effect of the nuclear plasticities, which have slow dynamics and can express memory consolidation and saving. We showed how the spiking dynamics of multiple interactive neural mechanisms implicitly drive multiple essential components of complex learning processes. This study presents a very advanced computational model, developed together by biomedical engineers, computer scientists, and neuroscientists. Given its realistic features, the proposed model can provide confirmations of, and suggestions about, neurophysiological and pathological hypotheses and can be used in challenging clinical applications.
Ferrante, Michele; Blackwell, Kim T.; Migliore, Michele; Ascoli, Giorgio A.
2012-01-01
The identification and characterization of potential pharmacological targets in neurology and psychiatry is a fundamental problem at the intersection between medicinal chemistry and the neurosciences. Exciting new techniques in proteomics and genomics have fostered rapid progress, opening numerous questions as to the functional consequences of ligand binding at the systems level. Psycho- and neuro-active drugs typically work in nerve cells by affecting one or more aspects of electrophysiological activity. Thus, an integrated understanding of neuropharmacological agents requires bridging the gap between their molecular mechanisms and the biophysical determinants of neuronal function. Computational neuroscience and bioinformatics can play a major role in this functional connection. Robust quantitative models exist describing all major active membrane properties under endogenous and exogenous chemical control. These include voltage-dependent ionic channels (sodium, potassium, calcium, etc.), synaptic receptor channels (e.g. glutamatergic, GABAergic, cholinergic), and G protein coupled signaling pathways (protein kinases, phosphatases, and other enzymatic cascades). This brief review of neuromolecular medicine from the computational perspective provides compelling examples of how simulations can elucidate, explain, and predict the effect of chemical agonists, antagonists, and modulators in the nervous system. PMID:18855673
PARALLELISATION OF THE MODEL-BASED ITERATIVE RECONSTRUCTION ALGORITHM DIRA.
Örtenberg, A; Magnusson, M; Sandborg, M; Alm Carlsson, G; Malusek, A
2016-06-01
New paradigms for parallel programming have been devised to simplify software development on multi-core processors and many-core graphical processing units (GPU). Despite their obvious benefits, the parallelisation of existing computer programs is not an easy task. In this work, the use of the Open Multiprocessing (OpenMP) and Open Computing Language (OpenCL) frameworks is considered for the parallelisation of the model-based iterative reconstruction algorithm DIRA with the aim to significantly shorten the code's execution time. Selected routines were parallelised using OpenMP and OpenCL libraries; some routines were converted from MATLAB to C and optimised. Parallelisation of the code with the OpenMP was easy and resulted in an overall speedup of 15 on a 16-core computer. Parallelisation with OpenCL was more difficult owing to differences between the central processing unit and GPU architectures. The resulting speedup was substantially lower than the theoretical peak performance of the GPU; the cause was explained. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Othman, M. N. K.; Zuradzman, M. Razlan; Hazry, D.
2014-12-04
This paper explains the analysis of the internal air flow velocity of a bladeless vertical takeoff and landing (VTOL) Micro Aerial Vehicle (MAV) hemisphere body. In mechanical design, before producing a prototype model, several analyses should be done to ensure the product's effectiveness and efficiency. Two types of analysis can be done in mechanical design: mathematical modeling and computational fluid dynamics. In this analysis, we used computational fluid dynamics (CFD) with the SolidWorks Flow Simulation software. The idea arose to overcome the problems of the ordinary quadrotor UAV, which has a larger size due to using four rotors and whose propellers are exposed to the environment. The bladeless MAV body is designed to protect all electronic parts, which means it can be used in rainy conditions. It has also been made to increase the thrust produced by the ducted propeller compared to an exposed propeller. From the analysis results, the air flow velocity at the ducted area increased to twice that of the inlet air. This means that the duct contributes to the increase of air velocity.
NASA Astrophysics Data System (ADS)
Othman, M. N. K.; Zuradzman, M. Razlan; Hazry, D.; Khairunizam, Wan; Shahriman, A. B.; Yaacob, S.; Ahmed, S. Faiz; Hussain, Abadalsalam T.
2014-12-01
This paper explains the analysis of the internal air flow velocity of a bladeless vertical takeoff and landing (VTOL) Micro Aerial Vehicle (MAV) hemisphere body. In mechanical design, before producing a prototype model, several analyses should be done to ensure the product's effectiveness and efficiency. Two types of analysis can be done in mechanical design: mathematical modeling and computational fluid dynamics. In this analysis, we used computational fluid dynamics (CFD) with the SolidWorks Flow Simulation software. The idea arose to overcome the problems of the ordinary quadrotor UAV, which has a larger size due to using four rotors and whose propellers are exposed to the environment. The bladeless MAV body is designed to protect all electronic parts, which means it can be used in rainy conditions. It has also been made to increase the thrust produced by the ducted propeller compared to an exposed propeller. From the analysis results, the air flow velocity at the ducted area increased to twice that of the inlet air. This means that the duct contributes to the increase of air velocity.
Dynamic divisive normalization predicts time-varying value coding in decision-related circuits.
Louie, Kenway; LoFaro, Thomas; Webb, Ryan; Glimcher, Paul W
2014-11-26
Normalization is a widespread neural computation, mediating divisive gain control in sensory processing and implementing a context-dependent value code in decision-related frontal and parietal cortices. Although decision-making is a dynamic process with complex temporal characteristics, most models of normalization are time-independent and little is known about the dynamic interaction of normalization and choice. Here, we show that a simple differential equation model of normalization explains the characteristic phasic-sustained pattern of cortical decision activity and predicts specific normalization dynamics: value coding during initial transients, time-varying value modulation, and delayed onset of contextual information. Empirically, we observe these predicted dynamics in saccade-related neurons in monkey lateral intraparietal cortex. Furthermore, such models naturally incorporate a time-weighted average of past activity, implementing an intrinsic reference-dependence in value coding. These results suggest that a single network mechanism can explain both transient and sustained decision activity, emphasizing the importance of a dynamic view of normalization in neural coding. Copyright © 2014 the authors.
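A minimal sketch of a dynamic divisive-normalization circuit of the general kind described above, assuming a pair of first-order equations in which each unit's drive is divided by a slowly integrating pool of all inputs; the time constants and the form of the pooling are illustrative, not the published model. The simulation reproduces the qualitative phasic-then-sustained response profile.

```python
import numpy as np

def simulate(values, t_end=1.0, dt=0.001, tau_r=0.02, tau_g=0.1, sigma=0.1):
    """Firing rates R_i driven by V_i and divisively normalised by a slower pooled signal G."""
    n = len(values)
    R, G = np.zeros(n), 0.0
    trace = []
    for step in range(int(t_end / dt)):
        V = values if step * dt > 0.1 else np.zeros(n)    # stimuli switch on at t = 0.1 s
        G += dt / tau_g * (-G + V.sum())                  # slow normalisation pool
        R += dt / tau_r * (-R + V / (sigma + G))          # fast, divisively normalised drive
        trace.append(R.copy())
    return np.array(trace)

trace = simulate(np.array([1.0, 0.5, 0.25]))
print("peak rates     :", np.round(trace.max(axis=0), 2))
print("sustained rates:", np.round(trace[-1], 2))
# The fast rate dynamics outrun the slower pool, producing a phasic transient that relaxes
# to a lower, context-dependent (normalised) sustained level.
```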
1992-05-01
regression analysis. The strength of any one variable can be estimated along with the strength of the entire model in explaining the variance of percent... applicable a set of damage functions is to a particular situation. Sometimes depth-damage functions are embedded in computer programs which calculate...functions. Chapter Six concludes with recommended policies on the development and application of depth-damage functions.
Computational cognitive modeling of the temporal dynamics of fatigue from sleep loss.
Walsh, Matthew M; Gunzelmann, Glenn; Van Dongen, Hans P A
2017-12-01
Computational models have become common tools in psychology. They provide quantitative instantiations of theories that seek to explain the functioning of the human mind. In this paper, we focus on identifying deep theoretical similarities between two very different models. Both models are concerned with how fatigue from sleep loss impacts cognitive processing. The first is based on the diffusion model and posits that fatigue decreases the drift rate of the diffusion process. The second is based on the Adaptive Control of Thought - Rational (ACT-R) cognitive architecture and posits that fatigue decreases the utility of candidate actions leading to microlapses in cognitive processing. A biomathematical model of fatigue is used to control drift rate in the first account and utility in the second. We investigated the predicted response time distributions of these two integrated computational cognitive models for performance on a psychomotor vigilance test under conditions of total sleep deprivation, simulated shift work, and sustained sleep restriction. The models generated equivalent predictions of response time distributions with excellent goodness-of-fit to the human data. More importantly, although the accounts involve different modeling approaches and levels of abstraction, they represent the effects of fatigue in a functionally equivalent way: in both, fatigue decreases the signal-to-noise ratio in decision processes and decreases response inhibition. This convergence suggests that sleep loss impairs psychomotor vigilance performance through degradation of the quality of cognitive processing, which provides a foundation for systematic investigation of the effects of sleep loss on other aspects of cognition. Our findings illustrate the value of treating different modeling formalisms as vehicles for discovery.
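A minimal sketch of the first account described above, assuming a one-boundary (single-response) diffusion process for the psychomotor vigilance test in which fatigue is implemented as a reduced drift rate; parameter values are illustrative, the 500 ms lapse criterion is the commonly used PVT convention, and the ACT-R/microlapse account is not reproduced here.

```python
import numpy as np

def simulate_rts(drift, n_trials=1000, boundary=1.0, noise=1.0, dt=0.002,
                 non_decision=0.18, deadline=10.0, seed=0):
    """One-boundary drift-diffusion model of simple reaction time (as in the PVT)."""
    rng = np.random.default_rng(seed)
    rts = []
    for _ in range(n_trials):
        x, t = 0.0, 0.0
        while x < boundary and t < deadline:
            x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        rts.append(non_decision + t)
    return np.array(rts)

rested = simulate_rts(drift=4.0)
fatigued = simulate_rts(drift=2.0)          # fatigue lowers the signal-to-noise ratio
for label, rts in [("rested", rested), ("fatigued", fatigued)]:
    lapses = (rts > 0.5).mean() * 100       # PVT lapses: responses slower than 500 ms
    print(f"{label:9s} median RT = {np.median(rts)*1000:5.0f} ms, lapses = {lapses:4.1f}%")
```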
Petruk, Ariel A.; Bartesaghi, Silvina; Trujillo, Madia; Estrin, Darío A.; Murgida, Daniel; Kalyanaraman, Balaraman; Marti, Marcelo A.; Radi, Rafael
2012-01-01
Experimental studies in hemeproteins and model Tyr/Cys-containing peptides exposed to oxidizing and nitrating species suggest that intramolecular electron transfer (IET) between tyrosyl radicals (Tyr-O●) and Cys residues controls oxidative modification yields. The molecular basis of this IET process is not sufficiently understood with structural atomic detail. Herein, we analyzed using molecular dynamics and quantum mechanics-based computational calculations, mechanistic possibilities for the radical transfer reaction in Tyr/Cys-containing peptides in solution and correlated them with existing experimental data. Our results support that Tyr-O● to Cys radical transfer is mediated by an acid/base equilibrium that involves deprotonation of Cys to form the thiolate, followed by a likely rate-limiting transfer process to yield cysteinyl radical and a Tyr phenolate; proton uptake by Tyr completes the reaction. Both, the pKa values of the Tyr phenol and Cys thiol groups and the energetic and kinetics of the reversible IET are revealed as key physico-chemical factors. The proposed mechanism constitutes a case of sequential, acid/base equilibrium-dependent and solvent-mediated, proton-coupled electron transfer and explains the dependency of oxidative yields in Tyr/Cys peptides as a function of the number of alanine spacers. These findings contribute to explain oxidative modifications in proteins that contain sequence and/or spatially close Tyr-Cys residues. PMID:22640642
de Vries, H R; Aalders, M C G; Faber, D J; van den Wijngaard, J P H M; Nikkels, P G J; van Gemert, M J C
2006-01-01
Our aim was to show that the colour difference between brighter and darker red, occasionally observed as an oscillating boundary in the recipient and donor parts of an arterioarterial anastomosis in severe twin-twin transfusion syndrome (TTTS), is a consequence of natural differences in blood oxygenation and hematocrit developing between donor and recipient twins. As a method, we defined a theoretical model of the placenta with dimensions from pathology examination. From the literature we determined the optical absorption and scattering properties of all tissue components, and hematocrit and oxygen saturation values for donor and recipient twins. From our placental model we simulated the spectrum of back-scattered light by standard Monte Carlo photon propagation computations and calculated the colour of chorionic arterial and venous blood vessels by applying the physics theory of colour perception. Our computations demonstrate that recipient arterial blood is somewhat brighter red than donor arterial blood. The strong colour differences seen after laser coagulation of all anastomoses but the arterioarterial were explained by an angiotensin II cut-off in the recipient due to obliteration of arteriovenous anastomoses, causing a temporary increase in recipient placental perfusion and hence in blood oxygenation. In conclusion, natural differences in recipient versus donor blood oxygen saturation and hematocrit in severe TTTS explain the colour differences between brighter and darker red observed in the recipient and donor parts of arterioarterial anastomoses.
Ali, Amgad Ahmed; Hashim, Abdul Manaf
2016-12-01
We demonstrate a systematic computational analysis of the measured optical and charge transport properties of the spray pyrolysis-grown ZnO nanostructures, i.e. nanosphere clusters (NSCs), nanorods (NRs) and nanowires (NWs), for the first time. The calculated absorbance spectra based on time-dependent density functional theory (TD-DFT) show very close similarity with the measured behaviours under UV light. The atomic models and energy level diagrams for the grown nanostructures were developed and discussed to explain the structural defects and band gap. The induced stresses in the lattices of ZnO NSCs that formed during the pyrolysis process seem to cause the narrowing of the gap between the energy levels. ZnO NWs and NRs show homogeneous distribution of the LUMO and HOMO orbitals over the entire heterostructure. Such distribution contributes to the reduction of the band gap down to 2.8 eV, which has been confirmed to be in good agreement with the experimental results. ZnO NWs and NRs exhibited better emission behaviours under UV excitation compared with ZnO NSCs and thin films, as their visible-range emissions are strongly quenched. Based on the electrochemical impedance measurements, the electrical models and electrostatic potential maps were developed to calculate the electron lifetime and to explain the mobility or diffusion behaviours in the grown nanostructures, respectively.
Pallipurath, Anuradha R; Skelton, Jonathan M; Warren, Mark R; Kamali, Naghmeh; McArdle, Patrick; Erxleben, Andrea
2015-10-05
Understanding the polymorphism exhibited by organic active-pharmaceutical ingredients (APIs), in particular the relationships between crystal structure and the thermodynamics of polymorph stability, is vital for the production of more stable drugs and better therapeutics, and for the economics of the pharmaceutical industry in general. In this article, we report a detailed study of the structure-property relationships among the polymorphs of the model API, Sulfamerazine. Detailed experimental characterization using synchrotron radiation is complemented by computational modeling of the lattice dynamics and mechanical properties, in order to study the origin of differences in millability and to investigate the thermodynamics of the phase equilibria. Good agreement is observed between the simulated phonon spectra and mid-infrared and Raman spectra. The presence of slip planes, which are found to give rise to low-frequency lattice vibrations, explains the higher millability of Form I compared to Form II. Energy/volume curves for the three polymorphs, together with the temperature dependence of the thermodynamic free energy computed from the phonon frequencies, explain why Form II converts to Form I at high temperature, whereas Form III is a rare polymorph that is difficult to isolate. The combined experimental and theoretical approach employed here should be generally applicable to the study of other systems that exhibit polymorphism.
Ultrasound breast imaging using frequency domain reverse time migration
NASA Astrophysics Data System (ADS)
Roy, O.; Zuberi, M. A. H.; Pratt, R. G.; Duric, N.
2016-04-01
Conventional ultrasonography reconstruction techniques, such as B-mode, are based on a simple wave propagation model derived from a high frequency approximation. Therefore, to minimize model mismatch, the central frequency of the input pulse is typically chosen between 3 and 15 megahertz. Despite the increase in theoretical resolution, operating at higher frequencies comes at the cost of lower signal-to-noise ratio. This ultimately degrades the image contrast and overall quality at higher imaging depths. To address this issue, we investigate a reflection imaging technique, known as reverse time migration, which uses a more accurate propagation model for reconstruction. We present preliminary simulation results as well as physical phantom image reconstructions obtained using data acquired with a breast imaging ultrasound tomography prototype. The original reconstructions are filtered to remove low-wavenumber artifacts that arise due to the inclusion of the direct arrivals. We demonstrate the advantage of using an accurate sound speed model in the reverse time migration process. We also explain how the increase in computational complexity can be mitigated using a frequency domain approach and a parallel computing platform.
Scherbaum, Stefan; Dshemuchadse, Maja; Goschke, Thomas
2012-01-01
Temporal discounting denotes the fact that individuals prefer smaller rewards delivered sooner over larger rewards delivered later, often to a greater extent than suggested by normative economic theories. In this article, we identify three lines of research studying this phenomenon which aim (i) to describe temporal discounting mathematically, (ii) to explain observed choice behavior psychologically, and (iii) to predict the influence of specific factors on intertemporal decisions. We then opt for an approach integrating postulated mechanisms and empirical findings from these three lines of research. Our approach focuses on the dynamical properties of decision processes and is based on computational modeling. We present a dynamic connectionist model of intertemporal choice focusing on the role of self-control and time framing as two central factors determining choice behavior. Results of our simulations indicate that the two influences interact with each other, and we present experimental data supporting this prediction. We conclude that computational modeling of the decision process dynamics can advance the integration of different strands of research in intertemporal choice. PMID:23181048
NASA Technical Reports Server (NTRS)
Yu, Xiaolong; Lewis, Edwin R.
1989-01-01
It is shown that noise can be an important element in the translation of neuronal generator potentials (summed inputs) to neuronal spike trains (outputs), creating or expanding a range of amplitudes over which the spike rate is proportional to the generator potential amplitude. Noise converts the basically nonlinear operation of a spike initiator into a nearly linear modulation process. This linearization effect of noise is examined in a simple intuitive model of a static threshold and in a more realistic computer simulation of a spike initiator based on the Hodgkin-Huxley (HH) model. The results are qualitatively similar; in each case larger noise amplitude results in a larger range of nearly linear modulation. The computer simulation of the HH model with noise shows linear and nonlinear features that were earlier observed in spike data obtained from the VIIIth nerve of the bullfrog. This suggests that these features can be explained in terms of spike initiator properties, and it also suggests that the HH model may be useful for representing basic spike initiator properties in vertebrates.
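A minimal numerical illustration of the linearization idea (our sketch, not the authors' model): with Gaussian noise added to a static threshold, the probability of crossing the threshold, and hence the expected spike rate, becomes an approximately linear function of the generator potential over a range that widens with noise amplitude. The threshold and noise values are assumptions.

```python
import numpy as np
from scipy.stats import norm

threshold = 1.0                         # static spike threshold (arbitrary units)
potentials = np.linspace(0.0, 2.0, 9)   # generator potential amplitudes

for sigma in (0.05, 0.2, 0.5):          # noise standard deviations
    # Probability that potential + Gaussian noise exceeds threshold on a trial;
    # the expected spike rate is proportional to this probability.
    p_fire = norm.cdf((potentials - threshold) / sigma)
    print(f"sigma={sigma}: " + " ".join(f"{p:.2f}" for p in p_fire))
# With small sigma the output jumps abruptly from 0 to 1 near threshold; with larger
# sigma it rises smoothly, giving a wider range of nearly linear modulation.
```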
Computational Aspects of N-Mixture Models
Dennis, Emily B; Morgan, Byron JT; Ridout, Martin S
2015-01-01
The N-mixture model is widely used to estimate the abundance of a population in the presence of unknown detection probability from only a set of counts subject to spatial and temporal replication (Royle, 2004, Biometrics 60, 105–115). We explain and exploit the equivalence of N-mixture and multivariate Poisson and negative-binomial models, which provides powerful new approaches for fitting these models. We show that particularly when detection probability and the number of sampling occasions are small, infinite estimates of abundance can arise. We propose a sample covariance as a diagnostic for this event, and demonstrate its good performance in the Poisson case. Infinite estimates may be missed in practice, due to numerical optimization procedures terminating at arbitrarily large values. It is shown that the use of a bound, K, for an infinite summation in the N-mixture likelihood can result in underestimation of abundance, so that default values of K in computer packages should be avoided. Instead we propose a simple automatic way to choose K. The methods are illustrated by analysis of data on Hermann's tortoise Testudo hermanni. PMID:25314629
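To illustrate the role of the summation bound K discussed above (a sketch, not the authors' implementation), the Poisson N-mixture likelihood for one site truncates an infinite sum over the latent abundance N; choosing K too small biases the fit. The data and parameter values below are invented.

```python
import numpy as np
from scipy.stats import poisson, binom

def site_likelihood(counts, lam, p, K):
    """N-mixture likelihood contribution of one site, truncating the sum over N at K."""
    counts = np.asarray(counts)
    total = 0.0
    for N in range(counts.max(), K + 1):
        total += poisson.pmf(N, lam) * np.prod(binom.pmf(counts, N, p))
    return total

counts = [3, 2, 4]          # replicated counts at one site (illustrative data)
for K in (5, 20, 100, 400):
    print(K, site_likelihood(counts, lam=10.0, p=0.3, K=K))
# The value stabilizes once K is large enough; a K that is too small understates the
# likelihood, which is why default bounds in software can bias abundance estimates.
```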
Liu, Chen-Guang; Li, Zhi-Yang; Hao, Yue; Xia, Juan; Bai, Feng-Wu; Mehmood, Muhammad Aamer
2018-05-01
Flocculation plays an important role in the immobilized fermentation of biofuels and biochemicals. It is essential to understand the flocculation phenomenon at the physical and molecular scales; however, flocs cannot be studied directly because of their fragile nature. Hence, the present study focuses on the morphological specificities of yeast floc formation and sedimentation via computer simulation with a single-floc growth model based on the Diffusion-Limited Aggregation (DLA) model. The impact of shear force, adsorption, and cell propagation on porosity and floc size is systematically illustrated. Strong shear force and weak adsorption reduced floc size but had little impact on porosity. In addition, cell propagation increased the compactness of flocs, enabling them to grow to a larger size. A multiple-floc growth model was then developed to explain sedimentation at various initial floc sizes. Both models exhibited qualitative agreement with available experimental data. By regulating the operating constraints during fermentation, the present study will help identify optimal conditions to control the floc size distribution for efficient fermentation and harvesting. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
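For illustration, a bare-bones on-lattice diffusion-limited aggregation sketch in Python is given below; it is our own minimal example and omits the paper's shear, adsorption, and cell-propagation terms. The lattice size, launch radius, and particle count are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
L = 81
centre = L // 2
grid = np.zeros((L, L), dtype=bool)
grid[centre, centre] = True              # seed particle
cluster_radius = 0
steps = ((1, 0), (-1, 0), (0, 1), (0, -1))

def grow_one_particle():
    """Release a walker just outside the cluster and let it stick on first contact."""
    global cluster_radius
    launch = min(cluster_radius + 5, centre - 2)
    theta = rng.uniform(0, 2 * np.pi)
    x = int(round(centre + launch * np.cos(theta)))
    y = int(round(centre + launch * np.sin(theta)))
    while True:
        dx, dy = steps[rng.integers(4)]
        x, y = x + dx, y + dy
        if not (0 < x < L - 1 and 0 < y < L - 1):
            return False                 # walker escaped the lattice: discard it
        if grid[x - 1, y] or grid[x + 1, y] or grid[x, y - 1] or grid[x, y + 1]:
            grid[x, y] = True            # touched the cluster: stick
            cluster_radius = max(cluster_radius, int(np.hypot(x - centre, y - centre)))
            return True

attached = 0
while attached < 400:
    attached += grow_one_particle()
print("particles:", int(grid.sum()), " cluster radius:", cluster_radius)
```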
Topics in Modeling of Cochlear Dynamics: Computation, Response and Stability Analysis
NASA Astrophysics Data System (ADS)
Filo, Maurice G.
This thesis touches upon several topics in cochlear modeling. Throughout the literature, mathematical models of the cochlea vary according to the degree of biological realism to be incorporated. This thesis casts the cochlear model as a continuous space-time dynamical system using operator language. This framework encompasses a wider class of cochlear models and makes the dynamics more transparent and easier to analyze before applying any numerical method to discretize space. In fact, several numerical methods are investigated to study the computational efficiency of the finite dimensional realizations in space. Furthermore, we study the effects of the active gain perturbations on the stability of the linearized dynamics. The stability analysis is used to explain possible mechanisms underlying spontaneous otoacoustic emissions and tinnitus. Dynamic Mode Decomposition (DMD) is introduced as a useful tool to analyze the response of nonlinear cochlear models. Cochlear response features are illustrated using DMD, which has the advantage of explicitly revealing the spatial modes of vibration occurring in the Basilar Membrane (BM). Finally, we address the dynamic estimation problem of BM vibrations using Extended Kalman Filters (EKF). Given the limitations of noninvasive sensing schemes, such algorithms are indispensable for estimating the dynamic behavior of a living cochlea.
Computational principles underlying recognition of acoustic signals in grasshoppers and crickets.
Ronacher, Bernhard; Hennig, R Matthias; Clemens, Jan
2015-01-01
Grasshoppers and crickets independently evolved hearing organs and acoustic communication. They differ considerably in the organization of their auditory pathways and in the complexity of their songs, which are essential for mate attraction. Recent approaches aimed to describe the behavioral preference functions of females in both taxa with a simple modeling framework. The basic structure of the model consists of three processing steps: (1) feature extraction with a bank of 'LN models', each containing a linear filter followed by a nonlinearity, (2) temporal integration, and (3) linear combination. The specific properties of the filters and nonlinearities were determined using a genetic learning algorithm trained on a large set of different song features and the corresponding behavioral response scores. The model showed an excellent prediction of the behavioral responses to the tested songs. Most remarkably, in both taxa the genetic algorithm found Gabor-like functions as the optimal filter shapes. By slight modifications of Gabor filters several types of preference functions could be modeled, which are observed in different cricket species. Furthermore, this model was able to explain several so far enigmatic results in grasshoppers. The computational approach offers a remarkably simple framework that can account for phenotypically rather different preference functions across several taxa.
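A schematic numerical sketch of the three-step LN framework described above (our illustration with made-up filters and stimuli, not the fitted models from the paper): a Gabor-shaped temporal filter applied to a song's amplitude envelope, a static nonlinearity, temporal integration, and a linear combination across feature channels.

```python
import numpy as np

dt = 0.001                                   # 1 ms resolution
t = np.arange(-0.05, 0.05, dt)               # 100 ms filter window

def gabor(t, f, sigma, phase):
    """Gabor-shaped temporal filter: Gaussian envelope times a cosine carrier."""
    return np.exp(-t**2 / (2 * sigma**2)) * np.cos(2 * np.pi * f * t + phase)

def ln_channel(envelope, filt, threshold):
    """One LN channel: linear filtering followed by a threshold-linear nonlinearity."""
    drive = np.convolve(envelope, filt, mode="same") * dt
    return np.maximum(drive - threshold, 0.0)

# Toy song: amplitude envelope of a pulse train (80 ms period, 40 ms pulses)
time = np.arange(0, 2.0, dt)
envelope = ((time % 0.08) < 0.04).astype(float)

filters = [gabor(t, f=12.0, sigma=0.015, phase=0.0),
           gabor(t, f=25.0, sigma=0.010, phase=np.pi / 2)]
weights = np.array([1.0, -0.5])              # step (3): linear combination of channels

# Step (2): temporal integration of each channel's output (here, its mean over time)
features = np.array([ln_channel(envelope, f, threshold=0.1).mean() for f in filters])
preference_score = float(weights @ features)
print("channel outputs:", features.round(4), " preference score:", round(preference_score, 4))
```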
Steimer, Andreas; Schindler, Kaspar
2015-01-01
Oscillations between high and low values of the membrane potential (UP and DOWN states, respectively) are a ubiquitous feature of cortical neurons during slow wave sleep and anesthesia. Nevertheless, only a surprisingly small number of quantitative studies have dealt with this phenomenon's implications for computation. Here we present a novel theory that explains on a detailed mathematical level the computational benefits of UP states. The theory is based on random sampling by means of interspike intervals (ISIs) of the exponential integrate and fire (EIF) model neuron, such that each spike is considered a sample whose analog value corresponds to the spike's preceding ISI. As we show, the EIF's exponential sodium current, which kicks in when balancing a noisy membrane potential around values close to the firing threshold, leads to a particularly simple, approximate relationship between the neuron's ISI distribution and input current. Approximation quality depends on the frequency spectrum of the current and is improved upon increasing the voltage baseline towards threshold. Thus, the conceptually simpler leaky integrate and fire neuron, which lacks such an additional current boost, performs consistently worse than the EIF and does not improve when the voltage baseline is increased. For the EIF, in contrast, the presented mechanism is particularly effective in the high-conductance regime, which is a hallmark feature of UP states. Our theoretical results are confirmed by accompanying simulations, which were conducted for input currents of varying spectral composition. Moreover, we provide analytical estimations of the range of ISI distributions the EIF neuron can sample from at a given approximation level. Such samples may be considered by any algorithmic procedure that is based on random sampling, such as Markov Chain Monte Carlo or message-passing methods. Finally, we explain how spike-based random sampling relates to existing computational theories about UP states during slow wave sleep and present possible extensions of the model in the context of spike-frequency adaptation.
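A minimal Euler-Maruyama sketch of an exponential integrate-and-fire neuron (illustrative parameters of our choosing, not the authors' code); raising the mean input current mimics a more depolarized, UP-like baseline and shifts the sampled ISI distribution.

```python
import numpy as np

def eif_isis(mu, sigma=3.0, duration=30.0, dt=1e-4, seed=0):
    """Euler-Maruyama simulation of an EIF neuron; returns its interspike intervals (s)."""
    tau, EL, VT, DeltaT = 0.02, -65.0, -50.0, 2.0     # s, mV, mV, mV
    V_reset, V_peak = -65.0, -30.0
    rng = np.random.default_rng(seed)
    V, t_last, isis = EL, 0.0, []
    for i in range(int(duration / dt)):
        dV = (-(V - EL) + DeltaT * np.exp((V - VT) / DeltaT) + mu) / tau * dt
        V += dV + sigma * np.sqrt(dt / tau) * rng.standard_normal()
        if V >= V_peak:                               # spike: record ISI and reset
            t = i * dt
            isis.append(t - t_last)
            t_last, V = t, V_reset
    return np.array(isis)

for mu in (12.0, 16.0):       # lower vs higher mean drive (DOWN-like vs UP-like baseline)
    isis = eif_isis(mu)
    msg = f"mu={mu}: {isis.size} spikes in 30 s"
    if isis.size > 1:
        msg += f", mean ISI={isis.mean():.3f} s, CV={isis.std() / isis.mean():.2f}"
    print(msg)
```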
Molecular Dynamics of Hot Dense Plasmas: New Horizons
NASA Astrophysics Data System (ADS)
Graziani, Frank
2011-10-01
We describe the status of a new time-dependent simulation capability for hot dense plasmas. The backbone of this multi-institutional computational and experimental effort--the Cimarron Project--is the massively parallel molecular dynamics (MD) code ``ddcMD''. The project's focus is material conditions such as exist in inertial confinement fusion experiments, and in many stellar interiors: high temperatures, high densities, significant electromagnetic fields, mixtures of high- and low-Z elements, and non-Maxwellian particle distributions. Of particular importance is our ability to incorporate into this classical MD code key atomic, radiative, and nuclear processes, so that their interacting effects under non-ideal plasma conditions can be investigated. This talk summarizes progress in computational methodology, discusses strengths and weaknesses of quantum statistical potentials as effective interactions for MD, explains the model used for quantum events possibly occurring in a collision and highlights some significant results obtained to date. This work is performed under the auspices of the U. S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
2012-01-01
Background: Excessive engagement in screen time has several immediate and long-term health implications among pre-school children. However, little is known about the factors that influence screen time in this age group. Therefore, the purpose of this study was to use the Ecologic Model of Sedentary Behavior as a guide to examine associations between intrapersonal, interpersonal, and physical environment factors within the home setting and screen time among pre-school children. Methods: Participants were 746 pre-school children (≤ 5 years old) from the Kingston, Ontario, Canada area. From May to September, 2011, parents completed a questionnaire regarding several intrapersonal (child demographics), interpersonal (family demographics, parental cognitions, parental behavior), and physical environment (television, computer, or video games in the bedroom) factors within the home setting. Parents also reported the average amount of time per day their child spent watching television and playing video/computer games. Associations were examined using linear and logistic regression models. Results: Most participants (93.7%) watched television and 37.9% played video/computer games. Several intrapersonal, interpersonal, and physical environment factors within the home setting were associated with screen time. More specifically, age, parental attitudes, parental barriers, parental descriptive norms, parental screen time, and having a television in the bedroom were positive predictors of screen time; whereas parental education, parental income, and parental self-efficacy were negative predictors of screen time in the linear regression analysis. Collectively these variables explained 64.2% of the variance in screen time. Parental cognitive factors (self-efficacy, attitudes, barriers, descriptive norms) at the interpersonal level explained a large portion (37.9%) of this variance. Conclusions: A large proportion of screen time in pre-school children was explained by factors within the home setting. Parental cognitive factors at the interpersonal level were of particular relevance. These findings suggest that interventions aiming to foster appropriate screen time habits in pre-school children may be most effective if they target parents for behavioral change. PMID:22823887
Evidence of common and separate eye and hand accumulators underlying flexible eye-hand coordination
Jana, Sumitash; Gopal, Atul
2016-01-01
Eye and hand movements are initiated by anatomically separate regions in the brain, and yet these movements can be flexibly coupled and decoupled, depending on the need. The computational architecture that enables this flexible coupling of independent effectors is not understood. Here, we studied the computational architecture that enables flexible eye-hand coordination using a drift diffusion framework, which predicts that the variability of the reaction time (RT) distribution scales with its mean. We show that a common stochastic accumulator to threshold, followed by a noisy effector-dependent delay, explains eye-hand RT distributions and their correlation in a visual search task that required decision-making, while an interactive eye and hand accumulator model did not. In contrast, in an eye-hand dual task, an interactive model better predicted the observed correlations and RT distributions than a common accumulator model. Notably, these two models could only be distinguished on the basis of the variability and not the means of the predicted RT distributions. Additionally, signatures of separate initiation signals were also observed in a small fraction of trials in the visual search task, implying that these distinct computational architectures were not a manifestation of the task design per se. Taken together, our results suggest two unique computational architectures for eye-hand coordination, with task context biasing the brain toward instantiating one of the two architectures. NEW & NOTEWORTHY Previous studies on eye-hand coordination have considered mainly the means of eye and hand reaction time (RT) distributions. Here, we leverage the approximately linear relationship between the mean and standard deviation of RT distributions, as predicted by the drift-diffusion model, to propose the existence of two distinct computational architectures underlying coordinated eye-hand movements. These architectures, for the first time, provide a computational basis for the flexible coupling between eye and hand movements. PMID:27784809
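A minimal sketch of the common-accumulator architecture with effector-specific delays (our illustration with invented parameter values, not the authors' model): a single diffusion-to-threshold decision stage shared by eye and hand, each followed by an independent noisy delay, yielding correlated RTs whose standard deviation scales with the mean.

```python
import numpy as np

rng = np.random.default_rng(2)

def decision_times(drift=4.0, boundary=1.0, noise=1.0, dt=0.001, n=2000):
    """First-passage times of a single (common) stochastic accumulator."""
    times = np.empty(n)
    for i in range(n):
        x, t = 0.0, 0.0
        while x < boundary:
            x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        times[i] = t
    return times

T = decision_times()
eye_rt = T + rng.normal(0.05, 0.01, T.size)    # eye: short, mildly noisy efferent delay
hand_rt = T + rng.normal(0.15, 0.03, T.size)   # hand: longer, noisier efferent delay

r = np.corrcoef(eye_rt, hand_rt)[0, 1]
print(f"mean eye RT {eye_rt.mean():.3f} s, mean hand RT {hand_rt.mean():.3f} s, correlation {r:.2f}")
print(f"SD/mean: eye {eye_rt.std() / eye_rt.mean():.2f}, hand {hand_rt.std() / hand_rt.mean():.2f}")
```

An interactive architecture would instead use two coupled accumulators; as the abstract notes, the two accounts are distinguished mainly by the variability, not the means, of the RT distributions they predict.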
The Computer Revolution. An Introduction to Computers. A Good Apple Activity Book for Grades 4-8.
ERIC Educational Resources Information Center
Colgren, John
This booklet is designed to introduce computers to children. A letter to parents is provided, explaining that a unit on computers will be taught which will discuss the major parts of the computer and programming in the computer language BASIC. Suggestions for teachers provide information on starting, the binary system, base two worksheet, binary…
NASA Astrophysics Data System (ADS)
Bender, Jason D.
Understanding hypersonic aerodynamics is important for the design of next-generation aerospace vehicles for space exploration, national security, and other applications. Ground-level experimental studies of hypersonic flows are difficult and expensive; thus, computational science plays a crucial role in this field. Computational fluid dynamics (CFD) simulations of extremely high-speed flows require models of chemical and thermal nonequilibrium processes, such as dissociation of diatomic molecules and vibrational energy relaxation. Current models are outdated and inadequate for advanced applications. We describe a multiscale computational study of gas-phase thermochemical processes in hypersonic flows, starting at the atomic scale and building systematically up to the continuum scale. The project was part of a larger effort centered on collaborations between aerospace scientists and computational chemists. We discuss the construction of potential energy surfaces for the N4, N2O2, and O4 systems, focusing especially on the multi-dimensional fitting problem. A new local fitting method named L-IMLS-G2 is presented and compared with a global fitting method. Then, we describe the theory of the quasiclassical trajectory (QCT) approach for modeling molecular collisions. We explain how we implemented the approach in a new parallel code for high-performance computing platforms. Results from billions of QCT simulations of high-energy N2 + N2, N2 + N, and N2 + O2 collisions are reported and analyzed. Reaction rate constants are calculated and sets of reactive trajectories are characterized at both thermal equilibrium and nonequilibrium conditions. The data shed light on fundamental mechanisms of dissociation and exchange reactions -- and their coupling to internal energy transfer processes -- in thermal environments typical of hypersonic flows. We discuss how the outcomes of this investigation and other related studies lay a rigorous foundation for new macroscopic models for hypersonic CFD. This research was supported by the Department of Energy Computational Science Graduate Fellowship and by the Air Force Office of Scientific Research Multidisciplinary University Research Initiative.
Suggestion of a Numerical Model for the Blood Glucose Adjustment with Ingesting a Food
NASA Astrophysics Data System (ADS)
Yamamoto, Naokatsu; Takai, Hiroshi
In this study, we present a numerical model of the time dependence of the blood glucose value after ingesting a meal. Two numerical models are proposed in this paper to explain the digestion mechanism and the adjustment mechanism of blood glucose in the body, respectively. The models are expressed using simple equations with a transfer function and a block diagram. Additionally, the time dependence of blood glucose was measured when subjects ingested sucrose or starch. As a result, the calculated results of the models can be fitted very well to the measured time dependence of blood glucose. Therefore, the digestion model and the adjustment model are considered useful for estimating blood glucose values after ingesting meals.
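A rough sketch of such a two-block formulation (our own illustration with invented rate constants and units; not the authors' fitted transfer functions): a first-order digestion/absorption block feeding a first-order blood-glucose adjustment block, integrated with Euler steps.

```python
import numpy as np

dt = 1.0                       # minutes
t = np.arange(0, 240, dt)      # four hours after the meal
k_abs, k_adj = 0.035, 0.02     # assumed rate constants (1/min): absorption, adjustment
G_basal = 90.0                 # assumed basal blood glucose (mg/dL)
meal = 120.0                   # assumed total glucose-raising potential of the meal

D = np.zeros_like(t)           # glucose still in the digestion block
G = np.full_like(t, G_basal)   # blood glucose
D[0] = meal

for i in range(1, t.size):
    dD = -k_abs * D[i - 1]                                  # digestion block (first order)
    dG = k_abs * D[i - 1] - k_adj * (G[i - 1] - G_basal)    # adjustment block (first order)
    D[i] = D[i - 1] + dD * dt
    G[i] = G[i - 1] + dG * dt

print(f"peak glucose {G.max():.0f} mg/dL at {t[np.argmax(G)]:.0f} min; value at 240 min: {G[-1]:.0f} mg/dL")
```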
Users Guide to the JPL Doppler Gravity Database
NASA Technical Reports Server (NTRS)
Muller, P. M.; Sjogren, W. L.
1986-01-01
Local gravity accelerations and gravimetry have been determined directly from spacecraft Doppler tracking data near the Moon and various planets by the Jet Propulsion Laboratory. Researchers in many fields have an interest in planet-wide global gravimetric mapping and its applications. Many of them use their own computers in support of their studies and would benefit from being able to directly manipulate these gravity data for inclusion in their own modeling computations. Publication of some 150 Apollo 15 subsatellite low-altitude, high-resolution, single-orbit data sets is covered. The Doppler residuals, with a determination of the derivative function providing line-of-sight gravity, are both listed and plotted (on microfilm), and can be ordered in computer-readable forms (tape and floppy disk). The form and format of this database as well as the methods of data reduction are explained and referenced. A skeleton computer program is provided which can be modified to support re-reductions and re-formatted presentations suitable to a wide variety of research needs undertaken on mainframe or PC class microcomputers.
Workflow computing. Improving management and efficiency of pathology diagnostic services.
Buffone, G J; Moreau, D; Beck, J R
1996-04-01
Traditionally, information technology in health care has helped practitioners to collect, store, and present information and also to add a degree of automation to simple tasks (instrument interfaces supporting result entry, for example). Thus commercially available information systems do little to support the need to model, execute, monitor, coordinate, and revise the various complex clinical processes required to support health-care delivery. Workflow computing, which is already implemented and improving the efficiency of operations in several nonmedical industries, can address the need to manage complex clinical processes. Workflow computing not only provides a means to define and manage the events, roles, and information integral to health-care delivery but also supports the explicit implementation of policy or rules appropriate to the process. This article explains how workflow computing may be applied to health-care and the inherent advantages of the technology, and it defines workflow system requirements for use in health-care delivery with special reference to diagnostic pathology.
NASA Astrophysics Data System (ADS)
Flomenbom, Ophir; Castañeda-Priego, Ramón; Peeters, François
2014-11-01
In this document, we present the Special Issue's projects; these include reviews and articles about mathematical solutions and formulations of single-file dynamics (SFD), as well as its computational modeling, experimental evidence, and value in explaining real-life phenomena. In particular, we introduce projects focusing on electron dynamics on liquid helium in channels with changing width, on the zig-zag configuration in files with longitudinal movement, on expanding files, on both heterogeneous and slow files, on files with external forces, and on the importance of the interaction potential shape on the particle dynamics along the file. Applications of SFD are of intrinsic value in life sciences, biophysics, physics, and materials science, since they can explain a large diversity of many-body systems, e.g., biological channels, biological motors, membranes, crowding, electron motion in proteins, etc. These systems are explained in all the projects that participate in this topical issue. This Special Issue can therefore intrigue, inspire, and advance young scientists, as well as those who already work actively in this field.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mackeprang, Kasper; Kjaergaard, Henrik G., E-mail: hgk@chem.ku.dk; Salmi, Teemu
We describe the vibrational transitions of the donor unit in water dimer with an approach that is based on a three-dimensional local mode model. We perform a perturbative treatment of the intermolecular vibrational modes to improve the transition wavenumber of the hydrogen bonded OH-stretching transition. The model accurately predicts the transition wavenumbers of the vibrations in water dimer compared to experimental values and provides a physical picture that explains the redshift of the hydrogen bonded OH-oscillator. We find that it is unnecessary to include all six intermolecular modes in the vibrational model and that their effect can, to a good approximation, be computed using a potential energy surface calculated at a lower level electronic structure method than that used for the unperturbed model.
The role of multiple-scale modelling of epilepsy in seizure forecasting
Kuhlmann, Levin; Grayden, David B.; Wendling, Fabrice; Schiff, Steven J.
2014-01-01
Over the past three decades, a number of seizure prediction, or forecasting, methods have been developed. Although major achievements were accomplished regarding the statistical evaluation of proposed algorithms, it is recognized that further progress is still necessary for clinical application in patients. The lack of physiological motivation can partly explain this limitation. Therefore, a natural question is raised: can computational models of epilepsy be used to improve these methods? Here we review the literature on the multiple-scale neural modelling of epilepsy and the use of such models to infer physiological changes underlying epilepsy and epileptic seizures. We argue how these methods can be applied to advance the state-of-the-art in seizure forecasting. PMID:26035674
KIC 3240411 - the hottest known SPB star with the asymptotic g-mode period spacing
NASA Astrophysics Data System (ADS)
Szewczuk, Wojciech; Daszyńska-Daszkiewicz, Jadwiga
2018-05-01
We report the discovery of the hottest hybrid B-type pulsator, KIC 3240411, which exhibits period spacing in the low-frequency range. This pattern is associated with asymptotic properties of high-order gravity (g-) modes. Our seismic modelling, performed simultaneously with the mode identification, shows that dipole axisymmetric modes best fit the observations. Evolutionary models are computed with the MESA code and pulsational models with the linear non-adiabatic code employing the traditional approximation to include the effects of rotation. The problem of mode excitation is discussed. We confirm that significant modification is indispensable to explain an instability of both pressure and gravity modes in the observed frequency ranges of KIC 3240411.
Grossberg, Stephen
2017-03-01
The hard problem of consciousness is the problem of explaining how we experience qualia or phenomenal experiences, such as seeing, hearing, and feeling, and knowing what they are. To solve this problem, a theory of consciousness needs to link brain to mind by modeling how emergent properties of several brain mechanisms interacting together embody detailed properties of individual conscious psychological experiences. This article summarizes evidence that Adaptive Resonance Theory, or ART, accomplishes this goal. ART is a cognitive and neural theory of how advanced brains autonomously learn to attend, recognize, and predict objects and events in a changing world. ART has predicted that "all conscious states are resonant states" as part of its specification of mechanistic links between processes of consciousness, learning, expectation, attention, resonance, and synchrony. It hereby provides functional and mechanistic explanations of data ranging from individual spikes and their synchronization to the dynamics of conscious perceptual, cognitive, and cognitive-emotional experiences. ART has reached sufficient maturity to begin classifying the brain resonances that support conscious experiences of seeing, hearing, feeling, and knowing. Psychological and neurobiological data in both normal individuals and clinical patients are clarified by this classification. This analysis also explains why not all resonances become conscious, and why not all brain dynamics are resonant. The global organization of the brain into computationally complementary cortical processing streams (complementary computing), and the organization of the cerebral cortex into characteristic layers of cells (laminar computing), figure prominently in these explanations of conscious and unconscious processes. Alternative models of consciousness are also discussed. Copyright © 2016 The Author. Published by Elsevier Ltd.. All rights reserved.
fMRI Analysis-by-Synthesis Reveals a Dorsal Hierarchy That Extracts Surface Slant.
Ban, Hiroshi; Welchman, Andrew E
2015-07-08
The brain's skill in estimating the 3-D orientation of viewed surfaces supports a range of behaviors, from placing an object on a nearby table, to planning the best route when hill walking. This ability relies on integrating depth signals across extensive regions of space that exceed the receptive fields of early sensory neurons. Although hierarchical selection and pooling is central to understanding of the ventral visual pathway, the successive operations in the dorsal stream are poorly understood. Here we use computational modeling of human fMRI signals to probe the computations that extract 3-D surface orientation from binocular disparity. To understand how representations evolve across the hierarchy, we developed an inference approach using a series of generative models to explain the empirical fMRI data in different cortical areas. Specifically, we simulated the responses of candidate visual processing algorithms and tested how well they explained fMRI responses. Thereby we demonstrate a hierarchical refinement of visual representations moving from the representation of edges and figure-ground segmentation (V1, V2) to spatially extensive disparity gradients in V3A. We show that responses in V3A are little affected by low-level image covariates, and have a partial tolerance to the overall depth position. Finally, we show that responses in V3A parallel perceptual judgments of slant. This reveals a relatively short computational hierarchy that captures key information about the 3-D structure of nearby surfaces, and more generally demonstrates an analysis approach that may be of merit in a diverse range of brain imaging domains. Copyright © 2015 Ban and Welchman.
Schadl, Kornél; Vassar, Rachel; Cahill-Rowley, Katelyn; Yeom, Kristin W; Stevenson, David K; Rose, Jessica
2018-01-01
Advanced neuroimaging and computational methods offer opportunities for more accurate prognosis. We hypothesized that near-term regional white matter (WM) microstructure, assessed on diffusion tensor imaging (DTI) using exhaustive feature selection with cross-validation, would predict neurodevelopment in preterm children. Near-term MRI and DTI obtained at 36.6 ± 1.8 weeks postmenstrual age in 66 very-low-birth-weight preterm neonates were assessed. 60/66 had follow-up neurodevelopmental evaluation with the Bayley Scales of Infant-Toddler Development, 3rd edition (BSID-III) at 18-22 months. Linear models with exhaustive feature selection and leave-one-out cross-validation, computed from DTI, identified the sets of three brain regions most predictive of cognitive and motor function; logistic regression models were computed to classify high-risk infants scoring one standard deviation below the mean. Cognitive impairment was predicted (100% sensitivity, 100% specificity; AUC = 1) by near-term right middle-temporal gyrus MD, right cingulate-cingulum MD, and left caudate MD. Motor impairment was predicted (90% sensitivity, 86% specificity; AUC = 0.912) by left precuneus FA, right superior occipital gyrus MD, and right hippocampus FA. Cognitive score variance was explained (29.6%, cross-validated R^2 = 0.296) by left posterior-limb-of-internal-capsule MD, Genu RD, and right fusiform gyrus AD. Motor score variance was explained (31.7%, cross-validated R^2 = 0.317) by left posterior-limb-of-internal-capsule MD, right parahippocampal gyrus AD, and right middle-temporal gyrus AD. Search in a large DTI feature space more accurately identified neonatal neuroimaging correlates of neurodevelopment.
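The exhaustive three-region search with leave-one-out cross-validation can be sketched as follows; the data below are synthetic and the scikit-learn calls are generic, so this is an assumed illustration rather than the authors' pipeline.

```python
import itertools
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(0)
n_subjects, n_regions = 60, 12
X = rng.normal(size=(n_subjects, n_regions))                    # synthetic regional DTI metrics
y = X[:, 3] - 0.8 * X[:, 7] + rng.normal(0.0, 1.0, n_subjects)  # synthetic outcome scores

best_r2, best_trio = -np.inf, None
for trio in itertools.combinations(range(n_regions), 3):        # exhaustive 3-region search
    preds = cross_val_predict(LinearRegression(), X[:, list(trio)], y, cv=LeaveOneOut())
    r2_cv = 1.0 - np.sum((y - preds) ** 2) / np.sum((y - y.mean()) ** 2)
    if r2_cv > best_r2:
        best_r2, best_trio = r2_cv, trio

print(f"best cross-validated R^2 = {best_r2:.3f} using regions {best_trio}")
```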
A normalization model suggests that attention changes the weighting of inputs between visual areas.
Ruff, Douglas A; Cohen, Marlene R
2017-05-16
Models of divisive normalization can explain the trial-averaged responses of neurons in sensory, association, and motor areas under a wide range of conditions, including how visual attention changes the gains of neurons in visual cortex. Attention, like other modulatory processes, is also associated with changes in the extent to which pairs of neurons share trial-to-trial variability. We showed recently that in addition to decreasing correlations between similarly tuned neurons within the same visual area, attention increases correlations between neurons in primary visual cortex (V1) and the middle temporal area (MT) and that an extension of a classic normalization model can account for this correlation increase. One of the benefits of having a descriptive model that can account for many physiological observations is that it can be used to probe the mechanisms underlying processes such as attention. Here, we use electrical microstimulation in V1 paired with recording in MT to provide causal evidence that the relationship between V1 and MT activity is nonlinear and is well described by divisive normalization. We then use the normalization model and recording and microstimulation experiments to show that the attention dependence of V1-MT correlations is better explained by a mechanism in which attention changes the weights of connections between V1 and MT than by a mechanism that modulates responses in either area. Our study shows that normalization can explain interactions between neurons in different areas and provides a framework for using multiarea recording and stimulation to probe the neural mechanisms underlying neuronal computations.
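A toy version of the re-weighting idea tested above (made-up numbers and a simplified normalization equation; not the authors' fitted model): MT drive is a weighted sum of V1 inputs divided by a normalization pool, and attention is modeled as scaling the feed-forward weights rather than the output response.

```python
import numpy as np

def mt_response(v1_rates, weights, sigma=1.0):
    """Divisive normalization: weighted V1 drive divided by a pooled-activity term."""
    return (weights @ v1_rates) / (sigma + v1_rates.sum())

v1_rates = np.array([20.0, 12.0, 5.0])             # firing rates of three V1 inputs
base_w = np.array([0.6, 0.3, 0.1])                 # baseline V1-to-MT weights

reweighted = base_w * np.array([1.4, 1.0, 0.8])    # attention as input re-weighting ...
output_gain = 1.2                                  # ... versus attention as output gain

print("baseline MT drive:        ", round(mt_response(v1_rates, base_w), 3))
print("attention as re-weighting:", round(mt_response(v1_rates, reweighted), 3))
print("attention as output gain: ", round(output_gain * mt_response(v1_rates, base_w), 3))
# Re-weighting changes how strongly each V1 input is reflected in MT (and hence how
# V1-MT correlations behave), whereas a pure gain change scales the output uniformly.
```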
Macroscopic Quantum Phase-Locking Model for the Quantum Hall Effect
NASA Astrophysics Data System (ADS)
Wang, Te-Chun; Gou, Yih-Shun
1997-08-01
A macroscopic model of nonlinear dissipative phase-locking between a Josephson-like frequency and a macroscopic electron wave frequency is proposed to explain the Quantum Hall Effect. It is well known that an r.f.-biased Josephson junction displays collective phase-locking behavior which can be described by a non-autonomous second-order equation or an equivalent 2+1-dimensional dynamical system. Making a direct analogy between the QHE and the Josephson system, this report proposes a numerically solved nonlinear dynamical model for the quantization of the Hall resistance. In this model, the Hall voltage is assumed to be proportional to a Josephson-like frequency and the Hall current is assumed to be related to a coherent electron wave frequency. The Hall resistance is shown to be quantized in units of the fine structure constant when the ratio of these two frequencies is locked into a rational winding number. To explain the sample-width dependence of the critical current, the 2DEG under large applied current is further assumed to develop a Josephson-like junction array in which all Josephson-like frequencies are synchronized. Other remarkable features of the QHE, such as the resistance fluctuation and the even-denominator states, are also discussed within this picture.
ERIC Educational Resources Information Center
Ranade, Sanjay; Schraeder, Jeff
1991-01-01
Presents an overview of the mass storage market and discusses mass storage systems as part of computer networks. Systems for personal computers, workstations, minicomputers, and mainframe computers are described; file servers are explained; system integration issues are raised; and future possibilities are suggested. (LRW)
Topics in Computer Literacy as Elements of Two Introductory College Mathematics Courses.
ERIC Educational Resources Information Center
Spresser, Diane M.
1986-01-01
Explains the integrated approach implemented by James Madison University, Virginia, in enhancing computer literacy. Reviews the changes in the mathematics courses and provides topical listings and outlines of the courses that emphasize computer applications. (ML)
Artificial Intelligence and the Teaching of Reading and Writing by Computers.
ERIC Educational Resources Information Center
Balajthy, Ernest
1985-01-01
Discusses how computers can "converse" with students for teaching purposes, demonstrates how these interactions are becoming more complex, and explains how the computer's role is becoming more "human" in giving intelligent responses to students. (HOD)
Model-based influences on humans’ choices and striatal prediction errors
Daw, Nathaniel D.; Gershman, Samuel J.; Seymour, Ben; Dayan, Peter; Dolan, Raymond J.
2011-01-01
Summary: The mesostriatal dopamine system is prominently implicated in model-free reinforcement learning, with fMRI BOLD signals in ventral striatum notably covarying with model-free prediction errors. However, latent learning and devaluation studies show that behavior also shows hallmarks of model-based planning, and the interaction between model-based and model-free values, prediction errors and preferences is underexplored. We designed a multistep decision task in which model-based and model-free influences on human choice behavior could be distinguished. By showing that choices reflected both influences, we could then test the purity of the ventral striatal BOLD signal as a model-free report. Contrary to expectations, the signal reflected both model-free and model-based predictions in proportions matching those that best explained choice behavior. These results challenge the notion of a separate model-free learner and suggest a more integrated computational architecture for high-level human decision-making. PMID:21435563
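A compact sketch of the hybrid valuation these results imply, using generic two-step-task quantities with invented numbers (our illustration, not the authors' fitted model): first-stage values are a weighted mixture of model-free and model-based estimates, and the same mixture defines the prediction error a striatal signal would report.

```python
import numpy as np

def softmax(q, beta=3.0):
    e = np.exp(beta * (q - q.max()))
    return e / e.sum()

q_mf = np.array([0.55, 0.45])            # model-free values of the two first-stage actions

transitions = np.array([[0.7, 0.3],      # action 0 -> second-stage states A, B
                        [0.3, 0.7]])     # action 1 -> second-stage states A, B
q_stage2 = np.array([0.8, 0.2])          # learned values of the second-stage states
q_mb = transitions @ q_stage2            # model-based values: plan through the transitions

w = 0.6                                  # mixing weight (0 = purely model-free)
q_hybrid = w * q_mb + (1 - w) * q_mf
print("first-stage choice probabilities:", softmax(q_hybrid).round(3))

a, r = 0, 1.0                            # chosen action and obtained reward
print("hybrid prediction error:", round(r - q_hybrid[a], 3))
```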
ERIC Educational Resources Information Center
Lilly, Edward R.
Designed to assist teachers and administrators approaching the subject of computers for the first time to acquire a feel for computer terminology, this document presents a computer term glossary on three levels. (1) The terms most frequently used, called a "basic vocabulary," are presented first in three paragraphs which explain their meanings:…
Providing Computer Conferencing Opportunities for Minority Students and Measuring Results.
ERIC Educational Resources Information Center
Schwalm, Karen T.
This paper reviews the research on the effects of differential computer background on the short- and long-range success of minority students, identifies some strategies Glendale Community College (Arizona) has used to encourage minority students' use of computing, specifically computer conferencing, and explains the measures constructed to track…
Casero-Alonso, V; López-Fidalgo, J; Torsney, B
2017-01-01
Binary response models are used in many real applications. For these models the Fisher information matrix (FIM) is proportional to the FIM of a weighted simple linear regression model. The same is also true when the weight function has a finite integral. Thus, optimal designs for one binary model are also optimal for the corresponding weighted linear regression model. The main objective of this paper is to provide a tool for the construction of MV-optimal designs, minimizing the maximum of the variances of the estimates, for a general design space. MV-optimality is a potentially difficult criterion because of its nondifferentiability at equal variance designs. A methodology for obtaining MV-optimal designs where the design space is a compact interval [a, b] will be given for several standard weight functions. The methodology will allow us to build a user-friendly computer tool based on Mathematica to compute MV-optimal designs. Some illustrative examples will show a representation of MV-optimal designs in the Euclidean plane, taking a and b as the axes. The applet will be explained using two relevant models. In the first one the case of a weighted linear regression model is considered, where the weight function is directly chosen from a typical family. In the second example a binary response model is assumed, where the probability of the outcome is given by a typical probability distribution. Practitioners can use the provided applet to identify the solution and to know the exact support points and design weights. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
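A crude numerical sketch of the MV criterion for a weighted simple linear regression on an interval (a brute-force two-point grid search of our own, with an assumed weight function; not the paper's method or its Mathematica applet): minimize the larger of the two estimator variances, i.e. the larger diagonal entry of the inverse information matrix.

```python
import numpy as np

def inv_information(points, weights, lam=lambda x: np.exp(-x)):
    """Inverse Fisher information of a weighted simple linear regression design."""
    M = np.zeros((2, 2))
    for x, w in zip(points, weights):
        f = np.array([1.0, x])
        M += w * lam(x) * np.outer(f, f)
    return np.linalg.inv(M)

a, b = 0.0, 2.0                                      # assumed design interval
grid = np.linspace(a, b, 81)
best = (np.inf, None)
for i, x1 in enumerate(grid):
    for x2 in grid[i + 1:]:
        for w1 in np.linspace(0.05, 0.95, 19):
            cov = inv_information((x1, x2), (w1, 1.0 - w1))
            crit = max(cov[0, 0], cov[1, 1])         # MV criterion: worse of the two variances
            if crit < best[0]:
                best = (crit, (x1, x2, w1))

crit, (x1, x2, w1) = best
print(f"grid MV-optimal design: x = {x1:.2f}, {x2:.2f}; weights = {w1:.2f}, {1 - w1:.2f}; criterion = {crit:.3f}")
```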
Feature-based data assimilation in geophysics
NASA Astrophysics Data System (ADS)
Morzfeld, Matthias; Adams, Jesse; Lunderman, Spencer; Orozco, Rafael
2018-05-01
Many applications in science require that computational models and data be combined. In a Bayesian framework, this is usually done by defining likelihoods based on the mismatch of model outputs and data. However, matching model outputs and data in this way can be unnecessary or impossible. For example, using large amounts of steady state data is unnecessary because these data are redundant. It is numerically difficult to assimilate data in chaotic systems. It is often impossible to assimilate data of a complex system into a low-dimensional model. As a specific example, consider a low-dimensional stochastic model for the dipole of the Earth's magnetic field, while other field components are ignored in the model. The above issues can be addressed by selecting features of the data, and defining likelihoods based on the features, rather than by the usual mismatch of model output and data. Our goal is to contribute to a fundamental understanding of such a feature-based approach that allows us to assimilate selected aspects of data into models. We also explain how the feature-based approach can be interpreted as a method for reducing an effective dimension and derive new noise models, based on perturbed observations, that lead to computationally efficient solutions. Numerical implementations of our ideas are illustrated in four examples.
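A toy illustration of the feature-based likelihood idea (entirely our own example, unrelated to the geomagnetic dipole model mentioned above): instead of matching a noisy time series point by point, the likelihood is defined on a summary feature, here the stationary standard deviation of an Ornstein-Uhlenbeck process, and a posterior over one model parameter is evaluated on a grid.

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_ou(theta, sigma=1.0, dt=0.01, n=20000):
    """Ornstein-Uhlenbeck process dx = -theta * x dt + sigma dW (Euler-Maruyama)."""
    x = np.zeros(n)
    for i in range(1, n):
        x[i] = x[i - 1] - theta * x[i - 1] * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    return x

# "Data": one trajectory from a true parameter; the assimilated feature is its standard
# deviation rather than the full, point-by-point time series.
data_feature = simulate_ou(theta=1.5).std()

thetas = np.linspace(0.5, 3.0, 26)
feature_noise = 0.05                     # assumed error scale of the feature
log_like = np.array([-0.5 * ((simulate_ou(th).std() - data_feature) / feature_noise) ** 2
                     for th in thetas])
post = np.exp(log_like - log_like.max())
post /= post.sum()
print("posterior mode at theta ≈", float(thetas[np.argmax(post)].round(2)))
```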
Patel, Vimla L; Arocha, José F; Kushniruk, André W
2002-02-01
The aim of this paper is to examine knowledge organization and reasoning strategies involved in physician-patient communication and to consider how these are affected by the use of computer tools, in particular, electronic medical record (EMR) systems. In the first part of the paper, we summarize results from a study in which patients were interviewed before their interactions with physicians and where physician-patient interactions were recorded and analyzed to evaluate patients' and physicians' understanding of the patient problem. We give a detailed presentation of one such interaction, with characterizations of the physician and patient models. In a second set of studies, the contents of paper records and EMRs were compared and, in addition, physician-patient interactions (involving the use of EMR technology) were video recorded and analyzed to assess physicians' information gathering and knowledge organization for medical decision-making. Physicians explained the patient problems in terms of causal pathophysiological knowledge underlying the disease (disease model), whereas patients explained them in terms of narrative structures of illness (illness model). The data-driven nature of the traditional physician-patient interaction allows physicians to capture the temporal flow of events and to document key aspects of the patients' narratives. Use of electronic medical records was found to influence the way patient data were gathered, resulting in information loss and disruption of the temporal sequence of events in assessing the patient problem. The physician-patient interview allows physicians to capture crucial aspects of the patient's illness model, which are necessary for understanding the problem from the patients' perspective. Use of computer-based patient record technology may lead to a loss of this relevant information. As a consequence, designers of such systems should take into account information relevant to patients' comprehension of medical problems, which will influence their compliance.
A disassembly-driven mechanism explains F-actin-mediated chromosome transport in starfish oocytes
Bun, Philippe; Dmitrieff, Serge; Belmonte, Julio M
2018-01-01
While contraction of sarcomeric actomyosin assemblies is well understood, this is not the case for disordered networks of actin filaments (F-actin) driving diverse essential processes in animal cells. For example, at the onset of meiosis in starfish oocytes a contractile F-actin network forms in the nuclear region transporting embedded chromosomes to the assembling microtubule spindle. Here, we addressed the mechanism driving contraction of this 3D disordered F-actin network by comparing quantitative observations to computational models. We analyzed 3D chromosome trajectories and imaged filament dynamics to monitor network behavior under various physical and chemical perturbations. We found no evidence of myosin activity driving network contractility. Instead, our observations are well explained by models based on a disassembly-driven contractile mechanism. We reconstitute this disassembly-based contractile system in silico revealing a simple architecture that robustly drives chromosome transport to prevent aneuploidy in the large oocyte, a prerequisite for normal embryonic development. PMID:29350616
Search and retrieval of office files using dBASE 3
NASA Technical Reports Server (NTRS)
Breazeale, W. L.; Talley, C. R.
1986-01-01
Described is a method of automating the office files retrieval process using a commercially available software package (dBASE III). The resulting product is a menu-driven computer program that requires no computer skills to operate. One part of the document is written for the potential user with minimal computer experience and uses sample menu screens to explain the program, while a second part is oriented toward the computer-literate individual and includes fairly detailed descriptions of the methodology and search routines. Although many of the programming techniques are explained, this document is not intended to be a tutorial on dBASE III. It is hoped that the document will serve as a stimulus for other applications of dBASE III.
The Computer and Its Functions; How to Communicate with the Computer.
ERIC Educational Resources Information Center
Ward, Peggy M.
A brief discussion of why it is important for students to be familiar with computers and their functions and a list of some practical applications introduce this two-part paper. Focusing on how the computer works, the first part explains the various components of the computer, different kinds of memory storage devices, disk operating systems, and…
Combining dynamical decoupling with fault-tolerant quantum computation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ng, Hui Khoon; Preskill, John; Lidar, Daniel A.
2011-07-15
We study how dynamical decoupling (DD) pulse sequences can improve the reliability of quantum computers. We prove upper bounds on the accuracy of DD-protected quantum gates and derive sufficient conditions for DD-protected gates to outperform unprotected gates. Under suitable conditions, fault-tolerant quantum circuits constructed from DD-protected gates can tolerate stronger noise and have a lower overhead cost than fault-tolerant circuits constructed from unprotected gates. Our accuracy estimates depend on the dynamics of the bath that couples to the quantum computer and can be expressed either in terms of the operator norm of the bath's Hamiltonian or in terms of the power spectrum of bath correlations; we explain in particular how the performance of recursively generated concatenated pulse sequences can be analyzed from either viewpoint. Our results apply to Hamiltonian noise models with limited spatial correlations.
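A purely classical toy can convey why a decoupling pulse helps. The sketch below (an illustration under assumed parameters, not the paper's Hamiltonian bath analysis) estimates the coherence of a single qubit dephased by slowly varying Ornstein-Uhlenbeck noise, with and without a single mid-sequence pi pulse (Hahn echo, the simplest DD sequence).

```python
import numpy as np

rng = np.random.default_rng(1)

def coherence(n_traj=1000, n_steps=200, dt=0.01, tau_c=2.0, sigma=1.0, echo=False):
    """Toy estimate of single-qubit coherence |<exp(i*phi)>| under classical
    dephasing noise. The noise is an Ornstein-Uhlenbeck process (assumed
    parameters); with echo=True a pi pulse at mid-sequence flips the sign of
    the subsequently accumulated phase."""
    sign = np.ones(n_steps)
    if echo:
        sign[n_steps // 2:] = -1.0                    # phase after the pulse counts with opposite sign
    acc = np.zeros(n_traj, dtype=complex)
    for k in range(n_traj):
        b = np.empty(n_steps)
        b[0] = sigma * rng.standard_normal()          # start in the stationary distribution
        for i in range(1, n_steps):
            b[i] = b[i-1] - (dt / tau_c) * b[i-1] + sigma * np.sqrt(2 * dt / tau_c) * rng.standard_normal()
        phi = dt * np.sum(sign * b)                   # accumulated dephasing phase
        acc[k] = np.exp(1j * phi)
    return abs(acc.mean())

print("free evolution  :", coherence(echo=False))
print("with echo pulse :", coherence(echo=True))
```

Because the noise is slow compared with the sequence, the refocused run retains far more coherence, the classical analogue of a DD-protected gate outperforming an unprotected one.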
Breaking the Supermassive Black Hole Speed Limit
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smidt, Joseph
A new computer simulation helps explain the existence of puzzling supermassive black holes observed in the early universe. The simulation is based on a computer code used to understand the coupling of radiation and certain materials. “Supermassive black holes have a speed limit that governs how fast and how large they can grow,” said Joseph Smidt of the Theoretical Design Division at Los Alamos National Laboratory. “The relatively recent discovery of supermassive black holes in the early development of the universe raised a fundamental question, how did they get so big so fast?” Using computer codes developed at Los Alamos for modeling the interaction of matter and radiation related to the Lab’s stockpile stewardship mission, Smidt and colleagues created a simulation of collapsing stars that resulted in supermassive black holes forming in less time than expected, cosmologically speaking, in the first billion years of the universe.
Content-Free Computer Supports for Self-Explaining: Modifiable Typing Interface and Prompting
ERIC Educational Resources Information Center
Chou, Chih-Yueh; Liang, Hung-Ta
2009-01-01
Self-explaining, which asks students to generate explanations while reading a text, is a self-constructive activity and is helpful for students' learning. Studies have revealed that prompts by a human tutor promote students' self-explanations. However, most studies on self-explaining focus on spoken self-explanations. This study investigates the…
Caldwell, Matthew; Moroz, Tracy; Hapuarachchi, Tharindi; Bainbridge, Alan; Robertson, Nicola J; Cooper, Chris E; Tachtsidis, Ilias
2015-01-01
Hypoxia-ischaemia (HI) is a major cause of neonatal brain injury, often leading to long-term damage or death. In order to improve understanding and test new treatments, piglets are used as preclinical models for human neonates. We have extended an earlier computational model of piglet cerebral physiology for application to multimodal experimental data recorded during episodes of induced HI. The data include monitoring with near-infrared spectroscopy (NIRS) and magnetic resonance spectroscopy (MRS), and the model simulates the circulatory and metabolic processes that give rise to the measured signals. Model extensions include simulation of the carotid arterial occlusion used to induce HI, inclusion of cytoplasmic pH, and loss of metabolic function due to cell death. Model behaviour is compared to data from two piglets, one of which recovered following HI while the other did not. Behaviourally-important model parameters are identified via sensitivity analysis, and these are optimised to simulate the experimental data. For the non-recovering piglet, we investigate several state changes that might explain why some MRS and NIRS signals do not return to their baseline values following the HI insult. We discover that the model can explain this failure better when we include, among other factors such as mitochondrial uncoupling and poor cerebral blood flow restoration, the death of around 40% of the brain tissue.
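The generic fit-to-data step described above (optimising a handful of sensitivity-selected parameters so that simulated signals match measurements) can be sketched as follows; the toy forward model, its two parameters, and the synthetic NIRS-like trace are hypothetical stand-ins, not the piglet physiology model itself.

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical measured signal (e.g. a normalised recovery trace) and a toy
# forward model with two behaviourally important parameters: a recovery rate
# and a fraction of irreversibly lost (dead) tissue that caps the recovery plateau.
t = np.linspace(0, 60, 121)                      # minutes after the insult

def forward(params, t):
    rate, dead_fraction = params
    return (1.0 - dead_fraction) * (1.0 - np.exp(-rate * t))

rng = np.random.default_rng(2)
observed = forward([0.08, 0.4], t) + 0.02 * rng.standard_normal(t.size)

def residuals(params):
    return forward(params, t) - observed

fit = least_squares(residuals, x0=[0.2, 0.1], bounds=([0.0, 0.0], [1.0, 1.0]))
print("estimated rate, dead fraction:", fit.x)
```

A failure of the trace to return to baseline is then naturally explained by a non-zero estimated dead fraction, which is the flavour of inference the study draws for the non-recovering piglet.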
Modelling individual difference in visual categorization
Shen, Jianhong; Palmeri, Thomas J.
2016-01-01
Recent years have seen growing interest in understanding, characterizing, and explaining individual differences in visual cognition. We focus here on individual differences in visual categorization. Categorization is the fundamental visual ability to group different objects together as the same kind of thing. Research on visual categorization and category learning has been significantly informed by computational modeling, so our review focuses both on how formal models of visual categorization have captured individual differences and on how individual differences have informed the development of formal models. We first examine the potential sources of individual differences in leading models of visual categorization, providing a brief review of a range of different models. We then describe several examples of how computational models have captured individual differences in visual categorization. This review also provides a bit of historical perspective, starting with models that predicted no individual differences, moving to those that captured group differences and those that predict true individual differences, and ending with more recent hierarchical approaches that can simultaneously capture both group and individual differences in visual categorization. Via this selective review, we see how considerations of individual differences can lead to important theoretical insights into how people visually categorize objects in the world around them. We also consider new directions for work examining individual differences in visual categorization. PMID:28154496
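One concrete way formal categorization models capture individual differences is by letting a small number of parameters vary across observers. The sketch below shows this for a Generalized Context Model-style exemplar model, where an assumed sensitivity parameter differs between two hypothetical observers; it illustrates the general approach rather than any specific model from the review.

```python
import numpy as np

def gcm_prob(probe, exemplars, labels, c=2.0, gamma=1.0):
    """Generalized Context Model: probability of choosing category A for a probe.
    `c` (sensitivity) and `gamma` (response scaling) are the kind of parameters
    often allowed to vary across individuals; the values here are illustrative."""
    d = np.abs(exemplars - probe).sum(axis=1)          # city-block distance to each stored exemplar
    s = np.exp(-c * d)                                 # similarity to each exemplar
    act_a = s[labels == 0].sum()
    act_b = s[labels == 1].sum()
    return act_a**gamma / (act_a**gamma + act_b**gamma)

exemplars = np.array([[0.1, 0.2], [0.2, 0.1], [0.8, 0.9], [0.9, 0.8]])
labels = np.array([0, 0, 1, 1])
for c in (0.5, 4.0):                                   # a low- vs a high-sensitivity observer
    print(c, gcm_prob(np.array([0.3, 0.3]), exemplars, labels, c=c))
```

Hierarchical approaches go one step further by treating such per-observer parameters as draws from group-level distributions, capturing group and individual differences at once.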
Maximum parsimony, substitution model, and probability phylogenetic trees.
Weng, J F; Thomas, D A; Mareels, I
2011-01-01
The problem of inferring phylogenies (phylogenetic trees) is one of the main problems in computational biology. There are three main methods for inferring phylogenies: Maximum Parsimony (MP), Distance Matrix (DM), and Maximum Likelihood (ML), of which the MP method is the best studied and most popular. In the MP method the optimization criterion is the number of substitutions of the nucleotides, computed from the differences in the investigated nucleotide sequences. However, the MP method is often criticized because it counts only the substitutions observable at the current time, omitting the unobservable substitutions that actually occurred in the evolutionary history. In order to take into account the unobservable substitutions, substitution models have been established; they are now widely used in the DM and ML methods, but these substitution models cannot be used within the classical MP method. Recently the authors proposed a probability representation model for phylogenetic trees, and the reconstructed trees in this model are called probability phylogenetic trees. One of the advantages of the probability representation model is that it can incorporate a substitution model to infer phylogenetic trees based on the MP principle. In this paper we explain how to use a substitution model in the reconstruction of probability phylogenetic trees and show the advantage of this approach with examples.
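The contrast between observable and unobservable substitutions can be made concrete with a small sketch: the raw difference count used by parsimony versus a Jukes-Cantor corrected distance, one of the simplest substitution models. The sequences below are made up, and the correction shown is a textbook example, not the authors' probability representation model.

```python
import numpy as np

def observed_substitutions(seq1, seq2):
    """Substitutions countable at the current time: sites where the two sequences differ."""
    return sum(a != b for a, b in zip(seq1, seq2))

def jukes_cantor_distance(seq1, seq2):
    """Expected substitutions per site under the Jukes-Cantor model, which
    corrects for multiple (unobservable) hits at the same site."""
    p = observed_substitutions(seq1, seq2) / len(seq1)
    return -0.75 * np.log(1.0 - 4.0 * p / 3.0)

s1 = "ACGTACGTACGTACGTACGT"
s2 = "ACGTACCTACGAACGTTCGT"
print("observed differences:", observed_substitutions(s1, s2))
print("JC69 distance (subs/site):", round(jukes_cantor_distance(s1, s2), 3))
```

The corrected distance always exceeds the raw proportion of differing sites, which is exactly the information the classical MP count discards.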
Kappel, David; Legenstein, Robert; Habenschuss, Stefan; Hsieh, Michael; Maass, Wolfgang
2018-01-01
Synaptic connections between neurons in the brain are dynamic because of continuously ongoing spine dynamics, axonal sprouting, and other processes. In fact, it was recently shown that the spontaneous synapse-autonomous component of spine dynamics is at least as large as the component that depends on the history of pre- and postsynaptic neural activity. These data are inconsistent with common models for network plasticity and raise the following questions: how can neural circuits maintain a stable computational function in spite of these continuously ongoing processes, and what could be functional uses of these ongoing processes? Here, we present a rigorous theoretical framework for these seemingly stochastic spine dynamics and rewiring processes in the context of reward-based learning tasks. We show that spontaneous synapse-autonomous processes, in combination with reward signals such as dopamine, can explain the capability of networks of neurons in the brain to configure themselves for specific computational tasks and to compensate automatically for later changes in the network or task. Furthermore, we show theoretically and through computer simulations that stable computational performance is compatible with continuously ongoing synapse-autonomous changes. After good computational performance is reached, these ongoing changes cause primarily a slow drift of network architecture and dynamics in task-irrelevant dimensions, as observed for neural activity in motor cortex and other areas. On the more abstract level of reinforcement learning, the resulting model gives rise to an understanding of reward-driven network plasticity as continuous sampling of network configurations.
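A loose toy of reward-gated, continuously ongoing synaptic change (not the synaptic-sampling framework derived in the paper) is sketched below: every "synapse" receives spontaneous perturbations at every step, and a global reward signal only modulates how strongly reward-improving perturbations are retained, so drift never stops even after good performance is reached. The task, network size, and learning constants are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy task: a linear readout w should map a fixed input x to a target output.
x = rng.standard_normal(10)
target = 1.5

def reward(w):
    return -(w @ x - target) ** 2           # higher is better

w = np.zeros(10)
for step in range(2000):
    noise = 0.05 * rng.standard_normal(10)  # spontaneous, synapse-autonomous perturbation
    if reward(w + noise) > reward(w):
        w = w + noise                       # reward-improving changes are retained fully
    else:
        w = w + 0.2 * noise                 # spontaneous drift persists even when unrewarded

print("final error:", abs(w @ x - target))
```

The readout converges to a noisy neighbourhood of the solution and then keeps wandering within it, a crude analogue of the slow drift in task-irrelevant dimensions described above.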
A model of growth restraints to explain the development and evolution of tooth shapes in mammals.
Osborn, Jeffrey W
2008-12-07
The problem investigated here is control of the development of tooth shape. Cells at the growing soft tissue interface between the ectoderm and mesoderm in a tooth anlage are observed to buckle and fold into a template for the shape of the tooth crown. The final shape is created by enamel secreted onto the folds. The pattern in which the folds develop is generally explained as a response to the pattern in which genes are locally expressed at the interface. This congruence leaves the problem of control unanswered because it does not explain how either pattern is controlled. Obviously, cells are subject to Newton's laws of motion, so mechanical forces and constraints must ultimately cause the movements of cells during tooth morphogenesis. A computer model is used to test the hypothesis that directional resistances to growth of the epithelial part of the interface could account for the shape into which the interface folds. The model starts with a single epithelial cell whose growth is constrained by four constant directional resistances (anterior, posterior, medial, and lateral). The constraints force the growing epithelium to buckle and fold. By entering different values for these constraints into the model, the modeled epithelium is induced to buckle and fold into the different shapes associated with the evolution of a human upper molar from that of a reptilian ancestor. The patterns and sizes of cusps and the sequences in which they develop are all correctly reproduced. The model predicts the changes in the four directional constraints necessary to develop and evolve from one tooth shape into another. I conclude that more generally expressed genes controlling directional resistances to growth, rather than locally expressed genes, may provide the information for the shape into which a tooth develops.
Study of the structure of turbulent shear flows at supersonic speeds and high Reynolds number
NASA Technical Reports Server (NTRS)
Smits, A. J.; Bogdonoff, S. M.
1984-01-01
A major effort to improve the accuracy of turbulence measurement techniques is described, including the development and testing of constant-temperature hot-wire anemometers that automatically compensate for frequency response. Calibration and data acquisition techniques for normal and inclined wires operated in the constant-temperature mode, flow geometries, and physical models to explain the observed behavior of flows are discussed, as well as cooperation with computational groups in the calculation of compression corner flows.
Langley 14- by 22-foot subsonic tunnel test engineer's data acquisition and reduction manual
NASA Technical Reports Server (NTRS)
Quinto, P. Frank; Orie, Nettie M.
1994-01-01
The Langley 14- by 22-Foot Subsonic Tunnel is used to test a large variety of aircraft and nonaircraft models. To support these investigations, a data acquisition system has been developed that has both static and dynamic capabilities. The static data acquisition and reduction system is described; the hardware and software of this system are explained. The theory and equations used to reduce the data obtained in the wind tunnel are presented; the computer code is not included.
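The kind of static data reduction such a manual documents can be illustrated with the generic textbook step of converting balance forces to wind-axis coefficients; the sketch below uses assumed forces, tunnel conditions, and reference area, and is not the facility's actual reduction code.

```python
import numpy as np

def coefficients(normal_force, axial_force, alpha_deg, q_inf, s_ref):
    """Reduce body-axis balance forces (normal, axial) to lift and drag
    coefficients: rotate into wind axes, then non-dimensionalise by dynamic
    pressure and reference area. Generic reduction, not the 14- by 22-foot
    tunnel's specific equations."""
    a = np.radians(alpha_deg)
    lift = normal_force * np.cos(a) - axial_force * np.sin(a)
    drag = normal_force * np.sin(a) + axial_force * np.cos(a)
    return lift / (q_inf * s_ref), drag / (q_inf * s_ref)

rho, v = 1.18, 60.0                 # assumed air density [kg/m^3] and tunnel speed [m/s]
q_inf = 0.5 * rho * v**2            # dynamic pressure
cl, cd = coefficients(normal_force=950.0, axial_force=40.0, alpha_deg=6.0,
                      q_inf=q_inf, s_ref=0.8)
print(f"CL = {cl:.3f}, CD = {cd:.3f}")
```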
Ice interaction with offshore structures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cammaert, A.B.; Muggeridge, D.B.
1988-01-01
Oil platforms and other offshore structures being built in the arctic regions must be able to withstand icebergs, ice islands, and pack ice. This reference explains the effect ice has on offshore structures and demonstrates design and construction methods that allow such structures to survive in harsh, ice-ridden environments. It analyzes the characteristics of sea ice as well as dynamic ice forces on structures. Techniques for ice modeling and field testing facilitate the design and construction of sturdy offshore constructions. Computer programs are included.
Performance monitoring can boost turboexpander efficiency
DOE Office of Scientific and Technical Information (OSTI.GOV)
McIntire, R.
1982-07-05
Focuses on the turboexpander/refrigeration system's radial expander and radial compressor. Explains that radial expander efficiency depends on mass flow rate, inlet pressure, inlet temperature, discharge pressure, gas composition, and shaft speed. Discusses quantifying the performance of the separate components over a range of operating conditions; estimating the increase in performance associated with any hardware change; and developing an analytical (computer) model of the entire system by using the performance curves of individual components. Emphasizes antisurge control and modifying Q/N (flow rate/shaft speed).
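A back-of-the-envelope version of the expander-side performance calculation is sketched below, assuming ideal-gas behaviour with a constant specific-heat ratio; real monitoring would use the measured gas composition and an equation of state, and the numbers here are illustrative only.

```python
# Rough estimate of radial expander isentropic efficiency from measured
# inlet/discharge conditions (ideal gas, constant k assumed).
def expander_isentropic_efficiency(t_in_k, p_in, p_out, t_out_k, k=1.3):
    t_out_ideal = t_in_k * (p_out / p_in) ** ((k - 1.0) / k)   # isentropic discharge temperature
    return (t_in_k - t_out_k) / (t_in_k - t_out_ideal)         # actual / ideal temperature (enthalpy) drop

eff = expander_isentropic_efficiency(t_in_k=300.0, p_in=60.0, p_out=20.0, t_out_k=245.0)
print(f"isentropic efficiency ~ {eff:.2f}")
```

Tracking such an efficiency over a range of flows and speeds is the component-level performance curve that the system model described above would be built from.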
ERIC Educational Resources Information Center
Yaman, Fatma; Ayas, Alipasa
2015-01-01
Although concept maps have been used as alternative assessment methods in education, there has been an ongoing debate on how to evaluate students' concept maps. This study discusses how to evaluate students' concept maps as an assessment tool before and after 15 computer-based Predict-Observe-Explain (CB-POE) tasks related to acid-base chemistry.…
Fytas, Nikolaos G; Martín-Mayor, Víctor
2016-06-01
It was recently shown [Phys. Rev. Lett. 110, 227201 (2013); DOI: 10.1103/PhysRevLett.110.227201] that the critical behavior of the random-field Ising model in three dimensions is ruled by a single universality class. This conclusion was reached only after a proper taming of the large scaling corrections of the model by applying a combined approach of various techniques, coming from the zero- and positive-temperature toolboxes of statistical physics. In the present contribution we provide a detailed description of this combined scheme, explaining in detail the zero-temperature numerical scheme and developing the generalized fluctuation-dissipation formula that allowed us to compute connected and disconnected correlation functions of the model. We discuss the error evolution of our method and we illustrate the infinite-size limit extrapolation of several observables within phenomenological renormalization. We present an extension of the quotients method that allows us to obtain estimates of the critical exponent α of the specific heat of the model via the scaling of the bond energy, and we discuss the self-averaging properties of the system and the algorithmic aspects of the maximum-flow algorithm used.
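The zero-temperature step mentioned above rests on the standard result that the random-field Ising ground state is an exact min-cut/max-flow problem. The sketch below builds that graph for a small 2D lattice and solves it with networkx; the construction is the generic textbook mapping, not the authors' optimized implementation, and the lattice size, coupling, and field strength are assumptions.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(4)

def rfim_ground_state(L=6, J=1.0, sigma=1.5):
    """Exact T=0 ground state of a small 2D RFIM via min-cut.
    H = -J sum_<ij> s_i s_j - sum_i h_i s_i, with Gaussian fields h_i ~ N(0, sigma^2).
    Mapping: x_i = (s_i+1)/2; a disagreeing bond costs 2J (edges in both directions),
    a field h_i > 0 adds an edge to the sink with capacity 2*h_i (cut if x_i = 0),
    and h_i < 0 adds an edge from the source with capacity 2*|h_i| (cut if x_i = 1)."""
    h = sigma * rng.standard_normal((L, L))
    G = nx.DiGraph()
    for i in range(L):
        for j in range(L):
            u = (i, j)
            for v in [((i + 1) % L, j), (i, (j + 1) % L)]:   # periodic nearest neighbours
                G.add_edge(u, v, capacity=2 * J)
                G.add_edge(v, u, capacity=2 * J)
            if h[i, j] > 0:
                G.add_edge(u, 't', capacity=2 * h[i, j])
            else:
                G.add_edge('s', u, capacity=-2 * h[i, j])
    cut_value, (source_side, sink_side) = nx.minimum_cut(G, 's', 't')
    spins = np.ones((L, L), dtype=int)                       # sink side: s_i = +1
    for (i, j) in source_side - {'s'}:
        spins[i, j] = -1                                     # source side: s_i = -1
    return spins, h

spins, h = rfim_ground_state()
print(spins)
```

Production studies of the kind described above use dedicated push-relabel or augmenting-path max-flow codes to reach the large lattices needed for finite-size scaling.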
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Chao; Xu, Zhijie; Lai, Canhai
The standard two-film theory (STFT) is a diffusion-based mechanism that can be used to describe gas mass transfer across liquid film. Fundamental assumptions of the STFT impose serious limitations on its ability to predict mass transfer coefficients. To better understand gas absorption across liquid film in practical situations, a multiphase computational fluid dynamics (CFD) model fully equipped with mass transport and chemistry capabilities has been developed for solvent-based carbon dioxide (CO2) capture to predict the CO2 mass transfer coefficient in a wetted wall column. The hydrodynamics is modeled using a volume of fluid method, and the diffusive and reactive mass transfer between the two phases is modeled by adopting a one-fluid formulation. We demonstrate that the proposed CFD model can naturally account for the influence of many important factors on the overall mass transfer that cannot be quantitatively explained by the STFT, such as the local variation in fluid velocities and properties, flow instabilities, and complex geometries. The CFD model also can predict the local mass transfer coefficient variation along the column height, which the STFT typically does not consider.
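For comparison, the STFT estimate that such a CFD model is benchmarked against reduces to a series-resistance formula for the overall mass transfer coefficient; the sketch below implements that formula with illustrative film coefficients and a dimensionless Henry constant, all of which are assumptions rather than values from the paper.

```python
# Minimal two-film-theory sketch: gas-film and liquid-film resistances in series
# give the overall liquid-side coefficient K_L. Numbers are illustrative only.
def overall_liquid_side_coefficient(k_g, k_l, henry):
    """K_L from 1/K_L = 1/(henry*k_g) + 1/k_l, with `henry` the dimensionless
    gas/liquid partition coefficient at the interface."""
    return 1.0 / (1.0 / (henry * k_g) + 1.0 / k_l)

k_g = 2.0e-3      # gas-film coefficient [m/s], assumed
k_l = 1.0e-4      # liquid-film coefficient [m/s], assumed
H = 1.2           # dimensionless Henry constant, assumed
print(f"K_L ~ {overall_liquid_side_coefficient(k_g, k_l, H):.2e} m/s")
```

The formula assumes uniform films and steady one-dimensional diffusion, which is exactly what prevents it from capturing the local velocity variations, instabilities, and geometry effects the CFD model resolves.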
Farzmahdi, Amirhossein; Rajaei, Karim; Ghodrati, Masoud; Ebrahimpour, Reza; Khaligh-Razavi, Seyed-Mahdi
2016-04-26
Converging reports indicate that face images are processed through specialized neural networks in the brain, i.e. face patches in monkeys and the fusiform face area (FFA) in humans. These studies were designed to find out how faces are processed in the visual system compared to other objects. Yet the underlying mechanism of face processing is not completely understood. Here, we show that a hierarchical computational model, inspired by electrophysiological evidence on face processing in primates, is able to generate representational properties similar to those observed in monkey face patches (posterior, middle and anterior patches). Since the most important goal of sensory neuroscience is linking neural responses with behavioral outputs, we test whether the proposed model, which is designed to account for neural responses in monkey face patches, is also able to predict well-documented behavioral face phenomena observed in humans. We show that the proposed model reproduces several cognitive face effects, such as the composite face effect and canonical face views. Our model provides insights about the underlying computations that transfer visual information from posterior to anterior face patches.