Sample records for computer modeling suggests

  1. Improving Students' Understanding of Molecular Structure through Broad-Based Use of Computer Models in the Undergraduate Organic Chemistry Lecture

    ERIC Educational Resources Information Center

    Springer, Michael T.

    2014-01-01

    Several articles suggest how to incorporate computer models into the organic chemistry laboratory, but relatively few papers discuss how to incorporate these models broadly into the organic chemistry lecture. Previous research has suggested that "manipulating" physical or computer models enhances student understanding; this study…

  2. Computational modeling of peripheral pain: a commentary.

    PubMed

    Argüello, Erick J; Silva, Ricardo J; Huerta, Mónica K; Avila, René S

    2015-06-11

    This commentary is intended to find possible explanations for the low impact of computational modeling on pain research. We discuss the main strategies that have been used in building computational models for the study of pain. The analysis suggests that traditional models lack biological plausibility at some levels, they do not provide clinically relevant results, and they cannot capture the stochastic character of neural dynamics. On this basis, we provide some suggestions that may be useful in building computational models of pain with a wider range of applications.

  3. BIOCOMPUTATION: some history and prospects.

    PubMed

    Cull, Paul

    2013-06-01

    At first glance, biology and computer science are diametrically opposed sciences. Biology deals with carbon based life forms shaped by evolution and natural selection. Computer Science deals with electronic machines designed by engineers and guided by mathematical algorithms. In this brief paper, we review biologically inspired computing. We discuss several models of computation which have arisen from various biological studies. We show what these have in common, and conjecture how biology can still suggest answers and models for the next generation of computing problems. We discuss computation and argue that these biologically inspired models do not extend the theoretical limits on computation. We suggest that, in practice, biological models may give more succinct representations of various problems, and we mention a few cases in which biological models have proved useful. We also discuss the reciprocal impact of computer science on biology and cite a few significant contributions to biological science. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  4. An u-Service Model Based on a Smart Phone for Urban Computing Environments

    NASA Astrophysics Data System (ADS)

    Cho, Yongyun; Yoe, Hyun

    In urban computing environments, all services should be based on the interaction between humans and the environments around them, which occurs frequently and ordinarily at home and in the office. This paper proposes a u-service model based on a smart phone for urban computing environments. The suggested service model includes a context-aware and personalized service scenario development environment that can instantly describe a user's u-service demand or situation information with smart devices. To do this, the architecture of the suggested service model consists of a graphical service editing environment for smart devices, a u-service platform, and an infrastructure with sensors and WSN/USN. The graphic editor expresses contexts as execution conditions of a new service through a context model based on ontology. The service platform handles the service scenario according to contexts. With the suggested service model, a user in urban computing environments can quickly and easily compose a u-service or a new service using smart devices.

  5. Using Rasch Measurement to Develop a Computer Modeling-Based Instrument to Assess Students' Conceptual Understanding of Matter

    ERIC Educational Resources Information Center

    Wei, Silin; Liu, Xiufeng; Wang, Zuhao; Wang, Xingqiao

    2012-01-01

    Research suggests that difficulty in making connections among three levels of chemical representations--macroscopic, submicroscopic, and symbolic--is a primary reason for student alternative conceptions of chemistry concepts, and computer modeling is promising to help students make the connections. However, no computer modeling-based assessment…

  6. A computational model of selection by consequences.

    PubMed

    McDowell, J J

    2004-05-01

    Darwinian selection by consequences was instantiated in a computational model that consisted of a repertoire of behaviors undergoing selection, reproduction, and mutation over many generations. The model in effect created a digital organism that emitted behavior continuously. The behavior of this digital organism was studied in three series of computational experiments that arranged reinforcement according to random-interval (RI) schedules. The quantitative features of the model were varied over wide ranges in these experiments, and many of the qualitative features of the model also were varied. The digital organism consistently showed a hyperbolic relation between response and reinforcement rates, and this hyperbolic description of the data was consistently better than the description provided by other, similar, function forms. In addition, the parameters of the hyperbola varied systematically with the quantitative, and some of the qualitative, properties of the model in ways that were consistent with findings from biological organisms. These results suggest that the material events responsible for an organism's responding on RI schedules are computationally equivalent to Darwinian selection by consequences. They also suggest that the computational model developed here is worth pursuing further as a possible dynamic account of behavior.
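    For readers unfamiliar with the hyperbolic form referenced above, the sketch below evaluates the matching-law hyperbola that the digital organism's behavior was reported to follow; the parameter values and function names are illustrative assumptions, not quantities taken from McDowell's model.

```python
# Illustrative sketch only (not McDowell's computational model): the
# hyperbolic response-rate equation B = k * r / (r + r_e), where B is the
# response rate, r the reinforcement rate, k the asymptotic response rate,
# and r_e the rate of "extraneous" reinforcement. All values are assumed.

def hyperbolic_response_rate(r, k=100.0, r_e=20.0):
    """Response rate predicted by the matching-law hyperbola."""
    return k * r / (r + r_e)

if __name__ == "__main__":
    for r in (5, 20, 80, 320):  # reinforcers per hour (assumed units)
        print(f"reinforcement rate {r:4d} -> response rate "
              f"{hyperbolic_response_rate(r):6.1f}")
```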

  7. The Utility of Cognitive Plausibility in Language Acquisition Modeling: Evidence from Word Segmentation

    ERIC Educational Resources Information Center

    Phillips, Lawrence; Pearl, Lisa

    2015-01-01

    The informativity of a computational model of language acquisition is directly related to how closely it approximates the actual acquisition task, sometimes referred to as the model's "cognitive plausibility." We suggest that though every computational model necessarily idealizes the modeled task, an informative language acquisition…

  8. Practical Use of Computationally Frugal Model Analysis Methods

    DOE PAGES

    Hill, Mary C.; Kavetski, Dmitri; Clark, Martyn; ...

    2015-03-21

    Computationally frugal methods of model analysis can provide substantial benefits when developing models of groundwater and other environmental systems. Model analysis includes ways to evaluate model adequacy and to perform sensitivity and uncertainty analysis. Frugal methods typically require 10s of parallelizable model runs; their convenience allows for other uses of the computational effort. We suggest that model analysis be posed as a set of questions used to organize methods that range from frugal to expensive (requiring 10,000 model runs or more). This encourages focus on method utility, even when methods have starkly different theoretical backgrounds. We note that many frugal methods are more useful when unrealistic process-model nonlinearities are reduced. Inexpensive diagnostics are identified for determining when frugal methods are advantageous. Examples from the literature are used to demonstrate local methods and the diagnostics. We suggest that the greater use of computationally frugal model analysis methods would allow questions such as those posed in this work to be addressed more routinely, allowing the environmental sciences community to obtain greater scientific insight from the many ongoing and future modeling efforts.
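    To make concrete what a frugal, local analysis costing only a handful of runs can look like, the sketch below computes one-at-a-time finite-difference sensitivities of a toy model output to its parameters; the toy model, parameter names, and values are assumptions for illustration and are not drawn from the cited work.

```python
# Minimal sketch of a computationally frugal, local sensitivity analysis:
# perturb each parameter one at a time and approximate d(output)/d(parameter)
# with a forward finite difference, costing only (n_parameters + 1) model runs.
# The toy "model" below stands in for an expensive environmental simulation.

def toy_model(params):
    recharge, conductivity = params
    return recharge / conductivity  # stand-in for a simulated head value

def local_sensitivities(model, params, rel_step=0.01):
    base = model(params)
    sens = []
    for i, p in enumerate(params):
        perturbed = list(params)
        perturbed[i] = p * (1.0 + rel_step)
        sens.append((model(perturbed) - base) / (p * rel_step))
    return base, sens

if __name__ == "__main__":
    base, sens = local_sensitivities(toy_model, [0.002, 1.5e-4])
    print("base output:", base)
    print("sensitivities:", sens)
```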

  9. A computational model of selection by consequences.

    PubMed Central

    McDowell, J J

    2004-01-01

    Darwinian selection by consequences was instantiated in a computational model that consisted of a repertoire of behaviors undergoing selection, reproduction, and mutation over many generations. The model in effect created a digital organism that emitted behavior continuously. The behavior of this digital organism was studied in three series of computational experiments that arranged reinforcement according to random-interval (RI) schedules. The quantitative features of the model were varied over wide ranges in these experiments, and many of the qualitative features of the model also were varied. The digital organism consistently showed a hyperbolic relation between response and reinforcement rates, and this hyperbolic description of the data was consistently better than the description provided by other, similar, function forms. In addition, the parameters of the hyperbola varied systematically with the quantitative, and some of the qualitative, properties of the model in ways that were consistent with findings from biological organisms. These results suggest that the material events responsible for an organism's responding on RI schedules are computationally equivalent to Darwinian selection by consequences. They also suggest that the computational model developed here is worth pursuing further as a possible dynamic account of behavior. PMID:15357512

  10. Automatic computation for optimum height planning of apartment buildings to improve solar access

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seong, Yoon-Bok; Kim, Yong-Yee; Seok, Ho-Tae

    2011-01-15

    The objective of this study is to suggest a mathematical model and an optimal algorithm for determining the height of apartment buildings to satisfy the solar rights of survey buildings or survey housing units. The objective is also to develop an automatic computation model for the optimum height of apartment buildings and then to clarify its performance and expected effects. To accomplish the objective of this study, the following procedures were followed: (1) The necessity of the height planning of obstruction buildings to satisfy the solar rights of survey buildings or survey housing units is demonstrated by analyzing, through a literature review, the recent trend of disputes related to solar rights and by examining the social requirements in terms of solar rights. In addition, the necessity of the automatic computation system for height planning of apartment buildings is demonstrated and a suitable analysis method for this system is chosen by investigating the characteristics of analysis methods for solar rights assessment. (2) A case study on the process of height planning of apartment buildings is briefly described and the problems occurring in this process are then examined carefully. (3) To develop an automatic computation model for height planning of apartment buildings, geometrical elements forming apartment buildings are defined by analyzing the geometrical characteristics of apartment buildings. In addition, design factors and regulations required in height planning of apartment buildings are investigated. Based on this knowledge, the methodology and mathematical algorithm to adjust the height of apartment buildings by automatic computation are suggested, and probable problems and ways to resolve them are discussed. Finally, the methodology and algorithm for the optimization are suggested. (4) Based on the suggested methodology and mathematical algorithm, the automatic computation model for optimum height of apartment buildings is developed and the developed system is verified through the application of some cases. The effects of the suggested model are then demonstrated quantitatively and qualitatively. (author)

  11. Modeling Education on the Real World.

    ERIC Educational Resources Information Center

    Hunter, Beverly

    1983-01-01

    Offers and discusses three suggestions to capitalize on two developments related to system dynamics modeling and simulation. These developments are a junior/senior high textbook called "Introduction to Computer Simulation" and Micro-DYNAMO, a computer simulation language for microcomputers. (Author/JN)

  12. Matter Gravitates, but Does Gravity Matter?

    ERIC Educational Resources Information Center

    Groetsch, C. W.

    2011-01-01

    The interplay of physical intuition, computational evidence, and mathematical rigor in a simple trajectory model is explored. A thought experiment based on the model is used to elicit student conjectures on the influence of a physical parameter; a mathematical model suggests a computational investigation of the conjectures, and rigorous analysis…

  13. Modeling User Behavior in Computer Learning Tasks.

    ERIC Educational Resources Information Center

    Mantei, Marilyn M.

    Model building techniques from Artificial Intelligence and Information-Processing Psychology are applied to human-computer interface tasks to evaluate existing interfaces and suggest new and better ones. The model is in the form of an augmented transition network (ATN) grammar which is built by applying grammar induction heuristics on a sequential…

  14. Computational complexity of symbolic dynamics at the onset of chaos

    NASA Astrophysics Data System (ADS)

    Lakdawala, Porus

    1996-05-01

    In a variety of studies of dynamical systems, the edge of order and chaos has been singled out as a region of complexity. It was suggested by Wolfram, on the basis of qualitative behavior of cellular automata, that the computational basis for modeling this region is the universal Turing machine. In this paper, following a suggestion of Crutchfield, we try to show that the Turing machine model may often be too powerful as a computational model to describe the boundary of order and chaos. In particular we study the region of the first accumulation of period doubling in unimodal and bimodal maps of the interval, from the point of view of language theory. We show that in relation to the ``extended'' Chomsky hierarchy, the relevant computational model in the unimodal case is the nested stack automaton or the related indexed languages, while the bimodal case is modeled by the linear bounded automaton or the related context-sensitive languages.

  15. Computer programs for forward and inverse modeling of acoustic and electromagnetic data

    USGS Publications Warehouse

    Ellefsen, Karl J.

    2011-01-01

    A suite of computer programs was developed by U.S. Geological Survey personnel for forward and inverse modeling of acoustic and electromagnetic data. This report describes the computer resources that are needed to execute the programs, the installation of the programs, the program designs, some tests of their accuracy, and some suggested improvements.

  16. Integration of Gravitational Torques in Cerebellar Pathways Allows for the Dynamic Inverse Computation of Vertical Pointing Movements of a Robot Arm

    PubMed Central

    Gentili, Rodolphe J.; Papaxanthis, Charalambos; Ebadzadeh, Mehdi; Eskiizmirliler, Selim; Ouanezar, Sofiane; Darlot, Christian

    2009-01-01

    Background Several authors suggested that gravitational forces are centrally represented in the brain for planning, control and sensorimotor predictions of movements. Furthermore, some studies proposed that the cerebellum computes the inverse dynamics (internal inverse model) whereas others suggested that it computes sensorimotor predictions (internal forward model). Methodology/Principal Findings This study proposes a model of cerebellar pathways deduced from both biological and physical constraints. The model learns the dynamic inverse computation of the effect of gravitational torques from its sensorimotor predictions without calculating an explicit inverse computation. By using supervised learning, this model learns to control an anthropomorphic robot arm actuated by two antagonists McKibben artificial muscles. This was achieved by using internal parallel feedback loops containing neural networks which anticipate the sensorimotor consequences of the neural commands. The artificial neural networks architecture was similar to the large-scale connectivity of the cerebellar cortex. Movements in the sagittal plane were performed during three sessions combining different initial positions, amplitudes and directions of movements to vary the effects of the gravitational torques applied to the robotic arm. The results show that this model acquired an internal representation of the gravitational effects during vertical arm pointing movements. Conclusions/Significance This is consistent with the proposal that the cerebellar cortex contains an internal representation of gravitational torques which is encoded through a learning process. Furthermore, this model suggests that the cerebellum performs the inverse dynamics computation based on sensorimotor predictions. This highlights the importance of sensorimotor predictions of gravitational torques acting on upper limb movements performed in the gravitational field. PMID:19384420
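    As a small illustration of the gravitational torques such a controller must compensate, the sketch below evaluates the textbook single-link torque formula tau = m * g * l_c * cos(theta); the mass, center-of-mass distance, and angles are assumed values, not parameters of the robot arm used in the study.

```python
import math

# Illustrative only: gravitational torque about the shoulder joint of a single
# rigid link, tau = m * g * l_c * cos(theta), where theta is the arm's
# elevation from the horizontal. Mass and center-of-mass distance are assumed
# values, not those of the McKibben-actuated arm in the cited study.

def gravity_torque(theta_deg, mass_kg=2.0, com_m=0.25, g=9.81):
    return mass_kg * g * com_m * math.cos(math.radians(theta_deg))

if __name__ == "__main__":
    for theta in (0, 30, 60, 90):
        print(f"theta = {theta:3d} deg -> gravitational torque "
              f"{gravity_torque(theta):5.2f} N*m")
```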

  17. Computing Support for Basic Research in Perception and Cognition

    DTIC Science & Technology

    1988-12-07

    hearing aids and cochlear implants, this suggests that certain types of proposed coding schemes, specifically those employing periodicity tuning in ... developing a computer model of the interaction of declarative and procedural knowledge in skill acquisition. In the Visual Psychophysics Laboratory ... Psycholinguistics Laboratory, a computer model of text comprehension and recall has been constructed and several experiments have been completed that verify basic

  18. Breast tumor malignancy modelling using evolutionary neural logic networks.

    PubMed

    Tsakonas, Athanasios; Dounias, Georgios; Panagi, Georgia; Panourgias, Evangelia

    2006-01-01

    The present work proposes a computer assisted methodology for the effective modelling of the diagnostic decision for breast tumor malignancy. The suggested approach is based on innovative hybrid computational intelligence algorithms properly applied to related cytological data contained in past medical records. The experimental data used in this study were gathered in the early 1990s at the University of Wisconsin, based on post-diagnostic cytological observations performed by expert medical staff. Data were properly encoded in a computer database and, accordingly, various alternative modelling techniques were applied to them in an attempt to form diagnostic models. Previous methods included standard optimisation techniques as well as artificial intelligence approaches, so a variety of related publications exists in the modern literature on the subject. In this report, a hybrid computational intelligence approach is suggested, which effectively combines modern mathematical logic principles, neural computation and genetic programming. The approach proves promising both in terms of diagnostic accuracy and generalization capabilities and in terms of comprehensibility and practical importance for the related medical staff.

  19. User's manual for master: Modeling of aerodynamic surfaces by 3-dimensional explicit representation. [input to three dimensional computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Gibson, S. G.

    1983-01-01

    A system of computer programs was developed to model general three dimensional surfaces. Surfaces are modeled as sets of parametric bicubic patches. There are also capabilities to transform coordinates, to compute mesh/surface intersection normals, and to format input data for a transonic potential flow analysis. A graphical display of surface models and intersection normals is available. There are additional capabilities to regulate point spacing on input curves and to compute surface/surface intersection curves. Input and output data formats are described; detailed suggestions are given for user input. Instructions for execution are given, and examples are shown.

  20. Computational modeling of neural plasticity for self-organization of neural networks.

    PubMed

    Chrol-Cannon, Joseph; Jin, Yaochu

    2014-11-01

    Self-organization in biological nervous systems during the lifetime is known to largely occur through a process of plasticity that is dependent upon the spike-timing activity in connected neurons. In the field of computational neuroscience, much effort has been dedicated to building up computational models of neural plasticity to replicate experimental data. Most recently, increasing attention has been paid to understanding the role of neural plasticity in functional and structural neural self-organization, as well as its influence on the learning performance of neural networks for accomplishing machine learning tasks such as classification and regression. Although many ideas and hypotheses have been suggested, the relationship between the structure, dynamics and learning performance of neural networks remains elusive. The purpose of this article is to review the most important computational models for neural plasticity and discuss various ideas about neural plasticity's role. Finally, we suggest a few promising research directions, in particular those along the line that combines findings in computational neuroscience and systems biology, and their synergetic roles in understanding learning, memory and cognition, thereby bridging the gap between computational neuroscience, systems biology and computational intelligence. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
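    As background for the plasticity models surveyed in this review, the sketch below implements the standard pair-based STDP weight update that many such models build on; the amplitudes and time constants are generic illustrative values rather than those of any specific model discussed.

```python
import math

# Minimal sketch of pair-based spike-timing-dependent plasticity (STDP):
# a synapse is potentiated when the presynaptic spike precedes the
# postsynaptic spike (dt > 0) and depressed otherwise. Amplitudes and
# time constants below are generic illustrative values.

def stdp_dw(dt_ms, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Weight change for one pre/post spike pair separated by dt_ms
    (postsynaptic minus presynaptic spike time, in milliseconds)."""
    if dt_ms > 0:
        return a_plus * math.exp(-dt_ms / tau_plus)
    return -a_minus * math.exp(dt_ms / tau_minus)

if __name__ == "__main__":
    for dt in (-40, -10, -1, 1, 10, 40):
        print(f"dt = {dt:+4d} ms -> dw = {stdp_dw(dt):+.4f}")
```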

  1. Bayesian Computation Emerges in Generic Cortical Microcircuits through Spike-Timing-Dependent Plasticity

    PubMed Central

    Nessler, Bernhard; Pfeiffer, Michael; Buesing, Lars; Maass, Wolfgang

    2013-01-01

    The principles by which networks of neurons compute, and how spike-timing dependent plasticity (STDP) of synaptic weights generates and maintains their computational function, are unknown. Preceding work has shown that soft winner-take-all (WTA) circuits, where pyramidal neurons inhibit each other via interneurons, are a common motif of cortical microcircuits. We show through theoretical analysis and computer simulations that Bayesian computation is induced in these network motifs through STDP in combination with activity-dependent changes in the excitability of neurons. The fundamental components of this emergent Bayesian computation are priors that result from adaptation of neuronal excitability and implicit generative models for hidden causes that are created in the synaptic weights through STDP. In fact, a surprising result is that STDP is able to approximate a powerful principle for fitting such implicit generative models to high-dimensional spike inputs: Expectation Maximization. Our results suggest that the experimentally observed spontaneous activity and trial-to-trial variability of cortical neurons are essential features of their information processing capability, since their functional role is to represent probability distributions rather than static neural codes. Furthermore it suggests networks of Bayesian computation modules as a new model for distributed information processing in the cortex. PMID:23633941

  2. Improving the Efficiency of Abdominal Aortic Aneurysm Wall Stress Computations

    PubMed Central

    Zelaya, Jaime E.; Goenezen, Sevan; Dargon, Phong T.; Azarbal, Amir-Farzin; Rugonyi, Sandra

    2014-01-01

    An abdominal aortic aneurysm is a pathological dilation of the abdominal aorta, which carries a high mortality rate if ruptured. The most commonly used surrogate marker of rupture risk is the maximal transverse diameter of the aneurysm. More recent studies suggest that wall stress from models of patient-specific aneurysm geometries extracted, for instance, from computed tomography images may be a more accurate predictor of rupture risk and an important factor in AAA size progression. However, quantification of wall stress is typically computationally intensive and time-consuming, mainly due to the nonlinear mechanical behavior of the abdominal aortic aneurysm walls. These difficulties have limited the potential of computational models in clinical practice. To facilitate computation of wall stresses, we propose to use a linear approach that ensures equilibrium of wall stresses in the aneurysms. This proposed linear model approach is easy to implement and eliminates the burden of nonlinear computations. To assess the accuracy of our proposed approach to compute wall stresses, results from idealized and patient-specific model simulations were compared to those obtained using conventional approaches and to those of a hypothetical, reference abdominal aortic aneurysm model. For the reference model, wall mechanical properties and the initial unloaded and unstressed configuration were assumed to be known, and the resulting wall stresses were used as reference for comparison. Our proposed linear approach accurately approximates wall stresses for varying model geometries and wall material properties. Our findings suggest that the proposed linear approach could be used as an effective, efficient, easy-to-use clinical tool to estimate patient-specific wall stresses. PMID:25007052
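    The linear equilibrium approach itself is not reproduced here, but as a rough illustration of how an equilibrium argument alone can yield a wall-stress estimate, the sketch below applies the classic thin-walled Laplace-law approximation; this is a textbook simplification with assumed pressure and geometry, not the authors' proposed method.

```python
# Rough illustration only: the Laplace-law estimate of circumferential wall
# stress for a thin-walled, approximately cylindrical vessel, sigma = P * r / t.
# This textbook equilibrium estimate is NOT the linear approach proposed in the
# cited paper; it only shows how equilibrium considerations alone can produce a
# wall-stress number. All values below are assumed.

def laplace_wall_stress(pressure_pa, radius_m, thickness_m):
    return pressure_pa * radius_m / thickness_m

if __name__ == "__main__":
    p = 16000.0   # roughly 120 mmHg in pascals (assumed systolic pressure)
    r = 0.027     # 5.4 cm aneurysm diameter -> 2.7 cm radius (assumed)
    t = 0.0019    # 1.9 mm wall thickness (assumed)
    print(f"estimated wall stress: {laplace_wall_stress(p, r, t) / 1000:.0f} kPa")
```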

  3. Creating, documenting and sharing network models.

    PubMed

    Crook, Sharon M; Bednar, James A; Berger, Sandra; Cannon, Robert; Davison, Andrew P; Djurfeldt, Mikael; Eppler, Jochen; Kriener, Birgit; Furber, Steve; Graham, Bruce; Plesser, Hans E; Schwabe, Lars; Smith, Leslie; Steuber, Volker; van Albada, Sacha

    2012-01-01

    As computational neuroscience matures, many simulation environments are available that are useful for neuronal network modeling. However, methods for successfully documenting models for publication and for exchanging models and model components among these projects are still under development. Here we briefly review existing software and applications for network model creation, documentation and exchange. Then we discuss a few of the larger issues facing the field of computational neuroscience regarding network modeling and suggest solutions to some of these problems, concentrating in particular on standardized network model terminology, notation, and descriptions and explicit documentation of model scaling. We hope this will enable and encourage computational neuroscientists to share their models more systematically in the future.

  4. A convolution model for computing the far-field directivity of a parametric loudspeaker array.

    PubMed

    Shi, Chuang; Kajikawa, Yoshinobu

    2015-02-01

    This paper describes a method to compute the far-field directivity of a parametric loudspeaker array (PLA), whereby a steerable parametric loudspeaker can be implemented when phased array techniques are applied. The convolution of the product directivity and the Westervelt directivity is suggested, substituting for the past practice of using the product directivity only. The computed directivity of a PLA using the proposed convolution model achieves significant improvement in agreement with measured directivity at a negligible computational cost.
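    To make the convolution idea concrete, the sketch below convolves two assumed angular directivity patterns standing in for the product directivity and the Westervelt directivity; the Gaussian shapes and beamwidths are placeholders, not the patterns derived or measured in the paper.

```python
import numpy as np

# Illustrative sketch of the convolution idea: approximate the far-field
# directivity of a parametric loudspeaker array by convolving the product
# directivity with the Westervelt directivity over angle. The two Gaussian
# patterns below are placeholders, not the directivities from the cited paper.

theta = np.linspace(-90.0, 90.0, 361)               # angle grid in degrees
product_dir = np.exp(-0.5 * (theta / 10.0) ** 2)    # assumed product directivity
westervelt_dir = np.exp(-0.5 * (theta / 4.0) ** 2)  # assumed Westervelt directivity

combined = np.convolve(product_dir, westervelt_dir, mode="same")
combined /= combined.max()                           # normalize to unity on axis

half_power = theta[combined >= 0.5]
print("half-power beamwidth (deg):", half_power[-1] - half_power[0])
```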

  5. An empirical generative framework for computational modeling of language acquisition.

    PubMed

    Waterfall, Heidi R; Sandbank, Ben; Onnis, Luca; Edelman, Shimon

    2010-06-01

    This paper reports progress in developing a computer model of language acquisition in the form of (1) a generative grammar that is (2) algorithmically learnable from realistic corpus data, (3) viable in its large-scale quantitative performance and (4) psychologically real. First, we describe new algorithmic methods for unsupervised learning of generative grammars from raw CHILDES data and give an account of the generative performance of the acquired grammars. Next, we summarize findings from recent longitudinal and experimental work that suggests how certain statistically prominent structural properties of child-directed speech may facilitate language acquisition. We then present a series of new analyses of CHILDES data indicating that the desired properties are indeed present in realistic child-directed speech corpora. Finally, we suggest how our computational results, behavioral findings, and corpus-based insights can be integrated into a next-generation model aimed at meeting the four requirements of our modeling framework.

  6. The Human-Computer Interface and Information Literacy: Some Basics and Beyond.

    ERIC Educational Resources Information Center

    Church, Gary M.

    1999-01-01

    Discusses human/computer interaction research, human/computer interface, and their relationships to information literacy. Highlights include communication models; cognitive perspectives; task analysis; theory of action; problem solving; instructional design considerations; and a suggestion that human/information interface may be a more appropriate…

  7. Enhancing Human-Computer Interaction Design Education: Teaching Affordance Design for Emerging Mobile Devices

    ERIC Educational Resources Information Center

    Faiola, Anthony; Matei, Sorin Adam

    2010-01-01

    The evolution of human-computer interaction design (HCID) over the last 20 years suggests that there is a growing need for educational scholars to consider new and more applicable theoretical models of interactive product design. The authors suggest that such paradigms would call for an approach that would equip HCID students with a better…

  8. Suggestions for CAP-TSD mesh and time-step input parameters

    NASA Technical Reports Server (NTRS)

    Bland, Samuel R.

    1991-01-01

    Suggestions for some of the input parameters used in the CAP-TSD (Computational Aeroelasticity Program-Transonic Small Disturbance) computer code are presented. These parameters include those associated with the mesh design and time step. The guidelines are based principally on experience with a one-dimensional model problem used to study wave propagation in the vertical direction.

  9. Reverse logistics system planning for recycling computers hardware: A case study

    NASA Astrophysics Data System (ADS)

    Januri, Siti Sarah; Zulkipli, Faridah; Zahari, Siti Meriam; Shamsuri, Siti Hajar

    2014-09-01

    This paper describes the modeling and simulation of reverse logistics networks for the collection of used computers at a company in Selangor. The study focuses on the design of a reverse logistics network for a used-computer recycling operation. Simulation modeling, presented in this work, allows the user to analyze the future performance of the network and to understand the complex relationships between the parties involved. The findings from the simulation suggest that the model calculates processing time and resource utilization in a predictable manner. In this study, the simulation model was developed using the Arena simulation package.

  10. Cognitive control predicts use of model-based reinforcement learning.

    PubMed

    Otto, A Ross; Skatova, Anya; Madlon-Kay, Seth; Daw, Nathaniel D

    2015-02-01

    Accounts of decision-making and its neural substrates have long posited the operation of separate, competing valuation systems in the control of choice behavior. Recent theoretical and experimental work suggest that this classic distinction between behaviorally and neurally dissociable systems for habitual and goal-directed (or more generally, automatic and controlled) choice may arise from two computational strategies for reinforcement learning (RL), called model-free and model-based RL, but the cognitive or computational processes by which one system may dominate over the other in the control of behavior is a matter of ongoing investigation. To elucidate this question, we leverage the theoretical framework of cognitive control, demonstrating that individual differences in utilization of goal-related contextual information--in the service of overcoming habitual, stimulus-driven responses--in established cognitive control paradigms predict model-based behavior in a separate, sequential choice task. The behavioral correspondence between cognitive control and model-based RL compellingly suggests that a common set of processes may underpin the two behaviors. In particular, computational mechanisms originally proposed to underlie controlled behavior may be applicable to understanding the interactions between model-based and model-free choice behavior.
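    For readers unfamiliar with the model-free versus model-based distinction, the sketch below contrasts a cached, prediction-error-driven Q-learning update with a value computed by lookahead through a learned transition model; the toy two-state task and all numbers are illustrative assumptions, not the sequential choice task used in the study.

```python
# Minimal sketch contrasting the two RL strategies named in this abstract.
# Model-free: cache action values directly from reward prediction errors.
# Model-based: learn a transition/reward model and compute values by lookahead.
# The two-state toy task and all numbers are illustrative assumptions.

alpha, gamma = 0.1, 0.9
actions = ("a0",)

# --- model-free Q-learning update for one experienced transition ---
q = {("s0", "a0"): 0.0, ("s1", "a0"): 0.0}
s, a, r, s_next = "s0", "a0", 1.0, "s1"
best_next = max(q[(s_next, a2)] for a2 in actions)
q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])

# --- model-based evaluation: shallow lookahead through a learned model ---
transition = {("s0", "a0"): "s1"}                     # learned model (assumed)
reward_model = {("s0", "a0"): 1.0, ("s1", "a0"): 0.0}

def model_based_value(state, action, depth=2):
    if depth == 0:
        return 0.0
    r = reward_model.get((state, action), 0.0)
    nxt = transition.get((state, action))
    if nxt is None:
        return r
    return r + gamma * max(model_based_value(nxt, a2, depth - 1) for a2 in actions)

print("model-free Q(s0,a0) after one update:", round(q[("s0", "a0")], 3))
print("model-based value(s0,a0):", round(model_based_value("s0", "a0"), 3))
```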

  11. Computational models of airway branching morphogenesis.

    PubMed

    Varner, Victor D; Nelson, Celeste M

    2017-07-01

    The bronchial network of the mammalian lung consists of millions of dichotomous branches arranged in a highly complex, space-filling tree. Recent computational models of branching morphogenesis in the lung have helped uncover the biological mechanisms that construct this ramified architecture. In this review, we focus on three different theoretical approaches - geometric modeling, reaction-diffusion modeling, and continuum mechanical modeling - and discuss how, taken together, these models have identified the geometric principles necessary to build an efficient bronchial network, as well as the patterning mechanisms that specify airway geometry in the developing embryo. We emphasize models that are integrated with biological experiments and suggest how recent progress in computational modeling has advanced our understanding of airway branching morphogenesis. Copyright © 2016 Elsevier Ltd. All rights reserved.

  12. Beyond Computer Planning: Managing Educational Computer Innovations.

    ERIC Educational Resources Information Center

    Washington, Wenifort

    The vast underutilization of technology in educational environments suggests the need for more research to develop models to successfully adopt and diffuse computer systems in schools. Of 980 surveys mailed to various Ohio public schools, 529 were completed and returned to help determine current attitudes and perceptions of teachers and…

  13. Elementary Computer Literacy. Student Activity Handbook.

    ERIC Educational Resources Information Center

    Sather, Ruth; And Others

    This workbook of ideas and activities is designed for use in correlation with the curriculum guide "Elementary Computer Literacy," which contains the answer key and suggestions for use. The Apple II microcomputer is used as an example, but the material is adaptable to other computer models. Varied activities provide practice in drawing,…

  14. A Seminar in Mathematical Model-Building.

    ERIC Educational Resources Information Center

    Smith, David A.

    1979-01-01

    A course in mathematical model-building is described. Suggested modeling projects include: urban problems, biology and ecology, economics, psychology, games and gaming, cosmology, medicine, history, computer science, energy, and music. (MK)

  15. Computational Fluid Dynamics (CFD): Future role and requirements as viewed by an applied aerodynamicist. [computer systems design

    NASA Technical Reports Server (NTRS)

    Yoshihara, H.

    1978-01-01

    The problem of designing the wing-fuselage configuration of an advanced transonic commercial airliner and the optimization of a supercruiser fighter are sketched, pointing out the essential fluid mechanical phenomena that play an important role. Such problems suggest that for a numerical method to be useful, it must be able to treat highly three-dimensional turbulent separations, flows with jet engine exhausts, and complex vehicle configurations. Weaknesses of the two principal tools of the aerodynamicist, the wind tunnel and the computer, suggest a complementary combined use of these tools, which is illustrated by the case of the transonic wing-fuselage design. The anticipated difficulties in developing an adequate turbulent transport model suggest that such an approach may have to suffice for an extended period. On a longer term, experimentation on turbulent transport in meaningful cases must be intensified to provide a data base for both modeling and theory validation purposes.

  16. Personalized cloud-based bioinformatics services for research and education: use cases and the elasticHPC package

    PubMed Central

    2012-01-01

    Background Bioinformatics services have been traditionally provided in the form of a web-server that is hosted at institutional infrastructure and serves multiple users. This model, however, is not flexible enough to cope with the increasing number of users, increasing data size, and new requirements in terms of speed and availability of service. The advent of cloud computing suggests a new service model that provides an efficient solution to these problems, based on the concepts of "resources-on-demand" and "pay-as-you-go". However, cloud computing has not yet been introduced within bioinformatics servers due to the lack of usage scenarios and software layers that address the requirements of the bioinformatics domain. Results In this paper, we provide different use case scenarios for providing cloud computing based services, considering both the technical and financial aspects of the cloud computing service model. These scenarios are for individual users seeking computational power as well as bioinformatics service providers aiming at provision of personalized bioinformatics services to their users. We also present elasticHPC, a software package and a library that facilitates the use of high performance cloud computing resources in general and the implementation of the suggested bioinformatics scenarios in particular. Concrete examples that demonstrate the suggested use case scenarios with whole bioinformatics servers and major sequence analysis tools like BLAST are presented. Experimental results with large datasets are also included to show the advantages of the cloud model. Conclusions Our use case scenarios and the elasticHPC package are steps towards the provision of cloud based bioinformatics services, which would help in overcoming the data challenge of recent biological research. All resources related to elasticHPC and its web-interface are available at http://www.elasticHPC.org. PMID:23281941

  17. Personalized cloud-based bioinformatics services for research and education: use cases and the elasticHPC package.

    PubMed

    El-Kalioby, Mohamed; Abouelhoda, Mohamed; Krüger, Jan; Giegerich, Robert; Sczyrba, Alexander; Wall, Dennis P; Tonellato, Peter

    2012-01-01

    Bioinformatics services have been traditionally provided in the form of a web-server that is hosted at institutional infrastructure and serves multiple users. This model, however, is not flexible enough to cope with the increasing number of users, increasing data size, and new requirements in terms of speed and availability of service. The advent of cloud computing suggests a new service model that provides an efficient solution to these problems, based on the concepts of "resources-on-demand" and "pay-as-you-go". However, cloud computing has not yet been introduced within bioinformatics servers due to the lack of usage scenarios and software layers that address the requirements of the bioinformatics domain. In this paper, we provide different use case scenarios for providing cloud computing based services, considering both the technical and financial aspects of the cloud computing service model. These scenarios are for individual users seeking computational power as well as bioinformatics service providers aiming at provision of personalized bioinformatics services to their users. We also present elasticHPC, a software package and a library that facilitates the use of high performance cloud computing resources in general and the implementation of the suggested bioinformatics scenarios in particular. Concrete examples that demonstrate the suggested use case scenarios with whole bioinformatics servers and major sequence analysis tools like BLAST are presented. Experimental results with large datasets are also included to show the advantages of the cloud model. Our use case scenarios and the elasticHPC package are steps towards the provision of cloud based bioinformatics services, which would help in overcoming the data challenge of recent biological research. All resources related to elasticHPC and its web-interface are available at http://www.elasticHPC.org.

  18. Longitudinal train dynamics: an overview

    NASA Astrophysics Data System (ADS)

    Wu, Qing; Spiryagin, Maksym; Cole, Colin

    2016-12-01

    This paper discusses the evolution of longitudinal train dynamics (LTD) simulations, which covers numerical solvers, vehicle connection systems, air brake systems, wagon dumper systems and locomotives, resistance forces and gravitational components, vehicle in-train instabilities, and computing schemes. A number of potential research topics are suggested, such as modelling of friction, polymer, and transition characteristics for vehicle connection simulations, studies of wagon dumping operations, proper modelling of vehicle in-train instabilities, and computing schemes for LTD simulations. Evidence shows that LTD simulations have evolved with computing capabilities. Currently, advanced component models that directly describe the working principles of the operation of air brake systems, vehicle connection systems, and traction systems are available. Parallel computing is a good solution to combine and simulate all these advanced models. Parallel computing can also be used to conduct three-dimensional long train dynamics simulations.

  19. Computation of turbulent flows-state-of-the-art, 1970

    NASA Technical Reports Server (NTRS)

    Reynolds, W. C.

    1972-01-01

    The state-of-the-art of turbulent flow computation is surveyed. The formulations were generalized to increase the range of their applicability, and the excitement of current debate on equation models was brought into the review. Some new ideas on the modeling of the pressure-strain term in the Reynolds stress equations are also suggested.

  20. A computational model for telomere-dependent cell-replicative aging.

    PubMed

    Portugal, R D; Land, M G P; Svaiter, B F

    2008-01-01

    Telomere shortening provides a molecular basis for the Hayflick limit. Recent data suggest that telomere shortening also influences mitotic rate. We propose a stochastic growth model of this phenomenon, assuming that cell division in each time interval is a random process whose probability decreases linearly with telomere shortening. Computer simulations of the proposed stochastic telomere-regulated model provide a good approximation of the qualitative growth of cultured human mesenchymal stem cells.
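    A minimal sketch of the stated modeling assumption, division probability falling linearly with telomere shortening, is given below; the initial telomere length, baseline division probability, and population size are assumed values and the code is not the authors' implementation.

```python
import random

# Sketch of the stated modeling assumption, not the authors' implementation:
# each cell divides in a time step with a probability that decreases linearly
# as its remaining telomere length shrinks, and stops dividing once the
# telomere is exhausted (a Hayflick-style limit). Initial telomere length,
# loss per division, and the baseline probability are assumed values.

def simulate_population(n_cells=20, steps=50, telomere0=30, p_max=0.1, seed=1):
    random.seed(seed)
    telomeres = [telomere0] * n_cells
    for _ in range(steps):
        newborns = []
        for i, t in enumerate(telomeres):
            p_divide = p_max * t / telomere0   # linear decrease with shortening
            if t > 0 and random.random() < p_divide:
                telomeres[i] = t - 1            # each division erodes the telomere
                newborns.append(t - 1)
        telomeres.extend(newborns)
    return len(telomeres)

if __name__ == "__main__":
    print("final population size:", simulate_population())
```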

  1. Quantitative, steady-state properties of Catania's computational model of the operant reserve.

    PubMed

    Berg, John P; McDowell, J J

    2011-05-01

    Catania (2005) found that a computational model of the operant reserve (Skinner, 1938) produced realistic behavior in initial, exploratory analyses. Although Catania's operant reserve computational model demonstrated potential to simulate varied behavioral phenomena, the model was not systematically tested. The current project replicated and extended the Catania model, clarified its capabilities through systematic testing, and determined the extent to which it produces behavior corresponding to matching theory. Significant departures from both classic and modern matching theory were found in behavior generated by the model across all conditions. The results suggest that a simple, dynamic operant model of the reflex reserve does not simulate realistic steady state behavior. Copyright © 2011 Elsevier B.V. All rights reserved.

  2. In Silico Modeling: Methods and Applications to Trauma and Sepsis

    PubMed Central

    Vodovotz, Yoram; Billiar, Timothy R.

    2013-01-01

    Objective To familiarize clinicians with advances in computational disease modeling applied to trauma and sepsis. Data Sources PubMed search and review of relevant medical literature. Summary Definitions, key methods, and applications of computational modeling to trauma and sepsis are reviewed. Conclusions Computational modeling of inflammation and organ dysfunction at the cellular, organ, whole-organism, and population levels has suggested a positive feedback cycle of inflammation → damage → inflammation that manifests via organ-specific inflammatory switching networks. This structure may manifest as multi-compartment “tipping points” that drive multiple organ dysfunction. This process may be amenable to rational inflammation reprogramming. PMID:23863232

  3. Risk prediction and aversion by anterior cingulate cortex.

    PubMed

    Brown, Joshua W; Braver, Todd S

    2007-12-01

    The recently proposed error-likelihood hypothesis suggests that anterior cingulate cortex (ACC) and surrounding areas will become active in proportion to the perceived likelihood of an error. The hypothesis was originally derived from a computational model prediction. The same computational model now makes a further prediction that ACC will be sensitive not only to predicted error likelihood, but also to the predicted magnitude of the consequences, should an error occur. The product of error likelihood and predicted error consequence magnitude collectively defines the general "expected risk" of a given behavior in a manner analogous but orthogonal to subjective expected utility theory. New fMRI results from an incentive change signal task now replicate the error-likelihood effect, validate the further predictions of the computational model, and suggest why some segments of the population may fail to show an error-likelihood effect. In particular, error-likelihood effects and expected risk effects in general indicate greater sensitivity to earlier predictors of errors and are seen in risk-averse but not risk-tolerant individuals. Taken together, the results are consistent with an expected risk model of ACC and suggest that ACC may generally contribute to cognitive control by recruiting brain activity to avoid risk.
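    The "expected risk" quantity described here is simply the product of predicted error likelihood and predicted error-consequence magnitude; the sketch below tabulates it for a few hypothetical cue conditions whose probabilities and magnitudes are assumed for illustration.

```python
# Sketch of the "expected risk" quantity described in the abstract: the product
# of the predicted likelihood of an error and the predicted magnitude of its
# consequences, analogous (but orthogonal) to expected utility. The cue
# conditions and numbers below are illustrative assumptions.

def expected_risk(p_error, consequence_magnitude):
    return p_error * consequence_magnitude

conditions = {
    "low error likelihood, small loss":  (0.1, 1.0),
    "high error likelihood, small loss": (0.5, 1.0),
    "low error likelihood, large loss":  (0.1, 5.0),
    "high error likelihood, large loss": (0.5, 5.0),
}

for name, (p, m) in conditions.items():
    print(f"{name:36s} expected risk = {expected_risk(p, m):.2f}")
```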

  4. Reverse time migration by Krylov subspace reduced order modeling

    NASA Astrophysics Data System (ADS)

    Basir, Hadi Mahdavi; Javaherian, Abdolrahim; Shomali, Zaher Hossein; Firouz-Abadi, Roohollah Dehghani; Gholamy, Shaban Ali

    2018-04-01

    Imaging is a key step in seismic data processing. To date, a myriad of advanced pre-stack depth migration approaches have been developed; however, reverse time migration (RTM) is still considered the high-end imaging algorithm. The main limitations associated with the performance cost of reverse time migration are the intensive computation of the forward and backward simulations, time consumption, and the memory allocation related to the imaging condition. Based on reduced order modeling, we proposed an algorithm that can be adapted to address all the aforementioned factors. Our proposed method benefits from the Krylov subspace method to compute certain mode shapes of the velocity model, which are used as an orthogonal basis for the reduced order model. Reverse time migration by reduced order modeling lends itself to highly parallel computation and strongly reduces the memory requirement of reverse time migration. The synthetic model results showed that the suggested method can decrease the computational costs of reverse time migration by several orders of magnitude compared with reverse time migration by the finite element method.
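    As a sketch of the reduced-order-modeling ingredient named in this abstract, the code below builds an orthonormal Krylov basis with the Arnoldi process and projects a large operator onto it; the random symmetric matrix stands in for a discretized wave operator, and all sizes are assumptions rather than details of the proposed algorithm.

```python
import numpy as np

# Sketch of a generic Krylov-subspace reduced order model: build an orthonormal
# Krylov basis for (A, b) with the Arnoldi process and project a large operator
# onto it. The random symmetric matrix below stands in for a discretized wave
# operator; sizes and data are assumptions, not the cited method's details.

def arnoldi(A, b, m):
    n = len(b)
    Q = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    Q[:, 0] = b / np.linalg.norm(b)
    for j in range(m):
        v = A @ Q[:, j]
        for i in range(j + 1):          # orthogonalize against previous vectors
            H[i, j] = Q[:, i] @ v
            v -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(v)
        Q[:, j + 1] = v / H[j + 1, j]
    return Q[:, :m], H[:m, :m]

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 200))
A = (A + A.T) / 2                       # symmetric stand-in operator
b = rng.standard_normal(200)

Q, H = arnoldi(A, b, 20)
A_reduced = Q.T @ A @ Q                 # 20x20 reduced operator used in place of A
print("reduced operator shape:", A_reduced.shape)
```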

  5. Cloud Computing Value Chains: Understanding Businesses and Value Creation in the Cloud

    NASA Astrophysics Data System (ADS)

    Mohammed, Ashraf Bany; Altmann, Jörn; Hwang, Junseok

    Based on the promising developments in Cloud Computing technologies in recent years, commercial computing resource services (e.g. Amazon EC2) or software-as-a-service offerings (e.g. Salesforce.com) came into existence. However, the relatively weak business exploitation, participation, and adoption of other Cloud Computing services remain the main challenges. The vague value structures seem to be hindering business adoption and the creation of sustainable business models around its technology. Using an extensive analysis of existing Cloud business models, Cloud services, stakeholder relations, market configurations and value structures, this Chapter develops a reference model for value chains in the Cloud. Although this model is theoretically based on Porter's value chain theory, the proposed Cloud value chain model is upgraded to fit the diversity of business service scenarios in the Cloud computing markets. Using this model, different service scenarios are explained. Our findings suggest new services, business opportunities, and policy practices for realizing more adoption and value creation paths in the Cloud.

  6. Reinforcement learning in depression: A review of computational research.

    PubMed

    Chen, Chong; Takahashi, Taiki; Nakagawa, Shin; Inoue, Takeshi; Kusumi, Ichiro

    2015-08-01

    Despite being considered primarily a mood disorder, major depressive disorder (MDD) is characterized by cognitive and decision making deficits. Recent research has employed computational models of reinforcement learning (RL) to address these deficits. The computational approach has the advantage of making explicit predictions about learning and behavior, specifying the process parameters of RL, differentiating between model-free and model-based RL, and enabling computational model-based functional magnetic resonance imaging and electroencephalography. With these merits, an emerging field of computational psychiatry has taken shape, and here we review specific studies that focused on MDD. Considerable evidence suggests that MDD is associated with impaired brain signals of reward prediction error and expected value ('wanting'), decreased reward sensitivity ('liking') and/or learning (be it model-free or model-based), etc., although the causality remains unclear. These parameters may serve as valuable intermediate phenotypes of MDD, linking general clinical symptoms to underlying molecular dysfunctions. We believe future computational research at clinical, systems, and cellular/molecular/genetic levels will propel us toward a better understanding of the disease. Copyright © 2015 Elsevier Ltd. All rights reserved.
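    As an example of the kind of process parameter this literature estimates, the sketch below runs a Rescorla-Wagner style update in which the reward prediction error is scaled by a reward-sensitivity parameter; the parameter names and values are generic illustrations, not results from any study in the review.

```python
# Sketch of a model-free learning rule of the kind used in computational
# studies of depression: a Rescorla-Wagner / temporal-difference update whose
# reward prediction error is scaled by a "reward sensitivity" parameter rho.
# Blunted rho or a reduced learning rate are the sorts of intermediate
# phenotypes discussed in this review; all numbers here are illustrative.

def simulate_learning(rho, alpha=0.2, reward=1.0, trials=20):
    v = 0.0
    for _ in range(trials):
        delta = rho * reward - v        # reward prediction error
        v += alpha * delta
    return v

for label, rho in (("control (rho = 1.0)", 1.0), ("blunted (rho = 0.5)", 0.5)):
    print(f"{label}: learned value after 20 trials = {simulate_learning(rho):.3f}")
```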

  7. Manipulatives and the Computer: A Powerful Partnership for Learners of All Ages.

    ERIC Educational Resources Information Center

    Perl, Teri

    1990-01-01

    Discussed is the concept of mirroring in which computer programs are used to enhance the use of mathematics manipulatives. The strengths and weaknesses of this approach are presented. The uses of the computer in modeling and as a manipulative are also described. Several software packages are suggested. (CW)

  8. Computational modelling of the impact of AIDS on business.

    PubMed

    Matthews, Alan P

    2007-07-01

    An overview of computational modelling of the impact of AIDS on business in South Africa, with a detailed description of the AIDS Projection Model (APM) for companies, developed by the author, and suggestions for further work. Computational modelling of the impact of AIDS on business in South Africa requires modelling of the epidemic as a whole, and of its impact on a company. This paper gives an overview of epidemiological modelling, with an introduction to the Actuarial Society of South Africa (ASSA) model, the most widely used such model for South Africa. The APM produces projections of HIV prevalence, new infections, and AIDS mortality on a company, based on the anonymous HIV testing of company employees, and projections from the ASSA model. A smoothed statistical model of the prevalence test data is computed, and then the ASSA model projection for each category of employees is adjusted so that it matches the measured prevalence in the year of testing. FURTHER WORK: Further techniques that could be developed are microsimulation (representing individuals in the computer), scenario planning for testing strategies, and models for the business environment, such as models of entire sectors, and mapping of HIV prevalence in time and space, based on workplace and community data.

  9. A Suggested Model for a Working Cyberschool.

    ERIC Educational Resources Information Center

    Javid, Mahnaz A.

    2000-01-01

    Suggests a model for a working cyberschool based on a case study of Kamiak Cyberschool (Washington), a technology-driven public high school. Topics include flexible hours; one-to-one interaction with teachers; a supportive school environment; use of computers, interactive media, and online resources; and self-paced, project-based learning.…

  10. EUV/soft x-ray spectra for low B neutron stars

    NASA Technical Reports Server (NTRS)

    Romani, Roger W.; Rajagopal, Mohan; Rogers, Forrest J.; Iglesias, Carlos A.

    1995-01-01

    Recent ROSAT and EUVE detections of spin-powered neutron stars suggest that many emit 'thermal' radiation, peaking in the EUV/soft X-ray band. These data constrain the neutron stars' thermal history, but interpretation requires comparison with model atmosphere computations, since emergent spectra depend strongly on the surface composition and magnetic field. As recent opacity computations show substantial change to absorption cross sections at neutron star photospheric conditions, we report here on new model atmosphere computations employing such data. The results are compared with magnetic atmosphere models and applied to PSR J0437-4715, a low field neutron star.

  11. Computational neuropharmacology: dynamical approaches in drug discovery.

    PubMed

    Aradi, Ildiko; Erdi, Péter

    2006-05-01

    Computational approaches that adopt dynamical models are widely accepted in basic and clinical neuroscience research as indispensable tools with which to understand normal and pathological neuronal mechanisms. Although computer-aided techniques have been used in pharmaceutical research (e.g. in structure- and ligand-based drug design), the power of dynamical models has not yet been exploited in drug discovery. We suggest that dynamical system theory and computational neuroscience--integrated with well-established, conventional molecular and electrophysiological methods--offer a broad perspective in drug discovery and in the search for novel targets and strategies for the treatment of neurological and psychiatric diseases.

  12. Model-based predictions for dopamine.

    PubMed

    Langdon, Angela J; Sharpe, Melissa J; Schoenbaum, Geoffrey; Niv, Yael

    2018-04-01

    Phasic dopamine responses are thought to encode a prediction-error signal consistent with model-free reinforcement learning theories. However, a number of recent findings highlight the influence of model-based computations on dopamine responses, and suggest that dopamine prediction errors reflect more dimensions of an expected outcome than scalar reward value. Here, we review a selection of these recent results and discuss the implications and complications of model-based predictions for computational theories of dopamine and learning. Copyright © 2017. Published by Elsevier Ltd.

  13. Cognitive Control Predicts Use of Model-Based Reinforcement-Learning

    PubMed Central

    Otto, A. Ross; Skatova, Anya; Madlon-Kay, Seth; Daw, Nathaniel D.

    2015-01-01

    Accounts of decision-making and its neural substrates have long posited the operation of separate, competing valuation systems in the control of choice behavior. Recent theoretical and experimental work suggest that this classic distinction between behaviorally and neurally dissociable systems for habitual and goal-directed (or more generally, automatic and controlled) choice may arise from two computational strategies for reinforcement learning (RL), called model-free and model-based RL, but the cognitive or computational processes by which one system may dominate over the other in the control of behavior is a matter of ongoing investigation. To elucidate this question, we leverage the theoretical framework of cognitive control, demonstrating that individual differences in utilization of goal-related contextual information—in the service of overcoming habitual, stimulus-driven responses—in established cognitive control paradigms predict model-based behavior in a separate, sequential choice task. The behavioral correspondence between cognitive control and model-based RL compellingly suggests that a common set of processes may underpin the two behaviors. In particular, computational mechanisms originally proposed to underlie controlled behavior may be applicable to understanding the interactions between model-based and model-free choice behavior. PMID:25170791

  14. Dynamic mechanistic explanation: computational modeling of circadian rhythms as an exemplar for cognitive science.

    PubMed

    Bechtel, William; Abrahamsen, Adele

    2010-09-01

    We consider computational modeling in two fields: chronobiology and cognitive science. In circadian rhythm models, variables generally correspond to properties of parts and operations of the responsible mechanism. A computational model of this complex mechanism is grounded in empirical discoveries and contributes a more refined understanding of the dynamics of its behavior. In cognitive science, on the other hand, computational modelers typically advance de novo proposals for mechanisms to account for behavior. They offer indirect evidence that a proposed mechanism is adequate to produce particular behavioral data, but typically there is no direct empirical evidence for the hypothesized parts and operations. Models in these two fields differ in the extent of their empirical grounding, but they share the goal of achieving dynamic mechanistic explanation. That is, they augment a proposed mechanistic explanation with a computational model that enables exploration of the mechanism's dynamics. Using exemplars from circadian rhythm research, we extract six specific contributions provided by computational models. We then examine cognitive science models to determine how well they make the same types of contributions. We suggest that the modeling approach used in circadian research may prove useful in cognitive science as researchers develop procedures for experimentally decomposing cognitive mechanisms into parts and operations and begin to understand their nonlinear interactions.

  15. Predictive representations can link model-based reinforcement learning to model-free mechanisms.

    PubMed

    Russek, Evan M; Momennejad, Ida; Botvinick, Matthew M; Gershman, Samuel J; Daw, Nathaniel D

    2017-09-01

    Humans and animals are capable of evaluating actions by considering their long-run future rewards through a process described using model-based reinforcement learning (RL) algorithms. The mechanisms by which neural circuits perform the computations prescribed by model-based RL remain largely unknown; however, multiple lines of evidence suggest that neural circuits supporting model-based behavior are structurally homologous to and overlapping with those thought to carry out model-free temporal difference (TD) learning. Here, we lay out a family of approaches by which model-based computation may be built upon a core of TD learning. The foundation of this framework is the successor representation, a predictive state representation that, when combined with TD learning of value predictions, can produce a subset of the behaviors associated with model-based learning, while requiring less decision-time computation than dynamic programming. Using simulations, we delineate the precise behavioral capabilities enabled by evaluating actions using this approach, and compare them to those demonstrated by biological organisms. We then introduce two new algorithms that build upon the successor representation while progressively mitigating its limitations. Because this framework can account for the full range of observed putatively model-based behaviors while still utilizing a core TD framework, we suggest that it represents a neurally plausible family of mechanisms for model-based evaluation.
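
    A minimal sketch of the core construction described here, a successor representation learned by TD and combined with learned reward weights to yield state values, is given below; the four-state chain, learning rate, and discount factor are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of the successor representation (SR) learned with a TD rule,
# then combined with learned reward weights to give state values.
n_states, gamma, alpha = 4, 0.9, 0.1
M = np.eye(n_states)                 # SR: expected discounted future occupancy
w = np.zeros(n_states)               # learned one-step reward weights

def sr_td_update(s, s_next, r):
    """TD update of the SR row for s and of the reward weight for s_next."""
    onehot = np.eye(n_states)[s]
    M[s] += alpha * (onehot + gamma * M[s_next] - M[s])   # SR prediction error
    w[s_next] += alpha * (r - w[s_next])                  # reward learning

# Experience a simple chain 0 -> 1 -> 2 -> 3 with reward only at the end.
for _ in range(500):
    for s in range(3):
        sr_td_update(s, s + 1, r=1.0 if s + 1 == 3 else 0.0)

V = M @ w                            # values follow from SR times reward weights
print("state values:", np.round(V, 2))
```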

  16. Predictive representations can link model-based reinforcement learning to model-free mechanisms

    PubMed Central

    Botvinick, Matthew M.

    2017-01-01

    Humans and animals are capable of evaluating actions by considering their long-run future rewards through a process described using model-based reinforcement learning (RL) algorithms. The mechanisms by which neural circuits perform the computations prescribed by model-based RL remain largely unknown; however, multiple lines of evidence suggest that neural circuits supporting model-based behavior are structurally homologous to and overlapping with those thought to carry out model-free temporal difference (TD) learning. Here, we lay out a family of approaches by which model-based computation may be built upon a core of TD learning. The foundation of this framework is the successor representation, a predictive state representation that, when combined with TD learning of value predictions, can produce a subset of the behaviors associated with model-based learning, while requiring less decision-time computation than dynamic programming. Using simulations, we delineate the precise behavioral capabilities enabled by evaluating actions using this approach, and compare them to those demonstrated by biological organisms. We then introduce two new algorithms that build upon the successor representation while progressively mitigating its limitations. Because this framework can account for the full range of observed putatively model-based behaviors while still utilizing a core TD framework, we suggest that it represents a neurally plausible family of mechanisms for model-based evaluation. PMID:28945743

  17. Recruitment of Foreigners in the Market for Computer Scientists in the United States

    PubMed Central

    Bound, John; Braga, Breno; Golden, Joseph M.

    2016-01-01

    We present and calibrate a dynamic model that characterizes the labor market for computer scientists. In our model, firms can recruit computer scientists from recently graduated college students, from STEM workers in other occupations, or from a pool of foreign talent. Counterfactual simulations suggest that wages for computer scientists would have been 2.8–3.8% higher, and the number of Americans employed as computer scientists would have been 7.0–13.6% higher in 2004 if firms could not hire more foreigners than they could in 1994. In contrast, total CS employment would have been 3.8–9.0% lower, and consequently output would have been smaller. PMID:27170827

  18. Neural Network Optimization of Ligament Stiffnesses for the Enhanced Predictive Ability of a Patient-Specific, Computational Foot/Ankle Model.

    PubMed

    Chande, Ruchi D; Wayne, Jennifer S

    2017-09-01

    Computational models of diarthrodial joints serve to inform the biomechanical function of these structures, and as such, must be supplied appropriate inputs for performance that is representative of actual joint function. Inputs for these models are sourced from both imaging modalities and the literature. The latter is often the source of mechanical properties for soft tissues, like ligament stiffnesses; however, such data are not always available for all the soft tissues, nor are they known for patient-specific work. In the current research, a method to improve the ligament stiffness definition for a computational foot/ankle model was sought with the greater goal of improving the predictive ability of the computational model. Specifically, the stiffness values were optimized using artificial neural networks (ANNs); both feedforward and radial basis function networks (RBFNs) were considered. Optimal networks of each type were determined and subsequently used to predict stiffnesses for the foot/ankle model. Ultimately, the predicted stiffnesses were considered reasonable and resulted in enhanced performance of the computational model, suggesting that artificial neural networks can be used to optimize stiffness inputs.

  19. Selecting Summary Statistics in Approximate Bayesian Computation for Calibrating Stochastic Models

    PubMed Central

    Burr, Tom

    2013-01-01

    Approximate Bayesian computation (ABC) is an approach for using measurement data to calibrate stochastic computer models, which are common in biology applications. ABC is becoming the “go-to” option when the data and/or parameter dimension is large because it relies on user-chosen summary statistics rather than the full data and is therefore computationally feasible. One technical challenge with ABC is that the quality of the approximation to the posterior distribution of model parameters depends on the user-chosen summary statistics. In this paper, the user requirement to choose effective summary statistics in order to accurately estimate the posterior distribution of model parameters is investigated and illustrated by example, using a model and corresponding real data of mitochondrial DNA population dynamics. We show that for some choices of summary statistics, the posterior distribution of model parameters is closely approximated and for other choices of summary statistics, the posterior distribution is not closely approximated. A strategy to choose effective summary statistics is suggested in cases where the stochastic computer model can be run at many trial parameter settings, as in the example. PMID:24288668
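
    The dependence of ABC on user-chosen summary statistics can be illustrated with a minimal rejection-ABC sketch; the toy normal model, prior, tolerance, and statistics below are illustrative stand-ins, not the mitochondrial DNA model analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "stochastic computer model": draws of size n from Normal(theta, 1).
def simulate(theta, n=100):
    return rng.normal(theta, 1.0, n)

# User-chosen summary statistic; ABC quality depends heavily on this choice.
def summary(x):
    return np.array([x.mean()])       # informative for the mean parameter
    # return np.array([x.max()])      # a poor choice would degrade the posterior

observed = simulate(theta=2.0)
s_obs = summary(observed)

# Rejection ABC: sample parameters from the prior, keep those whose simulated
# summaries fall within a tolerance of the observed summaries.
tol, accepted = 0.1, []
for _ in range(20000):
    theta = rng.uniform(-5, 5)                    # prior draw
    if np.linalg.norm(summary(simulate(theta)) - s_obs) < tol:
        accepted.append(theta)

print(f"accepted {len(accepted)} draws; posterior mean ~ {np.mean(accepted):.2f}")
```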

  20. Selecting summary statistics in approximate Bayesian computation for calibrating stochastic models.

    PubMed

    Burr, Tom; Skurikhin, Alexei

    2013-01-01

    Approximate Bayesian computation (ABC) is an approach for using measurement data to calibrate stochastic computer models, which are common in biology applications. ABC is becoming the "go-to" option when the data and/or parameter dimension is large because it relies on user-chosen summary statistics rather than the full data and is therefore computationally feasible. One technical challenge with ABC is that the quality of the approximation to the posterior distribution of model parameters depends on the user-chosen summary statistics. In this paper, the user requirement to choose effective summary statistics in order to accurately estimate the posterior distribution of model parameters is investigated and illustrated by example, using a model and corresponding real data of mitochondrial DNA population dynamics. We show that for some choices of summary statistics, the posterior distribution of model parameters is closely approximated and for other choices of summary statistics, the posterior distribution is not closely approximated. A strategy to choose effective summary statistics is suggested in cases where the stochastic computer model can be run at many trial parameter settings, as in the example.

  1. Biomolecular dynamics by computer analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eilbeck, J.C.; Lomdahl, P.S.; Scott, A.C.

    1984-01-01

    As numerical tools (computers and display equipment) become more powerful and the atomic structures of important biological molecules become known, the importance of detailed computation of nonequilibrium biomolecular dynamics increases. In this manuscript we report results from a well-developed study of the hydrogen-bonded polypeptide crystal acetanilide, a model protein. Directions for future research are suggested. 9 references, 6 figures.

  2. Teacher's Guide for Computational Models of Animal Behavior: A Computer-Based Curriculum Unit to Accompany the Elementary Science Study Guide "Behavior of Mealworms." Artificial Intelligence Memo No. 432.

    ERIC Educational Resources Information Center

    Abelson, Hal; Goldenberg, Paul

    This experimental curriculum unit suggests how dramatic innovations in classroom content may be achieved through use of computers. The computational perspective is viewed as one which can enrich and transform traditional curricula, act as a focus for integrating insights from diverse disciplines, and enable learning to become more active and…

  3. Chromatin Computation

    PubMed Central

    Bryant, Barbara

    2012-01-01

    In living cells, DNA is packaged along with protein and RNA into chromatin. Chemical modifications to nucleotides and histone proteins are added, removed and recognized by multi-functional molecular complexes. Here I define a new computational model, in which chromatin modifications are information units that can be written onto a one-dimensional string of nucleosomes, analogous to the symbols written onto cells of a Turing machine tape, and chromatin-modifying complexes are modeled as read-write rules that operate on a finite set of adjacent nucleosomes. I illustrate the use of this “chromatin computer” to solve an instance of the Hamiltonian path problem. I prove that chromatin computers are computationally universal – and therefore more powerful than the logic circuits often used to model transcription factor control of gene expression. Features of biological chromatin provide a rich instruction set for efficient computation of nontrivial algorithms in biological time scales. Modeling chromatin as a computer shifts how we think about chromatin function, suggests new approaches to medical intervention, and lays the groundwork for the engineering of a new class of biological computing machines. PMID:22567109
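
    A minimal sketch of the read-write-rule idea is given below, with invented marks and a single hypothetical spreading rule rather than the paper's encoding of the Hamiltonian path problem.

```python
# Minimal sketch of a "chromatin computer": chromatin modifications are symbols
# on a string of nucleosomes, and modifying complexes are read-write rules that
# act on a window of adjacent nucleosomes. Marks and the rule are hypothetical.
tape = "A A A M A A".split()              # nucleosome marks; 'M' is a seed mark

def spread_mark(tape):
    """One rule: a complex that recognizes 'M' writes 'M' on its right neighbor."""
    new = tape[:]
    for i in range(len(tape) - 1):
        if tape[i] == "M" and tape[i + 1] == "A":
            new[i + 1] = "M"
    return new

# Iterate the rule; the mark propagates along the fiber like a wavefront.
for step in range(4):
    print(step, "".join(tape))
    tape = spread_mark(tape)
```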

  4. Alterations in choice behavior by manipulations of world model.

    PubMed

    Green, C S; Benson, C; Kersten, D; Schrater, P

    2010-09-14

    How to compute initially unknown reward values makes up one of the key problems in reinforcement learning theory, with two basic approaches being used. Model-free algorithms rely on the accumulation of substantial amounts of experience to compute the value of actions, whereas in model-based learning, the agent seeks to learn the generative process for outcomes from which the value of actions can be predicted. Here we show that (i) "probability matching"-a consistent example of suboptimal choice behavior seen in humans-occurs in an optimal Bayesian model-based learner using a max decision rule that is initialized with ecologically plausible, but incorrect beliefs about the generative process for outcomes and (ii) human behavior can be strongly and predictably altered by the presence of cues suggestive of various generative processes, despite statistically identical outcome generation. These results suggest human decision making is rational and model based and not consistent with model-free learning.
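
    The contrast between probability matching and reward maximization can be illustrated with a minimal simulation; the payoff probability and trial counts below are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
p_left = 0.7                      # true probability that "left" pays off (assumed)
n_trials = 10000

# Maximizing: always choose the option believed more likely to pay off.
max_choices = np.full(n_trials, True)                     # always left
# Probability matching: choose each option in proportion to its payoff rate.
match_choices = rng.random(n_trials) < p_left             # left ~70% of trials

outcomes = rng.random(n_trials) < p_left                  # whether left pays off
max_reward = np.where(max_choices, outcomes, ~outcomes).mean()
match_reward = np.where(match_choices, outcomes, ~outcomes).mean()

print(f"maximizing reward rate:    {max_reward:.3f}")     # ~0.70
print(f"probability-matching rate: {match_reward:.3f}")   # ~0.58 = 0.7^2 + 0.3^2
```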

  5. Alterations in choice behavior by manipulations of world model

    PubMed Central

    Green, C. S.; Benson, C.; Kersten, D.; Schrater, P.

    2010-01-01

    How to compute initially unknown reward values makes up one of the key problems in reinforcement learning theory, with two basic approaches being used. Model-free algorithms rely on the accumulation of substantial amounts of experience to compute the value of actions, whereas in model-based learning, the agent seeks to learn the generative process for outcomes from which the value of actions can be predicted. Here we show that (i) “probability matching”—a consistent example of suboptimal choice behavior seen in humans—occurs in an optimal Bayesian model-based learner using a max decision rule that is initialized with ecologically plausible, but incorrect beliefs about the generative process for outcomes and (ii) human behavior can be strongly and predictably altered by the presence of cues suggestive of various generative processes, despite statistically identical outcome generation. These results suggest human decision making is rational and model based and not consistent with model-free learning. PMID:20805507

  6. Computational Models of Anterior Cingulate Cortex: At the Crossroads between Prediction and Effort.

    PubMed

    Vassena, Eliana; Holroyd, Clay B; Alexander, William H

    2017-01-01

    In the last two decades the anterior cingulate cortex (ACC) has become one of the most investigated areas of the brain. Extensive neuroimaging evidence suggests countless functions for this region, ranging from conflict and error coding, to social cognition, pain and effortful control. In response to this burgeoning amount of data, a proliferation of computational models has tried to characterize the neurocognitive architecture of ACC. Early seminal models provided a computational explanation for a relatively circumscribed set of empirical findings, mainly accounting for EEG and fMRI evidence. More recent models have focused on ACC's contribution to effortful control. In parallel to these developments, several proposals attempted to explain within a single computational framework a wider variety of empirical findings that span different cognitive processes and experimental modalities. Here we critically evaluate these modeling attempts, highlighting the continued need to reconcile the array of disparate ACC observations within a coherent, unifying framework.

  7. LORAN-C LATITUDE-LONGITUDE CONVERSION AT SEA: PROGRAMMING CONSIDERATIONS.

    USGS Publications Warehouse

    McCullough, James R.; Irwin, Barry J.; Bowles, Robert M.

    1985-01-01

    Comparisons are made of the precision of arc-length routines as computer precision is reduced. Overland propagation delays are discussed and illustrated with observations from offshore New England. Present practice of LORAN-C error budget modeling is then reviewed with the suggestion that additional terms be considered in future modeling. Finally, some detailed numeric examples are provided to help with new computer program checkout.
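
    As a rough illustration of how arc-length results change as computer precision is reduced, the sketch below evaluates a haversine great-circle distance in double and single precision; the haversine formula and the coordinates are stand-ins for the routines compared in the report.

```python
import numpy as np

def haversine(lat1, lon1, lat2, lon2, dtype=np.float64):
    """Great-circle arc length (km) on a spherical Earth, at a chosen precision."""
    R = dtype(6371.0)
    lat1, lon1, lat2, lon2 = (np.radians(dtype(x)) for x in (lat1, lon1, lat2, lon2))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = np.sin(dlat / 2) ** 2 + np.cos(lat1) * np.cos(lat2) * np.sin(dlon / 2) ** 2
    return dtype(2) * R * np.arcsin(np.sqrt(a))

# Coordinates loosely in the offshore New England area, chosen only for illustration.
args = (41.0, -71.0, 41.01, -70.99)
d64 = haversine(*args, dtype=np.float64)
d32 = haversine(*args, dtype=np.float32)
print(f"double precision: {d64:.6f} km")
print(f"single precision: {d32:.6f} km   (difference {abs(d64 - float(d32)):.2e} km)")
```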

  8. Molecular Modeling in Drug Design for the Development of Organophosphorus Antidotes/Prophylactics.

    DTIC Science & Technology

    1986-06-01

    ...multidimensional statistical QSAR analysis techniques to suggest new structures for synthesis and evaluation. C. Application of quantum chemical techniques to... compounds for synthesis and testing for antidotal potency. E. Use of computer-assisted methods to determine the steric constraints at the active site... modeling techniques to model the enzyme acetylcholinesterase. H. Suggestion of some novel compounds for synthesis and testing for reactivating...

  9. Model weights and the foundations of multimodel inference

    USGS Publications Warehouse

    Link, W.A.; Barker, R.J.

    2006-01-01

    Statistical thinking in wildlife biology and ecology has been profoundly influenced by the introduction of AIC (Akaike's information criterion) as a tool for model selection and as a basis for model averaging. In this paper, we advocate the Bayesian paradigm as a broader framework for multimodel inference, one in which model averaging and model selection are naturally linked, and in which the performance of AIC-based tools is naturally evaluated. Prior model weights implicitly associated with the use of AIC are seen to highly favor complex models: in some cases, all but the most highly parameterized models in the model set are virtually ignored a priori. We suggest the usefulness of the weighted BIC (Bayesian information criterion) as a computationally simple alternative to AIC, based on explicit selection of prior model probabilities rather than acceptance of default priors associated with AIC. We note, however, that both procedures are only approximations to the use of exact Bayes factors. We discuss and illustrate technical difficulties associated with Bayes factors, and suggest approaches to avoiding these difficulties in the context of model selection for a logistic regression. Our example highlights the predisposition of AIC weighting to favor complex models and suggests a need for caution in using the BIC for computing approximate posterior model weights.
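
    A minimal sketch of how AIC- and BIC-based model weights differ is given below; the log-likelihoods, parameter counts, and sample size are invented for illustration.

```python
import numpy as np

# Hypothetical fits of three nested logistic regressions: log-likelihoods,
# numbers of parameters, and sample size are illustrative values only.
loglik = np.array([-120.0, -118.5, -117.9])
k      = np.array([2, 4, 8])        # number of estimated parameters
n      = 150                        # sample size

aic = -2 * loglik + 2 * k
bic = -2 * loglik + k * np.log(n)

def weights(ic):
    """Approximate model weights derived from an information criterion."""
    d = ic - ic.min()
    w = np.exp(-0.5 * d)
    return w / w.sum()

print("AIC weights:", np.round(weights(aic), 3))   # tend to favor complex models
print("BIC weights:", np.round(weights(bic), 3))   # penalize parameters more at large n
```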

  10. Application of SLURM, BOINC, and GlusterFS as Software System for Sustainable Modeling and Data Analytics

    NASA Astrophysics Data System (ADS)

    Kashansky, Vladislav V.; Kaftannikov, Igor L.

    2018-02-01

    Modern numerical modeling experiments and data analytics problems in various fields of science and technology reveal a wide variety of serious requirements for distributed computing systems. Many scientific computing projects sometimes exceed the available resource pool limits, requiring extra scalability and sustainability. In this paper we share our own experience and findings on combining the power of SLURM, BOINC, and GlusterFS as a software system for scientific computing. In particular, we suggest a complete architecture and highlight important aspects of systems integration.

  11. Wusor II: A Computer Aided Instruction Program with Student Modelling Capabilities. AI Memo 417.

    ERIC Educational Resources Information Center

    Carr, Brian

    Wusor II is the second intelligent computer aided instruction (ICAI) program that has been developed to monitor the progress of, and offer suggestions to, students playing Wumpus, a computer game designed to teach logical thinking and problem solving. From the earlier efforts with Wusor I, it was possible to produce a rule-based expert which…

  12. Application of artificial neural networks to gaming

    NASA Astrophysics Data System (ADS)

    Baba, Norio; Kita, Tomio; Oda, Kazuhiro

    1995-04-01

    Recently, neural network technology has been applied to various actual problems. It has succeeded in producing a large number of intelligent systems. In this article, we suggest that it could be applied to the field of gaming. In particular, we suggest that the neural network model could be used to mimic players' characters. Several computer simulation results using a computer gaming system which is a modified version of the COMMONS GAME confirm our idea.

  13. Model falsifiability and climate slow modes

    NASA Astrophysics Data System (ADS)

    Essex, Christopher; Tsonis, Anastasios A.

    2018-07-01

    The most advanced climate models are actually modified meteorological models attempting to capture climate in meteorological terms. This seems a straightforward matter of raw computing power applied to large enough sources of current data. Some believe that models have succeeded in capturing climate in this manner. But have they? This paper outlines difficulties with this picture that derive from the finite representation of our computers, and the fundamental unavailability of future data instead. It suggests that alternative windows onto the multi-decadal timescales are necessary in order to overcome the issues raised for practical problems of prediction.

  14. Computational Materials: Modeling and Simulation of Nanostructured Materials and Systems

    NASA Technical Reports Server (NTRS)

    Gates, Thomas S.; Hinkley, Jeffrey A.

    2003-01-01

    The paper provides details on the structure and implementation of the Computational Materials program at the NASA Langley Research Center. Examples are given that illustrate the suggested approaches to predicting the behavior and influencing the design of nanostructured materials such as high-performance polymers, composites, and nanotube-reinforced polymers. Primary simulation and measurement methods applicable to multi-scale modeling are outlined. Key challenges including verification and validation of models are highlighted and discussed within the context of NASA's broad mission objectives.

  15. Space lab system analysis

    NASA Technical Reports Server (NTRS)

    Ingels, F. M.; Rives, T. B.

    1987-01-01

    An analytical analysis of the HOSC Generic Peripheral processing system was conducted. The results are summarized; they indicate that the maximum delay in performing screen change requests should be less than 2.5 sec, occurring for a slow VAX host-to-video-screen I/O rate of 50 KBps. This delay is due to the average I/O rate from the video terminals to their host computer. The software structure of the main computers and the host computers will have a greater impact on screen change or refresh response times. The HOSC data system model was updated by a newly coded PASCAL-based simulation program installed on the HOSC VAX system. This model is described and documented. Suggestions are offered to fine-tune the performance of the Ethernet interconnection network. Suggestions for using the Nutcracker by Excelan to trace itinerant packets that appear on the network from time to time were offered in discussions with HOSC personnel. Several visits were made to the HOSC facility to install and demonstrate the simulation model.

  16. How social and non-social information influence classification decisions: A computational modelling approach.

    PubMed

    Puskaric, Marin; von Helversen, Bettina; Rieskamp, Jörg

    2017-08-01

    Social information such as observing others can improve performance in decision making. In particular, social information has been shown to be useful when finding the best solution on one's own is difficult, costly, or dangerous. However, past research suggests that when making decisions people do not always consider other people's behaviour when it is at odds with their own experiences. Furthermore, the cognitive processes guiding the integration of social information with individual experiences are still under debate. Here, we conducted two experiments to test whether information about other persons' behaviour influenced people's decisions in a classification task. Furthermore, we examined how social information is integrated with individual learning experiences by testing different computational models. Our results show that social information had a small but reliable influence on people's classifications. The best computational model suggests that in categorization people first make up their own mind based on the non-social information, which is then updated by the social information.

  17. The Evolution of a Connectionist Model of Situated Human Language Understanding

    NASA Astrophysics Data System (ADS)

    Mayberry, Marshall R.; Crocker, Matthew W.

    The Adaptive Mechanisms in Human Language Processing (ALPHA) project features both experimental and computational tracks designed to complement each other in the investigation of the cognitive mechanisms that underlie situated human utterance processing. The models developed in the computational track replicate results obtained in the experimental track and, in turn, suggest further experiments by virtue of behavior that arises as a by-product of their operation.

  18. In Praise of Numerical Computation

    NASA Astrophysics Data System (ADS)

    Yap, Chee K.

    Theoretical Computer Science has developed an almost exclusively discrete/algebraic persona. We have effectively shut ourselves off from half of the world of computing: a host of problems in Computational Science & Engineering (CS&E) are defined on the continuum, and, for them, the discrete viewpoint is inadequate. The computational techniques in such problems are well-known to numerical analysis and applied mathematics, but are rarely discussed in theoretical algorithms: iteration, subdivision and approximation. By various case studies, I will indicate how our discrete/algebraic view of computing has many shortcomings in CS&E. We want to embrace the continuous/analytic view, but in a new synthesis with the discrete/algebraic view. I will suggest a pathway, by way of an exact numerical model of computation, that allows us to incorporate iteration and approximation into our algorithms' design. Some recent results give a peek into what this view of algorithmic development might look like, and its distinctive form suggests the name "numerical computational geometry" for such activities.
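
    A minimal sketch of the subdivision-and-approximation style of computation discussed here, bisection of an interval down to a requested accuracy, is given below; the function and tolerance are arbitrary.

```python
# Minimal sketch of subdivision/approximation: isolate a root of a continuous
# function by repeatedly halving an interval until its width meets a requested
# accuracy. The function and tolerance are illustrative choices.
def bisect(f, lo, hi, eps=1e-12):
    assert f(lo) * f(hi) < 0, "need a sign change on the interval"
    while hi - lo > eps:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

root = bisect(lambda x: x * x - 2.0, 0.0, 2.0)
print(f"sqrt(2) ~ {root:.12f}")   # approximation certified to the requested width
```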

  19. Catalytic ignition model in a monolithic reactor with in-depth reaction

    NASA Technical Reports Server (NTRS)

    Tien, Ta-Ching; Tien, James S.

    1990-01-01

    Two transient models have been developed to study the catalytic ignition in a monolithic catalytic reactor. The special feature in these models is the inclusion of thermal and species structures in the porous catalytic layer. There are many time scales involved in the catalytic ignition problem, and these two models are developed with different time scales. In the full transient model, the equations are non-dimensionalized by the shortest time scale (mass diffusion across the catalytic layer). It is therefore accurate but is computationally costly. In the energy-integral model, only the slowest process (solid heat-up) is taken as nonsteady. It is approximate but computationally efficient. In the computations performed, the catalyst is platinum and the reactants are rich mixtures of hydrogen and oxygen. One-step global chemical reaction rates are used for both gas-phase homogeneous reaction and catalytic heterogeneous reaction. The computed results reveal the transient ignition processes in detail, including the structure variation with time in the reactive catalytic layer. An ignition map using reactor length and catalyst loading is constructed. The comparison of computed results between the two transient models verifies the applicability of the energy-integral model when the time is greater than the second largest time scale of the system. It also suggests that a proper combined use of the two models can catch all the transient phenomena while minimizing the computational cost.

  20. Analysis of multigrid methods on massively parallel computers: Architectural implications

    NASA Technical Reports Server (NTRS)

    Matheson, Lesley R.; Tarjan, Robert E.

    1993-01-01

    We study the potential performance of multigrid algorithms running on massively parallel computers with the intent of discovering whether presently envisioned machines will provide an efficient platform for such algorithms. We consider the domain parallel version of the standard V cycle algorithm on model problems, discretized using finite difference techniques in two and three dimensions on block structured grids of size 10^6 and 10^9, respectively. Our models of parallel computation were developed to reflect the computing characteristics of the current generation of massively parallel multicomputers. These models are based on an interconnection network of 256 to 16,384 message passing, 'workstation size' processors executing in an SPMD mode. The first model accomplishes interprocessor communications through a multistage permutation network. The communication cost is a logarithmic function which is similar to the costs in a variety of different topologies. The second model allows single stage communication costs only. Both models were designed with information provided by machine developers and utilize implementation derived parameters. With the medium grain parallelism of the current generation and the high fixed cost of an interprocessor communication, our analysis suggests that an efficient implementation requires the machine to support the efficient transmission of long messages (up to 1000 words), or the high initiation cost of a communication must be significantly reduced through an alternative optimization technique. Furthermore, with variable length message capability, our analysis suggests that low-diameter multistage networks provide little or no advantage over a simple single stage communications network.
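
    For concreteness, a minimal single-processor sketch of the V-cycle algorithm on a 1D Poisson model problem is given below; the grid size, smoother, and transfer operators are illustrative simplifications, and the parallel communication models analyzed in the paper are not represented.

```python
import numpy as np

# One V-cycle of geometric multigrid for -u'' = f on a uniform 1D grid with
# Dirichlet boundaries. Grid size and sweep counts are illustrative.
def smooth(u, f, h, sweeps=2):
    for _ in range(sweeps):                     # Gauss-Seidel relaxation
        for i in range(1, len(u) - 1):
            u[i] = 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i])
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    return r

def v_cycle(u, f, h):
    u = smooth(u, f, h)
    if len(u) <= 3:                             # coarsest grid: just relax more
        return smooth(u, f, h, sweeps=20)
    r = residual(u, f, h)
    r_coarse = r[::2].copy()                    # restriction (full weighting)
    r_coarse[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
    e_coarse = v_cycle(np.zeros_like(r_coarse), r_coarse, 2 * h)
    e = np.zeros_like(u)                        # prolongation (linear interpolation)
    e[::2] = e_coarse
    e[1::2] = 0.5 * (e_coarse[:-1] + e_coarse[1:])
    return smooth(u + e, f, h)                  # coarse-grid correction + post-smooth

n = 2 ** 7 + 1                                  # 129 grid points
x = np.linspace(0, 1, n); h = x[1] - x[0]
f = np.pi ** 2 * np.sin(np.pi * x)              # exact solution is sin(pi x)
u = np.zeros(n)
for _ in range(10):
    u = v_cycle(u, f, h)
print("max error:", np.abs(u - np.sin(np.pi * x)).max())
```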

  1. Transportation Impact Evaluation System

    DOT National Transportation Integrated Search

    1979-11-01

    This report specifies a framework for spatial analysis and the general modelling steps required. It also suggests available urban and regional data sources, along with some typical existing urban and regional models. The goal is to develop a computer...

  2. Causal learning with local computations.

    PubMed

    Fernbach, Philip M; Sloman, Steven A

    2009-05-01

    The authors proposed and tested a psychological theory of causal structure learning based on local computations. Local computations simplify complex learning problems via cues available on individual trials to update a single causal structure hypothesis. Structural inferences from local computations make minimal demands on memory, require relatively small amounts of data, and need not respect normative prescriptions as inferences that are principled locally may violate those principles when combined. Over a series of 3 experiments, the authors found (a) systematic inferences from small amounts of data; (b) systematic inference of extraneous causal links; (c) influence of data presentation order on inferences; and (d) error reduction through pretraining. Without pretraining, a model based on local computations fitted data better than a Bayesian structural inference model. The data suggest that local computations serve as a heuristic for learning causal structure. Copyright 2009 APA, all rights reserved.

  3. Improving finite element results in modeling heart valve mechanics.

    PubMed

    Earl, Emily; Mohammadi, Hadi

    2018-06-01

    Finite element analysis is a well-established computational tool which can be used for the analysis of soft tissue mechanics. Due to the structural complexity of the leaflet tissue of the heart valve, the currently available finite element models do not adequately represent the leaflet tissue. A method of addressing this issue is to implement computationally expensive finite element models, characterized by precise constitutive models including high-order and high-density mesh techniques. In this study, we introduce a novel numerical technique that enhances the results obtained from coarse mesh finite element models to provide accuracy comparable to that of fine mesh finite element models while maintaining a relatively low computational cost. Introduced in this study is a method by which the computational expense required to solve linear and nonlinear constitutive models, commonly used in heart valve mechanics simulations, is reduced while continuing to account for large and infinitesimal deformations. This continuum model is developed based on the least square algorithm procedure coupled with the finite difference method adhering to the assumption that the components of the strain tensor are available at all nodes of the finite element mesh model. The suggested numerical technique is easy to implement, practically efficient, and requires less computational time compared to currently available commercial finite element packages such as ANSYS and/or ABAQUS.

  4. Progress in Earth System Modeling since the ENIAC Calculation

    NASA Astrophysics Data System (ADS)

    Fung, I.

    2009-05-01

    The success of the first numerical weather prediction experiment on the ENIAC computer in 1950 was hinged on the expansion of the meteorological observing network, which led to theoretical advances in atmospheric dynamics and subsequently the implementation of the simplified equations on the computer. This paper briefly reviews the progress in Earth System Modeling and climate observations, and suggests a strategy to sustain and expand the observations needed to advance climate science and prediction.

  5. Computational models of human vision with applications

    NASA Technical Reports Server (NTRS)

    Wandell, B. A.

    1985-01-01

    Perceptual problems in aeronautics were studied. The mechanism by which color constancy is achieved in human vision was examined. A computable algorithm was developed to model the arrangement of retinal cones in spatial vision. The spatial frequency spectra are similar to the spectra of actual cone mosaics. The Hartley transform was evaluated as a tool for image processing, and it is suggested that it could be used in signal processing and image processing applications.

  6. "Let's get physical": advantages of a physical model over 3D computer models and textbooks in learning imaging anatomy.

    PubMed

    Preece, Daniel; Williams, Sarah B; Lam, Richard; Weller, Renate

    2013-01-01

    Three-dimensional (3D) information plays an important part in medical and veterinary education. Appreciating complex 3D spatial relationships requires a strong foundational understanding of anatomy and mental 3D visualization skills. Novel learning resources have been introduced to anatomy training to achieve this. Objective evaluation of their comparative efficacies remains scarce in the literature. This study developed and evaluated the use of a physical model in demonstrating the complex spatial relationships of the equine foot. It was hypothesized that the newly developed physical model would be more effective for students to learn magnetic resonance imaging (MRI) anatomy of the foot than textbooks or computer-based 3D models. Third year veterinary medicine students were randomly assigned to one of three teaching aid groups (physical model; textbooks; 3D computer model). The comparative efficacies of the three teaching aids were assessed through students' abilities to identify anatomical structures on MR images. Overall mean MRI assessment scores were significantly higher in students utilizing the physical model (86.39%) compared with students using textbooks (62.61%) and the 3D computer model (63.68%) (P < 0.001), with no significant difference between the textbook and 3D computer model groups (P = 0.685). Student feedback was also more positive in the physical model group compared with both the textbook and 3D computer model groups. Our results suggest that physical models may hold a significant advantage over alternative learning resources in enhancing visuospatial and 3D understanding of complex anatomical architecture, and that 3D computer models have significant limitations with regards to 3D learning. © 2013 American Association of Anatomists.

  7. Intelligent Soft Computing on Forex: Exchange Rates Forecasting with Hybrid Radial Basis Neural Network

    PubMed Central

    Marcek, Dusan; Durisova, Maria

    2016-01-01

    This paper deals with the application of quantitative soft computing prediction models to the financial area, as reliable and accurate prediction models can be very helpful in the management decision-making process. The authors suggest a new hybrid neural network which is a combination of the standard RBF neural network, a genetic algorithm, and a moving average. The moving average is supposed to enhance the outputs of the network using the error part of the original neural network. The authors test the suggested model on high-frequency time series data of USD/CAD and examine the ability to forecast exchange rate values for the horizon of one day. To determine the forecasting efficiency, they perform a comparative statistical out-of-sample analysis of the tested model against autoregressive models and the standard neural network. They also incorporate a genetic algorithm as an optimizing technique for adapting the parameters of the ANN, which is then compared with standard backpropagation and backpropagation combined with a K-means clustering algorithm. Finally, the authors find that their suggested hybrid neural network is able to produce more accurate forecasts than the standard models and can be helpful in eliminating the risk of making a bad decision in the decision-making process. PMID:26977450
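
    A minimal sketch in the spirit of the suggested hybrid, an RBF network whose forecasts are corrected by a moving average of recent errors, is given below; the synthetic series, the randomly chosen centers standing in for the genetic algorithm, and the window length are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy one-step-ahead forecasting sketch: an RBF regression whose forecasts are
# corrected by a moving average of recent residuals. The synthetic series and
# hyperparameters are illustrative, not the paper's USD/CAD data or GA tuning.
t = np.arange(400)
series = np.sin(0.05 * t) + 0.1 * rng.standard_normal(len(t))

lag = 5
X = np.array([series[i - lag:i] for i in range(lag, len(series))])
y = series[lag:]

# RBF layer: centers picked at random from the training inputs (GA stand-in).
centers = X[rng.choice(len(X), size=20, replace=False)]
width = 1.0
def rbf(X):
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

Phi = rbf(X)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)     # output weights by least squares
pred = Phi @ w
resid = y - pred

# Moving-average correction: add the mean of the preceding residuals (causal).
window = 10
corrected = pred.copy()
for i in range(window, len(pred)):
    corrected[i] = pred[i] + resid[i - window:i].mean()

print("RMSE, RBF only:      ", np.sqrt(np.mean((y - pred) ** 2)))
print("RMSE, RBF + MA error:", np.sqrt(np.mean((y - corrected) ** 2)))
```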

  8. Intelligent Soft Computing on Forex: Exchange Rates Forecasting with Hybrid Radial Basis Neural Network.

    PubMed

    Falat, Lukas; Marcek, Dusan; Durisova, Maria

    2016-01-01

    This paper deals with the application of quantitative soft computing prediction models to the financial area, as reliable and accurate prediction models can be very helpful in the management decision-making process. The authors suggest a new hybrid neural network which is a combination of the standard RBF neural network, a genetic algorithm, and a moving average. The moving average is supposed to enhance the outputs of the network using the error part of the original neural network. The authors test the suggested model on high-frequency time series data of USD/CAD and examine the ability to forecast exchange rate values for the horizon of one day. To determine the forecasting efficiency, they perform a comparative statistical out-of-sample analysis of the tested model against autoregressive models and the standard neural network. They also incorporate a genetic algorithm as an optimizing technique for adapting the parameters of the ANN, which is then compared with standard backpropagation and backpropagation combined with a K-means clustering algorithm. Finally, the authors find that their suggested hybrid neural network is able to produce more accurate forecasts than the standard models and can be helpful in eliminating the risk of making a bad decision in the decision-making process.

  9. QM Automata: A New Class of Restricted Quantum Membrane Automata.

    PubMed

    Giannakis, Konstantinos; Singh, Alexandros; Kastampolidou, Kalliopi; Papalitsas, Christos; Andronikos, Theodore

    2017-01-01

    The term "Unconventional Computing" describes the use of non-standard methods and models in computing. It is a recently established field, with many interesting and promising results. In this work we combine notions from quantum computing with aspects of membrane computing to define what we call QM automata. Specifically, we introduce a variant of quantum membrane automata that operate in accordance with the principles of quantum computing. We explore the functionality and capabilities of the QM automata through indicative examples. Finally we suggest future directions for research on QM automata.

  10. Production of Referring Expressions for an Unknown Audience: A Computational Model of Communal Common Ground

    PubMed Central

    Kutlak, Roman; van Deemter, Kees; Mellish, Chris

    2016-01-01

    This article presents a computational model of the production of referring expressions under uncertainty over the hearer's knowledge. Although situations where the hearer's knowledge is uncertain have seldom been addressed in the computational literature, they are common in ordinary communication, for example when a writer addresses an unknown audience, or when a speaker addresses a stranger. We propose a computational model composed of three complementary heuristics based on, respectively, an estimation of the recipient's knowledge, an estimation of the extent to which a property is unexpected, and the question of what is the optimum number of properties in a given situation. The model was tested in an experiment with human readers, in which it was compared against the Incremental Algorithm and human-produced descriptions. The results suggest that the new model outperforms the Incremental Algorithm in terms of the proportion of correctly identified entities and in terms of the perceived quality of the generated descriptions. PMID:27630592

  11. Using Computer Simulations for Promoting Model-based Reasoning. Epistemological and Educational Dimensions

    NASA Astrophysics Data System (ADS)

    Develaki, Maria

    2017-11-01

    Scientific reasoning is particularly pertinent to science education since it is closely related to the content and methodologies of science and contributes to scientific literacy. Much of the research in science education investigates the appropriate framework and teaching methods and tools needed to promote students' ability to reason and evaluate in a scientific way. This paper aims (a) to contribute to an extended understanding of the nature and pedagogical importance of model-based reasoning and (b) to exemplify how using computer simulations can support students' model-based reasoning. We provide first a background for both scientific reasoning and computer simulations, based on the relevant philosophical views and the related educational discussion. This background suggests that the model-based framework provides an epistemologically valid and pedagogically appropriate basis for teaching scientific reasoning and for helping students develop sounder reasoning and decision-taking abilities and explains how using computer simulations can foster these abilities. We then provide some examples illustrating the use of computer simulations to support model-based reasoning and evaluation activities in the classroom. The examples reflect the procedure and criteria for evaluating models in science and demonstrate the educational advantages of their application in classroom reasoning activities.

  12. A National Study of the Relationship between Home Access to a Computer and Academic Performance Scores of Grade 12 U.S. Science Students: An Analysis of the 2009 NAEP Data

    NASA Astrophysics Data System (ADS)

    Coffman, Mitchell Ward

    The purpose of this dissertation was to examine the relationship between student access to a computer at home and academic achievement. The 2009 National Assessment of Educational Progress (NAEP) dataset was probed using the National Data Explorer (NDE) to investigate correlations in the subsets of SES, Parental Education, Race, and Gender as they relate to access to a home computer and improved performance scores for U.S. public school grade 12 science students. A causal-comparative approach was employed to seek clarity on the relationship between home access and performance scores. The influence of home access cannot overcome the challenges faced by students of lower SES. The achievement gap, or a second digital divide, for underprivileged classes of students, including minorities, does not appear to contract via student access to a home computer. Nonetheless, in tests for significance, statistically significant improvement in science performance scores was reported for those having access to a computer at home compared to those without access. Additionally, regression models reported evidence of correlations between and among subsets of controls for the demographic factors of gender, race, and socioeconomic status. Variability in these correlations was high, suggesting that unobserved factors may have more impact upon the dependent variable. Having access to a computer at home increases performance scores for grade 12 general science students of all races, genders, and socioeconomic levels. However, the performance gap is roughly equivalent to the existing performance gap in the national average for science scores, suggesting little influence from access to a computer on academic achievement. The variability of scores reported in the regression analysis models reflects a moderate to low effect, suggesting an absence of causation. These statistical results are consistent with the literature review, in which having access to a computer at home and the predictor variables were found to have a significant impact on performance scores, although the data presented suggest that computer access at home is less influential upon performance scores than poverty and its correlates.

  13. Biological Model Development as an Opportunity to Provide Content Auditing for the Foundational Model of Anatomy Ontology.

    PubMed

    Wang, Lucy L; Grunblatt, Eli; Jung, Hyunggu; Kalet, Ira J; Whipple, Mark E

    2015-01-01

    Constructing a biological model using an established ontology provides a unique opportunity to perform content auditing on the ontology. We built a Markov chain model to study tumor metastasis in the regional lymphatics of patients with head and neck squamous cell carcinoma (HNSCC). The model attempts to determine regions with high likelihood for metastasis, which guides surgeons and radiation oncologists in selecting the boundaries of treatment. To achieve consistent anatomical relationships, the nodes in our model are populated using lymphatic objects extracted from the Foundational Model of Anatomy (FMA) ontology. During this process, we discovered several classes of inconsistencies in the lymphatic representations within the FMA. We were able to use this model building opportunity to audit the entities and connections in this region of interest (ROI). We found five subclasses of errors that are computationally detectable and resolvable, one subclass of errors that is computationally detectable but unresolvable, requiring the assistance of a content expert, and also errors of content, which cannot be detected through computational means. Mathematical descriptions of detectable errors along with expert review were used to discover inconsistencies and suggest concepts for addition and removal. Out of 106 organ and organ parts in the ROI, 8 unique entities were affected, leading to the suggestion of 30 concepts for addition and 4 for removal. Out of 27 lymphatic chain instances, 23 were found to have errors, with a total of 32 concepts suggested for addition and 15 concepts for removal. These content corrections are necessary for the accurate functioning of the FMA and provide benefits for future research and educational uses.
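
    A toy sketch of a Markov chain over a few lymphatic regions is given below; the states and transition probabilities are invented for illustration, and the nodes are not drawn from the FMA.

```python
import numpy as np

# Toy Markov chain sketch of metastatic spread through a few lymphatic regions.
# States and transition probabilities are hypothetical; the actual model
# populates its nodes from lymphatic objects in the FMA ontology.
states = ["primary", "level_II", "level_III", "level_IV"]
P = np.array([
    [0.70, 0.30, 0.00, 0.00],   # primary tumor site
    [0.00, 0.80, 0.20, 0.00],   # level II nodes
    [0.00, 0.00, 0.85, 0.15],   # level III nodes
    [0.00, 0.00, 0.00, 1.00],   # level IV nodes (absorbing here)
])

p = np.array([1.0, 0.0, 0.0, 0.0])       # tumor starts at the primary site
for step in range(1, 6):
    p = p @ P                            # propagate one time step
    print(f"step {step}: " + ", ".join(f"{s}={q:.2f}" for s, q in zip(states, p)))
```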

  14. Biological Model Development as an Opportunity to Provide Content Auditing for the Foundational Model of Anatomy Ontology

    PubMed Central

    Wang, Lucy L.; Grunblatt, Eli; Jung, Hyunggu; Kalet, Ira J.; Whipple, Mark E.

    2015-01-01

    Constructing a biological model using an established ontology provides a unique opportunity to perform content auditing on the ontology. We built a Markov chain model to study tumor metastasis in the regional lymphatics of patients with head and neck squamous cell carcinoma (HNSCC). The model attempts to determine regions with high likelihood for metastasis, which guides surgeons and radiation oncologists in selecting the boundaries of treatment. To achieve consistent anatomical relationships, the nodes in our model are populated using lymphatic objects extracted from the Foundational Model of Anatomy (FMA) ontology. During this process, we discovered several classes of inconsistencies in the lymphatic representations within the FMA. We were able to use this model building opportunity to audit the entities and connections in this region of interest (ROI). We found five subclasses of errors that are computationally detectable and resolvable, one subclass of errors that is computationally detectable but unresolvable, requiring the assistance of a content expert, and also errors of content, which cannot be detected through computational means. Mathematical descriptions of detectable errors along with expert review were used to discover inconsistencies and suggest concepts for addition and removal. Out of 106 organ and organ parts in the ROI, 8 unique entities were affected, leading to the suggestion of 30 concepts for addition and 4 for removal. Out of 27 lymphatic chain instances, 23 were found to have errors, with a total of 32 concepts suggested for addition and 15 concepts for removal. These content corrections are necessary for the accurate functioning of the FMA and provide benefits for future research and educational uses. PMID:26958311

  15. COED Transactions, Vol. IX, No. 10 & No. 11, October/November 1977. Teaching Professional Use of the Computer While Teaching the Major. Computer Applications in Design Instruction.

    ERIC Educational Resources Information Center

    Marcovitz, Alan B., Ed.

    Presented are two papers on computer applications in engineering education coursework. The first paper suggests that since most engineering graduates use only "canned programs" and rarely write their own programs, educational emphasis should include model building and the use of existing software as well as program writing. The second paper deals…

  16. Fast multigrid-based computation of the induced electric field for transcranial magnetic stimulation

    NASA Astrophysics Data System (ADS)

    Laakso, Ilkka; Hirata, Akimasa

    2012-12-01

    In transcranial magnetic stimulation (TMS), the distribution of the induced electric field, and the affected brain areas, depends on the position of the stimulation coil and the individual geometry of the head and brain. The distribution of the induced electric field in realistic anatomies can be modelled using computational methods. However, existing computational methods for accurately determining the induced electric field in realistic anatomical models have suffered from long computation times, typically in the range of tens of minutes or longer. This paper presents a matrix-free implementation of the finite-element method with a geometric multigrid method that can potentially reduce the computation time to several seconds or less even when using an ordinary computer. The performance of the method is studied by computing the induced electric field in two anatomically realistic models. An idealized two-loop coil is used as the stimulating coil. Multiple computational grid resolutions ranging from 2 to 0.25 mm are used. The results show that, for macroscopic modelling of the electric field in an anatomically realistic model, computational grid resolutions of 1 mm or 2 mm appear to provide good numerical accuracy compared to higher resolutions. The multigrid iteration typically converges in less than ten iterations independent of the grid resolution. Even without parallelization, each iteration takes about 1.0 s or 0.1 s for the 1 and 2 mm resolutions, respectively. This suggests that calculating the electric field with sufficient accuracy in real time is feasible.

  17. The many worlds hypothesis of dopamine prediction error: implications of a parallel circuit architecture in the basal ganglia.

    PubMed

    Lau, Brian; Monteiro, Tiago; Paton, Joseph J

    2017-10-01

    Computational models of reinforcement learning (RL) strive to produce behavior that maximises reward, and thus allow software or robots to behave adaptively [1]. At the core of RL models is a learned mapping between 'states'-situations or contexts that an agent might encounter in the world-and actions. A wealth of physiological and anatomical data suggests that the basal ganglia (BG) is important for learning these mappings [2,3]. However, the computations performed by specific circuits are unclear. In this brief review, we highlight recent work concerning the anatomy and physiology of BG circuits that suggest refinements in our understanding of computations performed by the basal ganglia. We focus on one important component of basal ganglia circuitry, midbrain dopamine neurons, drawing attention to data that has been cast as supporting or departing from the RL framework that has inspired experiments in basal ganglia research over the past two decades. We suggest that the parallel circuit architecture of the BG might be expected to produce variability in the response properties of different dopamine neurons, and that variability in response profile may not reflect variable functions, but rather different arguments that serve as inputs to a common function: the computation of prediction error. Copyright © 2017 Elsevier Ltd. All rights reserved.

  18. Ideal Particle Sizes for Inhaled Steroids Targeting Vocal Granulomas: Preliminary Study Using Computational Fluid Dynamics.

    PubMed

    Perkins, Elizabeth L; Basu, Saikat; Garcia, Guilherme J M; Buckmire, Robert A; Shah, Rupali N; Kimbell, Julia S

    2018-03-01

    Objectives Vocal fold granulomas are benign lesions of the larynx commonly caused by gastroesophageal reflux, intubation, and phonotrauma. Current medical therapy includes inhaled corticosteroids to target inflammation that leads to granuloma formation. Particle sizes of commonly prescribed inhalers range from 1 to 4 µm. The study objective was to use computational fluid dynamics to investigate deposition patterns over a range of particle sizes of inhaled corticosteroids targeting the larynx and vocal fold granulomas. Study Design Retrospective, case-specific computational study. Setting Tertiary academic center. Subjects/Methods A 3-dimensional anatomically realistic computational model of a normal adult airway from mouth to trachea was constructed from 3 computed tomography scans. Virtual granulomas of varying sizes and positions along the vocal fold were incorporated into the base model. Assuming steady-state, inspiratory, turbulent airflow at 30 L/min, computational fluid dynamics was used to simulate respiratory transport and deposition of inhaled corticosteroid particles ranging from 1 to 20 µm. Results Laryngeal deposition in the base model peaked for particle sizes of 8 to 10 µm (2.8%-3.5%). Ideal sizes ranged from 6 to 10, 7 to 13, and 7 to 14 µm for small, medium, and large granuloma sizes, respectively. Glottic deposition was maximal at 10.8% for 9-µm particles for the large posterior granuloma, 3 times that of the normal model (3.5%). Conclusion As the virtual granuloma size increased and the location became more posterior, glottic deposition and ideal particle size generally increased. This preliminary study suggests that inhalers with larger particle sizes, such as the fluticasone propionate dry-powder inhaler, may improve laryngeal drug deposition. Most commercially available inhalers have smaller particles than suggested here.

  19. Transforming parts of a differential equations system to difference equations as a method for run-time savings in NONMEM.

    PubMed

    Petersson, K J F; Friberg, L E; Karlsson, M O

    2010-10-01

    Computer models of biological systems grow more complex as computing power increases. Often these models are defined as differential equations and no analytical solutions exist. Numerical integration is used to approximate the solution; this can be computationally intensive, time consuming, and can account for a large proportion of the total computer runtime. The performance of different integration methods depends on the mathematical properties of the differential equations system at hand. In this paper we investigate the possibility of runtime gains by calculating parts of, or the whole, differential equations system at given time intervals, outside of the differential equations solver. This approach was tested on nine models defined as differential equations with the goal of reducing runtime while maintaining model fit, based on the objective function value. The software used was NONMEM. In four models the computational runtime was successfully reduced (by 59-96%). The differences in parameter estimates, compared to using only the differential equations solver, were less than 12% for all fixed effects parameters. For the variance parameters, estimates were within 10% for the majority of the parameters. Population and individual predictions were similar and the differences in OFV were between 1 and -14 units. When computational runtime seriously affects the usefulness of a model, we suggest evaluating this approach for repetitive elements of model building and evaluation such as covariate inclusions or bootstraps.
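
    A toy sketch of the general idea is given below: a slowly varying part of the system is advanced by a closed-form difference equation at fixed intervals, outside the solver, while the solver integrates only the remaining equation. The two-compartment structure, rate constants, and step sizes are illustrative assumptions and unrelated to NONMEM.

```python
import numpy as np

# Toy sketch: depot amount A decays with rate k1 and feeds a central compartment C
# (dC/dt = k1*A - k2*C). A is advanced by its closed-form difference equation at
# fixed intervals; only C is integrated numerically (here with a hand-rolled RK4).
k1, k2 = 0.1, 0.5              # rate constants (assumed)
dt_solver, dt_update = 0.01, 1.0

def rk4_step(c, a, h):
    """One RK4 step for dC/dt = k1*a - k2*C, with a held constant over the step."""
    f = lambda c: k1 * a - k2 * c
    k_1 = f(c); k_2 = f(c + h / 2 * k_1); k_3 = f(c + h / 2 * k_2); k_4 = f(c + h * k_3)
    return c + h / 6 * (k_1 + 2 * k_2 + 2 * k_3 + k_4)

A, C, t = 100.0, 0.0, 0.0
while t < 24.0:
    # Integrate only C over one update interval, with A frozen.
    for _ in range(round(dt_update / dt_solver)):
        C = rk4_step(C, A, dt_solver)
    # Advance the slow subsystem by its closed-form difference equation.
    A *= np.exp(-k1 * dt_update)
    t += dt_update

print(f"t={t:.0f} h: depot A={A:.2f}, central C={C:.2f}")
```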

  20. Computer Administering of the Psychological Investigations: Set-Relational Representation

    NASA Astrophysics Data System (ADS)

    Yordzhev, Krasimir

    Computer administering of a psychological investigation is the computer representation of the entire procedure of psychological assessment - test construction, test implementation, results evaluation, storage and maintenance of the developed database, and its statistical processing, analysis, and interpretation. A mathematical description of psychological assessment with the aid of personality tests is discussed in this article. Set theory and relational algebra are used in this description. A relational model of the data needed to design a computer system for automation of certain psychological assessments is given. Some finite sets, and relations on them, which are necessary for creating a personality psychological test, are described. The described model could be used to develop real software for computer administering of any psychological test, with full automation of the whole process: test construction, test implementation, result evaluation, storage of the developed database, statistical processing, analysis, and interpretation. A software project for computer administering of personality psychological tests is suggested.

  1. Perspectives for computational modeling of cell replacement for neurological disorders

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aimone, James B.; Weick, Jason P.

    Mathematical modeling of anatomically-constrained neural networks has provided significant insights regarding the response of networks to neurological disorders or injury. Furthermore, a logical extension of these models is to incorporate treatment regimens to investigate network responses to intervention. The addition of nascent neurons from stem cell precursors into damaged or diseased tissue has been used as a successful therapeutic tool in recent decades. Interestingly, models have been developed to examine the incorporation of new neurons into intact adult structures, particularly the dentate granule neurons of the hippocampus. These studies suggest that the unique properties of maturing neurons can impact circuit behavior in unanticipated ways. In this perspective, we review the current status of models used to examine damaged CNS structures, with particular focus on cortical damage due to stroke. Secondly, we suggest that computational modeling of cell replacement therapies can be made feasible by implementing approaches taken by current models of adult neurogenesis. The development of these models is critical for generating hypotheses regarding transplant therapies and improving outcomes by tailoring transplants to desired effects.

  2. Perspectives for computational modeling of cell replacement for neurological disorders

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aimone, James B.; Weick, Jason P.

    Mathematical modeling of anatomically-constrained neural networks has provided significant insights regarding the response of networks to neurological disorders or injury. A logical extension of these models is to incorporate treatment regimens to investigate network responses to intervention. The addition of nascent neurons from stem cell precursors into damaged or diseased tissue has been used as a successful therapeutic tool in recent decades. Interestingly, models have been developed to examine the incorporation of new neurons into intact adult structures, particularly the dentate granule neurons of the hippocampus. These studies suggest that the unique properties of maturing neurons can impact circuit behavior in unanticipated ways. In this perspective, we review the current status of models used to examine damaged CNS structures, with particular focus on cortical damage due to stroke. Secondly, we suggest that computational modeling of cell replacement therapies can be made feasible by implementing approaches taken by current models of adult neurogenesis. The development of these models is critical for generating hypotheses regarding transplant therapies and improving outcomes by tailoring transplants to desired effects.

  3. A Study of the Efficacy of Project-Based Learning Integrated with Computer-Based Simulation--STELLA

    ERIC Educational Resources Information Center

    Eskrootchi, Rogheyeh; Oskrochi, G. Reza

    2010-01-01

    Incorporating computer-simulation modelling into project-based learning may be effective but requires careful planning and implementation. Teachers, especially, need pedagogical content knowledge which refers to knowledge about how students learn from materials infused with technology. This study suggests that students learn best by actively…

  4. ENZVU--An Enzyme Kinetics Computer Simulation Based upon a Conceptual Model of Enzyme Action.

    ERIC Educational Resources Information Center

    Graham, Ian

    1985-01-01

    Discusses a simulation on enzyme kinetics based upon the ability of computers to generate random numbers. The program includes: (1) enzyme catalysis in a restricted two-dimensional grid; (2) visual representation of catalysis; and (3) storage and manipulation of data. Suggested applications and conclusions are also discussed. (DH)
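
    Since the record only outlines the approach, the following is a minimal random-number sketch of catalysis on a small two-dimensional grid in the spirit described; the grid size, step rule, and catalysis probability are assumptions for illustration, not the ENZVU program itself.

    ```python
    import random

    # Stochastic enzyme-catalysis sketch: substrate molecules take random steps
    # on a toroidal grid; when one lands on the enzyme cell it is converted to
    # product with probability p_cat.
    def simulate(grid=10, n_substrate=50, steps=2000, p_cat=0.5, seed=1):
        random.seed(seed)
        enzyme = (grid // 2, grid // 2)
        substrates = [(random.randrange(grid), random.randrange(grid))
                      for _ in range(n_substrate)]
        product, history = 0, []
        for _ in range(steps):
            remaining = []
            for (x, y) in substrates:
                dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
                x, y = (x + dx) % grid, (y + dy) % grid
                if (x, y) == enzyme and random.random() < p_cat:
                    product += 1            # catalysis event
                else:
                    remaining.append((x, y))
            substrates = remaining
            history.append(product)         # product vs. time resembles a progress curve
        return history
    ```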

  5. Using Intelligent Tutoring Design Principles To Integrate Cognitive Theory into Computer-Based Instruction.

    ERIC Educational Resources Information Center

    Orey, Michael A.; Nelson, Wayne A.

    Arguing that the evolution of intelligent tutoring systems better reflects the recent theoretical developments of cognitive science than traditional computer-based instruction (CBI), this paper describes a general model for an intelligent tutoring system and suggests ways to improve CBI using design principles derived from research in cognitive…

  6. Comparative Effects of Ability and Feedback Form in Computer-Assisted Instruction.

    ERIC Educational Resources Information Center

    Clariana, Roy B.; Smith, Lana J.

    A study involving 50 experimental and 99 control subjects (graduate education majors) was undertaken to assess the interchangeability of knowledge of correct response feedback (KRC) and answer until correct feedback (AUC) in computer-assisted instruction. P. L. Smith's model (1988) suggests that AUC is better for high-ability students. W. Dick and…

  7. Tying Theory To Practice: Cognitive Aspects of Computer Interaction in the Design Process.

    ERIC Educational Resources Information Center

    Mikovec, Amy E.; Dake, Dennis M.

    The new medium of computer-aided design requires changes to the creative problem-solving methodologies typically employed in the development of new visual designs. Most theoretical models of creative problem-solving suggest a linear progression from preparation and incubation to some type of evaluative study of the "inspiration." These…

  8. The Sensitivity of Memory Consolidation and Reconsolidation to Inhibitors of Protein Synthesis and Kinases: Computational Analysis

    ERIC Educational Resources Information Center

    Zhang, Yili; Smolen, Paul; Baxter, Douglas A.; Byrne, John H.

    2010-01-01

    Memory consolidation and reconsolidation require kinase activation and protein synthesis. Blocking either process during or shortly after training or recall disrupts memory stabilization, which suggests the existence of a critical time window during which these processes are necessary. Using a computational model of kinase synthesis and…

  9. Conifer ovulate cones accumulate pollen principally by simple impaction.

    PubMed

    Cresswell, James E; Henning, Kevin; Pennel, Christophe; Lahoubi, Mohamed; Patrick, Michael A; Young, Phillipe G; Tabor, Gavin R

    2007-11-13

    In many pine species (Family Pinaceae), ovulate cones structurally resemble a turbine, which has been widely interpreted as an adaptation for improving pollination by producing complex aerodynamic effects. We tested the turbine interpretation by quantifying patterns of pollen accumulation on ovulate cones in a wind tunnel and by using simulation models based on computational fluid dynamics. We used computer-aided design and computed tomography to create computational fluid dynamics model cones. We studied three species: Pinus radiata, Pinus sylvestris, and Cedrus libani. Irrespective of the approach or species studied, we found no evidence that turbine-like aerodynamics made a significant contribution to pollen accumulation, which instead occurred primarily by simple impaction. Consequently, we suggest alternative adaptive interpretations for the structure of ovulate cones.

  10. Conifer ovulate cones accumulate pollen principally by simple impaction

    PubMed Central

    Cresswell, James E.; Henning, Kevin; Pennel, Christophe; Lahoubi, Mohamed; Patrick, Michael A.; Young, Phillipe G.; Tabor, Gavin R.

    2007-01-01

    In many pine species (Family Pinaceae), ovulate cones structurally resemble a turbine, which has been widely interpreted as an adaptation for improving pollination by producing complex aerodynamic effects. We tested the turbine interpretation by quantifying patterns of pollen accumulation on ovulate cones in a wind tunnel and by using simulation models based on computational fluid dynamics. We used computer-aided design and computed tomography to create computational fluid dynamics model cones. We studied three species: Pinus radiata, Pinus sylvestris, and Cedrus libani. Irrespective of the approach or species studied, we found no evidence that turbine-like aerodynamics made a significant contribution to pollen accumulation, which instead occurred primarily by simple impaction. Consequently, we suggest alternative adaptive interpretations for the structure of ovulate cones. PMID:17986613

  11. Real-time simulation of a spiking neural network model of the basal ganglia circuitry using general purpose computing on graphics processing units.

    PubMed

    Igarashi, Jun; Shouno, Osamu; Fukai, Tomoki; Tsujino, Hiroshi

    2011-11-01

    Real-time simulation of a biologically realistic spiking neural network is necessary for evaluation of its capacity to interact with real environments. However, the real-time simulation of such a neural network is difficult due to its high computational costs that arise from two factors: (1) vast network size and (2) the complicated dynamics of biologically realistic neurons. In order to address these problems, mainly the latter, we chose to use general purpose computing on graphics processing units (GPGPUs) for simulation of such a neural network, taking advantage of the powerful computational capability of a graphics processing unit (GPU). As a target for real-time simulation, we used a model of the basal ganglia that has been developed according to electrophysiological and anatomical knowledge. The model consists of heterogeneous populations of 370 spiking model neurons, including computationally heavy conductance-based models, connected by 11,002 synapses. Simulation of the model has not yet been performed in real-time using a general computing server. By parallelization of the model on the NVIDIA Geforce GTX 280 GPU in data-parallel and task-parallel fashion, faster-than-real-time simulation was robustly realized with only one-third of the GPU's total computational resources. Furthermore, we used the GPU's full computational resources to perform faster-than-real-time simulation of three instances of the basal ganglia model; these instances consisted of 1100 neurons and 33,006 synapses and were synchronized at each calculation step. Finally, we developed software for simultaneous visualization of faster-than-real-time simulation output. These results suggest the potential power of GPGPU techniques in real-time simulation of realistic neural networks. Copyright © 2011 Elsevier Ltd. All rights reserved.
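
    As a hedged illustration of why such models parallelize well, the sketch below advances all neurons in one vectorized (data-parallel) step, the structure that maps naturally onto a GPU. It uses a generic leaky integrate-and-fire toy with placeholder parameters, not the conductance-based basal ganglia model of the paper.

    ```python
    import numpy as np

    # Data-parallel update: every neuron's membrane potential is advanced at once.
    def step(v, spikes, weights, dt=0.1, tau=10.0, v_rest=-65.0,
             v_thresh=-50.0, v_reset=-70.0, i_ext=20.0):
        i_syn = weights @ spikes                        # synaptic input from last step
        v = v + dt / tau * (v_rest - v + i_ext + i_syn)
        spikes = (v >= v_thresh).astype(float)
        v = np.where(spikes > 0, v_reset, v)            # reset neurons that fired
        return v, spikes

    n = 370                                             # network size quoted in the record
    rng = np.random.default_rng(0)
    weights = rng.normal(0.0, 1.0, (n, n)) / np.sqrt(n)
    v, spikes = np.full(n, -65.0), np.zeros(n)
    for _ in range(1000):
        v, spikes = step(v, spikes, weights)
    ```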

  12. Dental application of novel finite element analysis software for three-dimensional finite element modeling of a dentulous mandible from its computed tomography images.

    PubMed

    Nakamura, Keiko; Tajima, Kiyoshi; Chen, Ker-Kong; Nagamatsu, Yuki; Kakigawa, Hiroshi; Masumi, Shin-ich

    2013-12-01

    This study focused on the application of novel finite-element analysis software for constructing a finite-element model from the computed tomography data of a human dentulous mandible. The finite-element model is necessary for evaluating the mechanical response of the alveolar part of the mandible, resulting from occlusal force applied to the teeth during biting. Commercially available patient-specific general computed tomography-based finite-element analysis software was solely applied to the finite-element analysis for the extraction of computed tomography data. The mandibular bone with teeth was extracted from the original images. Both the enamel and the dentin were extracted after image processing, and the periodontal ligament was created from the segmented dentin. The constructed finite-element model was reasonably accurate using a total of 234,644 nodes and 1,268,784 tetrahedral and 40,665 shell elements. The elastic moduli of the heterogeneous mandibular bone were determined from the bone density data of the computed tomography images. The results suggested that the software applied in this study is both useful and powerful for creating a more accurate three-dimensional finite-element model of a dentulous mandible from the computed tomography data without the need for any other software.

  13. A computational model of the human visual cortex

    NASA Astrophysics Data System (ADS)

    Albus, James S.

    2008-04-01

    The brain is first and foremost a control system that is capable of building an internal representation of the external world, and using this representation to make decisions, set goals and priorities, formulate plans, and control behavior with intent to achieve its goals. The computational model proposed here assumes that this internal representation resides in arrays of cortical columns. More specifically, it models each cortical hypercolumn together with its underlying thalamic nuclei as a Fundamental Computational Unit (FCU) consisting of a frame-like data structure (containing attributes and pointers) plus the computational processes and mechanisms required to maintain it. In sensory-processing areas of the brain, FCUs enable segmentation, grouping, and classification. Pointers stored in FCU frames link pixels and signals to objects and events in situations and episodes that are overlaid with meaning and emotional values. In behavior-generating areas of the brain, FCUs make decisions, set goals and priorities, generate plans, and control behavior. Pointers are used to define rules, grammars, procedures, plans, and behaviors. It is suggested that it may be possible to reverse engineer the human brain at the FCU level of fidelity using next-generation massively parallel computer hardware and software. Key Words: computational modeling, human cortex, brain modeling, reverse engineering the brain, image processing, perception, segmentation, knowledge representation

  14. Effect of bulk modulus on deformation of the brain under rotational accelerations

    NASA Astrophysics Data System (ADS)

    Ganpule, S.; Daphalapurkar, N. P.; Cetingul, M. P.; Ramesh, K. T.

    2018-01-01

    Traumatic brain injury such as that developed as a consequence of blast is a complex injury with a broad range of symptoms and disabilities. Computational models of brain biomechanics hold promise for illuminating the mechanics of traumatic brain injury and for developing preventive devices. However, reliable material parameters are needed for models to be predictive. Unfortunately, the properties of human brain tissue are difficult to measure, and the bulk modulus of brain tissue in particular is not well characterized. Thus, a wide range of bulk modulus values are used in computational models of brain biomechanics, with values spanning up to three orders of magnitude. However, the sensitivity of computational predictions to these variations is not known. In this work, we study the sensitivity of a 3D computational human head model to various bulk modulus values. A subject-specific human head model was constructed from T1-weighted MRI images at 2-mm³ voxel resolution. Diffusion tensor imaging provided data on spatial distribution and orientation of axonal fiber bundles for modeling white matter anisotropy. Non-injurious, full-field brain deformations in a human volunteer were used to assess the simulated predictions. The comparison suggests that a bulk modulus value on the order of GPa gives the best agreement with experimentally measured in vivo deformations in the human brain. Further, simulations of injurious loading suggest that bulk modulus values on the order of GPa provide the closest match with the clinical findings in terms of predicted injured regions and extent of injury.

  15. Computing by physical interaction in neurons.

    PubMed

    Aur, Dorian; Jog, Mandar; Poznanski, Roman R

    2011-12-01

    The electrodynamics of action potentials represents the fundamental level where information is integrated and processed in neurons. The Hodgkin-Huxley model cannot explain the non-stereotyped spatial charge density dynamics that occur during action potential propagation. Revealed in experiments as spike directivity, the non-uniform charge density dynamics within neurons carry meaningful information and suggest that fragments of information regarding our memories are endogenously stored in structural patterns at a molecular level and are revealed only during spiking activity. The main conceptual idea is that under the influence of electric fields, efficient computation by interaction occurs between charge densities embedded within molecular structures and the transient developed flow of electrical charges. This process of computation underlying electrical interactions and molecular mechanisms at the subcellular level is dissimilar from spiking neuron models that are completely devoid of physical interactions. Computation by interaction describes a more powerful continuous model of computation than the one that consists of discrete steps as represented in Turing machines.

  16. A Systems Biology Approach to Heat Stress, Heat Injury and Heat Stroke

    DTIC Science & Technology

    2015-01-01

    Winkler et al., “Computational lipidology: predicting lipoprotein density profiles in human blood plasma,” PLoS Comput Biol, 4(5), e1000079 (2008). … other organs at high risk for injury, such as liver and kidney [24, 25]. 2.1 Utility of the computational model. … Molecular indicators of heat-induced heart injury had a large shift in relative abundance of proteins with high supersaturation scores, suggesting increased abundance of …

  17. Converting differential-equation models of biological systems to membrane computing.

    PubMed

    Muniyandi, Ravie Chandren; Zin, Abdullah Mohd; Sanders, J W

    2013-12-01

    This paper presents a method to convert the deterministic, continuous representation of a biological system by ordinary differential equations into a non-deterministic, discrete membrane computation. The dynamics of the membrane computation is governed by rewrite rules operating at certain rates. This has the advantage of applying accurately to small systems and of expressing rates of change that are determined locally, by region, but not necessarily globally. Such spatial information augments the standard differentiable approach to provide a more realistic model. A biological case study of the ligand-receptor network of protein TGF-β is used to validate the effectiveness of the conversion method. It demonstrates the sense in which the behaviours and properties of the system are better preserved in the membrane computing model, suggesting that the proposed conversion method may prove useful for biological systems in particular. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
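
    To make the flavour of such a conversion concrete, the sketch below replaces the deterministic rate law for reversible ligand-receptor binding, d[LR]/dt = k_on·[L][R] − k_off·[LR], with two discrete rewrite rules fired stochastically in proportion to their rates (a Gillespie-style update). It is an illustrative analogue under assumed rates and counts, not the paper's TGF-β membrane computing model.

    ```python
    import math
    import random

    # Two rewrite rules: L + R -> LR (rate k_on) and LR -> L + R (rate k_off),
    # applied one firing at a time with exponentially distributed waiting times.
    def simulate(L=100, R=100, LR=0, k_on=0.001, k_off=0.05, t_end=50.0, seed=0):
        random.seed(seed)
        t, trace = 0.0, [(0.0, LR)]
        while t < t_end:
            a_bind = k_on * L * R          # propensity of L + R -> LR
            a_unbind = k_off * LR          # propensity of LR -> L + R
            a_total = a_bind + a_unbind
            if a_total == 0:
                break
            t += -math.log(random.random()) / a_total   # time to the next rule firing
            if random.random() < a_bind / a_total:
                L, R, LR = L - 1, R - 1, LR + 1
            else:
                L, R, LR = L + 1, R + 1, LR - 1
            trace.append((t, LR))
        return trace
    ```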

  18. How does the brain solve visual object recognition?

    PubMed Central

    Zoccolan, Davide; Rust, Nicole C.

    2012-01-01

    Mounting evidence suggests that “core object recognition,” the ability to rapidly recognize objects despite substantial appearance variation, is solved in the brain via a cascade of reflexive, largely feedforward computations that culminate in a powerful neuronal representation in the inferior temporal cortex. However, the algorithm that produces this solution remains little-understood. Here we review evidence ranging from individual neurons, to neuronal populations, to behavior, to computational models. We propose that understanding this algorithm will require using neuronal and psychophysical data to sift through many computational models, each based on building blocks of small, canonical sub-networks with a common functional goal. PMID:22325196

  19. Transfer of computer software technology through workshops: The case of fish bioenergetics modeling

    USGS Publications Warehouse

    Johnson, B.L.

    1992-01-01

    A three-part program is proposed to promote the availability and use of computer software packages to fishery managers and researchers. The approach consists of journal articles that announce new technologies, technical reports that serve as user's guides, and hands-on workshops that provide direct instruction to new users. Workshops, which allow experienced users to directly instruct novices in software operation and application, are important but often neglected. The author's experience with organizing and conducting bioenergetics modeling workshops suggests the optimal workshop would take 2 days, have 10-15 participants, one computer for every two users, and one instructor for every 5-6 people.

  20. Modeling the Milky Way: Spreadsheet Science.

    ERIC Educational Resources Information Center

    Whitmer, John C.

    1990-01-01

    Described is the generation of a scale model of the solar system and the Milky Way galaxy using a computer spreadsheet program. A sample spreadsheet including cell formulas is provided. Suggestions for using this activity as a teaching technique are included. (CW)

  1. (abstract) A Comparison Between Measurements of the F-layer Critical Frequency and Values Derived from the PRISM Adjustment Algorithm Applied to Total Electron Content Data in the Equatorial Region

    NASA Technical Reports Server (NTRS)

    Mannucci, A. J.; Anderson, D. N.; Abdu, A. M.

    1994-01-01

    The Parametrized Real-Time Ionosphere Specification Model (PRISM) is a global ionospheric specification model that can incorporate real-time data to compute accurate electron density profiles. Time series of computed and measured data are compared in this paper. This comparison can be used to suggest methods of optimizing the PRISM adjustment algorithm for TEC data obtained at low latitudes.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dress, W.B.

    Rosen's modeling relation is embedded in Popper's three worlds to provide a heuristic tool for model building and a guide for thinking about complex systems. The utility of this construct is demonstrated by suggesting a solution to the problem of pseudoscience and a resolution of the famous Bohr-Einstein debates. A theory of bizarre systems is presented by analogy with the entangled particles of quantum mechanics. This theory underscores the poverty of present-day computational systems (e.g., computers) for creating complex and bizarre entities by distinguishing between mechanism and organism.

  3. Dopamine selectively remediates ‘model-based’ reward learning: a computational approach

    PubMed Central

    Sharp, Madeleine E.; Foerde, Karin; Daw, Nathaniel D.

    2016-01-01

    Patients with loss of dopamine due to Parkinson’s disease are impaired at learning from reward. However, it remains unknown precisely which aspect of learning is impaired. In particular, learning from reward, or reinforcement learning, can be driven by two distinct computational processes. One involves habitual stamping-in of stimulus-response associations, hypothesized to arise computationally from ‘model-free’ learning. The other, ‘model-based’ learning, involves learning a model of the world that is believed to support goal-directed behaviour. Much work has pointed to a role for dopamine in model-free learning. But recent work suggests model-based learning may also involve dopamine modulation, raising the possibility that model-based learning may contribute to the learning impairment in Parkinson’s disease. To directly test this, we used a two-step reward-learning task which dissociates model-free versus model-based learning. We evaluated learning in patients with Parkinson’s disease tested ON versus OFF their dopamine replacement medication and in healthy controls. Surprisingly, we found no effect of disease or medication on model-free learning. Instead, we found that patients tested OFF medication showed a marked impairment in model-based learning, and that this impairment was remediated by dopaminergic medication. Moreover, model-based learning was positively correlated with a separate measure of working memory performance, raising the possibility of common neural substrates. Our results suggest that some learning deficits in Parkinson’s disease may be related to an inability to pursue reward based on complete representations of the environment. PMID:26685155
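
    As a concrete illustration of the distinction the two-step task is designed to dissociate, the sketch below contrasts a model-free update, which nudges first-stage values toward the reward actually received, with a model-based evaluation that recomputes those values from a known transition structure. The learning rate and transition probabilities are illustrative assumptions, not parameters from the study.

    ```python
    import numpy as np

    alpha = 0.1                    # assumed learning rate
    T = np.array([[0.7, 0.3],      # assumed first-stage action -> second-stage state transitions
                  [0.3, 0.7]])
    Q_mf = np.zeros(2)             # model-free values of the two first-stage actions
    Q_stage2 = np.zeros(2)         # values of the two second-stage states

    def update(action, stage2_state, reward):
        # Model-free: first-stage value chases the reward actually received.
        Q_mf[action] += alpha * (reward - Q_mf[action])
        Q_stage2[stage2_state] += alpha * (reward - Q_stage2[stage2_state])

    def q_model_based():
        # Model-based: first-stage values are recomputed from the transition
        # structure and the current second-stage values.
        return T @ Q_stage2
    ```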

  4. A hierarchical competing systems model of the emergence and early development of executive function

    PubMed Central

    Marcovitch, Stuart; Zelazo, Philip David

    2010-01-01

    The hierarchical competing systems model (HCSM) provides a framework for understanding the emergence and early development of executive function – the cognitive processes underlying the conscious control of behavior – in the context of search for hidden objects. According to this model, behavior is determined by the joint influence of a developmentally invariant habit system and a conscious representational system that becomes increasingly influential as children develop. This article describes a computational formalization of the HCSM, reviews behavioral and computational research consistent with the model, and suggests directions for future research on the development of executive function. PMID:19120405

  5. The potential value of Clostridium difficile vaccine: an economic computer simulation model.

    PubMed

    Lee, Bruce Y; Popovich, Michael J; Tian, Ye; Bailey, Rachel R; Ufberg, Paul J; Wiringa, Ann E; Muder, Robert R

    2010-07-19

    Efforts are currently underway to develop a vaccine against Clostridium difficile infection (CDI). We developed two decision analytic Monte Carlo computer simulation models: (1) an Initial Prevention Model depicting the decision whether to administer C. difficile vaccine to patients at-risk for CDI and (2) a Recurrence Prevention Model depicting the decision whether to administer C. difficile vaccine to prevent CDI recurrence. Our results suggest that a C. difficile vaccine could be cost-effective over a wide range of C. difficile risk, vaccine costs, and vaccine efficacies, especially when being used post-CDI treatment to prevent recurrent disease. (c) 2010 Elsevier Ltd. All rights reserved.
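
    A minimal Monte Carlo sketch of the "vaccinate versus do not vaccinate" decision structure follows; every probability and cost below is a hypothetical placeholder rather than a value from the study, and real analyses would also vary these inputs in sensitivity analyses.

    ```python
    import random

    # Simulate one strategy arm and return mean cost per patient and CDI risk.
    def simulate_arm(n, p_cdi, vaccine, efficacy=0.7, vaccine_cost=150.0,
                     cdi_cost=12000.0, seed=0):
        random.seed(seed)                       # same draws in both arms (common random numbers)
        total_cost, cases = 0.0, 0
        for _ in range(n):
            risk = p_cdi * (1.0 - efficacy) if vaccine else p_cdi
            cost = vaccine_cost if vaccine else 0.0
            if random.random() < risk:
                cases += 1
                cost += cdi_cost
            total_cost += cost
        return total_cost / n, cases / n

    cost_v, risk_v = simulate_arm(100_000, p_cdi=0.05, vaccine=True)
    cost_n, risk_n = simulate_arm(100_000, p_cdi=0.05, vaccine=False)
    icer = (cost_v - cost_n) / (risk_n - risk_v)   # incremental cost per CDI case averted
    ```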

  6. The Potential Value of Clostridium difficile Vaccine: An Economic Computer Simulation Model

    PubMed Central

    Lee, Bruce Y.; Popovich, Michael J.; Tian, Ye; Bailey, Rachel R.; Ufberg, Paul J.; Wiringa, Ann E.; Muder, Robert R.

    2010-01-01

    Efforts are currently underway to develop a vaccine against Clostridium difficile infection (CDI). We developed two decision analytic Monte Carlo computer simulation models: (1) an Initial Prevention Model depicting the decision whether to administer C. difficile vaccine to patients at-risk for CDI and (2) a Recurrence Prevention Model depicting the decision whether to administer C. difficile vaccine to prevent CDI recurrence. Our results suggest that a C. difficile vaccine could be cost-effective over a wide range of C. difficile risk, vaccine costs, and vaccine efficacies especially when being used post-CDI treatment to prevent recurrent disease. PMID:20541582

  7. Design of Intelligent Robot as A Tool for Teaching Media Based on Computer Interactive Learning and Computer Assisted Learning to Improve the Skill of University Student

    NASA Astrophysics Data System (ADS)

    Zuhrie, M. S.; Basuki, I.; Asto B, I. G. P.; Anifah, L.

    2018-01-01

    The focus of the research is a teaching module that incorporates manufacturing, planning, mechanical design, system control through microprocessor technology, and maneuverability of the robot. Computer-interactive and computer-assisted learning are strategies that emphasize the use of computers and learning aids in teaching and learning activities. This research applied the 4-D research and development model suggested by Thiagarajan et al. (1974), which consists of four stages: Define, Design, Develop, and Disseminate. The research was conducted using this development design with the objective of producing a learning tool in the form of intelligent robot modules and a kit based on Computer Interactive Learning and Computer Assisted Learning. Data from the Indonesia Robot Contest during the period 2009-2015 show that the developed modules reached the fourth stage of the development method, dissemination. The developed modules guide students to produce an intelligent robot tool for teaching based on Computer Interactive Learning and Computer Assisted Learning. Students' responses also showed positive feedback on the robotics module and computer-based interactive learning.

  8. Toward a computational framework for cognitive biology: Unifying approaches from cognitive neuroscience and comparative cognition

    NASA Astrophysics Data System (ADS)

    Fitch, W. Tecumseh

    2014-09-01

    Progress in understanding cognition requires a quantitative, theoretical framework, grounded in the other natural sciences and able to bridge between implementational, algorithmic and computational levels of explanation. I review recent results in neuroscience and cognitive biology that, when combined, provide key components of such an improved conceptual framework for contemporary cognitive science. Starting at the neuronal level, I first discuss the contemporary realization that single neurons are powerful tree-shaped computers, which implies a reorientation of computational models of learning and plasticity to a lower, cellular, level. I then turn to predictive systems theory (predictive coding and prediction-based learning) which provides a powerful formal framework for understanding brain function at a more global level. Although most formal models concerning predictive coding are framed in associationist terms, I argue that modern data necessitate a reinterpretation of such models in cognitive terms: as model-based predictive systems. Finally, I review the role of the theory of computation and formal language theory in the recent explosion of comparative biological research attempting to isolate and explore how different species differ in their cognitive capacities. Experiments to date strongly suggest that there is an important difference between humans and most other species, best characterized cognitively as a propensity by our species to infer tree structures from sequential data. Computationally, this capacity entails generative capacities above the regular (finite-state) level; implementationally, it requires some neural equivalent of a push-down stack. I dub this unusual human propensity "dendrophilia", and make a number of concrete suggestions about how such a system may be implemented in the human brain, about how and why it evolved, and what this implies for models of language acquisition. I conclude that, although much remains to be done, a neurally-grounded framework for theoretical cognitive science is within reach that can move beyond polarized debates and provide a more adequate theoretical future for cognitive biology.
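
    To make the computational claim concrete, the sketch below contrasts a finite-state check of a regular pattern (a*b*) with recognition of the supra-regular pattern aⁿbⁿ, whose matched counts require stack-like memory. This is a standard textbook illustration of the regular/supra-regular distinction, not code from the article.

    ```python
    def regular_ab(s):
        # Finite-state check of "some a's then some b's" (a*b*): no counting needed.
        state = "A"
        for ch in s:
            if state == "A" and ch == "a":
                continue
            elif ch == "b":
                state = "B"
            else:
                return False
        return True

    def supra_regular_anbn(s):
        # Accepts a^n b^n by using an explicit stack (equivalently, a counter):
        # the nested dependency cannot be tracked by a finite-state machine.
        stack, i = [], 0
        while i < len(s) and s[i] == "a":
            stack.append("a")
            i += 1
        while i < len(s) and s[i] == "b":
            if not stack:
                return False
            stack.pop()
            i += 1
        return i == len(s) and not stack

    assert regular_ab("aaabb") and not regular_ab("aba")
    assert supra_regular_anbn("aabb") and not supra_regular_anbn("aab")
    ```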

  9. Toward a computational framework for cognitive biology: unifying approaches from cognitive neuroscience and comparative cognition.

    PubMed

    Fitch, W Tecumseh

    2014-09-01

    Progress in understanding cognition requires a quantitative, theoretical framework, grounded in the other natural sciences and able to bridge between implementational, algorithmic and computational levels of explanation. I review recent results in neuroscience and cognitive biology that, when combined, provide key components of such an improved conceptual framework for contemporary cognitive science. Starting at the neuronal level, I first discuss the contemporary realization that single neurons are powerful tree-shaped computers, which implies a reorientation of computational models of learning and plasticity to a lower, cellular, level. I then turn to predictive systems theory (predictive coding and prediction-based learning) which provides a powerful formal framework for understanding brain function at a more global level. Although most formal models concerning predictive coding are framed in associationist terms, I argue that modern data necessitate a reinterpretation of such models in cognitive terms: as model-based predictive systems. Finally, I review the role of the theory of computation and formal language theory in the recent explosion of comparative biological research attempting to isolate and explore how different species differ in their cognitive capacities. Experiments to date strongly suggest that there is an important difference between humans and most other species, best characterized cognitively as a propensity by our species to infer tree structures from sequential data. Computationally, this capacity entails generative capacities above the regular (finite-state) level; implementationally, it requires some neural equivalent of a push-down stack. I dub this unusual human propensity "dendrophilia", and make a number of concrete suggestions about how such a system may be implemented in the human brain, about how and why it evolved, and what this implies for models of language acquisition. I conclude that, although much remains to be done, a neurally-grounded framework for theoretical cognitive science is within reach that can move beyond polarized debates and provide a more adequate theoretical future for cognitive biology. Copyright © 2014. Published by Elsevier B.V.

  10. The k-d Tree: A Hierarchical Model for Human Cognition.

    ERIC Educational Resources Information Center

    Vandendorpe, Mary M.

    This paper discusses a model of information storage and retrieval, the k-d tree (Bentley, 1975), a binary, hierarchical tree with multiple associate terms, which has been explored in computer research, and it is suggested that this model could be useful for describing human cognition. Included are two models of human long-term memory--networks and…
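
    Since the record describes the k-d tree only abstractly, here is a minimal two-dimensional sketch of insertion and exact-match search that alternates the splitting attribute by depth; it illustrates the data structure itself, not the cognitive model built on it.

    ```python
    # Minimal 2-D k-d tree: nodes split on x at even depths and y at odd depths.
    class Node:
        def __init__(self, point):
            self.point, self.left, self.right = point, None, None

    def insert(node, point, depth=0):
        if node is None:
            return Node(point)
        axis = depth % 2
        if point[axis] < node.point[axis]:
            node.left = insert(node.left, point, depth + 1)
        else:
            node.right = insert(node.right, point, depth + 1)
        return node

    def search(node, point, depth=0):
        if node is None:
            return False
        if node.point == point:
            return True
        axis = depth % 2
        child = node.left if point[axis] < node.point[axis] else node.right
        return search(child, point, depth + 1)

    root = None
    for p in [(5, 3), (2, 7), (8, 1)]:
        root = insert(root, p)
    assert search(root, (8, 1)) and not search(root, (4, 4))
    ```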

  11. Product placement of computer games in cyberspace.

    PubMed

    Yang, Heng-Li; Wang, Cheng-Shu

    2008-08-01

    Computer games are considered an emerging media and are even regarded as an advertising channel. By a three-phase experiment, this study investigated the advertising effectiveness of computer games for different product placement forms, product types, and their combinations. As the statistical results revealed, computer games are appropriate for placement advertising. Additionally, different product types and placement forms produced different advertising effectiveness. Optimum combinations of product types and placement forms existed. An advertisement design model is proposed for use in game design environments. Some suggestions are given for advertisers and game companies respectively.

  12. Optimal control in a model of malaria with differential susceptibility

    NASA Astrophysics Data System (ADS)

    Hincapié, Doracelly; Ospina, Juan

    2014-06-01

    A malaria model with differential susceptibility is analyzed using the optimal control technique. In the model the human population is classified as susceptible, infected, and recovered. Susceptibility is assumed to depend on genetic, physiological, or social characteristics that vary between individuals. The model is described by a system of differential equations that relate the human and vector populations, so that the infection is transmitted to humans by vectors and to vectors by humans. The model is analyzed using the optimal control method, where the control consists of the use of insecticide-treated nets and educational campaigns, and the optimality criterion is to minimize the number of infected humans while keeping the cost as low as possible. The first goal is to determine the effects of differential susceptibility on the proposed control mechanism, and the second goal is to determine the algebraic form of the basic reproductive number of the model. All computations are performed using computer algebra, specifically Maple. It is claimed that the analytical results obtained are important for the design and implementation of control measures for malaria. Some future investigations are suggested, such as applying the method to other vector-borne diseases such as dengue or yellow fever, and applying free computer algebra software such as Maxima.
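
    A minimal numerical sketch of the model structure described (two human susceptibility classes, a vector population, and a control u in [0, 1] that scales down transmission) is given below. All parameter values, the specific compartment equations, and the use of SciPy are illustrative assumptions; the article's analysis is symbolic, via Maple, and additionally optimizes u over time.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    def rhs(t, y, beta1, beta2, beta_v, gamma, mu_v, u):
        S1, S2, I, R, Sv, Iv = y
        lam1 = (1 - u) * beta1 * Iv           # force of infection on susceptibility class 1
        lam2 = (1 - u) * beta2 * Iv           # class 2 differs only in susceptibility
        dS1 = -lam1 * S1
        dS2 = -lam2 * S2
        dI = lam1 * S1 + lam2 * S2 - gamma * I
        dR = gamma * I
        dSv = mu_v * (Sv + Iv) - (1 - u) * beta_v * I * Sv - mu_v * Sv
        dIv = (1 - u) * beta_v * I * Sv - mu_v * Iv
        return [dS1, dS2, dI, dR, dSv, dIv]

    y0 = [0.5, 0.4, 0.1, 0.0, 0.95, 0.05]     # human and vector fractions
    sol = solve_ivp(rhs, (0, 200), y0, args=(0.3, 0.6, 0.2, 0.1, 0.05, 0.4))
    ```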

  13. Computational knee ligament modeling using experimentally determined zero-load lengths.

    PubMed

    Bloemker, Katherine H; Guess, Trent M; Maletsky, Lorin; Dodd, Kevin

    2012-01-01

    This study presents a subject-specific method of determining the zero-load lengths of the cruciate and collateral ligaments in computational knee modeling. Three cadaver knees were tested in a dynamic knee simulator. The cadaver knees also underwent manual envelope of motion testing to find their passive range of motion in order to determine the zero-load lengths for each ligament bundle. Computational multibody knee models were created for each knee, and model kinematics were compared to experimental kinematics for a simulated walk cycle. One-dimensional non-linear spring-damper elements were used to represent cruciate and collateral ligament bundles in the knee models. This study found that knee kinematics were highly sensitive to alterations in the zero-load length. The results also suggest optimal methods for defining each of the ligament bundle zero-load lengths, regardless of the subject. These results verify the importance of the zero-load length when modeling the knee joint and verify that manual envelope of motion measurements can be used to determine the passive range of motion of the knee joint. It is also believed that the method described here for determining zero-load length can be used for in vitro or in vivo subject-specific computational models.

  14. Computer model to simulate testing at the National Transonic Facility

    NASA Technical Reports Server (NTRS)

    Mineck, Raymond E.; Owens, Lewis R., Jr.; Wahls, Richard A.; Hannon, Judith A.

    1995-01-01

    A computer model has been developed to simulate the processes involved in the operation of the National Transonic Facility (NTF), a large cryogenic wind tunnel at the Langley Research Center. The simulation was verified by comparing the simulated results with previously acquired data from three experimental wind tunnel test programs in the NTF. The comparisons suggest that the computer model simulates reasonably well the processes that determine the liquid nitrogen (LN2) consumption, electrical consumption, fan-on time, and the test time required to complete a test plan at the NTF. From these limited comparisons, it appears that the results from the simulation model are generally within about 10 percent of the actual NTF test results. The use of actual data acquisition times in the simulation produced better estimates of the LN2 usage, as expected. Additional comparisons are needed to refine the model constants. The model will typically produce optimistic results since the times and rates included in the model are typically the optimum values. Any deviation from the optimum values will lead to longer times or increased LN2 and electrical consumption for the proposed test plan. Computer code operating instructions and listings of sample input and output files have been included.

  15. Railroads and the Environment : Estimation of Fuel Consumption in Rail Transportation : Volume 1. Analytical Model

    DOT National Transportation Integrated Search

    1975-05-01

    The report describes an analytical approach to estimation of fuel consumption in rail transportation, and provides sample computer calculations suggesting the sensitivity of fuel usage to various parameters. The model used is based upon careful delin...

  16. Allele-sharing models: LOD scores and accurate linkage tests.

    PubMed

    Kong, A; Cox, N J

    1997-11-01

    Starting with a test statistic for linkage analysis based on allele sharing, we propose an associated one-parameter model. Under general missing-data patterns, this model allows exact calculation of likelihood ratios and LOD scores and has been implemented by a simple modification of existing software. Most important, accurate linkage tests can be performed. Using an example, we show that some previously suggested approaches to handling less than perfectly informative data can be unacceptably conservative. Situations in which this model may not perform well are discussed, and an alternative model that requires additional computations is suggested.

  17. Allele-sharing models: LOD scores and accurate linkage tests.

    PubMed Central

    Kong, A; Cox, N J

    1997-01-01

    Starting with a test statistic for linkage analysis based on allele sharing, we propose an associated one-parameter model. Under general missing-data patterns, this model allows exact calculation of likelihood ratios and LOD scores and has been implemented by a simple modification of existing software. Most important, accurate linkage tests can be performed. Using an example, we show that some previously suggested approaches to handling less than perfectly informative data can be unacceptably conservative. Situations in which this model may not perform well are discussed, and an alternative model that requires additional computations is suggested. PMID:9345087

  18. Application of Psychological Theories in Agent-Based Modeling: The Case of the Theory of Planned Behavior.

    PubMed

    Scalco, Andrea; Ceschi, Andrea; Sartori, Riccardo

    2018-01-01

    It is likely that computer simulations will assume a greater role in the near future in investigating and understanding reality (Rand & Rust, 2011). In particular, agent-based models (ABMs) represent a method for investigating social phenomena that blends the knowledge of the social sciences with the advantages of virtual simulations. Within this context, the development of algorithms able to recreate the reasoning engine of autonomous virtual agents is one of the most fragile aspects, and it is crucial to ground such models in well-supported psychological theoretical frameworks. For this reason, the present work discusses the application of the theory of planned behavior (TPB; Ajzen, 1991) in the context of agent-based modeling: it is argued that this framework might be more helpful than others for developing a valid representation of human behavior in computer simulations. Accordingly, the current contribution considers issues related to applying the model proposed by the TPB inside computer simulations and suggests potential solutions, with the hope of helping to narrow the distance between the fields of psychology and computer science.
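
    A minimal sketch, under assumed weights and threshold, of how an agent's intention could be composed from the three TPB constructs (attitude, subjective norm, perceived behavioural control) and turned into behaviour inside an agent-based simulation; it illustrates the general idea, not the article's model.

    ```python
    import random
    from dataclasses import dataclass

    @dataclass
    class TPBAgent:
        attitude: float            # evaluation of the behaviour, in [0, 1]
        norm: float                # perceived social pressure, in [0, 1]
        control: float             # perceived behavioural control, in [0, 1]
        w = (0.4, 0.3, 0.3)        # assumed construct weights
        threshold = 0.5            # assumed intention threshold

        def intention(self):
            w1, w2, w3 = self.w
            return w1 * self.attitude + w2 * self.norm + w3 * self.control

        def acts(self):
            # Behaviour requires both sufficient intention and actual control.
            return self.intention() > self.threshold and random.random() < self.control

    agents = [TPBAgent(random.random(), random.random(), random.random())
              for _ in range(100)]
    adoption_rate = sum(a.acts() for a in agents) / len(agents)
    ```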

  19. Computer-aided design of the human aortic root.

    PubMed

    Ovcharenko, E A; Klyshnikov, K U; Vlad, A R; Sizova, I N; Kokov, A N; Nushtaev, D V; Yuzhalin, A E; Zhuravleva, I U

    2014-11-01

    The development of computer-based 3D models of the aortic root is one of the most important problems in constructing the prostheses for transcatheter aortic valve implantation. In the current study, we analyzed data from 117 patients with and without aortic valve disease and computed tomography data from 20 patients without aortic valvular diseases in order to estimate the average values of the diameter of the aortic annulus and other aortic root parameters. Based on these data, we developed a 3D model of human aortic root with unique geometry. Furthermore, in this study we show that by applying different material properties to the aortic annulus zone in our model, we can significantly improve the quality of the results of finite element analysis. To summarize, here we present four 3D models of human aortic root with unique geometry based on computational analysis of ECHO and CT data. We suggest that our models can be utilized for the development of better prostheses for transcatheter aortic valve implantation. Copyright © 2014 Elsevier Ltd. All rights reserved.

  20. Role of Statistical Random-Effects Linear Models in Personalized Medicine.

    PubMed

    Diaz, Francisco J; Yeh, Hung-Wen; de Leon, Jose

    2012-03-01

    Some empirical studies and recent developments in pharmacokinetic theory suggest that statistical random-effects linear models are valuable tools that allow describing simultaneously patient populations as a whole and patients as individuals. This remarkable characteristic indicates that these models may be useful in the development of personalized medicine, which aims at finding treatment regimens that are appropriate for particular patients, not just for the average patient. In fact, published developments show that random-effects linear models may provide a solid theoretical framework for drug dosage individualization in chronic diseases. In particular, individualized dosages computed with these models by means of an empirical Bayesian approach may produce better results than dosages computed with some methods routinely used in therapeutic drug monitoring. This is further supported by published empirical and theoretical findings showing that random-effects linear models may provide accurate representations of phase III and IV steady-state pharmacokinetic data and may be useful for dosage computations. These models have applications in the design of clinical algorithms for drug dosage individualization in chronic diseases; in the computation of dose correction factors; in the computation of the minimum number of blood samples from a patient needed to calculate an optimal individualized drug dosage in therapeutic drug monitoring; in measuring the clinical importance of clinical, demographic, environmental, or genetic covariates; in the study of drug-drug interactions in clinical settings; in the implementation of computational tools for web-site-based evidence farming; in the design of pharmacogenomic studies; and in the development of a pharmacological theory of dosage individualization.
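
    A minimal sketch of the empirical Bayes shrinkage idea behind such individualization: a patient's parameter estimate is a precision-weighted compromise between the population mean and the patient's own sparse measurements, and the dose follows from the individual estimate. The normal-normal shrinkage formula, the simple dosing rule, and all numbers are illustrative assumptions, not the article's models.

    ```python
    import numpy as np

    def eb_estimate(y_i, pop_mean, between_var, within_var):
        # Posterior (empirical Bayes) mean for one patient: shrink the patient's
        # observed average toward the population mean, with less shrinkage as
        # more of that patient's data accumulate.
        n = len(y_i)
        shrinkage = between_var / (between_var + within_var / n)
        return pop_mean + shrinkage * (np.mean(y_i) - pop_mean)

    def individualized_dose(clearance_i, target_conc):
        # Toy steady-state rule: dose rate = target concentration x clearance.
        return target_conc * clearance_i

    cl_i = eb_estimate(y_i=np.array([4.2, 4.8]), pop_mean=6.0,
                       between_var=1.5, within_var=0.8)
    dose = individualized_dose(cl_i, target_conc=10.0)
    ```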

  1. The process group approach to reliable distributed computing

    NASA Technical Reports Server (NTRS)

    Birman, Kenneth P.

    1992-01-01

    The difficulty of developing reliable distributed software is an impediment to applying distributed computing technology in many settings. Experience with the ISIS system suggests that a structured approach based on virtually synchronous process groups yields systems that are substantially easier to develop, exploit sophisticated forms of cooperative computation, and achieve high reliability. Six years of research on ISIS are reviewed, describing the model, its implementation challenges, and the types of applications to which ISIS has been applied.

  2. Some Observations on the Current Status of Performing Finite Element Analyses

    NASA Technical Reports Server (NTRS)

    Raju, Ivatury S.; Knight, Norman F., Jr; Shivakumar, Kunigal N.

    2015-01-01

    Aerospace structures are complex high-performance structures. Advances in reliable and efficient computing and modeling tools are enabling analysts to consider complex configurations, build complex finite element models, and perform analysis rapidly. Many of the early-career engineers of today are very proficient in the use of modern computers, computing engines, complex software systems, and visualization tools. These young engineers are becoming increasingly efficient at building complex 3D models of complicated aerospace components. However, current trends show blind acceptance of finite element analysis results. This paper is aimed at raising awareness of this situation. Examples of commonly encountered issues are presented. To counter the current trends, some guidelines and suggestions for analysts, senior engineers, and educators are offered.

  3. Black hole Brownian motion in a rotating environment

    NASA Astrophysics Data System (ADS)

    Lingam, Manasvi

    2018-01-01

    A Langevin equation is set up to model the dynamics of a supermassive black hole (massive particle) in a rotating environment (of light particles), typically the inner region of the galaxy, under the influence of dynamical friction, gravity, and stochastic forces. The formal solution is derived, and the displacement and velocity two-point correlation functions are computed. The correlators perpendicular to the axis of rotation are equal to one another and different from those parallel to the axis. By computing this difference, it is suggested that one can, perhaps, observationally determine the magnitude of the rotation. In the case with sufficiently fast rotation, it is suggested that this model can lead to an ejection. If either dynamical friction or Eddington accretion is included, it is shown that a near-identical Langevin equation follows, allowing us to treat the two cases in a unified manner. The limitations of the model are also presented and compared against previous results.

  4. Computer Modeling and Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pronskikh, V. S.

    2014-05-09

    Verification and validation of computer codes and models used in simulation are two aspects of scientific practice of high importance that have recently been discussed by philosophers of science. While verification is predominantly associated with the correctness of the way a model is represented by a computer code or algorithm, validation more often refers to the model's relation to the real world and its intended use. It has been argued that, because complex simulations are generally not transparent to a practitioner, the Duhem problem can arise for verification and validation due to their entanglement; such an entanglement makes it impossible to distinguish whether a coding error or the model's general inadequacy to its target should be blamed in the case of model failure. I argue that in order to disentangle verification and validation, a clear distinction between computer modeling (construction of mathematical computer models of elementary processes) and simulation (construction of models of composite objects and processes by means of numerical experimenting with them) needs to be made. Holding to that distinction, I propose to relate verification (based on theoretical strategies such as inferences) to modeling, and validation, which shares a common epistemology with experimentation, to simulation. To explain the reasons for their intermittent entanglement, I propose a Weberian ideal-typical model of modeling and simulation as roles in practice. I suggest an approach to alleviate the Duhem problem for verification and validation that is generally applicable in practice and based on differences in epistemic strategies and scopes.

  5. Inhalation toxicity of indoor air pollutants in Drosophila melanogaster using integrated transcriptomics and computational behavior analyses

    NASA Astrophysics Data System (ADS)

    Eom, Hyun-Jeong; Liu, Yuedan; Kwak, Gyu-Suk; Heo, Muyoung; Song, Kyung Seuk; Chung, Yun Doo; Chon, Tae-Soo; Choi, Jinhee

    2017-06-01

    We conducted an inhalation toxicity test on the alternative animal model, Drosophila melanogaster, to investigate potential hazards of indoor air pollution. The inhalation toxicity of toluene and formaldehyde was investigated using comprehensive transcriptomics and computational behavior analyses. The ingenuity pathway analysis (IPA) based on microarray data suggests the involvement of pathways related to immune response, stress response, and metabolism in formaldehyde and toluene exposure based on hub molecules. We conducted a toxicity test using mutants of the representative genes in these pathways to explore the toxicological consequences of alterations of these pathways. Furthermore, extensive computational behavior analysis showed that exposure to either toluene or formaldehyde reduced most of the behavioral parameters of both wild-type and mutants. Interestingly, behavioral alteration caused by toluene or formaldehyde exposure was most severe in the p38b mutant, suggesting that the defects in the p38 pathway underlie behavioral alteration. Overall, the results indicate that exposure to toluene and formaldehyde via inhalation causes severe toxicity in Drosophila, by inducing significant alterations in gene expression and behavior, suggesting that Drosophila can be used as a potential alternative model in inhalation toxicity screening.

  6. Inhalation toxicity of indoor air pollutants in Drosophila melanogaster using integrated transcriptomics and computational behavior analyses

    PubMed Central

    Eom, Hyun-Jeong; Liu, Yuedan; Kwak, Gyu-Suk; Heo, Muyoung; Song, Kyung Seuk; Chung, Yun Doo; Chon, Tae-Soo; Choi, Jinhee

    2017-01-01

    We conducted an inhalation toxicity test on the alternative animal model, Drosophila melanogaster, to investigate potential hazards of indoor air pollution. The inhalation toxicity of toluene and formaldehyde was investigated using comprehensive transcriptomics and computational behavior analyses. The ingenuity pathway analysis (IPA) based on microarray data suggests the involvement of pathways related to immune response, stress response, and metabolism in formaldehyde and toluene exposure based on hub molecules. We conducted a toxicity test using mutants of the representative genes in these pathways to explore the toxicological consequences of alterations of these pathways. Furthermore, extensive computational behavior analysis showed that exposure to either toluene or formaldehyde reduced most of the behavioral parameters of both wild-type and mutants. Interestingly, behavioral alteration caused by toluene or formaldehyde exposure was most severe in the p38b mutant, suggesting that the defects in the p38 pathway underlie behavioral alteration. Overall, the results indicate that exposure to toluene and formaldehyde via inhalation causes severe toxicity in Drosophila, by inducing significant alterations in gene expression and behavior, suggesting that Drosophila can be used as a potential alternative model in inhalation toxicity screening. PMID:28621308

  7. Computational modelling of cellular level metabolism

    NASA Astrophysics Data System (ADS)

    Calvetti, D.; Heino, J.; Somersalo, E.

    2008-07-01

    The steady and stationary state inverse problems consist of estimating the reaction and transport fluxes, blood concentrations, and possibly the rates of change of some of the concentrations, based on data that are often scarce, noisy, and sampled over a population. The Bayesian framework provides a natural setting for the solution of this inverse problem, because a priori knowledge about the system itself and the unknown reaction fluxes and transport rates can compensate for the insufficiency of measured data, provided that the computational costs do not become prohibitive. This article identifies the computational challenges that have to be met when analyzing the steady and stationary states of a multicompartment model for cellular metabolism and suggests stable and efficient ways to handle the computations. The outline of a computational tool based on the Bayesian paradigm for the simulation and analysis of complex cellular metabolic systems is also presented.
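
    A minimal linear-Gaussian sketch of the Bayesian idea: scarce, noisy measurements y = A f + noise are combined with a prior on the fluxes f, so the posterior remains well defined even when the measurements alone are underdetermined. The matrices, noise level, and prior values here are illustrative assumptions, not the article's metabolic model.

    ```python
    import numpy as np

    def posterior_flux(A, y, sigma, f_prior, prior_var):
        # Posterior for f under y ~ N(A f, sigma^2 I) and f ~ N(f_prior, prior_var I).
        n = len(f_prior)
        prior_prec = np.eye(n) / prior_var
        precision = A.T @ A / sigma**2 + prior_prec
        mean = np.linalg.solve(precision, A.T @ y / sigma**2 + prior_prec @ f_prior)
        cov = np.linalg.inv(precision)
        return mean, cov

    A = np.array([[1.0, -1.0, 0.0],     # toy stoichiometric/measurement matrix
                  [0.0,  1.0, -1.0]])
    y = np.array([0.8, 0.2])            # noisy observations
    mean, cov = posterior_flux(A, y, sigma=0.1, f_prior=np.ones(3), prior_var=4.0)
    ```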

  8. Modeling Fish Growth in Low Dissolved Oxygen

    ERIC Educational Resources Information Center

    Neilan, Rachael Miller

    2013-01-01

    This article describes a computational project designed for undergraduate students as an introduction to mathematical modeling. Students use an ordinary differential equation to describe fish weight and assume the instantaneous growth rate depends on the concentration of dissolved oxygen. Published laboratory experiments suggest that continuous…
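    As a rough illustration of the kind of model described (the growth equation, the dissolved-oxygen dependence, and all parameter values below are assumptions, not the article's), the weight ODE can be integrated for different dissolved-oxygen levels:

```python
# Illustrative sketch: fish weight W(t) grows toward an asymptotic weight at a
# rate that is scaled down when dissolved oxygen (DO) falls below a saturation level.
import numpy as np
from scipy.integrate import solve_ivp

def growth_rate(do_mgL, k=0.05, do_crit=2.0, do_opt=6.0):
    """Fraction of the maximum growth rate achieved at a given DO concentration (assumed form)."""
    scale = np.clip((do_mgL - do_crit) / (do_opt - do_crit), 0.0, 1.0)
    return k * scale

def dWdt(t, W, do_mgL):
    # growth toward an assumed asymptotic weight of 1000 g
    return [growth_rate(do_mgL) * (1000.0 - W[0])]

for do in (6.0, 3.0, 1.5):   # well-oxygenated, hypoxic, severely hypoxic
    sol = solve_ivp(dWdt, (0.0, 120.0), [50.0], args=(do,))
    print(f"DO = {do} mg/L -> weight after 120 days: {sol.y[0, -1]:.1f} g")
```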

  9. Ice-sheet modelling accelerated by graphics cards

    NASA Astrophysics Data System (ADS)

    Brædstrup, Christian Fredborg; Damsgaard, Anders; Egholm, David Lundbek

    2014-11-01

    Studies of glaciers and ice sheets have increased the demand for high performance numerical ice flow models over the past decades. When exploring the highly non-linear dynamics of fast flowing glaciers and ice streams, or when coupling multiple flow processes for ice, water, and sediment, researchers are often forced to use super-computing clusters. As an alternative to conventional high-performance computing hardware, the Graphical Processing Unit (GPU) is capable of massively parallel computing while retaining a compact design and low cost. In this study, we present a strategy for accelerating a higher-order ice flow model using a GPU. By applying the newest GPU hardware, we achieve up to 180× speedup compared to a similar but serial CPU implementation. Our results suggest that GPU acceleration is a competitive option for ice-flow modelling when compared to CPU-optimised algorithms parallelised by the OpenMP or Message Passing Interface (MPI) protocols.

  10. Subject-specific computer simulation model for determining elbow loading in one-handed tennis backhand groundstrokes.

    PubMed

    King, Mark A; Glynn, Jonathan A; Mitchell, Sean R

    2011-11-01

    A subject-specific angle-driven computer model of a tennis player, combined with a forward dynamics, equipment-specific computer model of tennis ball-racket impacts, was developed to determine the effect of ball-racket impacts on loading at the elbow for one-handed backhand groundstrokes. Subject-specific computer simulations were matched to a typical topspin/slice one-handed backhand groundstroke performed by an elite tennis player, with root mean square differences between performance and matching simulations of less than 0.5 degrees over a 50 ms period starting from ball impact. Simulation results suggest that for similar ball-racket impact conditions, the difference in elbow loading for a topspin and slice one-handed backhand groundstroke is relatively small. In this study, the relatively small differences in elbow loading may be due to comparable angle-time histories at the wrist and elbow joints, with the major kinematic differences occurring at the shoulder. Using a subject-specific angle-driven computer model combined with a forward dynamics, equipment-specific computer model of tennis ball-racket impacts allows peak internal loading, net impulse, and shock due to ball-racket impact to be calculated, which would not otherwise be possible without impractical invasive techniques. This study provides a basis for further investigation of the factors that may increase elbow loading during tennis strokes.

  11. CADRE-SS, an in Silico Tool for Predicting Skin Sensitization Potential Based on Modeling of Molecular Interactions.

    PubMed

    Kostal, Jakub; Voutchkova-Kostal, Adelina

    2016-01-19

    Using computer models to accurately predict toxicity outcomes is considered to be a major challenge. However, state-of-the-art computational chemistry techniques can now be incorporated in predictive models, supported by advances in mechanistic toxicology and the exponential growth of computing resources witnessed over the past decade. The CADRE (Computer-Aided Discovery and REdesign) platform relies on quantum-mechanical modeling of molecular interactions that represent key biochemical triggers in toxicity pathways. Here, we present an external validation exercise for CADRE-SS, a variant developed to predict the skin sensitization potential of commercial chemicals. CADRE-SS is a hybrid model that evaluates skin permeability using Monte Carlo simulations, assigns reactive centers in a molecule and possible biotransformations via expert rules, and determines reactivity with skin proteins via quantum-mechanical modeling. The results were promising with an overall very good concordance of 93% between experimental and predicted values. Comparison to performance metrics yielded by other tools available for this endpoint suggests that CADRE-SS offers distinct advantages for first-round screenings of chemicals and could be used as an in silico alternative to animal tests where permissible by legislative programs.

  12. Teacher Challenges, Perceptions, and Use of Science Models in Middle School Classrooms about Climate, Weather, and Energy Concepts

    ERIC Educational Resources Information Center

    Yarker, Morgan Brown

    2013-01-01

    Research suggests that scientific models and modeling should be topics covered in K-12 classrooms as part of a comprehensive science curriculum. It is especially important when talking about topics in weather and climate, where computer and forecast models are the center of attention. There are several approaches to model based inquiry, but it can…

  13. Reversibility and measurement in quantum computing

    NASA Astrophysics Data System (ADS)

    Leão, J. P.

    1998-03-01

    The relation between computation and measurement at a fundamental physical level is yet to be understood. Rolf Landauer was perhaps the first to stress the strong analogy between these two concepts. His early queries have regained pertinence with the recent efforts to develop realizable models of quantum computers. In this context the irreversibility of quantum measurement appears in conflict with the requirement of reversibility of the overall computation associated with the unitary dynamics of quantum evolution. The latter in turn is responsible for the features of superposition and entanglement which make some quantum algorithms superior to classical ones for the same task in speed and resource demand. In this article we advocate an approach to this question which relies on a model of computation designed to enforce the analogy between the two concepts instead of demarcating them, as has been the case so far. The model is introduced as a symmetrization of the classical Turing machine model and is then carried over to quantum mechanics, first as an abstract local interaction scheme (symbolic measurement) and finally in a nonlocal noninteractive implementation based on Aharonov-Bohm potentials and modular variables. It is suggested that this implementation leads to the most ubiquitous of quantum algorithms: the Discrete Fourier Transform.

  14. Conflicts of interest improve collective computation of adaptive social structures

    PubMed Central

    Brush, Eleanor R.; Krakauer, David C.; Flack, Jessica C.

    2018-01-01

    In many biological systems, the functional behavior of a group is collectively computed by the system’s individual components. An example is the brain’s ability to make decisions via the activity of billions of neurons. A long-standing puzzle is how the components’ decisions combine to produce beneficial group-level outputs, despite conflicts of interest and imperfect information. We derive a theoretical model of collective computation from mechanistic first principles, using results from previous work on the computation of power structure in a primate model system. Collective computation has two phases: an information accumulation phase, in which (in this study) pairs of individuals gather information about their fighting abilities and make decisions about their dominance relationships, and an information aggregation phase, in which these decisions are combined to produce a collective computation. To model information accumulation, we extend a stochastic decision-making model—the leaky integrator model used to study neural decision-making—to a multiagent game-theoretic framework. We then test alternative algorithms for aggregating information—in this study, decisions about dominance resulting from the stochastic model—and measure the mutual information between the resultant power structure and the “true” fighting abilities. We find that conflicts of interest can improve accuracy to the benefit of all agents. We also find that the computation can be tuned to produce different power structures by changing the cost of waiting for a decision. The successful application of a similar stochastic decision-making model in neural and social contexts suggests general principles of collective computation across substrates and scales. PMID:29376116
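    A minimal sketch of the information-accumulation phase, with assumed parameters rather than the paper's fitted values, might look like this:

```python
# Hedged sketch: each pair of individuals runs a leaky integrator driven by noisy samples of
# their difference in fighting ability, and decides dominance when a threshold is hit.
import numpy as np

rng = np.random.default_rng(1)

def decide_dominance(ability_i, ability_j, leak=0.05, noise=0.5,
                     threshold=3.0, dt=0.1, max_steps=10_000):
    """Return (+1 if i dominates, -1 if j dominates, 0 if undecided) and decision time."""
    x = 0.0
    for step in range(max_steps):
        drift = ability_i - ability_j
        x += dt * (-leak * x + drift) + np.sqrt(dt) * noise * rng.normal()
        if abs(x) >= threshold:
            return (1 if x > 0 else -1), step * dt
    return 0, max_steps * dt

abilities = [2.0, 1.5, 1.0, 0.2]          # assumed "true" fighting abilities
for i in range(len(abilities)):
    for j in range(i + 1, len(abilities)):
        outcome, t = decide_dominance(abilities[i], abilities[j])
        print(f"pair ({i},{j}): outcome={outcome:+d}, decision time={t:.1f}")
```

The aggregation phase would then combine these pairwise decisions into a power structure; lowering the decision threshold trades accuracy for shorter waiting times, which is the cost parameter the abstract refers to.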

  15. Computer-simulated laboratory explorations for middle school life, earth, and physical Science

    NASA Astrophysics Data System (ADS)

    von Blum, Ruth

    1992-06-01

    Explorations in Middle School Science is a set of 72 computer-simulated laboratory lessons in life, earth, and physical science for grades 6-9 developed by Jostens Learning Corporation with grants from the California State Department of Education and the National Science Foundation. At the heart of each lesson is a computer-simulated laboratory that actively involves students in doing science, improving their: (1) understanding of science concepts by applying critical thinking to solve real problems; (2) skills in scientific processes and communications; and (3) attitudes about science. Students use on-line tools (notebook, calculator, word processor) to undertake in-depth investigations of phenomena (like motion in outer space, disease transmission, volcanic eruptions, or the structure of the atom) that would be too difficult, dangerous, or outright impossible to do in a “live” laboratory. Suggested extension activities lead students to hands-on investigations, away from the computer. This article presents the underlying rationale, instructional model, and process by which Explorations was designed and developed. It also describes the general courseware structure and three lessons in detail, as well as presenting preliminary data from the evaluation. Finally, it suggests a model for incorporating technology into the science classroom.

  16. Toward a multiscale modeling framework for understanding serotonergic function

    PubMed Central

    Wong-Lin, KongFatt; Wang, Da-Hui; Moustafa, Ahmed A; Cohen, Jeremiah Y; Nakamura, Kae

    2017-01-01

    Despite its importance in regulating emotion and mental wellbeing, the complex structure and function of the serotonergic system present formidable challenges toward understanding its mechanisms. In this paper, we review studies investigating the interactions between serotonergic and related brain systems and their behavior at multiple scales, with a focus on biologically-based computational modeling. We first discuss serotonergic intracellular signaling and neuronal excitability, followed by neuronal circuit and systems levels. At each level of organization, we will discuss the experimental work accompanied by related computational modeling work. We then suggest that a multiscale modeling approach that integrates the various levels of neurobiological organization could potentially transform the way we understand the complex functions associated with serotonin. PMID:28417684

  17. Mathematical and Computational Modeling for Tumor Virotherapy with Mediated Immunity.

    PubMed

    Timalsina, Asim; Tian, Jianjun Paul; Wang, Jin

    2017-08-01

    We propose a new mathematical modeling framework based on partial differential equations to study tumor virotherapy with mediated immunity. The model incorporates both innate and adaptive immune responses and represents the complex interaction among tumor cells, oncolytic viruses, and immune systems on a domain with a moving boundary. Using carefully designed computational methods, we conduct extensive numerical simulations of the model. The results allow us to examine tumor development under a wide range of settings and provide insight into several important aspects of the virotherapy, including the dependence of the efficacy on a few key parameters and the delay in the adaptive immunity. Our findings also suggest possible ways to improve the virotherapy for tumor treatment.

  18. The algorithmic anatomy of model-based evaluation

    PubMed Central

    Daw, Nathaniel D.; Dayan, Peter

    2014-01-01

    Despite many debates in the first half of the twentieth century, it is now largely a truism that humans and other animals build models of their environments and use them for prediction and control. However, model-based (MB) reasoning presents severe computational challenges. Alternative, computationally simpler, model-free (MF) schemes have been suggested in the reinforcement learning literature, and have afforded influential accounts of behavioural and neural data. Here, we study the realization of MB calculations, and the ways that this might be woven together with MF values and evaluation methods. There are as yet mostly only hints in the literature as to the resulting tapestry, so we offer more preview than review. PMID:25267820

  19. The efficient model to define a single light source position by use of high dynamic range image of 3D scene

    NASA Astrophysics Data System (ADS)

    Wang, Xu-yang; Zhdanov, Dmitry D.; Potemin, Igor S.; Wang, Ying; Cheng, Han

    2016-10-01

    One of the challenges of augmented reality is the seamless combination of objects of the real and virtual worlds, for example light sources. We suggest measurement and computation models for reconstructing the light source position. The model is based on the dependence of the luminance of a small diffuse surface directly illuminated by a point-like source placed at a short distance from the observer or camera. The advantage of the computational model is the ability to eliminate the effects of indirect illumination. The paper presents a number of examples to illustrate the efficiency and accuracy of the proposed method.

  20. A computational model of selection by consequences: log survivor plots.

    PubMed

    Kulubekova, Saule; McDowell, J J

    2008-06-01

    [McDowell, J.J, 2004. A computational model of selection by consequences. J. Exp. Anal. Behav. 81, 297-317] instantiated the principle of selection by consequences in a virtual organism with an evolving repertoire of possible behaviors undergoing selection, reproduction, and mutation over many generations. The process is based on the computational approach, which is non-deterministic and rules-based. The model proposes a causal account for operant behavior. McDowell found that the virtual organism consistently showed a hyperbolic relationship between response and reinforcement rates according to the quantitative law of effect. To continue validation of the computational model, the present study examined its behavior on the molecular level by comparing the virtual organism's IRT distributions in the form of log survivor plots to findings from live organisms. Log survivor plots did not show the "broken-stick" feature indicative of distinct bouts and pauses in responding, although the bend in slope of the plots became more defined at low reinforcement rates. The shape of the virtual organism's log survivor plots was more consistent with the data on reinforced responding in pigeons. These results suggest that log survivor plot patterns of the virtual organism were generally consistent with the findings from live organisms providing further support for the computational model of selection by consequences as a viable account of operant behavior.
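    The log survivor analysis named here can be illustrated with a short sketch on synthetic inter-response times; the data and mixture parameters below are made up for illustration only.

```python
# Sketch: build a log survivor plot from a set of inter-response times (IRTs).
# A "broken-stick" shape would appear if responding occurred in bouts separated by pauses.
import numpy as np

rng = np.random.default_rng(2)

# Mixture of short within-bout IRTs and long between-bout pauses (assumed, for illustration)
irts = np.concatenate([rng.exponential(0.5, 900), rng.exponential(10.0, 100)])

t = np.sort(irts)
survivor = 1.0 - np.arange(1, len(t) + 1) / len(t)      # P(IRT > t)
log_survivor = np.log10(np.clip(survivor, 1e-6, None))  # clip avoids log10(0) at the last point

# Print a coarse version of the curve; a plotting library could be used instead.
for q in (0.5, 1, 2, 5, 10, 20):
    idx = np.searchsorted(t, q)
    if idx < len(t):
        print(f"t = {q:5.1f} s  log10 P(IRT > t) = {log_survivor[idx]:.2f}")
```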

  1. A computational psychiatry approach identifies how alpha-2A noradrenergic agonist Guanfacine affects feature-based reinforcement learning in the macaque

    PubMed Central

    Hassani, S. A.; Oemisch, M.; Balcarras, M.; Westendorff, S.; Ardid, S.; van der Meer, M. A.; Tiesinga, P.; Womelsdorf, T.

    2017-01-01

    Noradrenaline is believed to support cognitive flexibility through the alpha 2A noradrenergic receptor (a2A-NAR) acting in prefrontal cortex. Enhanced flexibility has been inferred from improved working memory with the a2A-NA agonist Guanfacine. But it has been unclear whether Guanfacine improves specific attention and learning mechanisms beyond working memory, and whether the drug effects can be formalized computationally to allow single subject predictions. We tested and confirmed these suggestions in a case study with a healthy nonhuman primate performing a feature-based reversal learning task evaluating performance using Bayesian and Reinforcement learning models. In an initial dose-testing phase we found a Guanfacine dose that increased performance accuracy, decreased distractibility and improved learning. In a second experimental phase using only that dose we examined the faster feature-based reversal learning with Guanfacine with single-subject computational modeling. Parameter estimation suggested that improved learning is not accounted for by varying a single reinforcement learning mechanism, but by changing the set of parameter values to higher learning rates and stronger suppression of non-chosen over chosen feature information. These findings provide an important starting point for developing nonhuman primate models to discern the synaptic mechanisms of attention and learning functions within the context of a computational neuropsychiatry framework. PMID:28091572
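    A hedged sketch of the kind of feature-based learning rule such modeling compares, with illustrative parameters rather than fitted values, is shown below; the suppression of non-chosen feature values is implemented here as a simple decay.

```python
# Sketch: feature-based reinforcement learning in which the chosen feature's value is
# updated with a learning rate and the values of non-chosen features decay.
import numpy as np

rng = np.random.default_rng(3)

n_features = 4
values = np.zeros(n_features)
alpha_chosen, decay_unchosen, beta = 0.4, 0.3, 5.0   # assumed parameters
rewarded_feature = 2                                  # feature currently predicting reward

for trial in range(200):
    # softmax choice over feature values
    p = np.exp(beta * values) / np.exp(beta * values).sum()
    choice = rng.choice(n_features, p=p)
    reward = 1.0 if choice == rewarded_feature else 0.0

    # update chosen feature, suppress (decay) the non-chosen ones
    values[choice] += alpha_chosen * (reward - values[choice])
    mask = np.arange(n_features) != choice
    values[mask] *= (1.0 - decay_unchosen)

    if trial == 99:                  # reversal: the rewarded feature switches
        rewarded_feature = 0

print("final feature values:", np.round(values, 2))
```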

  2. Dopamine selectively remediates 'model-based' reward learning: a computational approach.

    PubMed

    Sharp, Madeleine E; Foerde, Karin; Daw, Nathaniel D; Shohamy, Daphna

    2016-02-01

    Patients with loss of dopamine due to Parkinson's disease are impaired at learning from reward. However, it remains unknown precisely which aspect of learning is impaired. In particular, learning from reward, or reinforcement learning, can be driven by two distinct computational processes. One involves habitual stamping-in of stimulus-response associations, hypothesized to arise computationally from 'model-free' learning. The other, 'model-based' learning, involves learning a model of the world that is believed to support goal-directed behaviour. Much work has pointed to a role for dopamine in model-free learning. But recent work suggests model-based learning may also involve dopamine modulation, raising the possibility that model-based learning may contribute to the learning impairment in Parkinson's disease. To directly test this, we used a two-step reward-learning task which dissociates model-free versus model-based learning. We evaluated learning in patients with Parkinson's disease tested ON versus OFF their dopamine replacement medication and in healthy controls. Surprisingly, we found no effect of disease or medication on model-free learning. Instead, we found that patients tested OFF medication showed a marked impairment in model-based learning, and that this impairment was remediated by dopaminergic medication. Moreover, model-based learning was positively correlated with a separate measure of working memory performance, raising the possibility of common neural substrates. Our results suggest that some learning deficits in Parkinson's disease may be related to an inability to pursue reward based on complete representations of the environment. © The Author (2015). Published by Oxford University Press on behalf of the Guarantors of Brain. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
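    For readers unfamiliar with the two-step task, the following toy sketch (illustrative numbers only, not the study's model) shows the computational distinction being tested: model-based values are recomputed from a transition model, while model-free values are updated only by direct reward experience.

```python
# Minimal sketch of the model-based / model-free distinction probed by the two-step task.
import numpy as np

# Transition model: first-stage actions 0/1 lead (usually) to second-stage states 0/1
T = np.array([[0.7, 0.3],
              [0.3, 0.7]])            # P(state | action): common vs. rare transitions
q_stage2 = np.array([0.8, 0.2])       # current estimates of second-stage state values

# Model-based first-stage values: expected second-stage value under the transition model
q_mb = T @ q_stage2

# Model-free first-stage values: updated only by direct reward experience
q_mf = np.array([0.5, 0.5])
alpha = 0.3
action, reward = 0, 1.0               # e.g., action 0 taken and rewarded on this trial
q_mf[action] += alpha * (reward - q_mf[action])

print("model-based values:", np.round(q_mb, 2))
print("model-free values: ", np.round(q_mf, 2))
```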

  3. Model Experiment of Two-Dimentional Brownian Motion by Microcomputer.

    ERIC Educational Resources Information Center

    Mishima, Nobuhiko; And Others

    1980-01-01

    Describes the use of a microcomputer in studying a model experiment (Brownian particles colliding with thermal particles). A flow chart and program for the experiment are provided. Suggests that this experiment may foster a deepened understanding through mutual dialog between the student and computer. (SK)

  4. AMP and adenosine are both ligands for adenosine 2B receptor signaling.

    PubMed

    Holien, Jessica K; Seibt, Benjamin; Roberts, Veena; Salvaris, Evelyn; Parker, Michael W; Cowan, Peter J; Dwyer, Karen M

    2018-01-15

    Adenosine is considered the canonical ligand for the adenosine 2B receptor (A2BR). A2BR is upregulated following kidney ischemia, augmenting post-ischemic blood flow and limiting tubular injury. In this context the beneficial effect of A2BR signaling has been attributed to an increase in the pericellular concentration of adenosine. However, following renal ischemia both kidney adenosine monophosphate (AMP) and adenosine levels are substantially increased. Using computational modeling and calcium mobilization assays, we investigated whether AMP could also be a ligand for A2BR. The computational modeling suggested that AMP interacts with A2BR with a more favorable energy than adenosine. Furthermore, AMPαS, a non-hydrolyzable form of AMP, increased calcium uptake by Chinese hamster ovary (CHO) cells expressing the human A2BR, indicating preferential signaling via the Gq pathway. Therefore, a putative AMP-A2BR interaction is supported by the computational modeling data, and the biological results suggest this interaction involves preferential Gq activation. These data provide further insights into the role of purinergic signaling in the pathophysiology of renal IRI. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. Computational Models Reveal a Passive Mechanism for Cell Migration in the Crypt

    PubMed Central

    Dunn, Sara-Jane; Näthke, Inke S.; Osborne, James M.

    2013-01-01

    Cell migration in the intestinal crypt is essential for the regular renewal of the epithelium, and the continued upward movement of cells is a key characteristic of healthy crypt dynamics. However, the driving force behind this migration is unknown. Possibilities include mitotic pressure, active movement driven by motility cues, or negative pressure arising from cell loss at the crypt collar. It is possible that a combination of factors together coordinate migration. Here, three different computational models are used to provide insight into the mechanisms that underpin cell movement in the crypt, by examining the consequence of eliminating cell division on cell movement. Computational simulations agree with existing experimental results, confirming that migration can continue in the absence of mitosis. Importantly, however, simulations allow us to infer mechanisms that are sufficient to generate cell movement, which is not possible through experimental observation alone. The results produced by the three models agree and suggest that cell loss due to apoptosis and extrusion at the crypt collar relieves cell compression below, allowing cells to expand and move upwards. This finding suggests that future experiments should focus on the role of apoptosis and cell extrusion in controlling cell migration in the crypt. PMID:24260407

  6. Effect of Phosphate, Fluoride, and Nitrate on Gibbsite Dissolution Rate and Solubility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Herting, Daniel L.

    2014-01-29

    Laboratory tests have been completed with simulated tank waste samples to investigate the effects of phosphate, fluoride, and nitrate on the dissolution rate and equilibrium solubility of gibbsite in sodium hydroxide solution at 22 and 40 °C. Results are compared to relevant literature data and to computer model predictions. The presence of sodium nitrate (3 M) caused a reduction in the rate of gibbsite dissolution in NaOH, but a modest increase in the equilibrium solubility of aluminum. The increase in solubility was not as large, though, as the increase predicted by the computer model. The presence of phosphate, either as sodium phosphate or sodium fluoride phosphate, had a negligible effect on the rate of gibbsite dissolution, but caused a slight increase in aluminum solubility. The magnitude of the increased solubility, relative to the increase caused by sodium nitrate, suggests that the increase is due to ionic strength (or water activity) effects, rather than being associated with the specific ion involved. The computer model predicted that phosphate would cause a slight decrease in aluminum solubility, suggesting some Al-PO4 interaction. No evidence was found of such an interaction.

  7. Parameter estimation and sensitivity analysis in an agent-based model of Leishmania major infection

    PubMed Central

    Jones, Douglas E.; Dorman, Karin S.

    2009-01-01

    Computer models of disease take a systems biology approach toward understanding host-pathogen interactions. In particular, data driven computer model calibration is the basis for inference of immunological and pathogen parameters, assessment of model validity, and comparison between alternative models of immune or pathogen behavior. In this paper we describe the calibration and analysis of an agent-based model of Leishmania major infection. A model of macrophage loss following uptake of necrotic tissue is proposed to explain macrophage depletion following peak infection. Using Gaussian processes to approximate the computer code, we perform a sensitivity analysis to identify important parameters and to characterize their influence on the simulated infection. The analysis indicates that increasing growth rate can favor or suppress pathogen loads, depending on the infection stage and the pathogen’s ability to avoid detection. Subsequent calibration of the model against previously published biological observations suggests that L. major has a relatively slow growth rate and can replicate for an extended period of time before damaging the host cell. PMID:19837088
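    The surrogate-plus-sensitivity workflow can be sketched as follows; the stand-in "simulator", parameter names, and one-at-a-time sensitivity measure are assumptions for illustration, not the calibrated Leishmania model or its Gaussian-process emulator.

```python
# Sketch: fit a Gaussian process to a handful of (parameter, output) pairs from an
# expensive simulator, then probe parameter influence by varying one input at a time.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(4)

def simulator(params):
    """Stand-in for the agent-based model: pathogen load as a function of
    (growth_rate, detection_avoidance). Purely illustrative."""
    growth, avoid = params
    return growth * (1.0 + 2.0 * avoid) - 0.5 * growth**2

X = rng.uniform(0.0, 1.0, size=(30, 2))                 # sampled parameter sets
y = np.array([simulator(x) for x in X])

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(length_scale=[0.3, 0.3]),
                              normalize_y=True).fit(X, y)

# Crude one-at-a-time sensitivity: range of the emulator output along each parameter axis
grid = np.linspace(0.0, 1.0, 50)
for name, axis in (("growth_rate", 0), ("detection_avoidance", 1)):
    probe = np.full((50, 2), 0.5)
    probe[:, axis] = grid
    pred = gp.predict(probe)
    print(f"{name}: emulator output range {pred.max() - pred.min():.2f}")
```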

  8. The Handicap Principle for Trust in Computer Security, the Semantic Web and Social Networking

    NASA Astrophysics Data System (ADS)

    Ma, Zhanshan (Sam); Krings, Axel W.; Hung, Chih-Cheng

    Communication is a fundamental function of life, and it exists in almost all living things: from single-cell bacteria to human beings. Communication, together with competition and cooperation, are three fundamental processes in nature. Computer scientists are familiar with the study of competition or 'struggle for life' through Darwin's evolutionary theory, or even evolutionary computing. They may be equally familiar with the study of cooperation or altruism through the Prisoner's Dilemma (PD) game. However, they are likely to be less familiar with the theory of animal communication. The objective of this article is three-fold: (i) To suggest that the study of animal communication, especially the honesty (reliability) of animal communication, in which some significant advances in behavioral biology have been achieved in the last three decades, may be on the verge of spawning important cross-disciplinary research similar to that generated by the study of cooperation with the PD game. One of the far-reaching advances in the field is marked by the publication of "The Handicap Principle: a Missing Piece of Darwin's Puzzle" by Zahavi (1997). The 'Handicap' principle [34][35], which states that communication signals must be costly in some proper way to be reliable (honest), is best elucidated with evolutionary games, e.g., the Sir Philip Sidney (SPS) game [23]. Accordingly, we suggest that the Handicap principle may serve as a fundamental paradigm for trust research in computer science. (ii) To suggest to computer scientists that their expertise in modeling computer networks may help behavioral biologists in their study of the reliability of animal communication networks. This is largely due to the historical reason that, until the last decade, animal communication was studied with the dyadic paradigm (sender-receiver) rather than with the network paradigm. (iii) To pose several open questions, the answers to which may bring refreshing insights to trust research in computer science, especially secure and resilient computing, the semantic web, and social networking. One important thread unifying the three aspects is the evolutionary game theory modeling or its extensions with survival analysis and agreement algorithms [19][20], which offer powerful game models for describing time-, space-, and covariate-dependent frailty (uncertainty and vulnerability) and deception (honesty).

  9. A model for the development of university curricula in nanoelectronics

    NASA Astrophysics Data System (ADS)

    Bruun, E.; Nielsen, I.

    2010-12-01

    Nanotechnology is having an increasing impact on university curricula in electrical engineering and in physics. Major influencers affecting developments in university programmes related to nanoelectronics are discussed and a model for university programme development is described. The model takes into account that nanotechnology affects not only physics but also electrical engineering and computer engineering because of the advent of new nanoelectronics devices. The model suggests that curriculum development tends to follow one of three major tracks: physics; electrical engineering; computer engineering. Examples of European curricula following this framework are identified and described. These examples may serve as sources of inspiration for future developments and the model presented may provide guidelines for a systematic selection of topics in the university programmes.

  10. Algebraic properties of automata associated to Petri nets and applications to computation in biological systems.

    PubMed

    Egri-Nagy, Attila; Nehaniv, Chrystopher L

    2008-01-01

    Biochemical and genetic regulatory networks are often modeled by Petri nets. We study the algebraic structure of the computations carried out by Petri nets from the viewpoint of algebraic automata theory. Petri nets comprise a formalized graphical modeling language, often used to describe computation occurring within biochemical and genetic regulatory networks, but the semantics may be interpreted in different ways in the realm of automata. Therefore, there are several different ways to turn a Petri net into a state-transition automaton. Here, we systematically investigate different conversion methods and describe cases where they may yield radically different algebraic structures. We focus on the existence of group components of the corresponding transformation semigroups, as these reflect symmetries of the computation occurring within the biological system under study. Results are illustrated by applications to the Petri net modelling of intermediary metabolism. Petri nets with inhibition are shown to be computationally rich, regardless of the particular interpretation method. Along these lines we provide a mathematical argument suggesting a reason for the apparent all-pervasiveness of inhibitory connections in living systems.
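    One simple way to realize the Petri-net-to-automaton conversion discussed here is to enumerate reachable markings; the toy net and the capacity bound below are invented for illustration and are only one of the several interpretation choices the paper compares.

```python
# Sketch: turn a small place/transition Petri net into a state-transition automaton
# by breadth-first enumeration of reachable markings.
from collections import deque

# Each transition: (tokens consumed per place, tokens produced per place)
transitions = {
    "t1": ((1, 0, 0), (0, 1, 0)),
    "t2": ((0, 1, 0), (0, 0, 1)),
    "t3": ((0, 0, 1), (1, 0, 0)),
}
CAP = 3  # bound on tokens per place so the state space stays finite

def fire(marking, t):
    take, give = transitions[t]
    if any(m < c for m, c in zip(marking, take)):
        return None                       # transition not enabled
    new = tuple(m - c + p for m, c, p in zip(marking, take, give))
    return new if all(m <= CAP for m in new) else None

initial = (2, 0, 0)
states, edges = {initial}, []
queue = deque([initial])
while queue:
    m = queue.popleft()
    for t in transitions:
        n = fire(m, t)
        if n is not None:
            edges.append((m, t, n))
            if n not in states:
                states.add(n)
                queue.append(n)

print(f"{len(states)} reachable markings, {len(edges)} labelled transitions")
```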

  11. A computational study of liposome logic: towards cellular computing from the bottom up

    PubMed Central

    Smaldon, James; Romero-Campero, Francisco J.; Fernández Trillo, Francisco; Gheorghe, Marian; Alexander, Cameron

    2010-01-01

    In this paper we propose a new bottom-up approach to cellular computing, in which computational chemical processes are encapsulated within liposomes. This “liposome logic” approach (also called vesicle computing) makes use of supra-molecular chemistry constructs, e.g. protocells, chells, etc. as minimal cellular platforms to which logical functionality can be added. Modeling and simulations feature prominently in “top-down” synthetic biology, particularly in the specification, design and implementation of logic circuits through bacterial genome reengineering. The second contribution in this paper is the demonstration of a novel set of tools for the specification, modelling and analysis of “bottom-up” liposome logic. In particular, simulation and modelling techniques are used to analyse some example liposome logic designs, ranging from relatively simple NOT gates and NAND gates to SR-Latches, D Flip-Flops all the way to 3 bit ripple counters. The approach we propose consists of specifying, by means of P systems, gene regulatory network-like systems operating inside proto-membranes. This P systems specification can be automatically translated and executed through a multiscaled pipeline composed of dissipative particle dynamics (DPD) simulator and Gillespie’s stochastic simulation algorithm (SSA). Finally, model selection and analysis can be performed through a model checking phase. This is the first paper we are aware of that brings to bear formal specifications, DPD, SSA and model checking to the problem of modeling target computational functionality in protocells. Potential chemical routes for the laboratory implementation of these simulations are also discussed thus for the first time suggesting a potentially realistic physiochemical implementation for membrane computing from the bottom-up. PMID:21886681

  12. Whole-Volume Clustering of Time Series Data from Zebrafish Brain Calcium Images via Mixture Modeling.

    PubMed

    Nguyen, Hien D; Ullmann, Jeremy F P; McLachlan, Geoffrey J; Voleti, Venkatakaushik; Li, Wenze; Hillman, Elizabeth M C; Reutens, David C; Janke, Andrew L

    2018-02-01

    Calcium is a ubiquitous messenger in neural signaling events. An increasing number of techniques are enabling visualization of neurological activity in animal models via luminescent proteins that bind to calcium ions. These techniques generate large volumes of spatially correlated time series. A model-based functional data analysis methodology via Gaussian mixtures is proposed for the clustering of data from such visualizations. The methodology is theoretically justified and a computationally efficient approach to estimation is suggested. An example analysis of a zebrafish imaging experiment is presented.
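    A minimal sketch of this style of analysis, using synthetic series and an assumed per-series feature representation rather than the paper's functional-data formulation, could look like this:

```python
# Sketch: summarize each voxel's calcium time series by a small feature vector and
# cluster the vectors with a Gaussian mixture model.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(5)
t = np.linspace(0, 10, 200)

# Two synthetic response types: fast transient vs slow oscillation (plus noise)
fast = np.exp(-((t - 2.0) ** 2)) + 0.1 * rng.normal(size=(300, t.size))
slow = 0.5 * np.sin(0.8 * t) + 0.1 * rng.normal(size=(300, t.size))
series = np.vstack([fast, slow])

# Simple per-series features; real pipelines might use the full series or basis coefficients
features = np.column_stack([series.mean(axis=1),
                            series.std(axis=1),
                            series.argmax(axis=1)])

labels = GaussianMixture(n_components=2, random_state=0).fit_predict(features)
print("cluster sizes:", np.bincount(labels))
```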

  13. New ARCH: Future Generation Internet Architecture

    DTIC Science & Technology

    2004-08-01

    ...a vocabulary to talk about a system. This provides a framework (a “reference model”) ... layered model ... Modularity and abstraction are central tenets of Computer Science thinking. Modularity breaks a system into parts, normally to permit ... this complexity is hidden. Abstraction suggests a structure for the system. A popular and simple structure is a layered model: lower layer ...

  14. 3D Viewer Platform of Cloud Clustering Management System: Google Map 3D

    NASA Astrophysics Data System (ADS)

    Choi, Sung-Ja; Lee, Gang-Soo

    A new management system framework is needed for cloud environments as computing platforms converge. ISVs and small businesses find it hard to adopt the platform management systems offered by large vendors. This article suggests a clustering management system for cloud computing environments aimed at ISVs and small-business enterprises. It applies a 3D viewer adapted from Google Map 3D and Google Earth, and is called 3DV_CCMS, an extension of the CCMS [1].

  15. Review of selected features of the natural system model, and suggestions for applications in South Florida

    USGS Publications Warehouse

    Bales, Jerad; Fulford, Janice M.; Swain, Eric D.

    1997-01-01

    A study was conducted to review selected features of the Natural System Model, version 4.3. The Natural System Model is a regional-scale model that uses recent climatic data and estimates of historic vegetation and topography to simulate pre-canal-drainage hydrologic response in south Florida. Equations used to represent the hydrologic system and the numerical solution of these equations in the model were documented and reviewed. Convergence testing was performed using 1965 input data, and selected other aspects of the model were evaluated. Some conclusions from the evaluation of the Natural System Model include the following observations. Simulations were generally insensitive to the temporal resolution used in the model. However, reduction of the computational cell size from 2-mile by 2-mile to 2/3-mile by 2/3-mile resulted in a decrease in spatial mean ponding depths for October of 0.35 foot for a 3-hour time step. Review of the computer code indicated that there is no limit on the amount of water that can be transferred from the river system to the overland flow system, on the amount of seepage from the river to the ground-water system, on evaporation from the river system, or on evapotranspiration from the overland-flow system. Oscillations of 0.2 foot or less in simulated river stage were identified and attributed to a volume limiting function which is applied in solution of the overland-flow equations. The computation of the resistance coefficient is not consistent with the computation of overland-flow velocity. Ground-water boundary conditions do not always ensure a no-flow condition at the boundary. These inconsistencies had varying degrees of effects on model simulations, and it is likely that simulations longer than 1 year are needed to fully identify effects. However, inconsistencies in model formulations should not be ignored, even if the effects of such errors on model results appear to be small or have not been clearly defined. The Natural System Model can be a very useful tool for estimating pre-drainage hydrologic response in south Florida. The model includes all of the important physical processes needed to simulate a water balance. With a few exceptions, these hydrologic processes are represented in a reasonable manner using empirical, semiempirical, and mechanistic relations. The data sets that have been assembled to represent physical features, and hydrologic and meteorological conditions are quite extensive in their scope. Some suggestions for model application were made. Simulation results from the Natural System Model need to be interpreted on a regional basis, rather than cell by cell. The available evidence suggests that simulated water levels should be interpreted with about a plus or minus 1 foot uncertainty. It is probably not appropriate to use the Natural System Model to estimate pre-drainage discharges (as opposed to hydroperiods and water levels) at a particular location or across a set of adjacent computational cells. All simulated results for computational cells within about 10 miles of the model boundaries have a higher degree of uncertainty than results for the interior of the model domain. It is most appropriate to interpret the Natural System Model simulation results in connection with other available information. Stronger linkages between hydrologic inputs to the Everglades and the ecological response of the system would enhance restoration efforts.

  16. A cortical edge-integration model of object-based lightness computation that explains effects of spatial context and individual differences

    PubMed Central

    Rudd, Michael E.

    2014-01-01

    Previous work has demonstrated that perceived surface reflectance (lightness) can be modeled in simple contexts in a quantitatively exact way by assuming that the visual system first extracts information about local, directed steps in log luminance, then spatially integrates these steps along paths through the image to compute lightness (Rudd and Zemach, 2004, 2005, 2007). This method of computing lightness is called edge integration. Recent evidence (Rudd, 2013) suggests that human vision employs a default strategy to integrate luminance steps only along paths from a common background region to the targets whose lightness is computed. This implies a role for gestalt grouping in edge-based lightness computation. Rudd (2010) further showed the perceptual weights applied to edges in lightness computation can be influenced by the observer's interpretation of luminance steps as resulting from either spatial variation in surface reflectance or illumination. This implies a role for top-down factors in any edge-based model of lightness (Rudd and Zemach, 2005). Here, I show how the separate influences of grouping and attention on lightness can be modeled in tandem by a cortical mechanism that first employs top-down signals to spatially select regions of interest for lightness computation. An object-based network computation, involving neurons that code for border-ownership, then automatically sets the neural gains applied to edge signals surviving the earlier spatial selection stage. Only the borders that survive both processing stages are spatially integrated to compute lightness. The model assumptions are consistent with those of the cortical lightness model presented earlier by Rudd (2010, 2013), and with neurophysiological data indicating extraction of local edge information in V1, network computations to establish figure-ground relations and border ownership in V2, and edge integration to encode lightness and darkness signals in V4. PMID:25202253
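    A toy sketch of the edge-integration computation (illustrative luminances and edge weights, not the model's fitted gains) follows; only borders surviving the spatial-selection and border-ownership stages would receive nonzero weights in the full model.

```python
# Sketch: lightness of a target computed by summing weighted log-luminance steps along
# a path from the common background region to the target.
import numpy as np

# Luminances (cd/m^2, assumed) along a path: background -> surround ring -> target
path = [("background", 100.0), ("surround", 40.0), ("target", 60.0)]
edge_weights = [1.0, 0.7]     # assumed gain applied to each successive border crossing

log_steps = [np.log(path[i + 1][1]) - np.log(path[i][1]) for i in range(len(path) - 1)]
lightness_signal = sum(w * s for w, s in zip(edge_weights, log_steps))

print("directed log-luminance steps:", np.round(log_steps, 3))
print("integrated lightness signal relative to background:", round(lightness_signal, 3))
```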

  17. A cortical edge-integration model of object-based lightness computation that explains effects of spatial context and individual differences.

    PubMed

    Rudd, Michael E

    2014-01-01

    Previous work has demonstrated that perceived surface reflectance (lightness) can be modeled in simple contexts in a quantitatively exact way by assuming that the visual system first extracts information about local, directed steps in log luminance, then spatially integrates these steps along paths through the image to compute lightness (Rudd and Zemach, 2004, 2005, 2007). This method of computing lightness is called edge integration. Recent evidence (Rudd, 2013) suggests that human vision employs a default strategy to integrate luminance steps only along paths from a common background region to the targets whose lightness is computed. This implies a role for gestalt grouping in edge-based lightness computation. Rudd (2010) further showed the perceptual weights applied to edges in lightness computation can be influenced by the observer's interpretation of luminance steps as resulting from either spatial variation in surface reflectance or illumination. This implies a role for top-down factors in any edge-based model of lightness (Rudd and Zemach, 2005). Here, I show how the separate influences of grouping and attention on lightness can be modeled in tandem by a cortical mechanism that first employs top-down signals to spatially select regions of interest for lightness computation. An object-based network computation, involving neurons that code for border-ownership, then automatically sets the neural gains applied to edge signals surviving the earlier spatial selection stage. Only the borders that survive both processing stages are spatially integrated to compute lightness. The model assumptions are consistent with those of the cortical lightness model presented earlier by Rudd (2010, 2013), and with neurophysiological data indicating extraction of local edge information in V1, network computations to establish figure-ground relations and border ownership in V2, and edge integration to encode lightness and darkness signals in V4.

  18. F-Nets and Software Cabling: Deriving a Formal Model and Language for Portable Parallel Programming

    NASA Technical Reports Server (NTRS)

    DiNucci, David C.; Saini, Subhash (Technical Monitor)

    1998-01-01

    Parallel programming is still being based upon antiquated sequence-based definitions of the terms "algorithm" and "computation", resulting in programs which are architecture dependent and difficult to design and analyze. By focusing on obstacles inherent in existing practice, a more portable model is derived here, which is then formalized into a model called Soviets which utilizes a combination of imperative and functional styles. This formalization suggests more general notions of algorithm and computation, as well as insights into the meaning of structured programming in a parallel setting. To illustrate how these principles can be applied, a very-high-level graphical architecture-independent parallel language, called Software Cabling, is described, with many of the features normally expected from today's computer languages (e.g. data abstraction, data parallelism, and object-based programming constructs).

  19. Inexact hardware for modelling weather & climate

    NASA Astrophysics Data System (ADS)

    Düben, Peter D.; McNamara, Hugh; Palmer, Tim

    2014-05-01

    The use of stochastic processing hardware and low precision arithmetic in atmospheric models is investigated. Stochastic processors allow hardware-induced faults in calculations, sacrificing exact calculations in exchange for improvements in performance and potentially accuracy and a reduction in power consumption. A similar trade-off is achieved using low precision arithmetic, with improvements in computation and communication speed and savings in storage and memory requirements. As high-performance computing becomes more massively parallel and power intensive, these two approaches may be important stepping stones in the pursuit of global cloud-resolving atmospheric modelling. The impact of both hardware-induced faults and low-precision arithmetic is tested in the dynamical core of a global atmosphere model. Our simulations show that both approaches to inexact calculations do not substantially affect the quality of the model simulations, provided they are restricted to act only on smaller scales. This suggests that inexact calculations at the small scale could reduce computation and power costs without adversely affecting the quality of the simulations.
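    The precision trade-off can be illustrated generically; the toy time-stepping loop below is not the model's dynamical core, just a demonstration of how half-precision accumulation drifts from a double-precision reference.

```python
# Simple illustration: repeat a toy integration in half precision versus double precision.
import numpy as np

def integrate(dtype, n_steps=10_000, dt=1e-3):
    """Damped oscillator stepped forward with a given floating-point precision."""
    x, v = dtype(1.0), dtype(0.0)
    dt = dtype(dt)
    for _ in range(n_steps):
        a = dtype(-1.0) * x - dtype(0.1) * v
        v = dtype(v + dt * a)
        x = dtype(x + dt * v)
    return float(x)

x64 = integrate(np.float64)
x16 = integrate(np.float16)
print(f"float64 result: {x64:.6f}")
print(f"float16 result: {x16:.6f}  (difference {abs(x64 - x16):.2e})")
```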

  20. Integrating interactive computational modeling in biology curricula.

    PubMed

    Helikar, Tomáš; Cutucache, Christine E; Dahlquist, Lauren M; Herek, Tyler A; Larson, Joshua J; Rogers, Jim A

    2015-03-01

    While the use of computer tools to simulate complex processes such as computer circuits is normal practice in fields like engineering, the majority of life sciences/biological sciences courses continue to rely on the traditional textbook and memorization approach. To address this issue, we explored the use of the Cell Collective platform as a novel, interactive, and evolving pedagogical tool to foster student engagement, creativity, and higher-level thinking. Cell Collective is a Web-based platform used to create and simulate dynamical models of various biological processes. Students can create models of cells, diseases, or pathways themselves or explore existing models. This technology was implemented in both undergraduate and graduate courses as a pilot study to determine the feasibility of such software at the university level. First, a new (In Silico Biology) class was developed to enable students to learn biology by "building and breaking it" via computer models and their simulations. This class and technology also provide a non-intimidating way to incorporate mathematical and computational concepts into a class with students who have a limited mathematical background. Second, we used the technology to mediate the use of simulations and modeling modules as a learning tool for traditional biological concepts, such as T cell differentiation or cell cycle regulation, in existing biology courses. Results of this pilot application suggest that there is promise in the use of computational modeling and software tools such as Cell Collective to provide new teaching methods in biology and contribute to the implementation of the "Vision and Change" call to action in undergraduate biology education by providing a hands-on approach to biology.
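    A minimal example of the kind of logical (Boolean) model students construct on such platforms is shown below; the three-node network and its update rules are invented for illustration and do not come from Cell Collective.

```python
# Sketch: a tiny Boolean network updated synchronously, producing an oscillation
# driven by negative feedback from the transcription factor onto the kinase.
def step(state):
    """state = (signal, kinase, transcription_factor); returns the next synchronous state."""
    signal, kinase, tf = state
    return (
        signal,                       # external input held constant
        signal and not tf,            # kinase activated by signal, inhibited by TF feedback
        kinase,                       # TF switched on one step after the kinase
    )

state = (True, False, False)
for t in range(6):
    print(f"t={t}: signal={state[0]}, kinase={state[1]}, TF={state[2]}")
    state = step(state)
```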

  1. Role of Statistical Random-Effects Linear Models in Personalized Medicine

    PubMed Central

    Diaz, Francisco J; Yeh, Hung-Wen; de Leon, Jose

    2012-01-01

    Some empirical studies and recent developments in pharmacokinetic theory suggest that statistical random-effects linear models are valuable tools that allow describing simultaneously patient populations as a whole and patients as individuals. This remarkable characteristic indicates that these models may be useful in the development of personalized medicine, which aims at finding treatment regimes that are appropriate for particular patients, not just appropriate for the average patient. In fact, published developments show that random-effects linear models may provide a solid theoretical framework for drug dosage individualization in chronic diseases. In particular, individualized dosages computed with these models by means of an empirical Bayesian approach may produce better results than dosages computed with some methods routinely used in therapeutic drug monitoring. This is further supported by published empirical and theoretical findings that show that random effects linear models may provide accurate representations of phase III and IV steady-state pharmacokinetic data, and may be useful for dosage computations. These models have applications in the design of clinical algorithms for drug dosage individualization in chronic diseases; in the computation of dose correction factors; computation of the minimum number of blood samples from a patient that are necessary for calculating an optimal individualized drug dosage in therapeutic drug monitoring; measure of the clinical importance of clinical, demographic, environmental or genetic covariates; study of drug-drug interactions in clinical settings; the implementation of computational tools for web-site-based evidence farming; design of pharmacogenomic studies; and in the development of a pharmacological theory of dosage individualization. PMID:23467392
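    The empirical Bayesian dosage idea mentioned above can be sketched with a simple normal-normal shrinkage estimate; all quantities, names, and the dosing rule below are illustrative assumptions, not the published methods.

```python
# Sketch: an individual's clearance is estimated by shrinking their sparse measurements
# toward the population mean, then a dose is chosen to hit an assumed exposure target.
import numpy as np

pop_mean, between_var = 5.0, 1.0      # assumed population clearance (L/h) and between-patient variance
within_var = 0.8                      # assumed measurement/within-patient variance

patient_obs = np.array([6.4, 6.1])    # two sparse observations from one patient
n = len(patient_obs)

# Posterior (empirical Bayes) mean of the patient's clearance under a normal-normal model
shrinkage = between_var / (between_var + within_var / n)
cl_individual = pop_mean + shrinkage * (patient_obs.mean() - pop_mean)

target_exposure = 50.0                # assumed target AUC (mg*h/L)
dose = target_exposure * cl_individual
print(f"individualized clearance: {cl_individual:.2f} L/h, suggested dose: {dose:.0f} mg")
```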

  2. Computational Modeling of Inflammation and Wound Healing

    PubMed Central

    Ziraldo, Cordelia; Mi, Qi; An, Gary; Vodovotz, Yoram

    2013-01-01

    Objective Inflammation is both central to proper wound healing and a key driver of chronic tissue injury via a positive-feedback loop incited by incidental cell damage. We seek to derive actionable insights into the role of inflammation in wound healing in order to improve outcomes for individual patients. Approach To date, dynamic computational models have been used to study the time evolution of inflammation in wound healing. Emerging clinical data on histo-pathological and macroscopic images of evolving wounds, as well as noninvasive measures of blood flow, suggested the need for tissue-realistic, agent-based, and hybrid mechanistic computational simulations of inflammation and wound healing. Innovation We developed a computational modeling system, Simple Platform for Agent-based Representation of Knowledge, to facilitate the construction of tissue-realistic models. Results A hybrid equation–agent-based model (ABM) of pressure ulcer formation in both spinal cord-injured and -uninjured patients was used to identify control points that reduce stress caused by tissue ischemia/reperfusion. An ABM of arterial restenosis revealed new dynamics of cell migration during neointimal hyperplasia that match histological features, but contradict the currently prevailing mechanistic hypothesis. ABMs of vocal fold inflammation were used to predict inflammatory trajectories in individuals, possibly allowing for personalized treatment. Conclusions The intertwined inflammatory and wound healing responses can be modeled computationally to make predictions in individuals, simulate therapies, and gain mechanistic insights. PMID:24527362

  3. Integrating Computational Science Tools into a Thermodynamics Course

    NASA Astrophysics Data System (ADS)

    Vieira, Camilo; Magana, Alejandra J.; García, R. Edwin; Jana, Aniruddha; Krafcik, Matthew

    2018-01-01

    Computational tools and methods have permeated multiple science and engineering disciplines, because they enable scientists and engineers to process large amounts of data, represent abstract phenomena, and to model and simulate complex concepts. In order to prepare future engineers with the ability to use computational tools in the context of their disciplines, some universities have started to integrate these tools within core courses. This paper evaluates the effect of introducing three computational modules within a thermodynamics course on student disciplinary learning and self-beliefs about computation. The results suggest that using worked examples paired to computer simulations to implement these modules have a positive effect on (1) student disciplinary learning, (2) student perceived ability to do scientific computing, and (3) student perceived ability to do computer programming. These effects were identified regardless of the students' prior experiences with computer programming.

  4. Dayglow and night glow of the Venusian upper atmosphere. Modelling and observations

    NASA Astrophysics Data System (ADS)

    Gronoff, G.; Lilensten, J.; Simon, C.; Barthélemy, M.; Leblanc, F.

    2007-08-01

    Aims. We present the modelling of the production of excited states of O, CO and N2 in the Venusian upper atmosphere, which allows us to compute the nightglow emissions. On the dayside, we also compute several emissions, taking advantage of the small influence of resonant scattering for forbidden transitions. Methods. We compute the photoionisation and the photodissociation mechanisms, and thus the photoelectron production. We compute electron impact excitation and ionisation through a multi-stream stationary kinetic transport code. Finally, we compute the ion recombination with a stationary chemical model. Results. We predict altitude density profiles for O(1S) and O(1D) states and the emissions corresponding to their different transitions. They are found to be very comparable to the observations without the need for stratospheric emissions. On the nightside, we highlight the role of the N + O2+ reaction in the creation of the O(1S) state. This reaction was suggested by Rees in 1975 (Frederick, 1976). It has been discussed several times since and, in spite of different studies, is still controversial. However, when we take it into consideration for Venus, it is shown to be the cause of almost 90% of the production of this state. We calculate the production intensities of O(3S) and O(5S) states, which are needed for radiative transfer models. For CO we compute the Cameron band and the fourth positive band emissions. For N2 we compute the LBH, first and second positive bands. All these values are successfully compared with experiment where data are available. Conclusions. For the first time, a comprehensive model is proposed to compute dayglow and nightglow emissions of the Venusian upper atmosphere. It relies on previous works with noticeable improvements, both on the transport and on the chemical aspects. In the near future, a radiative transfer model will be used to compute optically thick lines in the dayglow, and a fluid model will be added to compute ionosphere densities and temperatures. We will present the first observational results from the Pic du Midi telescope in June 2007, in order to compare with our modelling.

  5. Contextuality supplies the 'magic' for quantum computation.

    PubMed

    Howard, Mark; Wallman, Joel; Veitch, Victor; Emerson, Joseph

    2014-06-19

    Quantum computers promise dramatic advantages over their classical counterparts, but the source of the power in quantum computing has remained elusive. Here we prove a remarkable equivalence between the onset of contextuality and the possibility of universal quantum computation via 'magic state' distillation, which is the leading model for experimentally realizing a fault-tolerant quantum computer. This is a conceptually satisfying link, because contextuality, which precludes a simple 'hidden variable' model of quantum mechanics, provides one of the fundamental characterizations of uniquely quantum phenomena. Furthermore, this connection suggests a unifying paradigm for the resources of quantum information: the non-locality of quantum theory is a particular kind of contextuality, and non-locality is already known to be a critical resource for achieving advantages with quantum communication. In addition to clarifying these fundamental issues, this work advances the resource framework for quantum computation, which has a number of practical applications, such as characterizing the efficiency and trade-offs between distinct theoretical and experimental schemes for achieving robust quantum computation, and putting bounds on the overhead cost for the classical simulation of quantum algorithms.

  6. Real longitudinal data analysis for real people: building a good enough mixed model.

    PubMed

    Cheng, Jing; Edwards, Lloyd J; Maldonado-Molina, Mildred M; Komro, Kelli A; Muller, Keith E

    2010-02-20

    Mixed effects models have become very popular, especially for the analysis of longitudinal data. One challenge is how to build a good enough mixed effects model. In this paper, we suggest a systematic strategy for addressing this challenge and introduce easily implemented practical advice for building mixed effects models. A general discussion of the scientific strategies motivates the recommended five-step procedure for model fitting. The need to model both the mean structure (the fixed effects) and the covariance structure (the random effects and residual error) creates the fundamental flexibility and complexity. Some very practical recommendations help to conquer the complexity. Centering, scaling, and full-rank coding of all the predictor variables radically improve the chances of convergence, computing speed, and numerical accuracy. Applying computational and assumption diagnostics from univariate linear models to mixed model data greatly helps to detect and solve the related computational problems. The approach helps to fit more general covariance models, a crucial step in selecting a credible covariance model needed for defensible inference. A detailed demonstration of the recommended strategy is based on data from a published study of a randomized trial of a multicomponent intervention to prevent young adolescents' alcohol use. The discussion highlights a need for additional covariance and inference tools for mixed models, as well as the need to improve how scientists and statisticians teach and review the process of finding a good enough mixed model. (c) 2009 John Wiley & Sons, Ltd.
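
    As an illustration of the recommended preprocessing, the following minimal Python sketch (not the paper's code) centers and scales the predictors before fitting a random-intercept-and-slope model with statsmodels; all column names are hypothetical.

    ```python
    # Minimal sketch, not the paper's code: centering and scaling predictors
    # before fitting a linear mixed model. Column names are hypothetical.
    import pandas as pd
    import statsmodels.formula.api as smf

    def fit_centered_mixed_model(df: pd.DataFrame):
        # Center and scale continuous predictors to improve convergence,
        # computing speed, and numerical accuracy, as the paper recommends.
        for col in ["age", "baseline_score"]:
            df[col + "_z"] = (df[col] - df[col].mean()) / df[col].std()

        model = smf.mixedlm(
            "alcohol_use ~ time + age_z + baseline_score_z",  # mean structure
            data=df,
            groups=df["subject_id"],   # subjects define the random-effect groups
            re_formula="~time",        # random intercept and slope for time
        )
        return model.fit(reml=True)
    ```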

  7. Experimental investigations, modeling, and analyses of high-temperature devices for space applications: Part 1. Final report, June 1996--December 1998

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tournier, J.; El-Genk, M.S.; Huang, L.

    1999-01-01

    The Institute of Space and Nuclear Power Studies at the University of New Mexico has developed a computer simulation of cylindrical geometry alkali metal thermal-to-electric converter cells using a standard Fortran 77 computer code. The objective and use of this code was to compare the experimental measurements with computer simulations, upgrade the model as appropriate, and conduct investigations of various methods to improve the design and performance of the devices for improved efficiency, durability, and longer operational lifetime. The Institute of Space and Nuclear Power Studies participated in vacuum testing of PX series alkali metal thermal-to-electric converter cells and developed the alkali metal thermal-to-electric converter Performance Evaluation and Analysis Model. This computer model consisted of a sodium pressure loss model, a cell electrochemical and electric model, and a radiation/conduction heat transfer model. The code closely predicted the operation and performance of a wide variety of PX series cells which led to suggestions for improvements to both lifetime and performance. The code provides valuable insight into the operation of the cell, predicts parameters of components within the cell, and is a useful tool for predicting both the transient and steady state performance of systems of cells.

  8. Experimental investigations, modeling, and analyses of high-temperature devices for space applications: Part 2. Final report, June 1996--December 1998

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tournier, J.; El-Genk, M.S.; Huang, L.

    1999-01-01

    The Institute of Space and Nuclear Power Studies at the University of New Mexico has developed a computer simulation of cylindrical geometry alkali metal thermal-to-electric converter cells using a standard Fortran 77 computer code. The objective and use of this code was to compare the experimental measurements with computer simulations, upgrade the model as appropriate, and conduct investigations of various methods to improve the design and performance of the devices for improved efficiency, durability, and longer operational lifetime. The Institute of Space and Nuclear Power Studies participated in vacuum testing of PX series alkali metal thermal-to-electric converter cells and developed the alkali metal thermal-to-electric converter Performance Evaluation and Analysis Model. This computer model consisted of a sodium pressure loss model, a cell electrochemical and electric model, and a radiation/conduction heat transfer model. The code closely predicted the operation and performance of a wide variety of PX series cells which led to suggestions for improvements to both lifetime and performance. The code provides valuable insight into the operation of the cell, predicts parameters of components within the cell, and is a useful tool for predicting both the transient and steady state performance of systems of cells.

  9. Computational Knee Ligament Modeling Using Experimentally Determined Zero-Load Lengths

    PubMed Central

    Bloemker, Katherine H; Guess, Trent M; Maletsky, Lorin; Dodd, Kevin

    2012-01-01

    This study presents a subject-specific method of determining the zero-load lengths of the cruciate and collateral ligaments in computational knee modeling. Three cadaver knees were tested in a dynamic knee simulator. The cadaver knees also underwent manual envelope of motion testing to find their passive range of motion in order to determine the zero-load lengths for each ligament bundle. Computational multibody knee models were created for each knee, and model kinematics were compared to experimental kinematics for a simulated walk cycle. One-dimensional non-linear spring-damper elements were used to represent cruciate and collateral ligament bundles in the knee models. This study found that knee kinematics were highly sensitive to alterations of the zero-load length. The results also suggest optimal methods for defining each of the ligament bundle zero-load lengths, regardless of the subject. These results verify the importance of the zero-load length when modeling the knee joint and verify that manual envelope of motion measurements can be used to determine the passive range of motion of the knee joint. It is also believed that the method described here for determining the zero-load length can be used for in vitro or in vivo subject-specific computational models. PMID:22523522
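
    A minimal sketch of the kind of one-dimensional ligament-bundle force law referred to above, parameterized by a zero-load length; the quadratic toe-region form, the constants, and the tension-only damping are illustrative assumptions rather than the paper's exact formulation.

    ```python
    import numpy as np

    def ligament_bundle_force(length, length_rate, zero_load_length,
                              stiffness=2000.0, damping=1.0, eps_t=0.03):
        """Tension in a 1D ligament bundle with a quadratic toe region (sketch).

        length           current bundle length (m)
        length_rate      time derivative of length (m/s)
        zero_load_length length at which the bundle just becomes taut (m)
        eps_t            strain at the end of the toe region (illustrative value)
        """
        eps = (length - zero_load_length) / zero_load_length
        if eps <= 0.0:
            return 0.0                                    # slack: no tension
        if eps <= 2.0 * eps_t:
            elastic = stiffness * eps**2 / (4.0 * eps_t)  # quadratic toe region
        else:
            elastic = stiffness * (eps - eps_t)           # linear region
        return elastic + max(damping * length_rate, 0.0)  # damping only in tension
    ```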

  10. Comparison of Dam Breach Parameter Estimators

    DTIC Science & Technology

    2008-01-01

    of the methods, when used in the HEC-RAS simulation model, produced comparable results. The methods tested suggest use of ... characteristics of a dam breach, use of those parameters within the unsteady flow routing model HEC-RAS, and the computation and display of the resulting ... implementation of these breach parameters in

  11. Estimation and Identifiability of Model Parameters in Human Nociceptive Processing Using Yes-No Detection Responses to Electrocutaneous Stimulation.

    PubMed

    Yang, Huan; Meijer, Hil G E; Buitenweg, Jan R; van Gils, Stephan A

    2016-01-01

    Healthy or pathological states of nociceptive subsystems determine different stimulus-response relations measured from quantitative sensory testing. In turn, stimulus-response measurements may be used to assess these states. In a recently developed computational model, six model parameters characterize activation of nerve endings and spinal neurons. However, both model nonlinearity and the limited information in yes-no detection responses to electrocutaneous stimuli make it challenging to estimate the model parameters. Here, we address the question of whether and how one can overcome these difficulties for reliable parameter estimation. First, we fit the computational model to experimental stimulus-response pairs by maximizing the likelihood. To evaluate the balance between model fit and complexity, i.e., the number of model parameters, we use the Bayesian Information Criterion. We find that the computational model strikes a better balance than a conventional logistic model. Second, our theoretical analysis suggests varying the pulse width among applied stimuli as a necessary condition to prevent structural non-identifiability. In addition, the numerically implemented profile likelihood approach reveals both structural and practical non-identifiability. Our model-based approach, with its integration of psychophysical measurements, can be useful for a reliable assessment of states of the nociceptive system.
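
    The model-fitting loop described above can be summarized in a short sketch: maximize the likelihood of yes-no responses and score the fit with the Bayesian Information Criterion. The logistic reference model and the optimizer choice below are illustrative; the paper's nociceptive model has six parameters and a different functional form.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def neg_log_likelihood(params, amplitude, pulse_width, detected, model):
        p = np.clip(model(params, amplitude, pulse_width), 1e-9, 1 - 1e-9)
        return -np.sum(detected * np.log(p) + (1 - detected) * np.log(1 - p))

    def fit_and_bic(model, n_params, x0, amplitude, pulse_width, detected):
        res = minimize(neg_log_likelihood, x0,
                       args=(amplitude, pulse_width, detected, model),
                       method="Nelder-Mead")
        n = len(detected)
        bic = n_params * np.log(n) + 2.0 * res.fun   # BIC = k ln n - 2 ln L
        return res.x, bic

    # Hypothetical two-parameter logistic reference model in stimulus amplitude.
    def logistic_model(params, amplitude, pulse_width):
        a, b = params
        return 1.0 / (1.0 + np.exp(-(a + b * amplitude)))
    ```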

  12. Examination of the Effects of Dimensionality on Cognitive Processing in Science: A Computational Modeling Experiment Comparing Online Laboratory Simulations and Serious Educational Games

    NASA Astrophysics Data System (ADS)

    Lamb, Richard L.

    2016-02-01

    Within the last 10 years, new tools for assisting in the teaching and learning of academic skills and content within the context of science have arisen. These new tools include multiple types of computer software and hardware, including (video) games. The purpose of this study was to examine and compare the effect of computer learning games in the form of three-dimensional serious educational games, two-dimensional online laboratories, and traditional lecture-based instruction in the context of student content learning in science. In particular, this study examines the impact of dimensionality, or the ability to move along the X-, Y-, and Z-axes in the games. Study subjects (N = 551) were randomly selected using a stratified sampling technique. Independent strata subsamples were developed based upon the conditions of serious educational games, online laboratories, and lecture. The study also computationally models a potential mechanism of action and compares two- and three-dimensional learning environments. F test results suggest a significant difference for the main effect of condition on content gain score, with a large effect size. Overall, comparisons using computational models suggest that three-dimensional serious educational games increase the level of success in learning as measured with content examinations through greater recruitment and attributional retraining of cognitive systems. The study supports assertions in the literature that the use of games in higher dimensions (i.e., three-dimensional versus two-dimensional) helps to increase student understanding of science concepts.

  13. A neuron-astrocyte transistor-like model for neuromorphic dressed neurons.

    PubMed

    Valenza, G; Pioggia, G; Armato, A; Ferro, M; Scilingo, E P; De Rossi, D

    2011-09-01

    Experimental evidence that synaptic glia act as active partners, together with the synapse, in neuronal signaling and in the dynamics of neural tissue strongly suggests investigating a more realistic neuron-glia model for a better understanding of human brain processing. Among the glial cells, the astrocytes play a crucial role in the tripartite synapse, i.e. the dressed neuron. A well-known two-way astrocyte-neuron interaction can be found in the literature, completely revising the purely supportive role of the glia. The aim of this study is to provide a computationally efficient model of neuron-glia interaction. The neuron-glia interactions were simulated by implementing the Li-Rinzel model for an astrocyte and the Izhikevich model for a neuron. Assuming that the dressed-neuron dynamics are similar to the nonlinear input-output characteristics of a bipolar junction transistor, we derived our computationally efficient model. This model may represent the fundamental computational unit for the development of real-time artificial neuron-glia networks, opening new perspectives in pattern recognition systems and in brain neurophysiology. Copyright © 2011 Elsevier Ltd. All rights reserved.

  14. CLIMACS: a computer model of forest stand development for western Oregon and Washington.

    Treesearch

    Virginia H. Dale; Miles Hemstrom

    1984-01-01

    A simulation model for the development of timber stands in the Pacific Northwest is described. The model grows individual trees of 21 species in a 0.08-hectare (0.20-acre) forest gap. The model provides a means of assimilating existing information, indicates where knowledge is deficient, suggests where the forest system is most sensitive, and provides a first testing...

  15. Long-term predictive capability of erosion models

    NASA Technical Reports Server (NTRS)

    Veerabhadra, P.; Buckley, D. H.

    1983-01-01

    A brief overview of long-term cavitation and liquid impingement erosion and of the modeling methods proposed by different investigators, including the curve-fit approach, is presented. A table was prepared to highlight the number of variables necessary for each model in order to compute the erosion-versus-time curves. A power law relation based on the average erosion rate is suggested, which may solve several modeling problems.

  16. Top-down methodology for human factors research

    NASA Technical Reports Server (NTRS)

    Sibert, J.

    1983-01-01

    User-computer interaction as a conversation is discussed. An approach to the design of user interfaces that depends on viewing communication between a user and the computer as a conversation is presented. This conversation includes inputs to the computer (outputs from the user), outputs from the computer (inputs to the user), and the sequencing in both time and space of those outputs and inputs. The conversation is viewed from the user's side. Two languages are modeled: the one with which the user communicates with the computer and the one with which the computer communicates with the user. Both languages exist on three levels: the semantic, the syntactic, and the lexical. It is suggested that natural languages can also be considered in these terms.

  17. Critical assessment of Reynolds stress turbulence models using homogeneous flows

    NASA Technical Reports Server (NTRS)

    Shabbir, Aamir; Shih, Tsan-Hsing

    1992-01-01

    In modeling the rapid part of the pressure correlation term in the Reynolds stress transport equations, extensive use has been made of its exact properties, which were first suggested by Rotta. These, for example, have been employed in obtaining the widely used Launder, Reece and Rodi (LRR) model. Some recent proposals have dropped one of these properties to obtain new models. We demonstrate, by computing some simple homogeneous flows, that doing so does not lead to any significant improvements over the LRR model and is not the right direction for improving the performance of existing models. The reason for this, in our opinion, is that violating one of the exact properties cannot bring any new physics into the model. We compute thirteen homogeneous flows using the LRR (with a recalibrated rapid-term constant), IP, and SSG models. The flows computed include the flow through an axisymmetric contraction; an axisymmetric expansion; distortion by plane strain; and homogeneous shear flows with and without rotation. Results show that the most general representation of a model linear in the anisotropy tensor performs either better than, or as well as, the other two models of the same level.

  18. Experimental and Computational Analysis of Polyglutamine-Mediated Cytotoxicity

    PubMed Central

    Tang, Matthew Y.; Proctor, Carole J.; Woulfe, John; Gray, Douglas A.

    2010-01-01

    Expanded polyglutamine (polyQ) proteins are known to be the causative agents of a number of human neurodegenerative diseases, but the molecular basis of their cytotoxicity is still poorly understood. PolyQ tracts may impede the activity of the proteasome, and evidence from single cell imaging suggests that the sequestration of polyQ into inclusion bodies can reduce the proteasomal burden and promote cell survival, at least in the short term. The presence of misfolded protein also leads to activation of stress kinases such as p38MAPK, which can be cytotoxic. The relationships of these systems are not well understood. We have used fluorescent reporter systems imaged in living cells, and stochastic computer modeling, to explore the relationships of polyQ, p38MAPK activation, generation of reactive oxygen species (ROS), proteasome inhibition, and inclusion body formation. In cells expressing a polyQ protein, inclusion body formation was preceded by proteasome inhibition, but cytotoxicity was greatly reduced by administration of a p38MAPK inhibitor. Computer simulations suggested that without the generation of ROS, the proteasome inhibition and activation of p38MAPK would have significantly reduced toxicity. Our data suggest a vicious cycle of stress kinase activation and proteasome inhibition that is ultimately lethal to cells. There was close agreement between experimental data and the predictions of a stochastic computer model, supporting a central role for proteasome inhibition and p38MAPK activation in inclusion body formation and ROS-mediated cell death. PMID:20885783

  19. A distributed, dynamic, parallel computational model: the role of noise in velocity storage

    PubMed Central

    Merfeld, Daniel M.

    2012-01-01

    Networks of neurons perform complex calculations using distributed, parallel computation, including dynamic “real-time” calculations required for motion control. The brain must combine sensory signals to estimate the motion of body parts using imperfect information from noisy neurons. Models and experiments suggest that the brain sometimes optimally minimizes the influence of noise, although it remains unclear when and precisely how neurons perform such optimal computations. To investigate, we created a model of velocity storage based on a relatively new technique–“particle filtering”–that is both distributed and parallel. It extends existing observer and Kalman filter models of vestibular processing by simulating the observer model many times in parallel with noise added. During simulation, the variance of the particles defining the estimator state is used to compute the particle filter gain. We applied our model to estimate one-dimensional angular velocity during yaw rotation, which yielded estimates for the velocity storage time constant, afferent noise, and perceptual noise that matched experimental data. We also found that the velocity storage time constant was Bayesian optimal by comparing the estimate of our particle filter with the estimate of the Kalman filter, which is optimal. The particle filter demonstrated a reduced velocity storage time constant when afferent noise increased, which mimics what is known about aminoglycoside ablation of semicircular canal hair cells. This model helps bridge the gap between parallel distributed neural computation and systems-level behavioral responses like the vestibuloocular response and perception. PMID:22514288
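
    A toy one-dimensional bootstrap particle filter in the spirit of the description above: particles are propagated through a leaky-integrator process model and weighted by noisy afferent measurements. The time constant and noise magnitudes are illustrative, not the fitted values reported in the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def particle_filter(measurements, n_particles=1000, dt=0.01,
                        tau=6.0, process_noise=0.05, afferent_noise=0.5):
        """Bootstrap particle filter for one-dimensional angular velocity (sketch)."""
        particles = rng.normal(0.0, 1.0, n_particles)
        estimates = []
        for z in measurements:
            # Propagate each particle through the leaky-integrator process model.
            particles += dt * (-particles / tau) \
                         + rng.normal(0.0, process_noise, n_particles)
            # Weight particles by the likelihood of the noisy afferent measurement.
            weights = np.exp(-0.5 * ((z - particles) / afferent_noise) ** 2)
            weights /= weights.sum()
            estimates.append(np.sum(weights * particles))
            # Multinomial resampling keeps the particle set well conditioned.
            idx = rng.choice(n_particles, n_particles, p=weights)
            particles = particles[idx]
        return np.array(estimates)
    ```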

  20. Dynamical analysis of Parkinsonian state emulated by hybrid Izhikevich neuron models

    NASA Astrophysics Data System (ADS)

    Liu, Chen; Wang, Jiang; Yu, Haitao; Deng, Bin; Wei, Xile; Li, Huiyan; Loparo, Kenneth A.; Fietkiewicz, Chris

    2015-11-01

    Computational models play a significant role in exploring novel theories to complement the findings of physiological experiments. Various computational models have been developed to reveal the mechanisms underlying brain functions. Particularly, in the development of therapies to modulate behavioral and pathological abnormalities, computational models provide the basic foundations to exhibit transitions between physiological and pathological conditions. Considering the significant roles of the intrinsic properties of the globus pallidus and the coupling connections between neurons in determining the firing patterns and the dynamical activities of the basal ganglia neuronal network, we propose the hypothesis that pathological behaviors under the Parkinsonian state may originate from combined effects of the intrinsic properties of globus pallidus neurons and the synaptic conductances in the whole neuronal network. In order to establish a computationally efficient network model, the hybrid Izhikevich neuron model is used due to its capacity to capture the dynamical characteristics of biological neuronal activity. Detailed analysis of the individual Izhikevich neuron model can assist in understanding the roles of model parameters, which then facilitates the establishment of the basal ganglia-thalamic network model and contributes to a further exploration of the underlying mechanisms of the Parkinsonian state. Simulation results show that the hybrid Izhikevich neuron model is capable of capturing many of the dynamical properties of the basal ganglia-thalamic neuronal network, such as variations of the firing rates and the emergence of synchronous oscillations under the Parkinsonian condition, despite the simplicity of the two-dimensional neuronal model. This suggests that the computationally efficient hybrid Izhikevich neuron model can be used to explore normal and abnormal basal ganglia functions. In particular, it provides an efficient way of emulating large-scale neuronal networks and potentially contributes to the development of improved therapies for neurological disorders such as Parkinson's disease.
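
    For reference, the two-dimensional Izhikevich model mentioned above reduces to a few lines; the sketch below uses the standard regular-spiking parameters rather than the tuned globus pallidus or thalamic parameters of the paper's hybrid network.

    ```python
    import numpy as np

    def izhikevich(I, a=0.02, b=0.2, c=-65.0, d=8.0, dt=0.25):
        """Simulate a single Izhikevich neuron driven by input current I (sketch)."""
        v, u = -65.0, b * -65.0
        spikes, vs = [], []
        for t, i_t in enumerate(I):
            v += dt * (0.04 * v**2 + 5.0 * v + 140.0 - u + i_t)
            u += dt * a * (b * v - u)
            if v >= 30.0:              # spike: record time and reset
                spikes.append(t * dt)
                v, u = c, u + d
            vs.append(v)
        return np.array(vs), spikes

    # Example: constant drive of 10 (arbitrary units) for 2000 steps of 0.25 ms.
    trace, spike_times = izhikevich(np.full(2000, 10.0))
    ```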

  1. Classroom Tips.

    ERIC Educational Resources Information Center

    Crain, Cheryl

    1994-01-01

    Presents six teaching ideas from teachers in Foothills Schools, Alberta, Canada. Includes suggested activities on local government, computer uses in social studies, Canadian history, current events, and world studies. Provides models of a passport application, passports, and visas. (CFR)

  2. Application of NASTRAN to propeller-induced ship vibration

    NASA Technical Reports Server (NTRS)

    Liepins, A. A.; Conaway, J. H.

    1975-01-01

    An application of the NASTRAN program to the analysis of propeller-induced ship vibration is presented. The essentials of the model, the computational procedure, and experience are described. Desirable program enhancements are suggested.

  3. Daily computer usage correlated with undergraduate students' musculoskeletal symptoms.

    PubMed

    Chang, Che-Hsu Joe; Amick, Benjamin C; Menendez, Cammie Chaumont; Katz, Jeffrey N; Johnson, Peter W; Robertson, Michelle; Dennerlein, Jack Tigh

    2007-06-01

    A pilot prospective study was performed to examine the relationships between daily computer usage time and musculoskeletal symptoms in undergraduate students. For three separate 1-week study periods distributed over a semester, 27 students reported body part-specific musculoskeletal symptoms three to five times daily. Daily computer usage time for the 24-hr period preceding each symptom report was calculated from computer input device activities measured directly by software loaded on each participant's primary computer. Generalized Estimating Equation models tested the relationships between daily computer usage and symptom reporting. Daily computer usage longer than 3 hr was significantly associated with an odds ratio of 1.50 (1.01-2.25) for reporting symptoms. The odds of reporting symptoms also increased with quartiles of daily exposure. These data suggest a potential dose-response relationship between daily computer usage time and musculoskeletal symptoms.
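
    A hedged sketch of the kind of analysis described: a logistic GEE with repeated symptom reports nested within students, fit with statsmodels. The column names and the exchangeable working correlation are assumptions for illustration, not details taken from the study.

    ```python
    import numpy as np
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    def fit_symptom_gee(df):
        model = smf.gee(
            "symptom ~ usage_over_3h",            # hypothetical binary columns
            groups="student_id",                  # repeated reports per student
            data=df,
            family=sm.families.Binomial(),
            cov_struct=sm.cov_struct.Exchangeable(),
        )
        result = model.fit()
        odds_ratios = np.exp(result.params)       # e.g. OR for >3 h of daily use
        return odds_ratios, result
    ```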

  4. Estimation of aneurysm wall stresses created by treatment with a shape memory polymer foam device

    PubMed Central

    Hwang, Wonjun; Volk, Brent L.; Akberali, Farida; Singhal, Pooja; Criscione, John C.

    2012-01-01

    In this study, compliant latex thin-walled aneurysm models are fabricated to investigate the effects of expansion of shape memory polymer foam. A simplified cylindrical model is selected for the in-vitro aneurysm, which is a simplification of a real, saccular aneurysm. The studies are performed by crimping shape memory polymer foams, originally 6 and 8 mm in diameter, and monitoring the resulting deformation when deployed into 4-mm-diameter thin-walled latex tubes. The deformations of the latex tubes are used as inputs to physical, analytical, and computational models to estimate the circumferential stresses. Using the results of the stress analysis in the latex aneurysm model, a computational model of the human aneurysm is developed by changing the geometry and material properties. The model is then used to predict the stresses that would develop in a human aneurysm. The experimental, simulation, and analytical results suggest that shape memory polymer foams have the potential to be a safe treatment for intracranial saccular aneurysms. In particular, this work suggests that oversized shape memory polymer foams may be used to better fill the entire aneurysm cavity while generating stresses below the breaking stress of the aneurysm wall. PMID:21901546

  5. Computer Aided Diagnostic Support System for Skin Cancer: A Review of Techniques and Algorithms

    PubMed Central

    Masood, Ammara; Al-Jumaily, Adel Ali

    2013-01-01

    Image-based computer aided diagnosis systems have significant potential for screening and early detection of malignant melanoma. We review the state of the art in these systems and examine current practices, problems, and prospects of image acquisition, pre-processing, segmentation, feature extraction and selection, and classification of dermoscopic images. This paper reports statistics and results from the most important implementations reported to date. We compared the performance of several classifiers specifically developed for skin lesion diagnosis and discussed the corresponding findings. Whenever available, indication of various conditions that affect the technique's performance is reported. We suggest a framework for comparative assessment of skin cancer diagnostic models and review the results based on these models. The deficiencies in some of the existing studies are highlighted and suggestions for future research are provided. PMID:24575126

  6. Bayesian Latent Class Analysis Tutorial.

    PubMed

    Li, Yuelin; Lord-Bessen, Jennifer; Shiyko, Mariya; Loeb, Rebecca

    2018-01-01

    This article is a how-to guide on Bayesian computation using Gibbs sampling, demonstrated in the context of Latent Class Analysis (LCA). It is written for students in quantitative psychology or related fields who have a working knowledge of Bayes' Theorem and conditional probability and have experience in writing computer programs in the statistical language R. The overall goals are to provide an accessible and self-contained tutorial, along with a practical computation tool. We begin with how Bayesian computation is typically described in academic articles. Technical difficulties are addressed by a hypothetical, worked-out example. We show how Bayesian computation can be broken down into a series of simpler calculations, which can then be assembled to complete a computationally more complex model. The details are described much more explicitly than is typical in elementary introductions to Bayesian modeling, so that readers are not overwhelmed by the mathematics. Moreover, the provided computer program shows how Bayesian LCA can be implemented with relative ease. The computer program is then applied to a large, real-world data set and explained line by line. We outline the general steps for extending these considerations to other methodological applications. We conclude with suggestions for further readings.
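
    The tutorial's worked example is in R; the following compact Python sketch shows the same idea of breaking Bayesian LCA into simple Gibbs steps (sample class memberships, then prevalences, then item probabilities) for binary items under flat priors. It ignores label switching and convergence diagnostics.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def gibbs_lca(Y, n_classes=2, n_iter=2000):
        """Gibbs sampler for latent class analysis with binary items (sketch).

        Y is an (n_subjects, n_items) 0/1 array. Beta(1, 1) and Dirichlet(1, ..., 1)
        priors are assumed.
        """
        n, J = Y.shape
        pi = np.full(n_classes, 1.0 / n_classes)          # class prevalences
        theta = rng.uniform(0.2, 0.8, (n_classes, J))     # item endorsement probs
        draws = []
        for _ in range(n_iter):
            # 1. Sample each subject's class from its posterior probabilities.
            log_lik = Y @ np.log(theta.T) + (1 - Y) @ np.log(1 - theta.T)
            post = np.exp(log_lik) * pi
            post /= post.sum(axis=1, keepdims=True)
            z = np.array([rng.choice(n_classes, p=p) for p in post])
            # 2. Sample class prevalences from their Dirichlet posterior.
            counts = np.bincount(z, minlength=n_classes)
            pi = rng.dirichlet(1.0 + counts)
            # 3. Sample item probabilities from Beta posteriors within each class.
            for c in range(n_classes):
                Yc = Y[z == c]
                ones = Yc.sum(axis=0)
                theta[c] = rng.beta(1.0 + ones, 1.0 + len(Yc) - ones)
            draws.append((pi.copy(), theta.copy()))
        return draws
    ```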

  7. A comparison of the COG and MCNP codes in computational neutron capture therapy modeling, Part I: boron neutron capture therapy models.

    PubMed

    Culbertson, C N; Wangerin, K; Ghandourah, E; Jevremovic, T

    2005-08-01

    The goal of this study was to evaluate the COG Monte Carlo radiation transport code, developed and tested by Lawrence Livermore National Laboratory, for neutron capture therapy related modeling. A boron neutron capture therapy model was analyzed comparing COG calculational results to results from the widely used MCNP4B (Monte Carlo N-Particle) transport code. The approach for computing neutron fluence rate and each dose component relevant in boron neutron capture therapy is described, and calculated values are shown in detail. The differences between the COG and MCNP predictions are qualified and quantified. The differences are generally small and suggest that the COG code can be applied for BNCT research related problems.

  8. Modular Bundle Adjustment for Photogrammetric Computations

    NASA Astrophysics Data System (ADS)

    Börlin, N.; Murtiyoso, A.; Grussenmeyer, P.; Menna, F.; Nocerino, E.

    2018-05-01

    In this paper we investigate how the residuals in bundle adjustment can be split into a composition of simple functions. According to the chain rule, the Jacobian (linearisation) of the residual can be formed as a product of the Jacobians of the individual steps. When implemented, this enables a modularisation of the computation of the bundle adjustment residuals and Jacobians where each component has limited responsibility. This allows simple replacement of components to, e.g., implement different projection or rotation models by exchanging a module. The technique has previously been used to implement bundle adjustment in the open-source package DBAT (Börlin and Grussenmeyer, 2013), based on the Photogrammetric and Computer Vision interpretations of the Brown (1971) lens distortion model. In this paper, we applied the technique to investigate how affine distortions can be used to model the projection of a tilt-shift lens. Two extended distortion models were implemented to test the hypothesis that the ordering of the affine and lens distortion steps can be changed to reduce the size of the residuals of a tilt-shift lens calibration. Results on synthetic data confirm that the ordering of the affine and lens distortion steps matters and is detectable by DBAT. However, when applied to a real camera calibration data set of a tilt-shift lens, no difference between the extended models was seen. This suggests that the tested hypothesis is false and that other effects need to be modelled to better explain the projection. The relatively low implementation effort needed to generate the models suggests that the technique can be used to investigate other novel projection models in photogrammetry, including modelling changes in the 3D geometry, to better understand the tilt-shift lens.
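
    The modularisation idea can be illustrated with a toy projection split into simple steps, each returning its own Jacobian; the Jacobian of the composed residual is then just the product of the per-step Jacobians. The particular steps below are generic stand-ins, not DBAT's actual module decomposition.

    ```python
    import numpy as np

    def pinhole(X):                       # R^3 -> R^2: normalisation by depth
        x, y, z = X
        out = np.array([x / z, y / z])
        J = np.array([[1 / z, 0.0, -x / z**2],
                      [0.0, 1 / z, -y / z**2]])
        return out, J

    def affine(x, a=0.01):                # R^2 -> R^2: skew, parameter a fixed here
        out = np.array([x[0] + a * x[1], x[1]])
        J = np.array([[1.0, a], [0.0, 1.0]])
        return out, J

    def scale(x, f=1000.0):               # R^2 -> R^2: focal-length scaling
        return f * x, f * np.eye(2)

    X = np.array([0.2, -0.1, 5.0])
    x1, J1 = pinhole(X)
    x2, J2 = affine(x1)
    x3, J3 = scale(x2)
    J_total = J3 @ J2 @ J1                # chain rule: Jacobian of the composition
    ```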

  9. Dual learning processes underlying human decision-making in reversal learning tasks: functional significance and evidence from the model fit to human behavior

    PubMed Central

    Bai, Yu; Katahira, Kentaro; Ohira, Hideki

    2014-01-01

    Humans are capable of correcting their actions based on actions performed in the past, and this ability enables them to adapt to a changing environment. The computational field of reinforcement learning (RL) has provided a powerful explanation for understanding such processes. Recently, the dual learning system, modeled as a hybrid model that incorporates value updates based on the reward-prediction error and learning rate modulation based on a surprise signal, has gained attention as a model for explaining various neural signals. However, the functional significance of the hybrid model has not been established. In the present study, we used computer simulation of a probabilistic reversal learning task to address this functional significance. The hybrid model was found to perform better than the standard RL model over a wide range of parameter settings. These results suggest that the hybrid model is more robust against the mistuning of parameters compared with the standard RL model when decision-makers continue to learn stimulus-reward contingencies that can change abruptly. The parameter fitting results also indicated that the hybrid model fit better than the standard RL model for more than 50% of the participants, which suggests that the hybrid model has more explanatory power for the behavioral data than the standard RL model. PMID:25161635
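
    A minimal sketch of a hybrid update of the kind described: the chosen action's value is updated from the reward-prediction error, while the learning rate is itself modulated by the magnitude of that error (a surprise signal). The specific modulation rule and parameter values are illustrative assumptions, not the paper's exact equations.

    ```python
    import numpy as np

    def hybrid_rl_update(q, action, reward, alpha, eta=0.3, kappa=1.0):
        """One trial of a hybrid RL update (sketch)."""
        delta = reward - q[action]                            # reward-prediction error
        alpha = (1 - eta) * alpha + eta * kappa * abs(delta)  # surprise-driven rate
        q[action] += alpha * delta                            # value update
        return q, alpha

    # Example: two actions, probabilistic rewards, contingency reversal at trial 100.
    rng = np.random.default_rng(2)
    q, alpha = np.zeros(2), 0.5
    for t in range(200):
        p_reward = [0.8, 0.2] if t < 100 else [0.2, 0.8]
        action = int(rng.random() < 0.5)          # random choices for illustration
        reward = float(rng.random() < p_reward[action])
        q, alpha = hybrid_rl_update(q, action, reward, alpha)
    ```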

  10. The Utility of Cognitive Plausibility in Language Acquisition Modeling: Evidence From Word Segmentation.

    PubMed

    Phillips, Lawrence; Pearl, Lisa

    2015-11-01

    The informativity of a computational model of language acquisition is directly related to how closely it approximates the actual acquisition task, sometimes referred to as the model's cognitive plausibility. We suggest that though every computational model necessarily idealizes the modeled task, an informative language acquisition model can aim to be cognitively plausible in multiple ways. We discuss these cognitive plausibility checkpoints generally and then apply them to a case study in word segmentation, investigating a promising Bayesian segmentation strategy. We incorporate cognitive plausibility by using an age-appropriate unit of perceptual representation, evaluating the model output in terms of its utility, and incorporating cognitive constraints into the inference process. Our more cognitively plausible model shows a beneficial effect of cognitive constraints on segmentation performance. One interpretation of this effect is as a synergy between the naive theories of language structure that infants may have and the cognitive constraints that limit the fidelity of their inference processes, where less accurate inference approximations are better when the underlying assumptions about how words are generated are less accurate. More generally, these results highlight the utility of incorporating cognitive plausibility more fully into computational models of language acquisition. Copyright © 2015 Cognitive Science Society, Inc.

  11. Seismic wavefield modeling based on time-domain symplectic and Fourier finite-difference method

    NASA Astrophysics Data System (ADS)

    Fang, Gang; Ba, Jing; Liu, Xin-xin; Zhu, Kun; Liu, Guo-Chang

    2017-06-01

    Seismic wavefield modeling is important for improving seismic data processing and interpretation. Calculations of wavefield propagation can become unstable when forward modeling of seismic waves uses large time steps over long simulation times. Based on the Hamiltonian expression of the acoustic wave equation, we propose a structure-preserving method for seismic wavefield modeling by applying the symplectic finite-difference method on time grids and the Fourier finite-difference method on space grids to solve the acoustic wave equation. The proposed method is called the symplectic Fourier finite-difference (symplectic FFD) method, and it offers high computational accuracy and improved computational stability. Using the acoustic approximation, we extend the method to anisotropic media. We discuss the calculations in the symplectic FFD method for seismic wavefield modeling of isotropic and anisotropic media, and use the BP salt model and BP TTI model to test the proposed method. The numerical examples suggest that the proposed method can be used for seismic modeling with strongly variable velocities, offering high computational accuracy and low numerical dispersion. The symplectic FFD method suppresses the residual qSV wave in seismic modeling of anisotropic media and maintains stable wavefield propagation for large time steps.
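
    A much-simplified one-dimensional analogue of the approach: Fourier (pseudo-spectral) evaluation of spatial derivatives combined with a symplectic (velocity-Verlet/leapfrog) time integrator for the acoustic wave equation. The paper's method treats 2D heterogeneous and TTI media; this toy version assumes a constant velocity and periodic boundaries.

    ```python
    import numpy as np

    def wave_1d_fourier_leapfrog(n=256, L=2000.0, c=2000.0, dt=2e-4, steps=2000):
        """1D constant-velocity acoustic wave: spectral Laplacian, symplectic stepping."""
        x = np.linspace(0.0, L, n, endpoint=False)
        k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)
        p = np.exp(-((x - L / 2) / 50.0) ** 2)      # initial pressure: Gaussian pulse
        q = np.zeros(n)                              # conjugate momentum (dp/dt)
        for _ in range(steps):
            lap = np.fft.ifft(-(k ** 2) * np.fft.fft(p)).real  # spectral Laplacian
            q += 0.5 * dt * c ** 2 * lap             # half kick
            p += dt * q                              # drift
            lap = np.fft.ifft(-(k ** 2) * np.fft.fft(p)).real
            q += 0.5 * dt * c ** 2 * lap             # half kick (velocity Verlet)
        return x, p
    ```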

  12. A critical review of the allocentric spatial representation and its neural underpinnings: toward a network-based perspective

    PubMed Central

    Ekstrom, Arne D.; Arnold, Aiden E. G. F.; Iaria, Giuseppe

    2014-01-01

    While the widely studied allocentric spatial representation holds a special status in neuroscience research, its exact nature and neural underpinnings continue to be the topic of debate, particularly in humans. Here, based on a review of human behavioral research, we argue that allocentric representations do not provide the kind of map-like, metric representation one might expect based on past theoretical work. Instead, we suggest that almost all tasks used in past studies involve a combination of egocentric and allocentric representation, complicating both the investigation of the cognitive basis of an allocentric representation and the task of identifying a brain region specifically dedicated to it. Indeed, as we discuss in detail, past studies suggest numerous brain regions important to allocentric spatial memory in addition to the hippocampus, including parahippocampal, retrosplenial, and prefrontal cortices. We thus argue that although allocentric computations will often require the hippocampus, particularly those involving extracting details across temporally specific routes, the hippocampus is not necessary for all allocentric computations. We instead suggest that a non-aggregate network process involving multiple interacting brain areas, including hippocampus and extra-hippocampal areas such as parahippocampal, retrosplenial, prefrontal, and parietal cortices, better characterizes the neural basis of spatial representation during navigation. According to this model, an allocentric representation does not emerge from the computations of a single brain region (i.e., hippocampus) nor is it readily decomposable into additive computations performed by separate brain regions. Instead, an allocentric representation emerges from computations partially shared across numerous interacting brain regions. We discuss our non-aggregate network model in light of existing data and provide several key predictions for future experiments. PMID:25346679

  13. Field verification of reconstructed dam-break flood, Laurel Run, Pennsylvania

    USGS Publications Warehouse

    Chen, Cheng-lung; Armbruster, Jeffrey T.

    1979-01-01

    A one-dimensional dam-break flood routing model is verified using observed data on the flash flood resulting from the failure of Laurel Run Reservoir Dam near Johnstown, Pennsylvania. The model has been developed on the basis of an explicit scheme of the method of characteristics with specified time intervals. The model combines one of the characteristic equations with the Rankine-Hugoniot shock equations to trace the corresponding characteristic backward to the known state, in order to solve for the depth and velocity of flow at the wave front. The previous version of the model called for a modification of the method of solution to overcome the computational difficulty at the narrow breach and at any geomorphological constraints where the channel geometry changes rapidly. A large reduction in the computational inaccuracies and oscillations was achieved by introducing the actual "storage width" in the equation of continuity and the imaginary "conveyance width" in the equation of motion. Close agreement between observed and computed peak stages at several stations downstream of the dam strongly suggests the validity and applicability of the model. However, small numerical noise appearing in the computed stage and discharge hydrographs at the dam site, as well as discrepancies in the attenuated peaks of the discharge hydrographs, indicate the need for further model improvement.

  14. Examining human behavior in video games: The development of a computational model to measure aggression.

    PubMed

    Lamb, Richard; Annetta, Leonard; Hoston, Douglas; Shapiro, Marina; Matthews, Benjamin

    2018-06-01

    Video games with violent content have raised considerable concern in popular media and within academia. Recently, considerable attention has been paid to the claimed relationship between aggression and video game play. The authors of this study propose the use of a new class of tools, developed via computational models, to allow examination of the question of whether there is a relationship between violent video games and aggression. The purpose of this study is to computationally model and compare the General Aggression Model with the Diathesis Model of Aggression as related to the play of violent content in video games. A secondary purpose is to provide a method of measuring and examining individual aggression arising from video game play. A total of N = 1065 participants were examined in this study. The study occurred in three phases. Phase 1 was the development and quantification of the profile combination of traits via latent class profile analysis. Phase 2 was the training of the artificial neural network. Phase 3 was the comparison of each model as a computational model with and without the presence of video game violence. Results suggest that a combination of environmental factors and genetic predispositions triggers aggression related to video games.

  15. Computational Modeling of 3D Tumor Growth and Angiogenesis for Chemotherapy Evaluation

    PubMed Central

    Tang, Lei; van de Ven, Anne L.; Guo, Dongmin; Andasari, Vivi; Cristini, Vittorio; Li, King C.; Zhou, Xiaobo

    2014-01-01

    Solid tumors develop abnormally at spatial and temporal scales, giving rise to biophysical barriers that impact anti-tumor chemotherapy. This may increase the expenditure and time for conventional drug pharmacokinetic and pharmacodynamic studies. In order to facilitate drug discovery, we propose a mathematical model that couples three-dimensional tumor growth and angiogenesis to simulate tumor progression for chemotherapy evaluation. This application-oriented model incorporates complex dynamical processes including cell- and vascular-mediated interstitial pressure, mass transport, angiogenesis, cell proliferation, and vessel maturation to model tumor progression through multiple stages including tumor initiation, avascular growth, and transition from avascular to vascular growth. Compared to pure mechanistic models, the proposed empirical methods are not only easy to conduct but can provide realistic predictions and calculations. A series of computational simulations were conducted to demonstrate the advantages of the proposed comprehensive model. The computational simulation results suggest that solid tumor geometry is related to the interstitial pressure, such that tumors with high interstitial pressure are more likely to develop dendritic structures than those with low interstitial pressure. PMID:24404145

  16. Research and development activities in unified control-structure modeling and design

    NASA Technical Reports Server (NTRS)

    Nayak, A. P.

    1985-01-01

    Results of work to develop a unified control/structures modeling and design capability for large space structures are presented. Recent analytical results are presented to demonstrate the significant interdependence between structural and control properties. A new design methodology is suggested in which the structure, material properties, dynamic model, and control design are all optimized simultaneously. Parallel research done by other researchers is reviewed. The development of a methodology for global design optimization is recommended as a long-term goal. It is suggested that this methodology should be incorporated into computer aided engineering programs, which eventually will be supplemented by an expert system to aid design optimization.

  17. Modeling Cardiac Electrophysiology at the Organ Level in the Peta FLOPS Computing Age

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mitchell, Lawrence; Bishop, Martin; Hoetzl, Elena

    2010-09-30

    Despite a steep increase in available compute power, in-silico experimentation with highly detailed models of the heart remains challenging due to the high computational cost involved. It is hoped that next-generation high performance computing (HPC) resources will lead to significant reductions in execution times to leverage a new class of in-silico applications. However, performance gains with these new platforms can only be achieved by engaging a much larger number of compute cores, necessitating strongly scalable numerical techniques. So far, strong scalability has been demonstrated only for a moderate number of cores, orders of magnitude below the range required to achieve the desired performance boost. In this study, the strong scalability of currently used techniques to solve the bidomain equations is investigated. Benchmark results suggest that scalability is limited to 512-4096 cores within the range of relevant problem sizes, even when systems are carefully load-balanced and advanced IO strategies are employed.

  18. The Prospects of Whole Brain Emulation within the next Half- Century

    NASA Astrophysics Data System (ADS)

    Eth, Daniel; Foust, Juan-Carlos; Whale, Brandon

    2013-12-01

    Whole Brain Emulation (WBE), the theoretical technology of modeling a human brain in its entirety on a computer, with thoughts, feelings, memories, and skills intact, is a staple of science fiction. Recently, proponents of WBE have suggested that it will be realized in the next few decades. In this paper, we investigate the plausibility of WBE being developed in the next 50 years (by 2063). We identify four essential requisite technologies: scanning the brain, translating the scan into a model, running the model on a computer, and simulating an environment and body. Additionally, we consider the cultural and social effects of WBE. We find the two most uncertain factors for WBE's future to be the development of advanced miniscule probes that can amass neural data in vivo and the degree to which the culture surrounding WBE becomes cooperative or competitive. We identify four plausible scenarios from these uncertainties and suggest the most likely scenario to be one in which WBE is realized and the technology is used for moderately cooperative ends.

  19. Real-time human collaboration monitoring and intervention

    DOEpatents

    Merkle, Peter B.; Johnson, Curtis M.; Jones, Wendell B.; Yonas, Gerold; Doser, Adele B.; Warner, David J.

    2010-07-13

    A method of and apparatus for monitoring and intervening in, in real time, a collaboration between a plurality of subjects comprising measuring indicia of physiological and cognitive states of each of the plurality of subjects, communicating the indicia to a monitoring computer system, with the monitoring computer system, comparing the indicia with one or more models of previous collaborative performance of one or more of the plurality of subjects, and with the monitoring computer system, employing the results of the comparison to communicate commands or suggestions to one or more of the plurality of subjects.

  20. Galilean satellite geomorphology

    NASA Technical Reports Server (NTRS)

    Malin, M. C.

    1983-01-01

    Research on this task consisted of the development and initial application of photometric and photoclinometric models using interactive computer image processing and graphics. New programs were developed to compute viewing and illumination angles for every picture element in a Voyager image using C-matrices and final Voyager ephemerides. These values were then used to transform each pixel to an illumination-oriented coordinate system. An iterative integration routine permits slope displacements to be computed from brightness variations and correlated in the cross-sun direction, resulting in two-dimensional topographic data. Figure 1 shows a 'wire-mesh' view of an impact crater on Ganymede, shown with a 10-fold vertical exaggeration. The crater, about 20 km in diameter, has a central mound and raised interior floor suggestive of viscous relaxation and rebound of the crater's topography. In addition to photoclinometry, the computer models that have been developed permit an examination of non-topographically derived variations in surface brightness.
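
    A toy one-dimensional photoclinometry sketch along the lines described: brightness variations are converted to slopes under a Lambertian assumption and integrated to a relative-height profile. The Lambertian law, the normalization by the mean brightness, and the single-scan-line geometry are simplifying assumptions; the actual processing uses per-pixel viewing and illumination angles from the C-matrices and ephemerides.

    ```python
    import numpy as np

    def photoclinometry_profile(brightness, dx=1.0, incidence_deg=60.0):
        """Recover a 1D relative-height profile from a brightness scan (sketch)."""
        i0 = np.radians(incidence_deg)
        flat = np.cos(i0)                                 # brightness of level ground
        # Local incidence angle from the Lambertian law, after normalizing the scan
        # so that the mean brightness corresponds to flat terrain.
        local_i = np.arccos(np.clip(brightness * flat / brightness.mean(), -1.0, 1.0))
        slope = np.tan(local_i - i0)                      # slope relative to flat ground
        return np.cumsum(slope) * dx                      # integrate slope to height
    ```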

  1. An automatic step adjustment method for average power analysis technique used in fiber amplifiers

    NASA Astrophysics Data System (ADS)

    Liu, Xue-Ming

    2006-04-01

    An automatic step adjustment (ASA) method for the average power analysis (APA) technique used in fiber amplifiers is proposed in this paper for the first time. In comparison with the traditional APA technique, the proposed method offers two unique merits, higher-order accuracy and an ASA mechanism, so that it can significantly shorten the computing time and improve the solution accuracy. A test example demonstrates that, compared with the APA technique, the proposed method increases the computing speed more than a hundredfold at the same error level. By computing the model equations of erbium-doped fiber amplifiers, the numerical results show that our method can improve the solution accuracy by over two orders of magnitude for the same number of amplifying sections. The proposed method has the capacity to rapidly and effectively compute the model equations of fiber Raman amplifiers and semiconductor lasers.

  2. A neuronal network model with simplified tonotopicity for tinnitus generation and its relief by sound therapy.

    PubMed

    Nagashino, Hirofumi; Kinouchi, Yohsuke; Danesh, Ali A; Pandya, Abhijit S

    2013-01-01

    Tinnitus is the perception of sound in the ears or in the head where no external source is present. Sound therapy is one of the most effective techniques for tinnitus treatment that has been proposed. In order to investigate mechanisms of tinnitus generation and the clinical effects of sound therapy, we have proposed conceptual and computational models with plasticity using a neural oscillator or a neuronal network model. In the present paper, we propose a neuronal network model with simplified tonotopicity of the auditory system as a more detailed structure. In this model an integrate-and-fire neuron model is employed and homeostatic plasticity is incorporated. The computer simulation results show that the present model can reproduce the generation of oscillation and its cessation by external input. This suggests that the present framework is promising as a model of tinnitus generation and of the effects of sound therapy.

  3. A theoretical study of mixing downstream of transverse injection into a supersonic boundary layer

    NASA Technical Reports Server (NTRS)

    Baker, A. J.; Zelazny, S. W.

    1972-01-01

    A theoretical and analytical study was made of mixing downstream of transverse hydrogen injection, from single and multiple orifices, into a Mach 4 air boundary layer over a flat plate. Numerical solutions to the governing three-dimensional, elliptic boundary layer equations were obtained using a general purpose computer program founded upon a finite element solution algorithm. A prototype three-dimensional turbulent transport model was developed using mixing length theory in the wall region and the mass defect concept in the outer region. Excellent agreement between the computed flow field and experimental data for a jet/freestream dynamic pressure ratio of unity was obtained in the centerplane region of the single-jet configuration. Poorer agreement off the centerplane suggests an inadequacy of the extrapolated two-dimensional turbulence model. Considerable improvement in off-centerplane computational agreement occurred for a multi-jet configuration using the same turbulent transport model.

  4. Studies with spike initiators - Linearization by noise allows continuous signal modulation in neural networks

    NASA Technical Reports Server (NTRS)

    Yu, Xiaolong; Lewis, Edwin R.

    1989-01-01

    It is shown that noise can be an important element in the translation of neuronal generator potentials (summed inputs) to neuronal spike trains (outputs), creating or expanding a range of amplitudes over which the spike rate is proportional to the generator potential amplitude. Noise converts the basically nonlinear operation of a spike initiator into a nearly linear modulation process. This linearization effect of noise is examined in a simple intuitive model of a static threshold and in a more realistic computer simulation of a spike initiator based on the Hodgkin-Huxley (HH) model. The results are qualitatively similar; in each case a larger noise amplitude results in a larger range of nearly linear modulation. The computer simulation of the HH model with noise shows linear and nonlinear features that were earlier observed in spike data obtained from the VIIIth nerve of the bullfrog. This suggests that these features can be explained in terms of spike initiator properties, and it also suggests that the HH model may be useful for representing basic spike initiator properties in vertebrates.
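
    The static-threshold intuition is easy to reproduce: with Gaussian noise added to the generator potential, the firing probability rises smoothly with input amplitude, and the nearly linear range widens with the noise amplitude. The sketch below uses arbitrary threshold and noise values for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def firing_probability(generator_potential, threshold=1.0, noise_sd=0.0,
                           n_trials=10000):
        """Probability that a static-threshold spike initiator fires (sketch)."""
        noise = rng.normal(0.0, noise_sd, n_trials) if noise_sd > 0 else 0.0
        return np.mean(generator_potential + noise >= threshold)

    # Without noise the response is all-or-none; with noise the firing probability
    # rises smoothly (nearly linearly) over a range that widens with noise amplitude.
    inputs = np.linspace(0.0, 2.0, 21)
    sharp = [firing_probability(g, noise_sd=0.0) for g in inputs]
    graded = [firing_probability(g, noise_sd=0.3) for g in inputs]
    ```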

  5. Too Good to be True? Ideomotor Theory from a Computational Perspective

    PubMed Central

    Herbort, Oliver; Butz, Martin V.

    2012-01-01

    In recent years, Ideomotor Theory has regained widespread attention and sparked the development of a number of theories on goal-directed behavior and learning. However, there are two issues with previous studies’ use of Ideomotor Theory. Although Ideomotor Theory is seen as very general, it is often studied in settings that are considerably more simplistic than most natural situations. Moreover, Ideomotor Theory’s claim that effect anticipations directly trigger actions and that action-effect learning is based on the formation of direct action-effect associations is hard to address empirically. We address these points from a computational perspective. A simple computational model of Ideomotor Theory was tested in tasks with different degrees of complexity. The model evaluation showed that Ideomotor Theory is a computationally feasible approach for understanding efficient action-effect learning for goal-directed behavior if the following preconditions are met: (1) The range of potential actions and effects has to be restricted. (2) Effects have to follow actions within a short time window. (3) Actions have to be simple and may not require sequencing. The first two preconditions also limit human performance and thus support Ideomotor Theory. The last precondition can be circumvented by extending the model with more complex, indirect action generation processes. In conclusion, we suggest that Ideomotor Theory offers a comprehensive framework to understand action-effect learning. However, we also suggest that additional processes may mediate the conversion of effect anticipations into actions in many situations. PMID:23162524

  6. A Formal Valuation Framework for Emotions and Their Control.

    PubMed

    Huys, Quentin J M; Renz, Daniel

    2017-09-15

    Computational psychiatry aims to apply mathematical and computational techniques to help improve psychiatric care. To achieve this, the phenomena under scrutiny should be within the scope of formal methods. As emotions play an important role across many psychiatric disorders, such computational methods must encompass emotions. Here, we consider formal valuation accounts of emotions. We focus on the fact that the flexibility of emotional responses and the nature of appraisals suggest the need for a model-based valuation framework for emotions. However, resource limitations make plain model-based valuation impossible and require metareasoning strategies to apportion cognitive resources adaptively. We argue that emotions may implement such metareasoning approximations by restricting the range of behaviors and states considered. We consider the processes that guide the deployment of the approximations, discerning between innate, model-free, heuristic, and model-based controllers. A formal valuation and metareasoning framework may thus provide a principled approach to examining emotions. Copyright © 2017 Society of Biological Psychiatry. Published by Elsevier Inc. All rights reserved.

  7. Design of Bioprosthetic Aortic Valves using biaxial test data.

    PubMed

    Dabiri, Y; Paulson, K; Tyberg, J; Ronsky, J; Ali, I; Di Martino, E; Narine, K

    2015-01-01

    Bioprosthetic Aortic Valves (BAVs) do not have the serious limitations of mechanical aortic valves in terms of thrombosis. However, the lifetime of BAVs is too short, often requiring repeated surgeries. The lifetime of BAVs might be improved by using computer simulations of the structural behavior of the leaflets. The goal of this study was to develop a numerical model applicable to the optimization of the durability of BAVs. The constitutive equations were derived using biaxial tensile tests. Using a Fung model, stress and strain data were computed from the biaxial test data. SolidWorks was used to develop the geometry of the leaflets, and the ABAQUS finite element software package was used for finite element calculations. Results showed that the model is consistent with experimental observations. Reaction forces computed by the model corresponded with experimental measurements when the biaxial test was simulated. In addition, the locations of maximum stress corresponded to the locations where BAV leaflets frequently tear. These results suggest that BAV design can be optimized with respect to durability.
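
    For illustration, a two-dimensional Fung-type strain energy of the form commonly fitted to biaxial leaflet data gives the second Piola-Kirchhoff stresses by differentiation; the material constants below are placeholders, not the values identified in this study.

    ```python
    import numpy as np

    def fung_2d_stress(E11, E22, c=50.0, a1=10.0, a2=5.0, a3=2.0):
        """Second Piola-Kirchhoff stresses from a 2D Fung-type strain energy
        W = (c/2)(exp(Q) - 1), with Q = a1*E11^2 + a2*E22^2 + 2*a3*E11*E22."""
        Q = a1 * E11**2 + a2 * E22**2 + 2.0 * a3 * E11 * E22
        expQ = np.exp(Q)
        S11 = c * expQ * (a1 * E11 + a3 * E22)   # dW/dE11
        S22 = c * expQ * (a2 * E22 + a3 * E11)   # dW/dE22
        return S11, S22

    # Example: equibiaxial Green-Lagrange strain of 5%.
    print(fung_2d_stress(0.05, 0.05))
    ```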

  8. Computational fluid dynamics modelling of hydraulics and sedimentation in process reactors during aeration tank settling.

    PubMed

    Jensen, M D; Ingildsen, P; Rasmussen, M R; Laursen, J

    2006-01-01

    Aeration tank settling is a control method allowing settling in the process tank during high hydraulic load. The control method is patented. Aeration tank settling has been applied in several waste water treatment plants using the present design of the process tanks. Some process tank designs have been shown to be more effective than others. To improve the design of less effective plants, computational fluid dynamics (CFD) modelling of hydraulics and sedimentation has been applied. This paper discusses the results at one particular plant experiencing problems with partial short-circuiting between the inlet and outlet, which disrupts the sludge blanket at the outlet and thereby reduces the retention of sludge in the process tank. The model has allowed us to establish a clear picture of the problems arising at the plant during aeration tank settling. In addition, several process tank design changes have been suggested and tested by means of computational fluid dynamics modelling. The most promising design changes have been found and reported.

  9. Ad Hoc modeling, expert problem solving, and R&T program evaluation

    NASA Technical Reports Server (NTRS)

    Silverman, B. G.; Liebowitz, J.; Moustakis, V. S.

    1983-01-01

    A simplified cost and time (SCAT) analysis program utilizing personal-computer technology is presented and demonstrated in the case of the NASA-Goddard end-to-end data system. The difficulties encountered in implementing complex program-selection and evaluation models in the research and technology field are outlined. The prototype SCAT system described here is designed to allow user-friendly ad hoc modeling in real time and at low cost. A worksheet constructed on the computer screen displays the critical parameters and shows how each is affected when one is altered experimentally. In the NASA case, satellite data-output and control requirements, ground-facility data-handling capabilities, and project priorities are intricately interrelated. Scenario studies of the effects of spacecraft phaseout or new spacecraft on throughput and delay parameters are shown. The use of a network of personal computers for higher-level coordination of decision-making processes is suggested, as a complement or alternative to complex large-scale modeling.

  10. Computational Model of Population Dynamics Based on the Cell Cycle and Local Interactions

    NASA Astrophysics Data System (ADS)

    Oprisan, Sorinel Adrian; Oprisan, Ana

    2005-03-01

    Our study bridges cellular (mesoscopic) level interactions and global population (macroscopic) dynamics of carcinoma. The morphological differences and transitions between well-defined, smooth benign tumors and tentacular malignant tumors suggest a theoretical analysis of tumor invasion based on the development of mathematical models exhibiting bifurcations of spatial patterns in the density of tumor cells. Our computational model views the most representative and clinically relevant features of oncogenesis as a fight between two distinct sub-systems: the immune system of the host and the neoplastic system. We implemented the neoplastic sub-system using a three-stage cell cycle: active, dormant, and necrosis. The second sub-system considered consists of cytotoxic active (effector) cells (EC), with a very broad phenotype ranging from NK cells to CTL cells, macrophages, etc. Based on extensive numerical simulations, we correlated the fractal dimensions for carcinoma, which could be obtained from tumor imaging, with the malignant stage. Our computational model was also able to simulate the effects of surgical, chemotherapeutic, and radiotherapeutic treatments.

  11. Communication: Symmetrical quasi-classical analysis of linear optical spectroscopy

    NASA Astrophysics Data System (ADS)

    Provazza, Justin; Coker, David F.

    2018-05-01

    The symmetrical quasi-classical approach for propagation of a many degree of freedom density matrix is explored in the context of computing linear spectra. Calculations on a simple two state model for which exact results are available suggest that the approach gives a qualitative description of peak positions, relative amplitudes, and line broadening. Short time details in the computed dipole autocorrelation function result in exaggerated tails in the spectrum.
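
    The link between the dipole autocorrelation function and the linear spectrum invoked here is, in the standard first-order response treatment and up to prefactors and convention,

        I(\omega) \propto \mathrm{Re} \int_{0}^{\infty} e^{i\omega t}\, \langle \hat{\mu}(t)\, \hat{\mu}(0) \rangle \, dt ,

    so artifacts in the short-time behavior of the computed autocorrelation function translate into broad, exaggerated tails in the frequency domain.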

  12. Testing the role of reward and punishment sensitivity in avoidance behavior: a computational modeling approach

    PubMed Central

    Sheynin, Jony; Moustafa, Ahmed A.; Beck, Kevin D.; Servatius, Richard J.; Myers, Catherine E.

    2015-01-01

    Exaggerated avoidance behavior is a predominant symptom in all anxiety disorders and its degree often parallels the development and persistence of these conditions. Both human and non-human animal studies suggest that individual differences as well as various contextual cues may impact avoidance behavior. Specifically, we have recently shown that female sex and inhibited temperament, two anxiety vulnerability factors, are associated with greater duration and rate of the avoidance behavior, as demonstrated on a computer-based task closely related to common rodent avoidance paradigms. We have also demonstrated that avoidance is attenuated by the administration of explicit visual signals during “non-threat” periods (i.e., safety signals). Here, we use a reinforcement-learning network model to investigate the underlying mechanisms of these empirical findings, with a special focus on distinct reward and punishment sensitivities. Model simulations suggest that sex and inhibited temperament are associated with specific aspects of these sensitivities. Specifically, differences in relative sensitivity to reward and punishment might underlie the longer avoidance duration demonstrated by females, whereas higher sensitivity to punishment might underlie the higher avoidance rate demonstrated by inhibited individuals. Simulations also suggest that safety signals attenuate avoidance behavior by strengthening the competing approach response. Lastly, several predictions generated by the model suggest that extinction-based cognitive-behavioral therapies might benefit from the use of safety signals, especially if given to individuals with high reward sensitivity and during longer safe periods. Overall, this study is the first to suggest cognitive mechanisms underlying the greater avoidance behavior observed in healthy individuals with different anxiety vulnerabilities. PMID:25639540
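
    The abstract describes a reinforcement-learning network model with separate reward and punishment sensitivities; a much-reduced tabular sketch of that idea (not the authors' model, and all parameter values are illustrative) is shown below:

        import numpy as np

        rng = np.random.default_rng(0)

        # Two actions: 0 = approach (possible reward and possible punishment),
        # 1 = avoid (safe, but forgoes reward). Separate sensitivity parameters
        # scale how strongly rewards and punishments drive learning.
        def simulate(reward_sensitivity=1.0, punishment_sensitivity=1.0,
                     alpha=0.1, beta=3.0, n_trials=500,
                     p_reward=0.7, p_punish=0.3):
            q = np.zeros(2)                 # learned action values
            avoid_count = 0
            for _ in range(n_trials):
                p_avoid = 1.0 / (1.0 + np.exp(-beta * (q[1] - q[0])))  # softmax choice
                action = int(rng.random() < p_avoid)
                if action == 1:             # avoid: no reward, no punishment
                    outcome = 0.0
                    avoid_count += 1
                else:                       # approach: stochastic reward/punishment
                    outcome = 0.0
                    if rng.random() < p_reward:
                        outcome += reward_sensitivity
                    if rng.random() < p_punish:
                        outcome -= punishment_sensitivity
                q[action] += alpha * (outcome - q[action])   # delta-rule update
            return avoid_count / n_trials

        # Raising punishment sensitivity should raise the avoidance rate.
        print(simulate(punishment_sensitivity=0.5),
              simulate(punishment_sensitivity=2.0))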

  13. A computationally efficient ductile damage model accounting for nucleation and micro-inertia at high triaxialities

    DOE PAGES

    Versino, Daniele; Bronkhorst, Curt Allan

    2018-01-31

    The computational formulation of a micro-mechanical material model for the dynamic failure of ductile metals is presented in this paper. The statistical nature of porosity initiation is accounted for by introducing an arbitrary probability density function which describes the pore nucleation pressures. Each micropore within the representative volume element is modeled as a thick spherical shell made of plastically incompressible material. The treatment of porosity by a distribution of thick-walled spheres also allows for the inclusion of micro-inertia effects under conditions of shock and dynamic loading. The second-order ordinary differential equation governing the microscopic porosity evolution is solved with a robust implicit procedure. A new Chebyshev collocation method is employed to approximate the porosity distribution and remapping is used to optimize memory usage. The adaptive approximation of the porosity distribution leads to a reduction of computational time and memory usage of up to two orders of magnitude. Moreover, the proposed model affords consistent performance: changing the nucleation pressure probability density function and/or the applied strain rate does not reduce accuracy or computational efficiency of the material model. The numerical performance of the model and algorithms presented is tested against three problems for high-density tantalum: single void, one-dimensional uniaxial strain, and two-dimensional plate impact. Here, the results using the integration and algorithmic advances suggest a significant improvement in computational efficiency and accuracy over previous treatments for dynamic loading conditions.
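
    The paper's collocation scheme and nucleation-pressure distribution are not reproduced in the abstract; the following sketch only illustrates the general idea of representing a smooth distribution by a Chebyshev expansion evaluated at collocation nodes (the density function below is hypothetical):

        import numpy as np
        from numpy.polynomial import chebyshev as C

        # Hypothetical nucleation-pressure density on a normalized interval [-1, 1].
        def density(x):
            return np.exp(-4.0 * (x - 0.2) ** 2)

        # Collocate at Chebyshev-Gauss nodes and fit a low-order expansion.
        n = 16
        nodes = np.cos(np.pi * (np.arange(n) + 0.5) / n)
        coeffs = C.chebfit(nodes, density(nodes), deg=n - 1)

        # Evaluate the approximation on a fine grid and check the error.
        x = np.linspace(-1.0, 1.0, 201)
        err = np.max(np.abs(C.chebval(x, coeffs) - density(x)))
        print(f"max approximation error: {err:.2e}")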

  14. Feedforward object-vision models only tolerate small image variations compared to human

    PubMed Central

    Ghodrati, Masoud; Farzmahdi, Amirhossein; Rajaei, Karim; Ebrahimpour, Reza; Khaligh-Razavi, Seyed-Mahdi

    2014-01-01

    Invariant object recognition is a remarkable ability of the primate visual system whose underlying mechanism has been under intense investigation. Computational modeling is a valuable tool toward understanding the processes involved in invariant object recognition. Although recent computational models have shown outstanding performances on challenging image databases, they fail to perform well in image categorization under more complex image variations. Studies have shown that making sparse representations of objects by extracting more informative visual features through a feedforward sweep can lead to higher recognition performances. Here, however, we show that when the complexity of image variations is high, even this approach results in poor performance compared to humans. To assess the performance of models and humans in invariant object recognition tasks, we built a parametrically controlled image database consisting of several object categories varied in different dimensions and levels, rendered from 3D planes. Comparing the performance of several object recognition models with human observers shows that the models perform similarly to humans in categorization tasks only for low-level image variations. Furthermore, the results of our behavioral experiments demonstrate that, even under difficult experimental conditions (i.e., briefly presented masked stimuli with complex image variations), human observers performed outstandingly well, suggesting that the models are still far from resembling humans in invariant object recognition. Taken together, we suggest that learning sparse informative visual features, although desirable, is not a complete solution for future progress in object-vision modeling. We show that this approach is not of significant help in solving the computational crux of object recognition (i.e., invariant object recognition) when the identity-preserving image variations become more complex. PMID:25100986

  15. Density functional calculations of (55)Mn, (14)N and (13)C electron paramagnetic resonance parameters support an energetically feasible model system for the S(2) state of the oxygen-evolving complex of photosystem II.

    PubMed

    Schinzel, Sandra; Schraut, Johannes; Arbuznikov, Alexei V; Siegbahn, Per E M; Kaupp, Martin

    2010-09-10

    Metal and ligand hyperfine couplings of a previously suggested, energetically feasible Mn(4)Ca model cluster (SG2009(-1)) for the S(2) state of the oxygen-evolving complex (OEC) of photosystem II (PSII) have been studied by broken-symmetry density functional methods and compared with other suggested structural and spectroscopic models. This was carried out explicitly for different spin-coupling patterns of the S=1/2 ground state of the Mn(III)(Mn(IV))(3) cluster. By applying spin-projection techniques and a scaling of the manganese hyperfine couplings, computation of the hyperfine and nuclear quadrupole coupling parameters allows a direct evaluation of the proposed models in comparison with data obtained from the simulation of EPR, ENDOR, and ESEEM spectra. The computation of (55)Mn hyperfine couplings (HFCs) for SG2009(-1) gives excellent agreement with experiment. However, at the current level of spin projection, the (55)Mn HFCs do not appear sufficiently accurate to distinguish between different structural models. Yet, of all the models studied, SG2009(-1) is the only one with the Mn(III) site at the Mn(C) center, which is coordinated by histidine (D1-His332). The computed histidine (14)N HFC anisotropy for SG2009(-1) gives much better agreement with ESEEM data than the other models, in which Mn(C) is an Mn(IV) site, thus supporting the validity of the model. The (13)C HFCs of various carboxylates have been compared with (13)C ENDOR data for PSII preparations with (13)C-labelled alanine.

  16. Statistical Comparison of Spike Responses to Natural Stimuli in Monkey Area V1 With Simulated Responses of a Detailed Laminar Network Model for a Patch of V1

    PubMed Central

    Schuch, Klaus; Logothetis, Nikos K.; Maass, Wolfgang

    2011-01-01

    A major goal of computational neuroscience is the creation of computer models for cortical areas whose response to sensory stimuli resembles that of cortical areas in vivo in important aspects. It is seldom considered whether the simulated spiking activity is realistic (in a statistical sense) in response to natural stimuli. Because certain statistical properties of spike responses were suggested to facilitate computations in the cortex, acquiring a realistic firing regime in cortical network models might be a prerequisite for analyzing their computational functions. We present a characterization and comparison of the statistical response properties of the primary visual cortex (V1) in vivo and in silico in response to natural stimuli. We recorded from multiple electrodes in area V1 of 4 macaque monkeys and developed a large state-of-the-art network model for a 5 × 5-mm patch of V1 composed of 35,000 neurons and 3.9 million synapses that integrates previously published anatomical and physiological details. By quantitative comparison of the model response to the “statistical fingerprint” of responses in vivo, we find that our model for a patch of V1 responds to the same movie in a way which matches the statistical structure of the recorded data surprisingly well. The deviations between the firing regime of the model and the in vivo data are on the same level as the deviations among monkeys and sessions. This suggests that, despite strong simplifications and abstractions of cortical network models, they are nevertheless capable of generating realistic spiking activity. To reach a realistic firing state, it was not only necessary to include both N-methyl-d-aspartate and GABAB synaptic conductances in our model, but also to markedly increase the strength of excitatory synapses onto inhibitory neurons (>2-fold) in comparison to literature values, hinting at the importance of carefully adjusting the effect of inhibition for achieving realistic dynamics in current network models. PMID:21106898

  17. High skill in low-frequency climate response through fluctuation dissipation theorems despite structural instability.

    PubMed

    Majda, Andrew J; Abramov, Rafail; Gershgorin, Boris

    2010-01-12

    Climate change science focuses on predicting the coarse-grained, planetary-scale, long-time changes in the climate system due to either changes in external forcing or internal variability, such as the impact of increased carbon dioxide. The predictions of climate change science are carried out through comprehensive computational atmospheric and oceanic simulation models, which necessarily parameterize physical features such as clouds, sea ice cover, etc. Recently, it has been suggested that there is irreducible imprecision in such climate models that manifests itself as structural instability in climate statistics and which can significantly hamper the skill of computer models for climate change. A systematic approach to deal with this irreducible imprecision is advocated through algorithms based on the Fluctuation Dissipation Theorem (FDT). There are important practical and computational advantages for climate change science when a skillful FDT algorithm is established. The FDT response operator can be utilized directly for multiple climate change scenarios, multiple changes in forcing, and other parameters, such as damping, as well as for inverse modelling, without the need to run the complex climate model in each individual case. The high skill of FDT in predicting climate change, despite structural instability, is developed in an unambiguous fashion using mathematical theory as guidelines in three different test models: a generic class of analytical models mimicking the dynamical core of the computer climate models, reduced stochastic models for low-frequency variability, and models with a significant new type of irreducible imprecision involving many fast, unstable modes.
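
    In its simplest (quasi-Gaussian) form, the FDT approach sketched here predicts the mean response of an observable A to a small forcing perturbation \delta f from unperturbed climate statistics alone; one common statement, given as an illustration rather than the exact algorithm used by the authors, is

        \delta\langle A\rangle(t) = \int_{0}^{t} R(t - s)\, \delta f(s)\, ds, \qquad
        R(\tau) \approx \big\langle A(u(\tau))\, u(0)^{\mathsf{T}} \big\rangle_{\mathrm{eq}}\, \Sigma^{-1},

    where u is the climate state and \Sigma its equilibrium covariance, so the response operator R can be reused for many forcing scenarios without rerunning the full model.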

  18. Computation and brain processes, with special reference to neuroendocrine systems.

    PubMed

    Toni, Roberto; Spaletta, Giulia; Casa, Claudia Della; Ravera, Simone; Sandri, Giorgio

    2007-01-01

    The development of neural networks and brain automata has made neuroscientists aware that the performance limits of these brain-like devices lie, at least in part, in their computational power. The computational basis of a standard cybernetic design, in fact, refers to that of a discrete and finite state machine or Turing Machine (TM). In contrast, it has been suggested that a number of human cerebral activities, from feedback controls up to mental processes, rely on a mixing of both finitary, digital-like and infinitary, continuous-like procedures. Therefore, the central nervous system (CNS) of man would exploit a form of computation going beyond that of a TM. This "non-conventional" computation has been called hybrid computation. Some basic structures for hybrid brain computation are believed to be the brain computational maps, in which both Turing-like (digital) computation and continuous (analog) forms of calculus might occur. The cerebral cortex and brain stem appear to be primary candidates for this processing. However, neuroendocrine structures like the hypothalamus are also believed to exhibit hybrid computational processes, and might give rise to computational maps. Current theories on neural activity, including wiring and volume transmission, neuronal group selection and dynamic evolving models of brain automata, lend support to the existence of natural hybrid computation, stressing a cooperation between discrete and continuous forms of communication in the CNS. In addition, the recent advent of neuromorphic chips, like those designed to restore activity in the damaged retina and visual cortex, suggests that the assumption of a discrete-continuum polarity in designing biocompatible neural circuitries is crucial for their ensuing performance. In these bionic structures, in fact, a correspondence exists between the original anatomical architecture and synthetic wiring of the chip, resulting in a correspondence between natural and cybernetic neural activity. Thus, chip "form" provides a continuum essential to chip "function". We conclude that it is reasonable to predict the existence of hybrid computational processes in the course of many human brain-integrating activities, urging development of cybernetic approaches based on this modelling for adequate reproduction of a variety of cerebral performances.

  19. A Model of In vitro Plasticity at the Parallel Fiber—Molecular Layer Interneuron Synapses

    PubMed Central

    Lennon, William; Yamazaki, Tadashi; Hecht-Nielsen, Robert

    2015-01-01

    Theoretical and computational models of the cerebellum typically focus on the role of parallel fiber (PF)—Purkinje cell (PKJ) synapses for learned behavior, but few emphasize the role of the molecular layer interneurons (MLIs)—the stellate and basket cells. A number of recent experimental results suggest the role of MLIs is more important than previous models put forth. We investigate learning at PF—MLI synapses and propose a mathematical model to describe plasticity at this synapse. We perform computer simulations with this form of learning using a spiking neuron model of the MLI and show that it reproduces six in vitro experimental results in addition to simulating four novel protocols. Further, we show how this plasticity model can predict the results of other experimental protocols that are not simulated. Finally, we hypothesize what the biological mechanisms are for changes in synaptic efficacy that embody the phenomenological model proposed here. PMID:26733856

  20. Stream temperature investigations: field and analytic methods

    USGS Publications Warehouse

    Bartholow, J.M.

    1989-01-01

    Alternative public domain stream and reservoir temperature models are contrasted with SNTEMP. A distinction is made between steady-flow and dynamic-flow models and their respective capabilities. Regression models are offered as an alternative approach for some situations, with appropriate mathematical formulas suggested. Appendices provide information on State and Federal agencies that are good data sources, vendors for field instrumentation, and small computer programs useful in data reduction.

  1. Bounding the electrostatic free energies associated with linear continuum models of molecular solvation.

    PubMed

    Bardhan, Jaydeep P; Knepley, Matthew G; Anitescu, Mihai

    2009-03-14

    The importance of electrostatic interactions in molecular biology has driven extensive research toward the development of accurate and efficient theoretical and computational models. Linear continuum electrostatic theory has been surprisingly successful, but the computational costs associated with solving the associated partial differential equations (PDEs) preclude the theory's use in most dynamical simulations. Modern generalized-Born models for electrostatics can reproduce PDE-based calculations to within a few percent and are extremely computationally efficient but do not always faithfully reproduce interactions between chemical groups. Recent work has shown that a boundary-integral-equation formulation of the PDE problem leads naturally to a new approach called boundary-integral-based electrostatics estimation (BIBEE) to approximate electrostatic interactions. In the present paper, we prove that the BIBEE method can be used to rigorously bound the actual continuum-theory electrostatic free energy. The bounds are validated using a set of more than 600 proteins. Detailed numerical results are presented for structures of the peptide met-enkephalin taken from a molecular-dynamics simulation. These bounds, in combination with our demonstration that the BIBEE methods accurately reproduce pairwise interactions, suggest a new approach toward building a highly accurate yet computationally tractable electrostatic model.

  2. Bounding the electrostatic free energies associated with linear continuum models of molecular solvation

    NASA Astrophysics Data System (ADS)

    Bardhan, Jaydeep P.; Knepley, Matthew G.; Anitescu, Mihai

    2009-03-01

    The importance of electrostatic interactions in molecular biology has driven extensive research toward the development of accurate and efficient theoretical and computational models. Linear continuum electrostatic theory has been surprisingly successful, but the computational costs associated with solving the associated partial differential equations (PDEs) preclude the theory's use in most dynamical simulations. Modern generalized-Born models for electrostatics can reproduce PDE-based calculations to within a few percent and are extremely computationally efficient but do not always faithfully reproduce interactions between chemical groups. Recent work has shown that a boundary-integral-equation formulation of the PDE problem leads naturally to a new approach called boundary-integral-based electrostatics estimation (BIBEE) to approximate electrostatic interactions. In the present paper, we prove that the BIBEE method can be used to rigorously bound the actual continuum-theory electrostatic free energy. The bounds are validated using a set of more than 600 proteins. Detailed numerical results are presented for structures of the peptide met-enkephalin taken from a molecular-dynamics simulation. These bounds, in combination with our demonstration that the BIBEE methods accurately reproduce pairwise interactions, suggest a new approach toward building a highly accurate yet computationally tractable electrostatic model.

  3. Recognizing sights, smells, and sounds with gnostic fields.

    PubMed

    Kanan, Christopher

    2013-01-01

    Mammals rely on vision, audition, and olfaction to remotely sense stimuli in their environment. Determining how the mammalian brain uses this sensory information to recognize objects has been one of the major goals of psychology and neuroscience. Likewise, researchers in computer vision, machine audition, and machine olfaction have endeavored to discover good algorithms for stimulus classification. Almost 50 years ago, the neuroscientist Jerzy Konorski proposed a theoretical model in his final monograph in which competing sets of "gnostic" neurons sitting atop sensory processing hierarchies enabled stimuli to be robustly categorized, despite variations in their presentation. Much of what Konorski hypothesized has been remarkably accurate, and neurons with gnostic-like properties have been discovered in visual, aural, and olfactory brain regions. Surprisingly, there have not been any attempts to directly transform his theoretical model into a computational one. Here, I describe the first computational implementation of Konorski's theory. The model is not domain specific, and it surpasses the best machine learning algorithms on challenging image, music, and olfactory classification tasks, while also being simpler. My results suggest that criticisms of exemplar-based models of object recognition as being computationally intractable due to limited neural resources are unfounded.

  4. Recognizing Sights, Smells, and Sounds with Gnostic Fields

    PubMed Central

    Kanan, Christopher

    2013-01-01

    Mammals rely on vision, audition, and olfaction to remotely sense stimuli in their environment. Determining how the mammalian brain uses this sensory information to recognize objects has been one of the major goals of psychology and neuroscience. Likewise, researchers in computer vision, machine audition, and machine olfaction have endeavored to discover good algorithms for stimulus classification. Almost 50 years ago, the neuroscientist Jerzy Konorski proposed a theoretical model in his final monograph in which competing sets of “gnostic” neurons sitting atop sensory processing hierarchies enabled stimuli to be robustly categorized, despite variations in their presentation. Much of what Konorski hypothesized has been remarkably accurate, and neurons with gnostic-like properties have been discovered in visual, aural, and olfactory brain regions. Surprisingly, there have not been any attempts to directly transform his theoretical model into a computational one. Here, I describe the first computational implementation of Konorski's theory. The model is not domain specific, and it surpasses the best machine learning algorithms on challenging image, music, and olfactory classification tasks, while also being simpler. My results suggest that criticisms of exemplar-based models of object recognition as being computationally intractable due to limited neural resources are unfounded. PMID:23365648

  5. A new assessment method of pHEMT models by comparing relative errors of drain current and its derivatives up to the third order

    NASA Astrophysics Data System (ADS)

    Dobeš, Josef; Grábner, Martin; Puričer, Pavel; Vejražka, František; Míchal, Jan; Popp, Jakub

    2017-05-01

    Nowadays, there exist relatively precise pHEMT models available for computer-aided design, and they are frequently compared to each other. However, such comparisons are mostly based on absolute errors of drain-current equations and their derivatives. In this paper, a novel method is suggested based on relative root-mean-square errors of both the drain current and its derivatives up to the third order. Moreover, the relative errors are subsequently normalized to the best model in each category to further clarify the obtained accuracies of both the drain current and its derivatives. Furthermore, one of our older models and two newly suggested ones are also included in the comparison with the traditionally precise Ahmed, TOM-2 and Materka models. The assessment is performed using measured characteristics of a pHEMT operating up to 110 GHz. Finally, the usability of the proposed models, including the higher-order derivatives, is illustrated using s-parameter analysis and measurement at additional operating points, as well as computation and measurement of IP3 points of a low-noise amplifier of a multi-constellation satellite navigation receiver with an ATF-54143 pHEMT.
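
    A natural formalization of the proposed criterion (written here for illustration; the paper's exact normalization may differ) is the relative root-mean-square error of the k-th derivative of the drain current over N measured bias points,

        \epsilon_{k} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}
            \left(\frac{I_{D,i}^{(k),\mathrm{model}} - I_{D,i}^{(k),\mathrm{meas}}}
                       {I_{D,i}^{(k),\mathrm{meas}}}\right)^{2}},
        \qquad k = 0, \dots, 3,

    with each \epsilon_{k} subsequently divided by the smallest value obtained among the compared models, so that the best model in each category scores exactly one.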

  6. An Object Oriented Extensible Architecture for Affordable Aerospace Propulsion Systems

    NASA Technical Reports Server (NTRS)

    Follen, Gregory J.

    2003-01-01

    Driven by a need to explore and develop propulsion systems that exceeded current computing capabilities, NASA Glenn embarked on a novel strategy leading to the development of an architecture that enables propulsion simulations never thought possible before. Full-engine, three-dimensional computational fluid dynamics propulsion system simulations were deemed impossible due to the impracticality of the hardware and software computing systems required. However, with a software paradigm shift and an embracing of parallel and distributed processing, an architecture was designed to meet the needs of future propulsion system modeling. The author suggests that the architecture designed at the NASA Glenn Research Center for propulsion system modeling has the potential to impact the direction of development of affordable weapons systems currently under consideration by the Applied Vehicle Technology Panel (AVT).

  7. Properties of Neurons in External Globus Pallidus Can Support Optimal Action Selection

    PubMed Central

    Bogacz, Rafal; Martin Moraud, Eduardo; Abdi, Azzedine; Magill, Peter J.; Baufreton, Jérôme

    2016-01-01

    The external globus pallidus (GPe) is a key nucleus within basal ganglia circuits that are thought to be involved in action selection. A class of computational models assumes that, during action selection, the basal ganglia compute for all actions available in a given context the probabilities that they should be selected. These models suggest that a network of GPe and subthalamic nucleus (STN) neurons computes the normalization term in Bayes’ equation. In order to perform such computation, the GPe needs to send feedback to the STN equal to a particular function of the activity of STN neurons. However, the complex form of this function makes it unlikely that individual GPe neurons, or even a single GPe cell type, could compute it. Here, we demonstrate how this function could be computed within a network containing two types of GABAergic GPe projection neuron, so-called ‘prototypic’ and ‘arkypallidal’ neurons, that have different response properties in vivo and distinct connections. We compare our model predictions with the experimentally-reported connectivity and input-output functions (f-I curves) of the two populations of GPe neurons. We show that, together, these dichotomous cell types fulfil the requirements necessary to compute the function needed for optimal action selection. We conclude that, by virtue of their distinct response properties and connectivities, a network of arkypallidal and prototypic GPe neurons comprises a neural substrate capable of supporting the computation of the posterior probabilities of actions. PMID:27389780
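
    The normalization term referred to here is the denominator of Bayes' rule applied over the available actions; writing s for the sensory/cortical evidence, the posterior probability that action A_i should be selected is

        P(A_i \mid s) = \frac{P(s \mid A_i)\, P(A_i)}{\sum_{j} P(s \mid A_j)\, P(A_j)},

    and the class of models described in the abstract proposes that the STN-GPe loop supplies (a function of) this denominator as a common feedback signal to all action channels.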

  8. Extracting falsifiable predictions from sloppy models.

    PubMed

    Gutenkunst, Ryan N; Casey, Fergal P; Waterfall, Joshua J; Myers, Christopher R; Sethna, James P

    2007-12-01

    Successful predictions are among the most compelling validations of any model. Extracting falsifiable predictions from nonlinear multiparameter models is complicated by the fact that such models are commonly sloppy, possessing sensitivities to different parameter combinations that range over many decades. Here we discuss how sloppiness affects the sorts of data that best constrain model predictions, makes linear uncertainty approximations dangerous, and introduces computational difficulties in Monte-Carlo uncertainty analysis. We also present a useful test problem and suggest refinements to the standards by which models are communicated.

  9. Comparison of Experimental Surface and Flow Field Measurements to Computational Results of the Juncture Flow Model

    NASA Technical Reports Server (NTRS)

    Roozeboom, Nettie H.; Lee, Henry C.; Simurda, Laura J.; Zilliac, Gregory G.; Pulliam, Thomas H.

    2016-01-01

    Wing-body juncture flow fields on commercial aircraft configurations are challenging to compute accurately. The NASA Advanced Air Vehicle Program's juncture flow committee is designing an experiment to provide data to improve Computational Fluid Dynamics (CFD) modeling in the juncture flow region. Preliminary design of the model was done using CFD, yet CFD tends to over-predict the separation in the juncture flow region. Risk reduction wind tunnel tests were requisitioned by the committee to obtain a better understanding of the flow characteristics of the designed models. NASA Ames Research Center's Fluid Mechanics Lab performed one of the risk reduction tests. The results of one case, accompanied by CFD simulations, are presented in this paper. Experimental results suggest that the wall-mounted wind tunnel model produces a thicker boundary layer on the fuselage than the CFD predictions, resulting in a larger wing horseshoe vortex that suppresses the side-of-body separation in the juncture flow region. Compared to the experimental results, CFD predicts a thinner boundary layer on the fuselage, which generates a weaker wing horseshoe vortex and results in a larger side-of-body separation.

  10. Stellar model chromospheres. VIII - 70 Ophiuchi A /K0 V/ and Epsilon Eridani /K2 V/

    NASA Technical Reports Server (NTRS)

    Kelch, W. L.

    1978-01-01

    Model atmospheres for the late-type active-chromosphere dwarf stars 70 Oph A and Epsilon Eri are computed from high-resolution Ca II K line profiles as well as Mg II h and k line fluxes. A method is used which determines a plane-parallel homogeneous hydrostatic-equilibrium model of the upper photosphere and chromosphere which differs from theoretical models by lacking the constraint of radiative equilibrium (RE). The determinations of surface gravities, metallicities, and effective temperatures are discussed, and the computational methods, model atoms, atomic data, and observations are described. Temperature distributions for the two stars are plotted and compared with RE models for the adopted effective temperatures and gravities. The previously investigated T min/T eff vs. T eff relation is extended to Epsilon Eri and 70 Oph A, observed and computed Ca II K and Mg II h and k integrated emission fluxes are compared, and full tabulations are given for the proposed models. It is suggested that if less than half the observed Mg II flux for the two stars is lost in noise, the difference between an active-chromosphere star and a quiet-chromosphere star lies in the lower-chromospheric temperature gradient.

  11. A Computational Chemo-Fluidic Modeling for the Investigation of Patient-Specific Left Ventricle Thrombogenesis

    NASA Astrophysics Data System (ADS)

    Mittal, Rajat; Seo, Jung Hee; Abd, Thura; George, Richard T.

    2015-11-01

    Patients recovering from myocardial infarction (MI) are considered at high risk for cardioembolic stroke due to the formation of left ventricle thrombus (LVT). The formation of LVT is the result of a complex interplay between the fluid dynamics inside the ventricle and the chemistry of coagulation, and the role of the LV flow pattern in thrombogenesis is not well understood. A previous computational study performed with model ventricles suggested that the local flow residence time is the key variable governing the accumulation of coagulation factors. In the present study, a coupled chemo-fluidic computational model is applied to patient-specific cases of infarcted ventricles to investigate the interaction between LV hemodynamics and thrombogenesis. In collaboration with the Johns Hopkins Hospital, patient-specific LV models are constructed using multi-modality medical imaging data. Blood flow in the left ventricle is simulated by solving the incompressible Navier-Stokes equations, and the biochemical reactions for the thrombus formation are modeled with convection-diffusion-reaction equations. The formation and deposition of key coagulation chemical factors are then correlated with the hemodynamic flow metrics to explore the biophysics underlying LVT risk. Supported by the Johns Hopkins Medicine Discovery Fund and NSF Grant CBET-1511200; computational resources by XSEDE NSF grant TG-CTS100002.

  12. State-based versus reward-based motivation in younger and older adults.

    PubMed

    Worthy, Darrell A; Cooper, Jessica A; Byrne, Kaileigh A; Gorlick, Marissa A; Maddox, W Todd

    2014-12-01

    Recent decision-making work has focused on a distinction between a habitual, model-free neural system that is motivated toward actions that lead directly to reward and a more computationally demanding goal-directed, model-based system that is motivated toward actions that improve one's future state. In this article, we examine how aging affects motivation toward reward-based versus state-based decision making. Participants performed tasks in which one type of option provided larger immediate rewards but the alternative type of option led to larger rewards on future trials, or improvements in state. We predicted that older adults would show a reduced preference for choices that led to improvements in state and a greater preference for choices that maximized immediate reward. We also predicted that fits from a hybrid reinforcement-learning model would indicate greater model-based strategy use in younger than in older adults. In line with these predictions, older adults selected the options that maximized reward more often than did younger adults in three of the four tasks, and modeling results suggested reduced model-based strategy use. In the task where older adults showed similar behavior to younger adults, our model-fitting results suggested that this was due to the utilization of a win-stay-lose-shift heuristic rather than a more complex model-based strategy. Additionally, within older adults, we found that model-based strategy use was positively correlated with memory measures from our neuropsychological test battery. We suggest that this shift from state-based to reward-based motivation may be due to age related declines in the neural structures needed for more computationally demanding model-based decision making.
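
    Hybrid reinforcement-learning models of the kind fitted here typically combine the two systems' action values through a weighting parameter w (a schematic form, not necessarily the exact model used in the study):

        Q_{\mathrm{hybrid}}(s, a) = w\, Q_{\mathrm{MB}}(s, a) + (1 - w)\, Q_{\mathrm{MF}}(s, a),

    so that fitted values of w closer to one indicate greater reliance on the model-based system and values closer to zero indicate reliance on the model-free system.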

  13. Comparison of Groundwater Level Models Based on Artificial Neural Networks and ANFIS

    PubMed Central

    Djurovic, Nevenka; Domazet, Milka; Stricevic, Ruzica; Pocuca, Vesna; Spalevic, Velibor; Pivic, Radmila; Gregoric, Enika; Domazet, Uros

    2015-01-01

    Water table forecasting plays an important role in the management of groundwater resources in agricultural regions where there are drainage systems in river valleys. The results presented in this paper pertain to an area along the left bank of the Danube River, in the Province of Vojvodina, which is the northern part of Serbia. Two soft computing techniques were used in this research: an adaptive neurofuzzy inference system (ANFIS) and an artificial neural network (ANN) model for one-month water table forecasts at several wells located at different distances from the river. The results suggest that both these techniques represent useful tools for modeling hydrological processes in agriculture, with similar computing and memory capabilities, such that they constitute an exceptionally good numerical framework for generating high-quality models. PMID:26759830

  14. Comparison of Groundwater Level Models Based on Artificial Neural Networks and ANFIS.

    PubMed

    Djurovic, Nevenka; Domazet, Milka; Stricevic, Ruzica; Pocuca, Vesna; Spalevic, Velibor; Pivic, Radmila; Gregoric, Enika; Domazet, Uros

    2015-01-01

    Water table forecasting plays an important role in the management of groundwater resources in agricultural regions where there are drainage systems in river valleys. The results presented in this paper pertain to an area along the left bank of the Danube River, in the Province of Vojvodina, which is the northern part of Serbia. Two soft computing techniques were used in this research: an adaptive neurofuzzy inference system (ANFIS) and an artificial neural network (ANN) model for one-month water table forecasts at several wells located at different distances from the river. The results suggest that both these techniques represent useful tools for modeling hydrological processes in agriculture, with similar computing and memory capabilities, such that they constitute an exceptionally good numerical framework for generating high-quality models.

  15. A computer model for the recombination zone of a microwave-plasma electrothermal rocket

    NASA Technical Reports Server (NTRS)

    Filpus, John W.; Hawley, Martin C.

    1987-01-01

    As part of a study of the microwave-plasma electrothermal rocket, a computer model of the flow regime below the plasma has been developed. A second-order model, including axial dispersion of energy and material and boundary conditions at infinite length, was developed to partially reproduce the absence of mass-flow rate dependence that was seen in experimental temperature profiles. To solve the equations of the model, a search technique was developed to find the initial derivatives. On integrating with a trial set of initial derivatives, the values and their derivatives were checked to judge whether the solution was likely to leave the practical regime and hence violate the boundary conditions at infinity. Results are presented and directions for further development are suggested.
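
    The search described is essentially a shooting method for a two-point boundary-value problem; a generic sketch of that procedure on a toy second-order equation (not the rocket model equations) is given below:

        import numpy as np
        from scipy.integrate import solve_ivp

        # Toy stand-in problem: y'' = y with y(0) = 1 and y -> 0 far downstream.
        # Trial initial derivatives are integrated forward; solutions that diverge
        # would violate the boundary condition "at infinity" and are rejected.
        def rhs(x, state):
            y, dy = state
            return [dy, y]

        def endpoint(slope, x_max=20.0):
            """Integrate with a trial initial derivative and return y at x_max."""
            sol = solve_ivp(rhs, (0.0, x_max), [1.0, slope], max_step=0.1)
            return sol.y[0, -1]

        # Bisection on the initial derivative: slopes that are too high diverge to
        # +infinity, slopes that are too low diverge to -infinity; the bounded,
        # decaying solution lies at the boundary between the two behaviours.
        lo, hi = -2.0, 0.0
        for _ in range(60):
            mid = 0.5 * (lo + hi)
            if endpoint(mid) > 0.0:
                hi = mid      # trial solution heading to +infinity: reduce slope
            else:
                lo = mid      # trial solution heading to -infinity: increase slope
        print(f"estimated initial derivative: {0.5 * (lo + hi):.6f} (exact value: -1)")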

  16. A Preliminary Validation of Attention, Relevance, Confidence and Satisfaction Model-Based Instructional Material Motivational Survey in a Computer-Based Tutorial Setting

    ERIC Educational Resources Information Center

    Huang, Wenhao; Huang, Wenyeh; Diefes-Dux, Heidi; Imbrie, Peter K.

    2006-01-01

    This paper describes a preliminary validation study of the Instructional Material Motivational Survey (IMMS) derived from the Attention, Relevance, Confidence and Satisfaction motivational design model. Previous studies related to the IMMS, however, suggest its practical application for motivational evaluation in various instructional settings…

  17. Core Binding Site of a Thioflavin-T-Derived Imaging Probe on Amyloid β Fibrils Predicted by Computational Methods.

    PubMed

    Kawai, Ryoko; Araki, Mitsugu; Yoshimura, Masashi; Kamiya, Narutoshi; Ono, Masahiro; Saji, Hideo; Okuno, Yasushi

    2018-05-16

    Development of new diagnostic imaging probes for Alzheimer's disease, such as positron emission tomography (PET) and single photon emission computed tomography (SPECT) probes, has been strongly desired. In this study, we investigated the most accessible amyloid β (Aβ) binding site of [123I]IMPY, a Thioflavin-T-derived SPECT probe, using experimental and computational methods. First, we performed a competitive inhibition assay with Orange-G, which recognizes the KLVFFA region in Aβ fibrils; the results suggested that IMPY and Orange-G bind to different sites in Aβ fibrils. Next, we precisely predicted the IMPY binding site on a multiple-protofilament Aβ fibril model using computational approaches, consisting of molecular dynamics and docking simulations. We generated possible IMPY-binding structures using docking simulations to identify candidates for probe-binding sites. The binding free energy of IMPY with the Aβ fibril was calculated by a free energy simulation method, MP-CAFEE. These computational results suggest that IMPY preferentially binds to an interfacial pocket located between two protofilaments and is stabilized mainly through hydrophobic interactions. Finally, our computational approach was validated by comparing it with the experimental results. The present study demonstrates the possibility of using computational approaches to screen new PET/SPECT probes for Aβ imaging.

  18. Does Cation Size Affect Occupancy and Electrostatic Screening of the Nucleic Acid Ion Atmosphere?

    PubMed Central

    2016-01-01

    Electrostatics are central to all aspects of nucleic acid behavior, including their folding, condensation, and binding to other molecules, and the energetics of these processes are profoundly influenced by the ion atmosphere that surrounds nucleic acids. Given the highly complex and dynamic nature of the ion atmosphere, understanding its properties and effects will require synergy between computational modeling and experiment. Prior computational models and experiments suggest that cation occupancy in the ion atmosphere depends on the size of the cation. However, the computational models have not been independently tested, and the experimentally observed effects were small. Here, we evaluate a computational model of ion size effects by experimentally testing a blind prediction made from that model, and we present additional experimental results that extend our understanding of the ion atmosphere. Giambasu et al. developed and implemented a three-dimensional reference interaction site (3D-RISM) model for monovalent cations surrounding DNA and RNA helices, and this model predicts that Na+ would outcompete Cs+ by 1.8–2.1-fold; i.e., with Cs+ in 2-fold excess of Na+ the ion atmosphere would contain an equal number of each cation (Nucleic Acids Res.2015, 43, 8405). However, our ion counting experiments indicate that there is no significant preference for Na+ over Cs+. There is an ∼25% preferential occupancy of Li+ over larger cations in the ion atmosphere but, counter to general expectations from existing models, no size dependence for the other alkali metal ions. Further, we followed the folding of the P4–P6 RNA and showed that differences in folding with different alkali metal ions observed at high concentration arise from cation–anion interactions and not cation size effects. Overall, our results provide a critical test of a computational prediction, fundamental information about ion atmosphere properties, and parameters that will aid in the development of next-generation nucleic acid computational models. PMID:27479701

  19. Understanding Lymphatic Valve Function via Computational Modeling

    NASA Astrophysics Data System (ADS)

    Wolf, Ki; Nepiyushchikh, Zhanna; Razavi, Mohammad; Dixon, Brandon; Alexeev, Alexander

    2017-11-01

    The lymphatic system is a crucial part of the circulatory system with many important functions, such as the transport of interstitial fluid, fatty acids, and immune cells. Lymphatic vessels' contractile walls and valves allow lymph flow against adverse pressure gradients and prevent backflow. Yet, the effect of lymphatic valves' geometric and mechanical properties on pumping performance and on lymphatic dysfunctions like lymphedema is not well understood. Our coupled fluid-solid computational model, based on a lattice Boltzmann model and a lattice spring model, investigates the dynamics and effectiveness of lymphatic valves in resistance minimization, backflow prevention, and viscoelastic response under different geometric and mechanical properties, suggesting the range of lymphatic valve parameters that yields effective pumping performance. Our model also provides more physiologically relevant relations for the valve response under varied conditions to a lumped-parameter model of the lymphatic system, giving integrative insight into lymphatic system performance, including its failure due to disease. NSF CMMI-1635133.

  20. A computational model of spatial visualization capacity.

    PubMed

    Lyon, Don R; Gunzelmann, Glenn; Gluck, Kevin A

    2008-09-01

    Visualizing spatial material is a cornerstone of human problem solving, but human visualization capacity is sharply limited. To investigate the sources of this limit, we developed a new task to measure visualization accuracy for verbally-described spatial paths (similar to street directions), and implemented a computational process model to perform it. In this model, developed within the Adaptive Control of Thought-Rational (ACT-R) architecture, visualization capacity is limited by three mechanisms. Two of these (associative interference and decay) are longstanding characteristics of ACT-R's declarative memory. A third (spatial interference) is a new mechanism motivated by spatial proximity effects in our data. We tested the model in two experiments, one with parameter-value fitting, and a replication without further fitting. Correspondence between model and data was close in both experiments, suggesting that the model may be useful for understanding why visualizing new, complex spatial material is so difficult.

  1. Model selection and parameter estimation in structural dynamics using approximate Bayesian computation

    NASA Astrophysics Data System (ADS)

    Ben Abdessalem, Anis; Dervilis, Nikolaos; Wagg, David; Worden, Keith

    2018-01-01

    This paper introduces the use of the approximate Bayesian computation (ABC) algorithm for model selection and parameter estimation in structural dynamics. ABC is a likelihood-free method typically used when the likelihood function is either intractable or cannot be expressed in closed form. To circumvent the evaluation of the likelihood function, simulation from a forward model is at the core of the ABC algorithm. The algorithm offers the possibility of using different metrics and summary statistics representative of the data to carry out Bayesian inference. The efficacy of the algorithm in structural dynamics is demonstrated through three different illustrative examples of nonlinear system identification: cubic and cubic-quintic models, the Bouc-Wen model and the Duffing oscillator. The obtained results suggest that ABC is a promising alternative to deal with model selection and parameter estimation issues, specifically for systems with complex behaviours.
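
    A minimal ABC rejection sketch conveys the core of the algorithm described here (the paper applies more elaborate summary statistics, distance metrics, and nonlinear structural models; the oscillator, prior, and tolerance below are purely illustrative):

        import numpy as np

        rng = np.random.default_rng(1)

        # Toy forward model: free decay of a linear oscillator with unknown damping
        # ratio zeta; the summary statistic is the vector of successive peak amplitudes.
        def forward(zeta, n_peaks=8):
            k = np.arange(n_peaks)
            return np.exp(-2.0 * np.pi * zeta * k / np.sqrt(1.0 - zeta ** 2))

        zeta_true = 0.05
        data = forward(zeta_true) + 0.01 * rng.standard_normal(8)   # noisy "measurement"

        # ABC rejection: sample from the prior, run the forward model, and keep the
        # samples whose simulated summary lies within a tolerance of the observed one.
        def abc_rejection(n_samples=100_000, tol=0.05):
            accepted = []
            for _ in range(n_samples):
                zeta = rng.uniform(0.001, 0.2)        # prior over the parameter
                if np.linalg.norm(forward(zeta) - data) < tol:
                    accepted.append(zeta)
            return np.array(accepted)

        posterior = abc_rejection()
        print(f"posterior mean zeta: {posterior.mean():.4f} (true value: {zeta_true})")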

  2. Cost-Benefit Arbitration Between Multiple Reinforcement-Learning Systems.

    PubMed

    Kool, Wouter; Gershman, Samuel J; Cushman, Fiery A

    2017-09-01

    Human behavior is sometimes determined by habit and other times by goal-directed planning. Modern reinforcement-learning theories formalize this distinction as a competition between a computationally cheap but inaccurate model-free system that gives rise to habits and a computationally expensive but accurate model-based system that implements planning. It is unclear, however, how people choose to allocate control between these systems. Here, we propose that arbitration occurs by comparing each system's task-specific costs and benefits. To investigate this proposal, we conducted two experiments showing that people increase model-based control when it achieves greater accuracy than model-free control, and especially when the rewards of accurate performance are amplified. In contrast, they are insensitive to reward amplification when model-based and model-free control yield equivalent accuracy. This suggests that humans adaptively balance habitual and planned action through on-line cost-benefit analysis.

  3. The effectiveness of element downsizing on a three-dimensional finite element model of bone trabeculae in implant biomechanics.

    PubMed

    Sato, Y; Wadamoto, M; Tsuga, K; Teixeira, E R

    1999-04-01

    Greater validity of finite element analysis in implant biomechanics requires element downsizing. However, excessive downsizing demands more computer memory and calculation time. To investigate the effectiveness of element downsizing in the construction of a three-dimensional finite element bone trabeculae model, models with different element sizes (600, 300, 150 and 75 μm) were constructed and the stress induced by vertical 10 N loading was analysed. The difference in von Mises stress values between the models with 600 and 300 μm element sizes was larger than that between 300 and 150 μm. On the other hand, no clear difference in stress values was detected among the models with 300, 150 and 75 μm element sizes. Downsizing of elements from 600 to 300 μm is therefore suggested to be effective in the construction of a three-dimensional finite element bone trabeculae model, offering a possible saving of computer memory and calculation time in the laboratory.

  4. Modelling intelligent behavior

    NASA Technical Reports Server (NTRS)

    Green, H. S.; Triffet, T.

    1993-01-01

    An introductory discussion of the related concepts of intelligence and consciousness suggests criteria to be met in the modeling of intelligence and the development of intelligent materials. Methods for the modeling of actual structure and activity of the animal cortex have been found, based on present knowledge of the ionic and cellular constitution of the nervous system. These have led to the development of a realistic neural network model, which has been used to study the formation of memory and the process of learning. An account is given of experiments with simple materials which exhibit almost all properties of biological synapses and suggest the possibility of a new type of computer architecture to implement an advanced type of artificial intelligence.

  5. Deep Supervised, but Not Unsupervised, Models May Explain IT Cortical Representation

    PubMed Central

    Khaligh-Razavi, Seyed-Mahdi; Kriegeskorte, Nikolaus

    2014-01-01

    Inferior temporal (IT) cortex in human and nonhuman primates serves visual object recognition. Computational object-vision models, although continually improving, do not yet reach human performance. It is unclear to what extent the internal representations of computational models can explain the IT representation. Here we investigate a wide range of computational model representations (37 in total), testing their categorization performance and their ability to account for the IT representational geometry. The models include well-known neuroscientific object-recognition models (e.g. HMAX, VisNet) along with several models from computer vision (e.g. SIFT, GIST, self-similarity features, and a deep convolutional neural network). We compared the representational dissimilarity matrices (RDMs) of the model representations with the RDMs obtained from human IT (measured with fMRI) and monkey IT (measured with cell recording) for the same set of stimuli (not used in training the models). Better performing models were more similar to IT in that they showed greater clustering of representational patterns by category. In addition, better performing models also more strongly resembled IT in terms of their within-category representational dissimilarities. Representational geometries were significantly correlated between IT and many of the models. However, the categorical clustering observed in IT was largely unexplained by the unsupervised models. The deep convolutional network, which was trained by supervision with over a million category-labeled images, reached the highest categorization performance and also best explained IT, although it did not fully explain the IT data. Combining the features of this model with appropriate weights and adding linear combinations that maximize the margin between animate and inanimate objects and between faces and other objects yielded a representation that fully explained our IT data. Overall, our results suggest that explaining IT requires computational features trained through supervised learning to emphasize the behaviorally important categorical divisions prominently reflected in IT. PMID:25375136
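
    The representational-similarity step summarized above amounts to computing a representational dissimilarity matrix (RDM) for each representation and rank-correlating their entries; a minimal sketch with random stand-in data (not the authors' stimuli or pipeline) is:

        import numpy as np
        from scipy.spatial.distance import pdist
        from scipy.stats import spearmanr

        rng = np.random.default_rng(0)

        # Stand-in response patterns: n_stimuli x n_units activity for a model layer
        # and for an "IT" measurement (random data, purely to show the mechanics).
        n_stimuli = 40
        model_resp = rng.standard_normal((n_stimuli, 100))
        it_resp = model_resp[:, :50] + 0.5 * rng.standard_normal((n_stimuli, 50))

        # An RDM holds the pairwise dissimilarity (here 1 - Pearson r) between the
        # response patterns evoked by each pair of stimuli; pdist returns it in
        # condensed (vectorized upper-triangular) form.
        model_rdm = pdist(model_resp, metric="correlation")
        it_rdm = pdist(it_resp, metric="correlation")

        # Representational geometries are compared by rank-correlating the two RDMs.
        rho, _ = spearmanr(model_rdm, it_rdm)
        print(f"model-IT RDM Spearman correlation: {rho:.3f}")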

  6. Differences in simulated fire spread over Askervein Hill using two advanced wind models and a traditional uniform wind field

    Treesearch

    Jason Forthofer; Bret Butler

    2007-01-01

    A computational fluid dynamics (CFD) model and a mass-consistent model were used to simulate winds, and their effect on simulated fire spread, over a simple, low hill. The results suggest that the CFD wind field could significantly change simulated fire spread compared to traditional uniform winds. The CFD fire spread case may match reality better because the winds used in the fire...

  7. Processing of Visual Imagery by an Adaptive Model of the Visual System: Its Performance and its Significance. Final Report, June 1969-March 1970.

    ERIC Educational Resources Information Center

    Tallman, Oliver H.

    A digital simulation of a model for the processing of visual images is derived from known aspects of the human visual system. The fundamental principle of computation suggested by a biological model is a transformation that distributes information contained in an input stimulus everywhere in a transform domain. Each sensory input contributes under…

  8. Solving difficult problems creatively: a role for energy optimised deterministic/stochastic hybrid computing

    PubMed Central

    Palmer, Tim N.; O’Shea, Michael

    2015-01-01

    How is the brain configured for creativity? What is the computational substrate for ‘eureka’ moments of insight? Here we argue that creative thinking arises ultimately from a synergy between low-energy stochastic and energy-intensive deterministic processing, and is a by-product of a nervous system whose signal-processing capability per unit of available energy has become highly energy optimised. We suggest that the stochastic component has its origin in thermal (ultimately quantum decoherent) noise affecting the activity of neurons. Without this component, deterministic computational models of the brain are incomplete. PMID:26528173

  9. Performance of the Widely-Used CFD Code OVERFLOW on the Pleiades Supercomputer

    NASA Technical Reports Server (NTRS)

    Guruswamy, Guru P.

    2017-01-01

    Computational performance studies were made for NASA's widely used Computational Fluid Dynamics code OVERFLOW on the Pleiades Supercomputer. Two test cases were considered: a full launch vehicle with a grid of 286 million points and a full rotorcraft model with a grid of 614 million points. Computations using up to 8000 cores were run on Sandy Bridge and Ivy Bridge nodes. Performance was monitored using times reported in the day files from the Portable Batch System utility. Results for two grid topologies are presented and compared in detail. Observations and suggestions for future work are made.

  10. Progress towards an effective model for FeSe from high-accuracy first-principles quantum Monte Carlo

    NASA Astrophysics Data System (ADS)

    Busemeyer, Brian; Wagner, Lucas K.

    While the origin of superconductivity in the iron-based materials is still controversial, the proximity of the superconductivity to magnetic order suggests that magnetism may be important. Our previous work has suggested that first-principles fixed-node diffusion Monte Carlo (FN-DMC) can capture magnetic properties of iron-based superconductors that density functional theory (DFT) misses, but which are consistent with experiment. We report on the progress of efforts to find simple effective models consistent with the FN-DMC description of the low-lying Hilbert space of the iron-based superconductor, FeSe. We utilize a procedure outlined by Changlani et al.[1], which both produces parameter values and indications of whether the model is a good description of the first-principles Hamiltonian. Using this procedure, we evaluate several models of the magnetic part of the Hilbert space found in the literature, as well as the Hubbard model, and a spin-fermion model. We discuss which interaction parameters are important for this material, and how the material-specific properties give rise to these interactions. Supported by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research, Scientific Discovery through Advanced Computing (SciDAC) program under Award No. FG02-12ER46875, as well as by the NSF Graduate Research Fellowship Program.

  11. Expert models and modeling processes associated with a computer-modeling tool

    NASA Astrophysics Data System (ADS)

    Zhang, Baohui; Liu, Xiufeng; Krajcik, Joseph S.

    2006-07-01

    Holding the premise that the development of expertise is a continuous process, this study concerns expert models and modeling processes associated with a modeling tool called Model-It. Five advanced Ph.D. students in environmental engineering and public health used Model-It to create and test models of water quality. Using think aloud technique and video recording, we captured their computer screen modeling activities and thinking processes. We also interviewed them the day following their modeling sessions to further probe the rationale of their modeling practices. We analyzed both the audio-video transcripts and the experts' models. We found the experts' modeling processes followed the linear sequence built in the modeling program with few instances of moving back and forth. They specified their goals up front and spent a long time thinking through an entire model before acting. They specified relationships with accurate and convincing evidence. Factors (i.e., variables) in expert models were clustered, and represented by specialized technical terms. Based on the above findings, we made suggestions for improving model-based science teaching and learning using Model-It.

  12. Emotions are emergent processes: they require a dynamic computational architecture

    PubMed Central

    Scherer, Klaus R.

    2009-01-01

    Emotion is a cultural and psychobiological adaptation mechanism which allows each individual to react flexibly and dynamically to environmental contingencies. From this claim flows a description of the elements theoretically needed to construct a virtual agent with the ability to display human-like emotions and to respond appropriately to human emotional expression. This article offers a brief survey of the desirable features of emotion theories that make them ideal blueprints for agent models. In particular, the component process model of emotion is described, a theory which postulates emotion-antecedent appraisal on different levels of processing that drive response system patterning predictions. In conclusion, investing seriously in emergent computational modelling of emotion using a nonlinear dynamic systems approach is suggested. PMID:19884141

  13. Clinical Pilot Study and Computational Modeling of Bitemporal Transcranial Direct Current Stimulation, and Safety of Repeated Courses of Treatment, in Major Depression.

    PubMed

    Ho, Kerrie-Anne; Bai, Siwei; Martin, Donel; Alonzo, Angelo; Dokos, Socrates; Loo, Colleen K

    2015-12-01

    This study aimed to examine a bitemporal (BT) transcranial direct current stimulation (tDCS) electrode montage for the treatment of depression through a clinical pilot study and computational modeling. The safety of repeated courses of stimulation was also examined. Four participants with depression who had previously received multiple courses of tDCS received a 4-week course of BT tDCS. Mood and neuropsychological function were assessed. The results were compared with previous courses of tDCS given to the same participants using different electrode montages. Computational modeling examined the electric field maps produced by the different montages. Three participants showed clinical improvement with BT tDCS (mean [SD] improvement, 49.6% [33.7%]). There were no adverse neuropsychological effects. Computational modeling showed that the BT montage activates the anterior cingulate cortices and brainstem, which are deep brain regions that are important for depression. However, a fronto-extracephalic montage stimulated these areas more effectively. No adverse effects were found in participants receiving up to 6 courses of tDCS. Bitemporal tDCS was safe and led to clinically meaningful efficacy in 3 of 4 participants. However, computational modeling suggests that the BT montage may not activate key brain regions in depression more effectively than another novel montage--fronto-extracephalic tDCS. There is also preliminary evidence to support the safety of up to 6 repeated courses of tDCS.

  14. The Use of Scale-Dependent Precision to Increase Forecast Accuracy in Earth System Modelling

    NASA Astrophysics Data System (ADS)

    Thornes, Tobias; Duben, Peter; Palmer, Tim

    2016-04-01

    At the current pace of development, it may be decades before the 'exa-scale' computers needed to resolve individual convective clouds in weather and climate models become available to forecasters, and such machines will incur very high power demands. But the resolution could be improved today by switching to more efficient, 'inexact' hardware with which variables can be represented in 'reduced precision'. Currently, all numbers in our models are represented as double-precision floating points - each requiring 64 bits of memory - to minimise rounding errors, regardless of spatial scale. Yet observational and modelling constraints mean that values of atmospheric variables are inevitably known less precisely on smaller scales, suggesting that this may be a waste of computer resources. More accurate forecasts might therefore be obtained by taking a scale-selective approach whereby the precision of variables is gradually decreased at smaller spatial scales to optimise the overall efficiency of the model. To study the effect of reducing precision to different levels on multiple spatial scales, we here introduce a new model atmosphere developed by extending the Lorenz '96 idealised system to encompass three tiers of variables - which represent large-, medium- and small-scale features - for the first time. In this chaotic but computationally tractable system, the 'true' state can be defined by explicitly resolving all three tiers. The abilities of low resolution (single-tier) double-precision models and similar-cost high resolution (two-tier) models in mixed-precision to produce accurate forecasts of this 'truth' are compared. The high resolution models outperform the low resolution ones even when small-scale variables are resolved in half-precision (16 bits). This suggests that using scale-dependent levels of precision in more complicated real-world Earth System models could allow forecasts to be made at higher resolution and with improved accuracy. If adopted, this new paradigm would represent a revolution in numerical modelling that could be of great benefit to the world.
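
    A minimal sketch of the idea, assuming a standard two-scale Lorenz '96 system rather than the three-tier extension described above: the small-scale variables are rounded to half precision (16 bits) after every step while the large scales stay in double precision. All parameter values are illustrative.

    ```python
    import numpy as np

    # Two-scale Lorenz '96 toy system: K large-scale variables X coupled to
    # J*K small-scale variables Y. Small-scale states are truncated to half
    # precision (float16) to mimic scale-dependent reduced precision.
    K, J, F = 8, 32, 20.0
    h, b, c = 1.0, 10.0, 10.0

    def tendencies(X, Y):
        dX = (np.roll(X, 1) * (np.roll(X, -1) - np.roll(X, 2))
              - X + F - (h * c / b) * Y.reshape(K, J).sum(axis=1))
        dY = (c * b * np.roll(Y, -1) * (np.roll(Y, 1) - np.roll(Y, -2))
              - c * Y + (h * c / b) * np.repeat(X, J))
        return dX, dY

    rng = np.random.default_rng(1)
    X, Y = rng.normal(size=K), 0.1 * rng.normal(size=K * J)
    dt = 0.001
    for _ in range(5000):                    # simple forward-Euler integration
        dX, dY = tendencies(X, Y)
        X = X + dt * dX                                           # large scales in float64
        Y = (Y + dt * dY).astype(np.float16).astype(np.float64)   # small scales in half precision
    print(X)
    ```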

  15. Hierarchical Bayesian spatial models for predicting multiple forest variables using waveform LiDAR, hyperspectral imagery, and large inventory datasets

    USGS Publications Warehouse

    Finley, Andrew O.; Banerjee, Sudipto; Cook, Bruce D.; Bradford, John B.

    2013-01-01

    In this paper we detail a multivariate spatial regression model that couples LiDAR, hyperspectral and forest inventory data to predict forest outcome variables at a high spatial resolution. The proposed model is used to analyze forest inventory data collected on the US Forest Service Penobscot Experimental Forest (PEF), ME, USA. In addition to helping meet the regression model's assumptions, results from the PEF analysis suggest that the addition of multivariate spatial random effects improves model fit and predictive ability, compared with two commonly applied modeling approaches. This improvement results from explicitly modeling the covariation among forest outcome variables and spatial dependence among observations through the random effects. Direct application of such multivariate models to even moderately large datasets is often computationally infeasible because of cubic order matrix algorithms involved in estimation. We apply a spatial dimension reduction technique to help overcome this computational hurdle without sacrificing richness in modeling.

  16. On entanglement spreading in chaotic systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mezei, Márk; Stanford, Douglas

    We discuss the time dependence of subsystem entropies in interacting quantum systems. As a model for the time dependence, we suggest that the entropy is as large as possible given two constraints: one follows from the existence of an emergent light cone, and the other is a conjecture associated to the 'entanglement velocity' v_E. We compare this model to new holographic and spin chain computations, and to an operator growth picture. Finally, we introduce a second way of computing the emergent light cone speed in holographic theories that provides a boundary dynamics explanation for a special case of entanglement wedge subregion duality in AdS/CFT.

  17. Large-scale scour of the sea floor and the effect of natural armouring processes, land reclamation Maasvlakte 2, port of Rotterdam

    USGS Publications Warehouse

    Boer, S.; Elias, E.; Aarninkhof, S.; Roelvink, D.; Vellinga, T.

    2007-01-01

    Morphological model computations based on uniform (non-graded) sediment revealed an unrealistically strong scour of the sea floor in the immediate vicinity to the west of Maasvlakte 2. By means of a state-of-the-art graded sediment transport model the effect of natural armouring and sorting of bed material on the scour process has been examined. Sensitivity computations confirm that the development of the scour hole is strongly reduced due to the incorporation of armouring processes, suggesting an approximately 30% decrease in terms of erosion area below the -20m depth contour. © 2007 ASCE.

  18. On entanglement spreading in chaotic systems

    DOE PAGES

    Mezei, Márk; Stanford, Douglas

    2017-05-11

    We discuss the time dependence of subsystem entropies in interacting quantum systems. As a model for the time dependence, we suggest that the entropy is as large as possible given two constraints: one follows from the existence of an emergent light cone, and the other is a conjecture associated to the 'entanglement velocity' v_E. We compare this model to new holographic and spin chain computations, and to an operator growth picture. Finally, we introduce a second way of computing the emergent light cone speed in holographic theories that provides a boundary dynamics explanation for a special case of entanglement wedge subregion duality in AdS/CFT.

  19. Homogenized modeling methodology for 18650 lithium-ion battery module under large deformation

    PubMed Central

    Tang, Liang; Cheng, Pengle

    2017-01-01

    Effective lithium-ion battery module modeling has become a bottleneck for full-size electric vehicle crash safety numerical simulation. Modeling every single cell in detail would be costly. However, computational accuracy could be lost if the module is modeled by using a simple bulk material or rigid body. To solve this critical engineering problem, a general method to establish a computational homogenized model for the cylindrical battery module is proposed. A single battery cell model is developed and validated through radial compression and bending experiments. To analyze the homogenized mechanical properties of the module, a representative unit cell (RUC) is extracted with the periodic boundary condition applied on it. An elastic–plastic constitutive model is established to describe the computational homogenized model for the module. Two typical packing modes, i.e., cubic dense packing and hexagonal packing for the homogenized equivalent battery module (EBM) model, are targeted for validation compression tests, as well as the models with detailed single cell description. Further, the homogenized EBM model is confirmed to agree reasonably well with the detailed battery module (DBM) model for different packing modes with a length scale of up to 15 × 15 cells and 12% deformation where the short circuit takes place. The suggested homogenized model for battery module makes way for battery module and pack safety evaluation for full-size electric vehicle crashworthiness analysis. PMID:28746390

  20. Homogenized modeling methodology for 18650 lithium-ion battery module under large deformation.

    PubMed

    Tang, Liang; Zhang, Jinjie; Cheng, Pengle

    2017-01-01

    Effective lithium-ion battery module modeling has become a bottleneck for full-size electric vehicle crash safety numerical simulation. Modeling every single cell in detail would be costly. However, computational accuracy could be lost if the module is modeled by using a simple bulk material or rigid body. To solve this critical engineering problem, a general method to establish a computational homogenized model for the cylindrical battery module is proposed. A single battery cell model is developed and validated through radial compression and bending experiments. To analyze the homogenized mechanical properties of the module, a representative unit cell (RUC) is extracted with the periodic boundary condition applied on it. An elastic-plastic constitutive model is established to describe the computational homogenized model for the module. Two typical packing modes, i.e., cubic dense packing and hexagonal packing for the homogenized equivalent battery module (EBM) model, are targeted for validation compression tests, as well as the models with detailed single cell description. Further, the homogenized EBM model is confirmed to agree reasonably well with the detailed battery module (DBM) model for different packing modes with a length scale of up to 15 × 15 cells and 12% deformation where the short circuit takes place. The suggested homogenized model for battery module makes way for battery module and pack safety evaluation for full-size electric vehicle crashworthiness analysis.

  1. Evaluating Computer Screen Time and Its Possible Link to Psychopathology in the Context of Age: A Cross-Sectional Study of Parents and Children

    PubMed Central

    Ross, Sharon; Silman, Zmira; Maoz, Hagai; Bloch, Yuval

    2015-01-01

    Background Several studies have suggested that high levels of computer use are linked to psychopathology. However, there is ambiguity about what should be considered normal or over-use of computers. Furthermore, the nature of the link between computer usage and psychopathology is controversial. The current study utilized the context of age to address these questions. Our hypothesis was that the context of age will be paramount for differentiating normal from excessive use, and that this context will allow a better understanding of the link to psychopathology. Methods In a cross-sectional study, 185 parents and children aged 3–18 years were recruited in clinical and community settings. They were asked to fill out questionnaires regarding demographics, functional and academic variables, computer use as well as psychiatric screening questionnaires. Using a regression model, we identified 3 groups of normal-use, over-use and under-use and examined known factors as putative differentiators between the over-users and the other groups. Results After modeling computer screen time according to age, factors linked to over-use were: decreased socialization (OR 3.24, Confidence interval [CI] 1.23–8.55, p = 0.018), difficulty to disengage from the computer (OR 1.56, CI 1.07–2.28, p = 0.022) and age, though borderline-significant (OR 1.1 each year, CI 0.99–1.22, p = 0.058). While psychopathology was not linked to over-use, post-hoc analysis revealed that the link between increased computer screen time and psychopathology was age-dependent and solidified as age progressed (p = 0.007). Unlike computer usage, the use of small-screens and smartphones was not associated with psychopathology. Conclusions The results suggest that computer screen time follows an age-based course. We conclude that differentiating normal from over-use as well as defining over-use as a possible marker for psychiatric difficulties must be performed within the context of age. If verified by additional studies, future research should integrate those views in order to better understand the intricacies of computer over-use. PMID:26536037

  2. Evaluating Computer Screen Time and Its Possible Link to Psychopathology in the Context of Age: A Cross-Sectional Study of Parents and Children.

    PubMed

    Segev, Aviv; Mimouni-Bloch, Aviva; Ross, Sharon; Silman, Zmira; Maoz, Hagai; Bloch, Yuval

    2015-01-01

    Several studies have suggested that high levels of computer use are linked to psychopathology. However, there is ambiguity about what should be considered normal or over-use of computers. Furthermore, the nature of the link between computer usage and psychopathology is controversial. The current study utilized the context of age to address these questions. Our hypothesis was that the context of age will be paramount for differentiating normal from excessive use, and that this context will allow a better understanding of the link to psychopathology. In a cross-sectional study, 185 parents and children aged 3-18 years were recruited in clinical and community settings. They were asked to fill out questionnaires regarding demographics, functional and academic variables, computer use as well as psychiatric screening questionnaires. Using a regression model, we identified 3 groups of normal-use, over-use and under-use and examined known factors as putative differentiators between the over-users and the other groups. After modeling computer screen time according to age, factors linked to over-use were: decreased socialization (OR 3.24, Confidence interval [CI] 1.23-8.55, p = 0.018), difficulty to disengage from the computer (OR 1.56, CI 1.07-2.28, p = 0.022) and age, though borderline-significant (OR 1.1 each year, CI 0.99-1.22, p = 0.058). While psychopathology was not linked to over-use, post-hoc analysis revealed that the link between increased computer screen time and psychopathology was age-dependent and solidified as age progressed (p = 0.007). Unlike computer usage, the use of small-screens and smartphones was not associated with psychopathology. The results suggest that computer screen time follows an age-based course. We conclude that differentiating normal from over-use as well as defining over-use as a possible marker for psychiatric difficulties must be performed within the context of age. If verified by additional studies, future research should integrate those views in order to better understand the intricacies of computer over-use.
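
    For readers who want to reproduce the style of analysis (not the study data), a hypothetical sketch of a logistic regression on simulated records, reporting odds ratios and confidence intervals as in the abstract; variable names and effect sizes are invented.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    # Hypothetical data: one row per child, with over-use (0/1) defined relative
    # to the age-modeled screen-time norm, as in the study design.
    rng = np.random.default_rng(2)
    n = 185
    df = pd.DataFrame({
        "age": rng.uniform(3, 18, n),
        "socialization_decreased": rng.integers(0, 2, n),
        "difficulty_disengaging": rng.integers(1, 6, n),   # 1-5 rating
    })
    logit = (-3 + 1.2 * df["socialization_decreased"]
             + 0.4 * df["difficulty_disengaging"] + 0.1 * df["age"])
    df["over_use"] = rng.random(n) < 1 / (1 + np.exp(-logit))

    X = sm.add_constant(df[["age", "socialization_decreased", "difficulty_disengaging"]])
    fit = sm.Logit(df["over_use"].astype(int), X).fit(disp=False)
    odds_ratios = np.exp(fit.params)         # e.g. OR per year of age, per rating point
    ci = np.exp(fit.conf_int())              # 95% confidence intervals on the OR scale
    print(pd.concat([odds_ratios.rename("OR"),
                     ci.rename(columns={0: "2.5%", 1: "97.5%"})], axis=1))
    ```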

  3. The Complexity of Biomechanics Causing Primary Blast-Induced Traumatic Brain Injury: A Review of Potential Mechanisms

    PubMed Central

    Courtney, Amy; Courtney, Michael

    2015-01-01

    Primary blast-induced traumatic brain injury (bTBI) is a prevalent battlefield injury in recent conflicts, yet biomechanical mechanisms of bTBI remain unclear. Elucidating specific biomechanical mechanisms is essential to developing animal models for testing candidate therapies and for improving protective equipment. Three hypothetical mechanisms of primary bTBI have received the most attention. Because translational and rotational head accelerations are primary contributors to TBI from non-penetrating blunt force head trauma, the acceleration hypothesis suggests that blast-induced head accelerations may cause bTBI. The hypothesis of direct cranial transmission suggests that a pressure transient traverses the skull into the brain and directly injures brain tissue. The thoracic hypothesis of bTBI suggests that some combination of a pressure transient reaching the brain via the thorax and a vagally mediated reflex result in bTBI. These three mechanisms may not be mutually exclusive, and quantifying exposure thresholds (for blasts of a given duration) is essential for determining which mechanisms may be contributing for a level of blast exposure. Progress has been hindered by experimental designs, which do not effectively expose animal models to a single mechanism and by over-reliance on poorly validated computational models. The path forward should be predictive validation of computational models by quantitative confirmation with blast experiments in animal models, human cadavers, and biofidelic human surrogates over a range of relevant blast magnitudes and durations coupled with experimental designs, which isolate a single injury mechanism. PMID:26539158

  4. Upper extremity pain and computer use among engineering graduate students.

    PubMed

    Schlossberg, Eric B; Morrow, Sandra; Llosa, Augusto E; Mamary, Edward; Dietrich, Peter; Rempel, David M

    2004-09-01

    The objective of this study was to investigate risk factors associated with persistent or recurrent upper extremity and neck pain among engineering graduate students. A random sample of 206 Electrical Engineering and Computer Science (EECS) graduate students at a large public university completed an online questionnaire. Approximately 60% of respondents reported upper extremity or neck pain attributed to computer use and reported a mean pain severity score of 4.5 (+/-2.2; scale 0-10). In a final logistic regression model, female gender, years of computer use, and hours of computer use per week were significantly associated with pain. The high prevalence of upper extremity pain reported by graduate students suggests a public health need to identify interventions that will reduce symptom severity and prevent impairment.

  5. A computable expression of closure to efficient causation.

    PubMed

    Mossio, Matteo; Longo, Giuseppe; Stewart, John

    2009-04-07

    In this paper, we propose a mathematical expression of closure to efficient causation in terms of lambda-calculus; we argue that this opens up the perspective of developing principled computer simulations of systems closed to efficient causation in an appropriate programming language. An important implication of our formulation is that, by exhibiting an expression in lambda-calculus, which is a paradigmatic formalism for computability and programming, we show that there are no conceptual or principled problems in realizing a computer simulation or model of closure to efficient causation. We conclude with a brief discussion of the question whether closure to efficient causation captures all relevant properties of living systems. We suggest that it might not be the case, and that more complex definitions could indeed create some crucial obstacles to computability.
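
    As a loose illustration only (the paper's actual lambda-calculus expression is not reproduced here), the snippet below shows how a circular, self-referential definition can nonetheless be expressed and executed using a strict-language fixed-point (Z) combinator, the general mechanism that makes such "closed" definitions computable.

    ```python
    # Strict-language fixed-point (Z) combinator, written purely as lambda terms.
    Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

    # A self-referential definition expressed only via the combinator:
    fact = Z(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))
    print(fact(5))  # 120
    ```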

  6. CMOL/CMOS hardware architectures and performance/price for Bayesian memory - The building block of intelligent systems

    NASA Astrophysics Data System (ADS)

    Zaveri, Mazad Shaheriar

    The semiconductor/computer industry has been following Moore's law for several decades and has reaped the benefits in speed and density of the resultant scaling. Transistor density has reached almost one billion per chip, and transistor delays are in picoseconds. However, scaling has slowed down, and the semiconductor industry is now facing several challenges. Hybrid CMOS/nano technologies, such as CMOL, are considered as an interim solution to some of the challenges. Another potential architectural solution includes specialized architectures for applications/models in the intelligent computing domain, one aspect of which includes abstract computational models inspired from the neuro/cognitive sciences. Consequently in this dissertation, we focus on the hardware implementations of Bayesian Memory (BM), which is a (Bayesian) Biologically Inspired Computational Model (BICM). This model is a simplified version of George and Hawkins' model of the visual cortex, which includes an inference framework based on Judea Pearl's belief propagation. We then present a "hardware design space exploration" methodology for implementing and analyzing the (digital and mixed-signal) hardware for the BM. This particular methodology involves: analyzing the computational/operational cost and the related micro-architecture, exploring candidate hardware components, proposing various custom hardware architectures using both traditional CMOS and hybrid nanotechnology - CMOL, and investigating the baseline performance/price of these architectures. The results suggest that CMOL is a promising candidate for implementing a BM. Such implementations can utilize the very high density storage/computation benefits of these new nano-scale technologies much more efficiently; for example, the throughput per 858 mm2 (TPM) obtained for CMOL based architectures is 32 to 40 times better than the TPM for a CMOS based multiprocessor/multi-FPGA system, and almost 2000 times better than the TPM for a PC implementation. We later use this methodology to investigate the hardware implementations of cortex-scale spiking neural system, which is an approximate neural equivalent of BICM based cortex-scale system. The results of this investigation also suggest that CMOL is a promising candidate to implement such large-scale neuromorphic systems. In general, the assessment of such hypothetical baseline hardware architectures provides the prospects for building large-scale (mammalian cortex-scale) implementations of neuromorphic/Bayesian/intelligent systems using state-of-the-art and beyond state-of-the-art silicon structures.

  7. Modeling, Monitoring and Fault Diagnosis of Spacecraft Air Contaminants

    NASA Technical Reports Server (NTRS)

    Ramirez, W. Fred; Skliar, Mikhail; Narayan, Anand; Morgenthaler, George W.; Smith, Gerald J.

    1996-01-01

    Progress and results in the development of an integrated air quality modeling, monitoring, fault detection, and isolation system are presented. The focus was on the development of distributed models of air contaminant transport, the study of air quality monitoring techniques based on the model of the transport process and on-line contaminant concentration measurements, and sensor placement. Different approaches to the modeling of spacecraft air contamination are discussed, and a three-dimensional distributed parameter air contaminant dispersion model applicable to both laminar and turbulent transport is proposed. A two-dimensional approximation of a full scale transport model is also proposed based on the spatial averaging of the three dimensional model over the least important space coordinate. A computer implementation of the transport model is considered and a detailed development of two- and three-dimensional models illustrated by contaminant transport simulation results is presented. The use of a well established Kalman filtering approach is suggested as a method for generating on-line contaminant concentration estimates based on both real time measurements and the model of the contaminant transport process. It is shown that the high computational requirements of the traditional Kalman filter can make its real-time implementation difficult for a high-dimensional transport model; a novel implicit Kalman filtering algorithm is therefore proposed and shown to yield an order-of-magnitude faster computer implementation in the case of air quality monitoring.
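
    A minimal sketch of the kind of estimator discussed, assuming a hypothetical linear discrete-time transport model and a single sensor; it is a textbook Kalman filter, not the implicit algorithm proposed in the report.

    ```python
    import numpy as np

    # Minimal discrete-time Kalman filter fusing a (hypothetical) linear
    # transport model x_{k+1} = A x_k + w with sensor readings z_k = H x_k + v.
    def kalman_step(x, P, z, A, H, Q, R):
        x_pred = A @ x                       # predict with the transport model
        P_pred = A @ P @ A.T + Q
        S = H @ P_pred @ H.T + R
        K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
        x_new = x_pred + K @ (z - H @ x_pred)
        P_new = (np.eye(len(x)) - K @ H) @ P_pred
        return x_new, P_new

    # Toy 3-cell cabin: simple exchange between cells, one sensor in the middle cell
    A = np.array([[0.90, 0.05, 0.00],
                  [0.10, 0.85, 0.05],
                  [0.00, 0.10, 0.95]])
    H = np.array([[0.0, 1.0, 0.0]])          # only the middle cell is measured
    Q, R = 1e-4 * np.eye(3), np.array([[1e-2]])
    x, P = np.zeros(3), np.eye(3)
    for z in ([0.2], [0.25], [0.3]):         # streaming concentration measurements
        x, P = kalman_step(x, P, np.array(z), A, H, Q, R)
    print(x)                                  # estimated concentrations in all cells
    ```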

  8. Hollow cathodes as electron emitting plasma contactors - Theory and computer modeling

    NASA Technical Reports Server (NTRS)

    Davis, V. A.; Katz, I.; Mandell, M. J.; Parks, D. E.

    1987-01-01

    Several researchers have suggested using hollow cathodes as plasma contactors for electrodynamic tethers, particularly to prevent the Shuttle Orbiter from charging to large negative potentials. Previous studies have shown that fluid models with anomalous scattering can describe the electron transport in hollow cathode generated plasmas. An improved theory of the hollow cathode plasmas is developed and computational results using the theory are compared with laboratory experiments. Numerical predictions for a hollow cathode plasma source of the type considered for use on the Shuttle are presented, as are three-dimensional NASCAP/LEO calculations of the emitted ion trajectories and the resulting potentials in the vicinity of the Orbiter. The computer calculations show that the hollow cathode plasma source makes vastly superior contact with the ionospheric plasma compared with either an electron gun or passive ion collection by the Orbiter.

  9. Modeling listeners' emotional response to music.

    PubMed

    Eerola, Tuomas

    2012-10-01

    An overview of the computational prediction of emotional responses to music is presented. Communication of emotions by music has received a great deal of attention during the last years and a large number of empirical studies have described the role of individual features (tempo, mode, articulation, timbre) in predicting the emotions suggested or invoked by the music. However, unlike the present work, relatively few studies have attempted to model continua of expressed emotions using a variety of musical features from audio-based representations in a correlation design. The construction of the computational model is divided into four separate phases, with a different focus for evaluation. These phases include the theoretical selection of relevant features, empirical assessment of feature validity, actual feature selection, and overall evaluation of the model. Existing research on music and emotions and extraction of musical features is reviewed in terms of these criteria. Examples drawn from recent studies of emotions within the context of film soundtracks are used to demonstrate each phase in the construction of the model. These models are able to explain the dominant part of the listeners' self-reports of the emotions expressed by music and the models show potential to generalize over different genres within Western music. Possible applications of the computational models of emotions are discussed. Copyright © 2012 Cognitive Science Society, Inc.

  10. A Computational Model for Path Loss in Wireless Sensor Networks in Orchard Environments

    PubMed Central

    Anastassiu, Hristos T.; Vougioukas, Stavros; Fronimos, Theodoros; Regen, Christian; Petrou, Loukas; Zude, Manuela; Käthner, Jana

    2014-01-01

    A computational model for radio wave propagation through tree orchards is presented. Trees are modeled as collections of branches, geometrically approximated by cylinders, whose dimensions are determined on the basis of measurements in a cherry orchard. Tree canopies are modeled as dielectric spheres of appropriate size. A single row of trees was modeled by creating copies of a representative tree model positioned on top of a rectangular, lossy dielectric slab that simulated the ground. The complete scattering model, including soil and trees, enhanced by periodicity conditions corresponding to the array, was characterized via a commercial computational software tool for simulating the wave propagation by means of the Finite Element Method. The attenuation of the simulated signal was compared to measurements taken in the cherry orchard, using two ZigBee receiver-transmitter modules. Near the top of the tree canopies (at 3 m), the predicted attenuation was close to the measured one, only slightly underestimated. However, at 1.5 m the solver underestimated the measured attenuation significantly, especially when leaves were present and as distances grew longer. This suggests that the effects of scattering from neighboring tree rows need to be incorporated into the model. However, complex geometries result in ill-conditioned linear systems that affect the solver's convergence. PMID:24625738

  11. Deformations of thick two-material cylinder under axially varying radial pressure

    NASA Technical Reports Server (NTRS)

    Patel, Y. A.

    1976-01-01

    Stresses and deformations in a thick, short, composite cylinder subjected to axially varying radial pressure are studied. The effect of slippage at the interface is examined. In the NASTRAN finite element model, the multipoint constraint feature is utilized. Results are compared with a theoretical analysis and with the SAP-IV computer code. Results from the NASTRAN computer code are in good agreement with the analytical solutions, and they suggest a considerable influence of interfacial slippage on the axial bending stresses in the cylinder.

  12. A Framework for Modeling Competitive and Cooperative Computation in Retinal Processing

    NASA Astrophysics Data System (ADS)

    Moreno-Díaz, Roberto; de Blasio, Gabriel; Moreno-Díaz, Arminda

    2008-07-01

    The structure of the retina suggests that it should be treated (at least from the computational point of view) as a layered computer. Different retinal cells contribute to the coding of the signals down to ganglion cells. Also, because of the nature of the specialization of some ganglion cells, the structure suggests that all these specialization processes should take place at the inner plexiform layer and they should be of a local character, prior to a global integration and frequency-spike coding by the ganglion cells. The framework we propose consists of a layered computational structure, where outer layers essentially provide band-pass space-time filtered signals which are progressively delayed, at least for their formal treatment. Specialization is supposed to take place at the inner plexiform layer by the action of spatio-temporal microkernels (acting very locally) having a center-periphery space-time structure. The resulting signals are then integrated by the ganglion cells through macrokernel structures. Practically all types of specialization found in different vertebrate retinas, as well as the quasilinear behavior in some higher vertebrates, can be modeled and simulated within this framework. Finally, possible feedback from central structures is considered. Though its relevance to retinal processing is not definitive, it is included here for the sake of completeness, since it is a formal requisite for recursiveness.

  13. The neural dynamics of song syntax in songbirds

    NASA Astrophysics Data System (ADS)

    Jin, Dezhe

    2010-03-01

    Songbird is "the hydrogen atom" of the neuroscience of complex, learned vocalizations such as human speech. Songs of Bengalese finch consist of sequences of syllables. While syllables are temporally stereotypical, syllable sequences can vary and follow complex, probabilistic syntactic rules, which are rudimentarily similar to grammars in human language. Songbird brain is accessible to experimental probes, and is understood well enough to construct biologically constrained, predictive computational models. In this talk, I will discuss the structure and dynamics of neural networks underlying the stereotypy of the birdsong syllables and the flexibility of syllable sequences. Recent experiments and computational models suggest that a syllable is encoded in a chain network of projection neurons in premotor nucleus HVC (proper name). Precisely timed spikes propagate along the chain, driving vocalization of the syllable through downstream nuclei. Through a computational model, I show that variable syllable sequences can be generated through spike propagations in a network in HVC in which the syllable-encoding chain networks are connected into a branching chain pattern. The neurons mutually inhibit each other through the inhibitory HVC interneurons, and are driven by external inputs from nuclei upstream of HVC. At a branching point that connects the final group of a chain to the first groups of several chains, the spike activity selects one branch to continue the propagation. The selection is probabilistic, and is due to the winner-take-all mechanism mediated by the inhibition and noise. The model predicts that the syllable sequences statistically follow partially observable Markov models. Experimental results supporting this and other predictions of the model will be presented. We suggest that the syntax of birdsong syllable sequences is embedded in the connection patterns of HVC projection neurons.
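
    As a toy illustration of the syntax model (syllable names and transition probabilities invented), the sketch below generates syllable sequences from a first-order Markov chain, abstracting the noise-driven winner-take-all branch selection as probabilistic transitions.

    ```python
    import numpy as np

    # Toy generator of syllable sequences from a branching-chain picture:
    # each chain (syllable) hands off to one of several successor chains,
    # chosen stochastically, an abstraction of noisy winner-take-all selection.
    transitions = {
        "a": {"b": 0.7, "c": 0.3},
        "b": {"b": 0.4, "d": 0.6},
        "c": {"d": 1.0},
        "d": {"a": 0.5, "END": 0.5},
    }

    def sing(start="a", rng=np.random.default_rng(3)):
        seq, syl = [], start
        while syl != "END":
            seq.append(str(syl))
            nxt = list(transitions[syl])
            syl = rng.choice(nxt, p=[transitions[syl][s] for s in nxt])
        return "".join(seq)

    print([sing() for _ in range(5)])
    ```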

  14. Delayed and lasting effects of deep brain stimulation on locomotion in Parkinson's disease

    NASA Astrophysics Data System (ADS)

    Beuter, Anne; Modolo, Julien

    2009-06-01

    Parkinson's disease (PD) is a neurodegenerative disorder characterized by a variety of motor signs affecting gait, postural stability, and tremor. These symptoms can be improved when electrodes are implanted in deep brain structures and electrical stimulation is delivered chronically at high frequency (>100 Hz). Deep brain stimulation (DBS) onset or cessation affects PD signs with different latencies, and the long-term improvements of symptoms affecting the body axis and those affecting the limbs vary in duration. Interestingly, these effects have not been systematically analyzed and modeled. We compare these timing phenomena in relation to one axial (i.e., locomotion) and one distal (i.e., tremor) signs. We suggest that during DBS, these symptoms are improved by different network mechanisms operating at multiple time scales. Locomotion improvement may involve a delayed plastic reorganization, which takes hours to develop, whereas rest tremor is probably alleviated by an almost instantaneous desynchronization of neural activity in subcortical structures. Even if all PD patients develop both distal and axial symptoms sooner or later, current computational models of locomotion and rest tremor are separate. Furthermore, a few computational models of locomotion focus on PD and none exploring the effect of DBS was found in the literature. We, therefore, discuss a model of a neuronal network during DBS, general enough to explore the subcircuits controlling locomotion and rest tremor simultaneously. This model accounts for synchronization and plasticity, two mechanisms that are believed to underlie the two types of symptoms analyzed. We suggest that a hysteretic effect caused by DBS-induced plasticity and synchronization modulation contributes to the different therapeutic latencies observed. Such a comprehensive, generic computational model of DBS effects, incorporating these timing phenomena, should assist in developing a more efficient, faster, durable treatment of distal and axial signs in PD.

  15. Psychopathy-related traits and the use of reward and social information: a computational approach

    PubMed Central

    Brazil, Inti A.; Hunt, Laurence T.; Bulten, Berend H.; Kessels, Roy P. C.; de Bruijn, Ellen R. A.; Mars, Rogier B.

    2013-01-01

    Psychopathy is often linked to disturbed reinforcement-guided adaptation of behavior in both clinical and non-clinical populations. Recent work suggests that these disturbances might be due to a deficit in actively using information to guide changes in behavior. However, how much information is actually used to guide behavior is difficult to observe directly. Therefore, we used a computational model to estimate the use of information during learning. Thirty-six female subjects were recruited based on their total scores on the Psychopathic Personality Inventory (PPI), a self-report psychopathy list, and performed a task involving simultaneous learning of reward-based and social information. A Bayesian reinforcement-learning model was used to parameterize the use of each source of information during learning. Subsequently, we used the subscales of the PPI to assess psychopathy-related traits, and the traits that were strongly related to the model's parameters were isolated through a formal variable selection procedure. Finally, we assessed how these covaried with model parameters. We succeeded in isolating key personality traits believed to be relevant for psychopathy that can be related to model-based descriptions of subject behavior. Use of reward-history information was negatively related to levels of trait anxiety and fearlessness, whereas use of social advice decreased as the perceived ability to manipulate others and lack of anxiety increased. These results corroborate previous findings suggesting that sub-optimal use of different types of information might be implicated in psychopathy. They also further highlight the importance of considering the potential of computational modeling to understand the role of latent variables, such as the weight people give to various sources of information during goal-directed behavior, when conducting research on psychopathy-related traits and in the field of forensic psychiatry. PMID:24391615
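
    A hypothetical, much-simplified stand-in for the modeling approach (not the paper's Bayesian model): a learner that mixes a delta-rule reward-history value with binary social advice through a single weight w, the kind of latent parameter such models estimate per subject.

    ```python
    import numpy as np

    # Learner choosing between two options by combining reward history and advice.
    def choose_and_learn(n_trials=200, alpha=0.2, w=0.6, beta=5.0,
                         rng=np.random.default_rng(4)):
        q = np.zeros(2)                          # reward-based values for options 0 and 1
        p_reward = np.array([0.3, 0.7])          # true (hidden) reward probabilities
        choices = []
        for _ in range(n_trials):
            advice = rng.integers(0, 2)          # which option a (noisy) advisor recommends
            social = np.array([advice == 0, advice == 1], dtype=float)
            value = w * q + (1 - w) * social     # weighted mix of the two information sources
            p_choose = np.exp(beta * value) / np.exp(beta * value).sum()   # softmax choice rule
            c = rng.choice(2, p=p_choose)
            r = float(rng.random() < p_reward[c])
            q[c] += alpha * (r - q[c])           # delta-rule update of reward history
            choices.append(c)
        return np.mean(choices)                  # fraction of trials choosing option 1

    print(choose_and_learn())
    ```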

  16. General Education Courses at the University of Botswana: Application of the Theory of Reasoned Action in Measuring Course Outcomes

    ERIC Educational Resources Information Center

    Garg, Deepti; Garg, Ajay K.

    2007-01-01

    This study applied the Theory of Reasoned Action and the Technology Acceptance Model to measure outcomes of general education courses (GECs) under the University of Botswana Computer and Information Skills (CIS) program. An exploratory model was validated for responses from 298 students. The results suggest that resources currently committed to…

  17. Sensitivity analysis of a pulse nutrient addition technique for estimating nutrient uptake in large streams

    Treesearch

    Laurence Lin; J.R. Webster

    2012-01-01

    The constant nutrient addition technique has been used extensively to measure nutrient uptake in streams. However, this technique is impractical for large streams, and the pulse nutrient addition (PNA) has been suggested as an alternative. We developed a computer model to simulate Monod kinetics nutrient uptake in large rivers and used this model to evaluate the...
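
    An illustrative sketch, with made-up parameter values, of the basic ingredients such a model combines: downstream advection of a nutrient pulse plus saturating (Monod) uptake.

    ```python
    import numpy as np

    # Toy plug-flow simulation of a nutrient pulse moving downstream while being
    # taken up according to Monod kinetics. Parameters are illustrative only.
    dx, dt = 10.0, 1.0            # m, s
    u = 0.5                       # stream velocity, m/s
    Umax, Ks = 1e-3, 0.05         # max uptake rate (mg/L/s) and half-saturation (mg/L)
    x = np.arange(0, 5000, dx)
    c = np.zeros_like(x)
    c[:5] = 1.0                   # injected pulse at the upstream end (mg/L)

    for _ in range(4000):
        uptake = Umax * c / (Ks + c)                 # Monod (saturating) uptake
        c = c - dt * uptake
        c[1:] = c[1:] - u * dt / dx * (c[1:] - c[:-1])   # upwind advection step
        c[0] = c[0] * (1 - u * dt / dx)              # zero-concentration inflow boundary
        c[c < 0] = 0.0

    print(f"peak concentration after transport: {c.max():.3f} mg/L at {x[c.argmax()]:.0f} m")
    ```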

  18. Studio Mathematics: The Epistemology and Practice of Design Pedagogy as a Model for Mathematics Learning. WCER Working Paper No. 2005-3

    ERIC Educational Resources Information Center

    Shaffer, David Williamson

    2005-01-01

    This paper examines how middle school students developed understanding of transformational geometry through design activities in Escher's World, a computationally rich design experiment explicitly modeled on an architectural design studio. Escher's World was based on the theory of pedagogical praxis (Shaffer, 2004a), which suggests that preserving…

  19. Choosing a Transformation in Analyses of Insect Counts from Contagious Distributions with Low Means

    Treesearch

    W.D. Pepper; S.J. Zarnoch; G.L. DeBarr; P. de Groot; C.D. Tangren

    1997-01-01

    Guidelines based on computer simulation are suggested for choosing a transformation of insect counts from negative binomial distributions with low mean counts and high levels of contagion. Typical values and ranges of negative binomial model parameters were determined by fitting the model to data from 19 entomological field studies. Random sampling of negative binomial...
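
    A small simulation in the same spirit (parameter values are illustrative, not those of the study): draw negative binomial counts with low means and strong contagion and compare how well common transformations stabilize the variance.

    ```python
    import numpy as np

    # Compare variance-stabilizing transformations of negative binomial counts.
    rng = np.random.default_rng(5)
    k = 0.5                                   # dispersion ("contagion") parameter
    means = [0.5, 1.0, 2.0, 4.0]
    transforms = {
        "none":        lambda x: x,
        "log(x+1)":    lambda x: np.log(x + 1.0),
        "sqrt(x+0.5)": lambda x: np.sqrt(x + 0.5),
    }
    for name, f in transforms.items():
        variances = []
        for m in means:
            p = k / (k + m)                   # numpy's (n, p) with mean = n(1-p)/p = m
            counts = rng.negative_binomial(k, p, size=5000)
            variances.append(f(counts).var())
        print(f"{name:12s} variance across means: {np.round(variances, 3)}")
    ```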

  20. Using Incremental Rehearsal to Increase Fluency of Single-Digit Multiplication Facts with Children Identified as Learning Disabled in Mathematics Computation

    ERIC Educational Resources Information Center

    Burns, Matthew K.

    2005-01-01

    Previous research suggested that Incremental Rehearsal (IR; Tucker, 1989) led to better retention than other drill practices models. However, little research exists in the literature regarding drill models for mathematics and no studies were found that used IR to practice multiplication facts. Therefore, the current study used IR as an…

  1. Computational Analysis of AMPK-Mediated Neuroprotection Suggests Acute Excitotoxic Bioenergetics and Glucose Dynamics Are Regulated by a Minimal Set of Critical Reactions.

    PubMed

    Connolly, Niamh M C; D'Orsi, Beatrice; Monsefi, Naser; Huber, Heinrich J; Prehn, Jochen H M

    2016-01-01

    Loss of ionic homeostasis during excitotoxic stress depletes ATP levels and activates the AMP-activated protein kinase (AMPK), re-establishing energy production by increased expression of glucose transporters on the plasma membrane. Here, we develop a computational model to test whether this AMPK-mediated glucose import can rapidly restore ATP levels following a transient excitotoxic insult. We demonstrate that a highly compact model, comprising a minimal set of critical reactions, can closely resemble the rapid dynamics and cell-to-cell heterogeneity of ATP levels and AMPK activity, as confirmed by single-cell fluorescence microscopy in rat primary cerebellar neurons exposed to glutamate excitotoxicity. The model further correctly predicted an excitotoxicity-induced elevation of intracellular glucose, and well resembled the delayed recovery and cell-to-cell heterogeneity of experimentally measured glucose dynamics. The model also predicted necrotic bioenergetic collapse and altered calcium dynamics following more severe excitotoxic insults. In conclusion, our data suggest that a minimal set of critical reactions may determine the acute bioenergetic response to transient excitotoxicity and that an AMPK-mediated increase in intracellular glucose may be sufficient to rapidly recover ATP levels following an excitotoxic insult.

  2. Computational Analysis of AMPK-Mediated Neuroprotection Suggests Acute Excitotoxic Bioenergetics and Glucose Dynamics Are Regulated by a Minimal Set of Critical Reactions

    PubMed Central

    Connolly, Niamh M. C.; D’Orsi, Beatrice; Monsefi, Naser; Huber, Heinrich J.; Prehn, Jochen H. M.

    2016-01-01

    Loss of ionic homeostasis during excitotoxic stress depletes ATP levels and activates the AMP-activated protein kinase (AMPK), re-establishing energy production by increased expression of glucose transporters on the plasma membrane. Here, we develop a computational model to test whether this AMPK-mediated glucose import can rapidly restore ATP levels following a transient excitotoxic insult. We demonstrate that a highly compact model, comprising a minimal set of critical reactions, can closely resemble the rapid dynamics and cell-to-cell heterogeneity of ATP levels and AMPK activity, as confirmed by single-cell fluorescence microscopy in rat primary cerebellar neurons exposed to glutamate excitotoxicity. The model further correctly predicted an excitotoxicity-induced elevation of intracellular glucose, and well resembled the delayed recovery and cell-to-cell heterogeneity of experimentally measured glucose dynamics. The model also predicted necrotic bioenergetic collapse and altered calcium dynamics following more severe excitotoxic insults. In conclusion, our data suggest that a minimal set of critical reactions may determine the acute bioenergetic response to transient excitotoxicity and that an AMPK-mediated increase in intracellular glucose may be sufficient to rapidly recover ATP levels following an excitotoxic insult. PMID:26840769
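
    For illustration only, a hypothetical three-variable ODE caricature of the feedback described (not the published reaction set): an excitotoxic insult raises ATP consumption, low ATP activates AMPK, and AMPK-driven glucose import restores ATP.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Caricature of the AMPK / glucose / ATP feedback loop; units are arbitrary.
    def rhs(t, y):
        atp, ampk, glc = y
        insult = 1.0 if 50 <= t <= 100 else 0.0      # transient excitotoxic demand
        consumption = (0.1 + 0.5 * insult) * atp
        production = 0.3 * glc
        d_atp = production - consumption
        d_ampk = 0.2 * (1.0 - atp) * (1.0 - ampk) - 0.05 * ampk   # activated when ATP falls
        d_glc = 0.05 + 0.5 * ampk - 0.3 * glc                      # AMPK-driven glucose import
        return [d_atp, d_ampk, d_glc]

    sol = solve_ivp(rhs, (0, 400), [1.0, 0.05, 0.35], max_step=0.5)
    print("ATP at end of insult:", sol.y[0][np.searchsorted(sol.t, 100)])
    print("ATP after recovery:  ", sol.y[0][-1])
    ```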

  3. Assessment Of Coronary Artery Aneurysms Using Transluminal Attenuation Gradient And Computational Modeling In Kawasaki Disease Patients

    NASA Astrophysics Data System (ADS)

    Grande Gutierrez, Noelia; Kahn, Andrew; Shirinsky, Olga; Gagarina, Nina; Lyskina, Galina; Fukazawa, Ryuji; Owaga, Shunichi; Burns, Jane; Marsden, Alison

    2015-11-01

    Kawasaki Disease (KD) can result in coronary artery aneurysms (CAA) in up to 25% of patients, putting them at risk of thrombus formation, myocardial infarction and sudden death. Clinical guidelines recommend CAA diameter >8 mm as the arbitrary criterion for initiating systemic anticoagulation. KD patient-specific modeling and flow simulations suggest that hemodynamic data can predict regions at increased risk of thrombosis. Transluminal Attenuation Gradient (TAG) is determined from the change in radiological attenuation per unit vessel length and has been proposed as a non-invasive method for characterizing coronary stenosis from CT Angiography. We hypothesized that abnormal flow in CAAs could be quantified using TAG. We computed hemodynamics for patient-specific coronary models using a stabilized finite element method, coupled numerically to a lumped parameter network to model the heart and vascular boundary conditions. TAG was quantified in the major coronary arteries. We compared TAG for aneurysmal and normal arteries and we analyzed TAG correlation with hemodynamic and geometrical parameters. Our results suggest that TAG may provide hemodynamic data not available from anatomy alone. TAG represents a possible extension to standard CTA that could help to better evaluate the risk of thrombus formation in KD.
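
    TAG is conventionally computed as the linear-regression slope of luminal attenuation against centerline distance; a minimal sketch with made-up attenuation samples:

    ```python
    import numpy as np

    # Transluminal attenuation gradient (TAG): regression slope of luminal
    # attenuation (HU) versus distance along the vessel centerline.
    distance_mm = np.array([0, 5, 10, 15, 20, 25, 30, 35, 40])
    attenuation_hu = np.array([420, 411, 405, 396, 390, 381, 374, 369, 360])
    slope = np.polyfit(distance_mm, attenuation_hu, 1)[0]   # HU per mm
    print(f"TAG = {slope * 10:.1f} HU per 10 mm")
    ```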

  4. Home-Based Risk of Falling Assessment Test Using a Closed-Loop Balance Model.

    PubMed

    Ayena, Johannes C; Zaibi, Helmi; Otis, Martin J-D; Menelas, Bob-Antoine J

    2016-12-01

    The aim of this study is to improve and facilitate the methods used to assess the risk of falling at home among older people by computing a risk of falling in real time during daily activities. To enable real-time computation of the risk of falling, a closed-loop balance model is proposed and compared with the One-Leg Standing Test (OLST). This balance model allows the postural response of a person to an unpredictable perturbation to be studied. Twenty-nine volunteers participated in this study to evaluate the effectiveness of the proposed system, seventeen of them elderly: ten healthy elderly (68.4 ± 5.5 years), seven subjects with Parkinson's disease (PD) (66.28 ± 8.9 years), plus twelve healthy young adults (28.27 ± 3.74 years). Our work suggests that there is a relationship between the OLST score and a risk of falling based on center-of-pressure measurements from four low-cost force sensors located inside an instrumented insole, a relationship that could be predicted using our suggested closed-loop balance model. For long-term monitoring at home, this system could be included in a medical electronic record and could be useful as a diagnostic aid tool.
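
    As an illustration of the sensing side (geometry and readings are invented), a center-of-pressure estimate from four insole force sensors and a simple sway metric of the sort a real-time risk score could use:

    ```python
    import numpy as np

    # Center-of-pressure (COP) estimate from four insole force sensors at
    # assumed positions, plus a simple sway (COP path length) metric.
    sensor_xy = np.array([[0.02, 0.02], [0.06, 0.02],    # heel left/right (m)
                          [0.02, 0.20], [0.06, 0.22]])   # forefoot left/right (m)

    def cop(forces):
        forces = np.asarray(forces, dtype=float)
        return (forces[:, None] * sensor_xy).sum(axis=0) / forces.sum()

    # Synthetic stream of force readings (N) standing in for real sensor data
    rng = np.random.default_rng(6)
    readings = 200 + 20 * rng.standard_normal((500, 4))
    cops = np.array([cop(f) for f in readings])
    sway_path = np.linalg.norm(np.diff(cops, axis=0), axis=1).sum()   # total COP path length
    print(f"COP path length over window: {sway_path * 1000:.1f} mm")
    ```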

  5. Computational modeling of temperature elevation and thermoregulatory response in the brains of anesthetized rats locally exposed at 1.5 GHz

    NASA Astrophysics Data System (ADS)

    Hirata, Akimasa; Masuda, Hiroshi; Kanai, Yuya; Asai, Ryuichi; Fujiwara, Osamu; Arima, Takuji; Kawai, Hiroki; Watanabe, Soichi; Lagroye, Isabelle; Veyret, Bernard

    2011-12-01

    The dominant effect of human exposures to microwaves is caused by temperature elevation ('thermal effect'). In the safety guidelines/standards, the specific absorption rate averaged over a specific volume is used as a metric for human protection from localized exposure. Further investigation on the use of this metric is required, especially in terms of thermophysiology. The World Health Organization (2006 RF research agenda) has given high priority to research into the extent and consequences of microwave-induced temperature elevation in children. In this study, an electromagnetic-thermal computational code was developed to model electromagnetic power absorption and resulting temperature elevation leading to changes in active blood flow in response to localized 1.457 GHz exposure in rat heads. Both juvenile (4 week old) and young adult (8 week old) rats were considered. The computational code was validated against measurements for 4 and 8 week old rats. Our computational results suggest that the blood flow rate depends on both brain and core temperature elevations. No significant difference was observed between thermophysiological responses in 4 and 8 week old rats under these exposure conditions. The computational model developed herein is thus applicable to set exposure conditions for rats in laboratory investigations, as well as in planning treatment protocols in the thermal therapy.

  6. Computational modeling of radiobiological effects in bone metastases for different radionuclides.

    PubMed

    Liberal, Francisco D C Guerra; Tavares, Adriana Alexandre S; Tavares, João Manuel R S

    2017-06-01

    Computational simulation is a simple and practical way to study and to compare a variety of radioisotopes for different medical applications, including the palliative treatment of bone metastases. This study aimed to evaluate and compare the cellular effects modelled for different radioisotopes currently in use or under research for the treatment of bone metastases using computational methods. Computational models were used to estimate the radiation-induced cellular effects (Virtual Cell Radiobiology algorithm) post-irradiation with selected particles emitted by Strontium-89 (89Sr), Samarium-153 (153Sm), Lutetium-177 (177Lu), and Radium-223 (223Ra). Cellular kinetics post-irradiation with 89Sr, 153Sm, and 177Lu beta-minus particles and 223Ra alpha particles showed that the cell response was dose- and radionuclide-dependent. 177Lu beta-minus particles and, in particular, 223Ra alpha particles yielded the lowest survival fraction of all investigated particles. 223Ra alpha particles induced the highest cell death of all investigated particles on metastatic prostate cells in comparison to irradiation with beta-minus radionuclides, two of the most frequently used radionuclides in the palliative treatment of bone metastases in routine clinical practice. Moreover, the data obtained suggest that the computational methods used might provide some insight into cellular effects following irradiation with different radionuclides.
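
    As a stand-in for the cited radiobiology algorithm, a linear-quadratic survival sketch with illustrative coefficients shows why high-LET alpha particles yield lower survival fractions than beta-minus emitters at the same absorbed dose:

    ```python
    import numpy as np

    # Linear-quadratic (LQ) cell survival; coefficients are illustrative, not fitted.
    def survival_fraction(dose_gy, alpha, beta):
        return np.exp(-(alpha * dose_gy + beta * dose_gy ** 2))

    doses = np.linspace(0, 4, 5)            # absorbed dose, Gy
    params = {
        "89Sr / 153Sm / 177Lu (beta-minus)": (0.2, 0.03),
        "223Ra (alpha)":                     (1.0, 0.0),   # high-LET: mostly linear killing
    }
    for label, (a, b) in params.items():
        print(label, np.round(survival_fraction(doses, a, b), 4))
    ```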

  7. MESOSCOPIC MODELING OF STOCHASTIC REACTION-DIFFUSION KINETICS IN THE SUBDIFFUSIVE REGIME

    PubMed Central

    BLANC, EMILIE; ENGBLOM, STEFAN; HELLANDER, ANDREAS; LÖTSTEDT, PER

    2017-01-01

    Subdiffusion has been proposed as an explanation of various kinetic phenomena inside living cells. In order to facilitate large-scale computational studies of subdiffusive chemical processes, we extend a recently suggested mesoscopic model of subdiffusion into an accurate and consistent reaction-subdiffusion computational framework. Two different possible models of chemical reaction are revealed and some basic dynamic properties are derived. In certain cases these mesoscopic models have a direct interpretation at the macroscopic level as fractional partial differential equations in a bounded time interval. Through analysis and numerical experiments we estimate the macroscopic effects of reactions under subdiffusive mixing. The models display properties also observed in experiments: for a short time interval the behavior of the diffusion and the reaction is ordinary, in an intermediate interval the behavior is anomalous, and at long times the behavior is ordinary again. PMID:29046618
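
    As a quick illustration of the subdiffusive regime itself (not of the paper's mesoscopic reaction-subdiffusion framework), the sketch below runs a continuous-time random walk with heavy-tailed waiting times; the mean-squared displacement then grows sublinearly, roughly like t**alpha. All parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# CTRW sketch of subdiffusion: power-law waiting times (exponent alpha < 1)
# between unit jumps yield MSD ~ t**alpha instead of ordinary MSD ~ t.
alpha = 0.7
n_walkers, n_jumps = 500, 2000

waits = rng.pareto(alpha, size=(n_walkers, n_jumps)) + 1.0   # waiting times
times = np.cumsum(waits, axis=1)                             # jump instants
steps = rng.choice([-1.0, 1.0], size=(n_walkers, n_jumps))
positions = np.cumsum(steps, axis=1)

for t in [1e2, 1e3, 1e4]:
    # position of each walker at observation time t (last jump before t)
    idx = np.clip((times <= t).sum(axis=1) - 1, 0, n_jumps - 1)
    x_t = np.where((times <= t).any(axis=1),
                   positions[np.arange(n_walkers), idx], 0.0)
    print("t = %8.0f   MSD = %8.2f" % (t, np.mean(x_t ** 2)))
```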

  8. Multialternative drift-diffusion model predicts the relationship between visual fixations and choice in value-based decisions.

    PubMed

    Krajbich, Ian; Rangel, Antonio

    2011-08-16

    How do we make decisions when confronted with several alternatives (e.g., on a supermarket shelf)? Previous work has shown that accumulator models, such as the drift-diffusion model, can provide accurate descriptions of the psychometric data for binary value-based choices, and that the choice process is guided by visual attention. However, the computational processes used to make choices in more complicated situations involving three or more options are unknown. We propose a model of trinary value-based choice that generalizes what is known about binary choice, and test it using an eye-tracking experiment. We find that the model provides a quantitatively accurate description of the relationship between choice, reaction time, and visual fixation data using the same parameters that were estimated in previous work on binary choice. Our findings suggest that the brain uses similar computational processes to make binary and trinary choices.
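
    A highly simplified illustration of the attentional drift-diffusion idea for three options: the attended item's value is weighted fully, unattended items are discounted by a factor theta, and a choice is made when one accumulator leads the runner-up by a fixed barrier. The parameter values and the stopping rule are illustrative assumptions, not the exact specification fitted by Krajbich and Rangel.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_trial(values, d=0.002, theta=0.3, sigma=0.02, barrier=1.0,
                   fixation_ms=300, max_ms=20000):
    """One trial of a toy trinary attentional drift-diffusion process."""
    E = np.zeros(3)                             # evidence accumulators
    t = 0
    while t < max_ms:
        attended = rng.integers(3)              # random fixation target
        for _ in range(fixation_ms):            # accumulate during fixation
            w = np.full(3, theta)
            w[attended] = 1.0                   # attended item fully weighted
            E += d * w * np.asarray(values) + rng.normal(0.0, sigma, 3)
            t += 1
            lead = np.sort(E)[-1] - np.sort(E)[-2]
            if lead >= barrier:                 # winner clearly ahead
                return int(np.argmax(E)), t
    return int(np.argmax(E)), t

choices = [simulate_trial([3.0, 2.0, 1.0])[0] for _ in range(200)]
print("choice frequencies (options 0,1,2):",
      np.bincount(choices, minlength=3) / 200)
```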

  9. Brain shift computation using a fully nonlinear biomechanical model.

    PubMed

    Wittek, Adam; Kikinis, Ron; Warfield, Simon K; Miller, Karol

    2005-01-01

    In the present study, a fully nonlinear (i.e. accounting for both geometric and material nonlinearities) patient-specific finite element brain model was applied to predict the deformation field within the brain during craniotomy-induced brain shift. Deformation of the brain surface was used as the displacement boundary condition. Application of the computed deformation field to align (i.e. register) the preoperative images with the intraoperative ones indicated that the model very accurately predicts the displacements of the centers of gravity of the lateral ventricles and the tumor, even with very limited information about the brain surface deformation. These results are sufficient to suggest that nonlinear biomechanical models can be regarded as one possible way of complementing medical image processing techniques when conducting nonrigid registration. An important advantage of such models over linear ones is that they do not require the unrealistic assumptions that brain deformations are infinitesimally small and that the brain tissue stress-strain relationship is linear.

  10. A comparison of the COG and MCNP codes in computational neutron capture therapy modeling, Part II: gadolinium neutron capture therapy models and therapeutic effects.

    PubMed

    Wangerin, K; Culbertson, C N; Jevremovic, T

    2005-08-01

    The goal of this study was to evaluate the COG Monte Carlo radiation transport code, developed and tested by Lawrence Livermore National Laboratory, for gadolinium neutron capture therapy (GdNCT) related modeling. The validity of the COG NCT model had been established previously; here the calculation was extended to analyze the effect of various gadolinium concentrations on the dose distribution and cell-kill effect of the GdNCT modality and to determine the optimum therapeutic conditions for treating brain cancers. The computational results were compared with the widely used MCNP code. The differences between the COG and MCNP predictions were generally small and suggest that the COG code can be applied to similar research problems in NCT. Results of this study also showed that a concentration of 100 ppm gadolinium in the tumor was most beneficial when using an epithermal neutron beam.

  11. Masking of Figure-Ground Texture and Single Targets by Surround Inhibition: A Computational Spiking Model

    PubMed Central

    Supèr, Hans; Romeo, August

    2012-01-01

    A visual stimulus can be made invisible, i.e. masked, by the presentation of a second stimulus. In the sensory cortex, neural responses to a masked stimulus are suppressed, yet how this suppression comes about is still debated. Inhibitory models explain masking by asserting that the mask exerts an inhibitory influence on the responses of a neuron evoked by the target. However, other models argue that the masking interferes with recurrent or reentrant processing. Using computer modeling, we show that surround inhibition evoked by ON and OFF responses to the mask suppresses the responses to a briefly presented stimulus in forward and backward masking paradigms. Our model results resemble several previously described psychophysical and neurophysiological findings in perceptual masking experiments and are in line with earlier theoretical descriptions of masking. We suggest that precise spatiotemporal influence of surround inhibition is relevant for visual detection. PMID:22393370

  12. Cortico-hippocampal representations in simultaneous odor discrimination: a computational interpretation of Eichenbaum, Mathews, and Cohen (1989).

    PubMed

    Myers, C E; Gluck, M A

    1996-08-01

    A previous model of hippocampal region function in classical conditioning is generalized to H. Eichenbaum, A. Fagan, P. Mathews, and N.J. Cohen's (1989) and H. Eichenbaum, A. Fagan, and N.J. Cohen's (1989) simultaneous odor discrimination studies in rats. The model assumes that the hippocampal region forms new stimulus representations that compress redundant information while differentiating predictive information; the piriform (olfactory) cortex meanwhile clusters similar and co-occurring odors. Hippocampal damage interrupts the ability to differentiate odor representations, while leaving piriform-mediated odor clustering unchecked. The result is a net tendency to overcompress in the lesioned model. Behavior in the model is very similar to that of the rats, including lesion deficits, facilitation of successively learned tasks, and transfer performance. The computational mechanisms underlying model performance are consistent with the qualitative interpretations suggested by Eichenbaum et al. to explain their empirical data.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dress, W.B.

    After a Rip-van-Winkle nap of more than 20 years, the ideas of biologically motivated computing are re-emerging. Instrumental to this awakening have been the highly publicized contributions of John Hopfield and major advances in the neurosciences. In 1982, Hopfield showed how a system of maximally coupled neuron-like elements described by a Hamiltonian formalism (a linear, conservative system) could behave in a manner startlingly suggestive of the way humans might go about solving problems and retrieving memories. Continuing advances in the neurosciences are providing a coherent basis for suggesting how nature's neurons might function. A particular model is described for an artificial neural system designed to interact with (learn from and manipulate) a simulated (or real) environment. The model is based on early work by Iben Browning. The Browning model, designed to investigate computer-based intelligence, contains a particular simplification based on observations of frequency coding of information in the brain and information flow from receptors to the brain and back to effectors. The ability to act on and react to the environment was seen as an important principle, leading to self-organization of the system.

  14. Distributed user interfaces for clinical ubiquitous computing applications.

    PubMed

    Bång, Magnus; Larsson, Anders; Berglund, Erik; Eriksson, Henrik

    2005-08-01

    Ubiquitous computing with multiple interaction devices requires new interface models that support user-specific modifications to applications and facilitate the fast development of active workspaces. We have developed NOSTOS, a computer-augmented work environment for clinical personnel, to explore new user interface paradigms for ubiquitous computing. NOSTOS uses several devices such as digital pens, an active desk, and walk-up displays that allow the system to track documents and activities in the workplace. We present the distributed user interface (DUI) model that allows standalone applications to distribute their user interface components to several devices dynamically at run-time. This mechanism permits clinicians to develop their own user interfaces and forms for clinical information systems to match their specific needs. We discuss the underlying technical concepts of DUIs and show how service discovery, component distribution, events, and layout management are dealt with in the NOSTOS system. Our results suggest that DUIs--and similar network-based user interfaces--will be a prerequisite for future mobile user interfaces and essential for developing clinical multi-device environments.

  15. Combustion of hydrogen injected into a supersonic airstream (the SHIP computer program)

    NASA Technical Reports Server (NTRS)

    Markatos, N. C.; Spalding, D. B.; Tatchell, D. G.

    1977-01-01

    The mathematical and physical basis of the SHIP computer program is described. The program embodies a finite-difference, implicit numerical procedure for the computation of hydrogen injected into a supersonic airstream at angles ranging from normal to parallel to the main flow direction. The physical hypotheses built into the program include a two-equation turbulence model and a chemical equilibrium model for the hydrogen-oxygen reaction. Typical results for equilibrium combustion are presented and exhibit qualitatively plausible behavior. The computer time required for a given case is approximately 1 minute on a CDC 7600 machine. A discussion of the assumption of parabolic flow in the injection region is given, which suggests that improved calculation in this region could be obtained by use of the partially parabolic procedure of Pratap and Spalding. It is concluded that the technique described herein provides the basis for an efficient and reliable means of predicting the effects of hydrogen injection into supersonic airstreams and of its subsequent combustion.

  16. A comprehensive Two-Fluid Model for Cavitation and Primary Atomization Modelling of liquid jets - Application to a large marine Diesel injector

    NASA Astrophysics Data System (ADS)

    Habchi, Chawki; Bohbot, Julien; Schmid, Andreas; Herrmann, Kai

    2015-12-01

    In this paper, a comprehensive two-fluid model is suggested in order to compute the in-nozzle cavitating flow and the primary atomization of liquid jets, simultaneously. This model has been applied to the computation of a typical large marine Diesel injector. The numerical results have shown a strong correlation between the in-nozzle cavitating flow and the ensuing spray orientation and atomization. Indeed, the results have confirmed the existence of an off-axis liquid core. This asymmetry is likely to be at the origin of the spray deviation observed experimentally. In addition, the primary atomization begins very close to the orifice exit as in the experiments, and the smallest droplets are generated due to cavitation pocket shape oscillations located at the same side, inside the orifice.

  17. Enterprise virtual private network (VPN) with dense wavelength division multiplexing (DWDM) design

    NASA Astrophysics Data System (ADS)

    Carranza, Aparicio

    An innovative computer simulation and modeling tool for metropolitan area optical data communication networks is presented. These models address the unique requirements of Virtual Private Networks for enterprise data centers, which may comprise a mixture of protocols including ESCON, FICON, Fibre Channel, Sysplex protocols (ETR, CLO, ISC), and other links interconnected over dark fiber using Dense Wavelength Division Multiplexing (DWDM). Our models can design a network from minimal inputs, compute optical link budgets, suggest alternative configurations, and optimize the design based on user-defined performance metrics. The models make use of Time Division Multiplexing (TDM) wherever possible for lower data rate traffic. Simulation results for several configurations are presented, and they have been validated by means of experiments conducted on the IBM enterprise network testbed in Poughkeepsie, N.Y.
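
    A minimal sketch of the kind of optical link-budget check such a design tool performs: transmit power minus fiber, connector, and DWDM mux/demux losses must leave a margin above the receiver sensitivity. All numeric values are illustrative assumptions, not figures from the IBM testbed configuration.

```python
def link_budget_db(tx_dbm, rx_sensitivity_dbm, fiber_km,
                   fiber_loss_db_per_km=0.25, n_connectors=4,
                   connector_loss_db=0.5, mux_demux_loss_db=7.0,
                   safety_margin_db=3.0):
    """Return (received power in dBm, remaining margin in dB)."""
    total_loss = (fiber_km * fiber_loss_db_per_km
                  + n_connectors * connector_loss_db
                  + mux_demux_loss_db)
    received = tx_dbm - total_loss
    margin = received - rx_sensitivity_dbm - safety_margin_db
    return received, margin

received, margin = link_budget_db(tx_dbm=0.0, rx_sensitivity_dbm=-28.0,
                                  fiber_km=60.0)
print("received power: %.1f dBm, remaining margin: %.1f dB" % (received, margin))
# A negative margin would be the trigger for suggesting an alternative
# configuration (shorter spans, amplification, or a different wavelength plan).
```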

  18. Use of the parameterised finite element method to robustly and efficiently evolve the edge of a moving cell.

    PubMed

    Neilson, Matthew P; Mackenzie, John A; Webb, Steven D; Insall, Robert H

    2010-11-01

    In this paper we present a computational tool that enables the simulation of mathematical models of cell migration and chemotaxis on an evolving cell membrane. Recent models require the numerical solution of systems of reaction-diffusion equations on the evolving cell membrane and then the solution state is used to drive the evolution of the cell edge. Previous work involved moving the cell edge using a level set method (LSM). However, the LSM is computationally very expensive, which severely limits the practical usefulness of the algorithm. To address this issue, we have employed the parameterised finite element method (PFEM) as an alternative method for evolving a cell boundary. We show that the PFEM is far more efficient and robust than the LSM. We therefore suggest that the PFEM potentially has an essential role to play in computational modelling efforts towards the understanding of many of the complex issues related to chemotaxis.

  19. The self streamlining wind tunnel. [wind tunnel walls

    NASA Technical Reports Server (NTRS)

    Goodyer, M. J.

    1975-01-01

    A two dimensional test section in a low speed wind tunnel capable of producing flow conditions free from wall interference is presented. The test section has flexible top and bottom walls and rigid sidewalls, from which models were mounted spanning the tunnel. All walls were unperforated, and the flexible walls were positioned by screw jacks. To eliminate wall interference, the wind tunnel itself supplied the information required in the streamlining process when run with the model present. Measurements taken at the flexible walls were used by the tunnel's computer to check wall contours; suitable adjustments based on streamlining criteria were then suggested by the computer. The streamlining criterion adopted when generating infinite flowfield conditions was a matching of static pressures in the test section at a wall with pressures computed for an imaginary inviscid flowfield passing over the outside of the same wall. Aerodynamic data taken on a cylindrical model operating under high blockage conditions are presented to illustrate the operation of the tunnel in its various modes.

  20. In vitro protease cleavage and computer simulations reveal the HIV-1 capsid maturation pathway

    NASA Astrophysics Data System (ADS)

    Ning, Jiying; Erdemci-Tandogan, Gonca; Yufenyuy, Ernest L.; Wagner, Jef; Himes, Benjamin A.; Zhao, Gongpu; Aiken, Christopher; Zandi, Roya; Zhang, Peijun

    2016-12-01

    HIV-1 virions assemble as immature particles containing Gag polyproteins that are processed by the viral protease into individual components, resulting in the formation of mature infectious particles. There are two competing models for the process of forming the mature HIV-1 core: the disassembly and de novo reassembly model and the non-diffusional displacive model. To study the maturation pathway, we simulate HIV-1 maturation in vitro by digesting immature particles and assembled virus-like particles with recombinant HIV-1 protease and monitor the process with biochemical assays and cryoEM structural analysis in parallel. Processing of Gag in vitro is accurate and efficient and results in both soluble capsid protein and conical or tubular capsid assemblies, seemingly converted from immature Gag particles. Computer simulations further reveal probable assembly pathways of HIV-1 capsid formation. Combining the experimental data and computer simulations, our results suggest a sequential combination of both displacive and disassembly/reassembly processes for HIV-1 maturation.

  1. A computational model of the cognitive impact of decorative elements on the perception of suspense

    NASA Astrophysics Data System (ADS)

    Delatorre, Pablo; León, Carlos; Gervás, Pablo; Palomo-Duarte, Manuel

    2017-10-01

    Suspense is a key narrative issue in terms of emotional gratification, influencing the way in which the audience experiences a story. Virtually all narrative media use suspense as a strategy for reader engagement, regardless of technology or genre. Being such an important narrative component, suspense has been tackled by computational creativity in a number of automatic storytelling systems. These systems are mainly based on narrative theories and generally lack a cognitive approach involving the study of empathy or the emotional effect of the environment. With this idea in mind, this paper reports on a computational model of the influence of decorative elements on suspense. It has been developed as part of a more general proposal for plot generation based on cognitive aspects. In order to test and parameterise the model, an evaluation based on textual stories and an evaluation based on a 3D virtual environment were run. In both cases, results suggest a direct influence of the emotional perception of decorative objects on the suspense of a scene.

  2. Mental models, metaphors and their use in the education of nurses.

    PubMed

    Burke, L M; Wilson, A M

    1997-11-01

    A great deal of nurses' confidence in the use of information technology (IT) depends both on the way computers are introduced to students in the college and how such education is continued and applied when they are practitioners. It is therefore vital that teachers of IT assist nurses to discover ways of learning to utilize and apply computers within their workplace with whatever methods are available. One method which has been introduced with success in other fields is the use of mental models and metaphors. Mental models and metaphors enable individuals to learn by building on past learning. Concepts and ideas which have already been internalized from past experience can be transferred and adapted for usage in a new learning situation with computers and technology. This article explores the use of mental models and metaphors for the technological education of nurses. The concepts themselves will be examined, followed by suggestions for possible applications specifically in the field of nursing and health care. Finally the role of the teacher in enabling improved learning as a result of these techniques will be addressed.

  3. Forward calculation of gravity and its gradient using polyhedral representation of density interfaces: an application of spherical or ellipsoidal topographic gravity effect

    NASA Astrophysics Data System (ADS)

    Zhang, Yi; Chen, Chao

    2018-02-01

    A density interface modeling method using polyhedral representation is proposed to construct 3-D models of spherical or ellipsoidal interfaces such as the terrain surface of the Earth, and is applied to forward calculation of the gravity effect of topography and bathymetry for regional or global applications. The method utilizes triangular facets to fit the undulation of the target interface. The model maintains almost equal accuracy and resolution at different locations of the globe. Meanwhile, the exterior gravitational field of the model, including its gravity and gravity gradients, is obtained simultaneously using analytic solutions. Additionally, considering the effect of distant relief, an adaptive computation process is introduced to reduce the computational burden. Features and errors of the method are then analyzed. Subsequently, the method is applied to an area for the ellipsoidal Bouguer shell correction as an example, and the result is compared to existing methods, which shows that our method provides high accuracy and great computational efficiency. Suggestions for further development are made and conclusions are drawn at the end.

  4. Computational Modeling of Fluid–Structure–Acoustics Interaction during Voice Production

    PubMed Central

    Jiang, Weili; Zheng, Xudong; Xue, Qian

    2017-01-01

    The paper presented a three-dimensional, first-principle-based fluid–structure–acoustics interaction computer model of voice production, which employed more realistic human laryngeal and vocal tract geometries. Self-sustained vibrations, the important convergent–divergent vibration pattern of the vocal folds, and entrainment of the two dominant vibratory modes were captured. Voice quality-associated parameters including the frequency, open quotient, skewness quotient, and flow rate of the glottal flow waveform were found to be well within the normal physiological ranges. The analogy between the vocal tract and a quarter-wave resonator was demonstrated. The acoustic perturbed flux and pressure inside the glottis were found to be of the same order as their incompressible counterparts, suggesting strong source–filter interactions during voice production. Such a high-fidelity computational model will be useful for investigating a variety of pathological conditions that involve complex vibrations, such as vocal fold paralysis, vocal nodules, and vocal polyps. The model is also an important step toward a patient-specific surgical planning tool that can serve as a no-risk trial-and-error platform for different procedures, such as injection of biomaterials and thyroplastic medialization. PMID:28243588

  5. Women's decision to major in STEM fields

    NASA Astrophysics Data System (ADS)

    Conklin, Stephanie

    This paper explores the lived experiences of high school female students who choose to enter STEM fields, and describes the influencing factors which steered these women towards majors in computer science, engineering, and biology. Utilizing phenomenological methodology, this study seeks to understand the essence of women's decisions to enter STEM fields and further describe how the decision-making process varies for women in high female enrollment fields, like biology, as compared with low enrollment fields, like computer science and engineering. Using Bloom's 3-Stage Theory, this study analyzes how relationships, experiences, and barriers influenced women towards, and possibly away from, STEM fields. An analysis of women's experiences highlights that support of family, sustained experience in a STEM program during high school, and the presence of an influential teacher were all salient factors in steering women towards STEM fields. Participants explained that the influential teacher worked individually with them, modified and extended assignments, and also steered participants towards coursework and experiences. This study also identifies factors, like guidance counselors as well as personal challenges, which inhibited participants' paths to STEM fields. Further, through analyzing all six participants' experiences, it is clear that a linear model, like Bloom's 3-Stage Model, with limited ability to include potential barriers, inhibited the ability to capture the essence of each participant's decision-making process. Therefore, a revised model with no linear progression, which allows for emerging factors like personal challenges, has been proposed; this model focuses on how interest in STEM fields begins to develop and is then honed and mastered. This study also sought to identify key differences in the paths of female students pursuing different majors. The findings of this study suggest that the path to computer science and engineering is limited. Computer science majors faced few, if any, challenges, hoped to use computers as a tool to innovate, and also participated in the same computer science program. For female engineering students, the essence of their experience focused on interaction at a young age with an expert in an engineering-related field as well as a strong desire to help solve world problems using engineering. These participants were able to articulate their future careers clearly. In contrast, biology majors faced more challenges and were undecided about their future career goals. These results suggest that a longitudinal study focused on women pursuing engineering and computer science fields is warranted; this will hopefully allow these findings to be substantiated and also allow refinement of the revised theoretical model.

  6. Computational Modeling of Micrometastatic Breast Cancer Radiation Dose Response

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Daniel L.; Debeb, Bisrat G.; Morgan Welch Inflammatory Breast Cancer Research Program and Clinic, The University of Texas MD Anderson Cancer Center, Houston, Texas

    Purpose: Prophylactic cranial irradiation (PCI) involves giving radiation to the entire brain with the goals of reducing the incidence of brain metastasis and improving overall survival. Experimentally, we have demonstrated that PCI prevents brain metastases in a breast cancer mouse model. We developed a computational model to expand on and aid in the interpretation of our experimental results. Methods and Materials: MATLAB was used to develop a computational model of brain metastasis and PCI in mice. Model input parameters were optimized such that the model output would match the experimental number of metastases per mouse from the unirradiated group. An independent in vivo limiting dilution experiment was performed to validate the model. The effect of whole brain irradiation at different measurement points after tumor cells were injected was evaluated in terms of the incidence, number of metastases, and tumor burden and was then compared with the corresponding experimental data. Results: In the optimized model, the correlation between the number of metastases per mouse and the experimental fits was >95%. Our attempt to validate the model with a limiting dilution assay produced 99.9% correlation with respect to the incidence of metastases. The model accurately predicted the effect of whole-brain irradiation given 3 weeks after cell injection but substantially underestimated its effect when delivered 5 days after cell injection. The model further demonstrated that delaying whole-brain irradiation until the development of gross disease introduces a dose threshold that must be reached before a reduction in incidence can be realized. Conclusions: Our computational model of mouse brain metastasis and PCI correlated strongly with our experiments with unirradiated mice. The results further suggest that early treatment of subclinical disease is more effective than irradiating established disease.

  7. Properties of model-averaged BMDLs: a study of model averaging in dichotomous response risk estimation.

    PubMed

    Wheeler, Matthew W; Bailer, A John

    2007-06-01

    Model averaging (MA) has been proposed as a method of accounting for model uncertainty in benchmark dose (BMD) estimation. The technique has been used to average BMD dose estimates derived from dichotomous dose-response experiments, microbial dose-response experiments, as well as observational epidemiological studies. While MA is a promising tool for the risk assessor, a previous study suggested that the simple strategy of averaging individual models' BMD lower limits did not yield interval estimators that met nominal coverage levels in certain situations, and this performance was very sensitive to the underlying model space chosen. We present a different, more computationally intensive, approach in which the BMD is estimated using the average dose-response model and the corresponding benchmark dose lower bound (BMDL) is computed by bootstrapping. This method is illustrated with TiO(2) dose-response rat lung cancer data, and then systematically studied through an extensive Monte Carlo simulation. The results of this study suggest that the MA-BMD, estimated using this technique, performs better, in terms of bias and coverage, than the previous MA methodology. Further, the MA-BMDL achieves nominal coverage in most cases, and is superior to picking the "best fitting model" when estimating the benchmark dose. Although these results show utility of MA for benchmark dose risk estimation, they continue to highlight the importance of choosing an adequate model space as well as proper model fit diagnostics.
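
    A hedged sketch of model-averaged benchmark dose estimation with a parametric-bootstrap BMDL, in the spirit of the approach described above. The two dose-response models, the AIC-based weights, and the toy dichotomous data are illustrative choices, not the model space or data of the study.

```python
import numpy as np
from scipy.optimize import minimize, brentq

rng = np.random.default_rng(2)

# Toy dichotomous dose-response data (dose, group size, number of cases).
doses = np.array([0.0, 10.0, 50.0, 100.0, 200.0])
n     = np.array([50, 50, 50, 50, 50])
cases = np.array([2, 4, 10, 19, 33])

def p_logistic(d, th):
    a, b = th
    return 1.0 / (1.0 + np.exp(-(a + b * np.asarray(d, dtype=float))))

def p_qlinear(d, th):
    g, b = th
    g = min(max(g, 0.0), 0.99)
    return g + (1.0 - g) * (1.0 - np.exp(-b * np.asarray(d, dtype=float)))

MODELS = [(p_logistic, np.array([-3.0, 0.02])),
          (p_qlinear,  np.array([0.05, 0.005]))]

def neg_loglik(th, model, counts):
    p = np.clip(model(doses, th), 1e-9, 1 - 1e-9)
    return -np.sum(counts * np.log(p) + (n - counts) * np.log(1 - p))

def fit_all(counts):
    fits = []
    for model, th0 in MODELS:
        res = minimize(neg_loglik, th0, args=(model, counts), method="Nelder-Mead")
        fits.append((model, res.x, 2 * res.fun + 2 * len(th0)))   # params, AIC
    aics = np.array([f[2] for f in fits])
    w = np.exp(-0.5 * (aics - aics.min()))
    return fits, w / w.sum()

def averaged_bmd(fits, w, bmr=0.10):
    p_avg = lambda d: sum(wi * m(d, th) for (m, th, _), wi in zip(fits, w))
    p0 = p_avg(0.0)
    extra = lambda d: (p_avg(d) - p0) / (1.0 - p0) - bmr   # extra-risk definition
    return brentq(extra, 1e-6, doses.max())

fits, w = fit_all(cases)
bmd = averaged_bmd(fits, w)

# Parametric bootstrap: resample counts from the averaged fit, refit, recompute BMD.
p_fit = sum(wi * m(doses, th) for (m, th, _), wi in zip(fits, w))
boot = []
for _ in range(200):
    f_star, w_star = fit_all(rng.binomial(n, p_fit))
    boot.append(averaged_bmd(f_star, w_star))

print("MA-BMD  = %.1f" % bmd)
print("MA-BMDL = %.1f (5th percentile of bootstrap BMDs)" % np.percentile(boot, 5))
```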

  8. Scheduling Algorithm for Mission Planning and Logistics Evaluation (SAMPLE). Volume 1: User's guide

    NASA Technical Reports Server (NTRS)

    Dupnick, E.; Wiggins, D.

    1980-01-01

    An interactive computer program for automatically generating traffic models for the Space Transportation System (STS) is presented. Information concerning run stream construction, input data, and output data is provided. The flow of the interactive data stream is described. Error messages are specified, along with suggestions for remedial action. In addition, formats and parameter definitions for the payload data set (payload model), feasible combination file, and traffic model are documented.

  9. Biomechanical stability analysis of the lambda-model controlling one joint.

    PubMed

    Lan, L; Zhu, K Y

    2007-06-01

    Computer modeling and control of the human motor system might be helpful for understanding the mechanism of human motor system and for the diagnosis and treatment of neuromuscular disorders. In this paper, a brief view of the equilibrium point hypothesis for human motor system modeling is given, and the lambda-model derived from this hypothesis is studied. The stability of the lambda-model based on equilibrium and Jacobian matrix is investigated. The results obtained in this paper suggest that the lambda-model is stable and has a unique equilibrium point under certain conditions.
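
    A schematic version of the kind of stability check described above: a single joint driven by a lambda-style restoring torque is linearized about its equilibrium and the Jacobian eigenvalues are inspected. The dynamics and parameter values are a deliberately simplified caricature of the lambda-model, for illustration only.

```python
import numpy as np

I, b, k = 0.05, 0.3, 4.0      # inertia, damping, reflex "stiffness"
lam = 0.5                     # commanded equilibrium angle (the lambda)
g_load = 0.8                  # constant external load torque

def dynamics(state):
    theta, omega = state
    torque = -k * (theta - lam) - b * omega - g_load
    return np.array([omega, torque / I])

# Equilibrium: omega = 0 and torque balance  ->  theta* = lam - g_load / k
theta_eq = lam - g_load / k
x_eq = np.array([theta_eq, 0.0])

# Numerical Jacobian at the equilibrium, by central differences
eps = 1e-6
J = np.zeros((2, 2))
for j in range(2):
    dx = np.zeros(2)
    dx[j] = eps
    J[:, j] = (dynamics(x_eq + dx) - dynamics(x_eq - dx)) / (2 * eps)

eigvals = np.linalg.eigvals(J)
print("equilibrium angle:", theta_eq)
print("Jacobian eigenvalues:", eigvals)
print("stable:", bool(np.all(eigvals.real < 0)))
```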

  10. A mixed SIR-SIS model to contain a virus spreading through networks with two degrees

    NASA Astrophysics Data System (ADS)

    Essouifi, Mohamed; Achahbar, Abdelfattah

    Because the "nodes" and "links" of real networks are heterogeneous, we borrow the recently introduced idea of the reduced scale-free network to model computer virus prevalence throughout the Internet. The purpose of this paper is to extend the previous deterministic two-subchain Susceptible-Infected-Susceptible (SIS) model into a mixed Susceptible-Infected-Recovered and Susceptible-Infected-Susceptible (SIR-SIS) model to contain computer virus spreading over networks with two degrees. Moreover, we develop its stochastic counterpart. Due to the high protection and security measures taken for the hub class, we suggest treating it with an SIR epidemic model rather than the SIS one. The analytical study reveals that the proposed model admits a stable viral equilibrium. It is shown numerically that the mean dynamic behavior of the stochastic model is in agreement with the deterministic one. Unlike the infection densities i2 and i, which both tend to a viral equilibrium for both approaches as in the previous study, i1 tends to the virus-free equilibrium. Furthermore, since a proportion of infectives are recovered, the global infection density i is reduced. The permanent presence of viruses in the network is therefore due to the lower-degree node class. Many suggestions are put forward for containing virus propagation and minimizing its damage.
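
    The sketch below couples an SIR sub-chain (the well-protected hub class) with an SIS sub-chain (the lower-degree class) on a two-degree mean-field network, reproducing the qualitative behavior described above: i1 dies out while i2 settles at a viral equilibrium. The contact structure and all rates are assumptions for the sketch, not the exact equations of the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

k1, k2 = 20.0, 4.0        # degrees of the hub and lower-degree classes
p1, p2 = 0.1, 0.9         # fractions of nodes in each class
beta, delta, gamma = 0.1, 0.2, 0.3   # infection, SIS recovery, SIR removal

def rhs(t, y):
    i1, r1, i2 = y
    s1 = 1.0 - i1 - r1
    s2 = 1.0 - i2
    # mean-field force of infection through degree-weighted contacts
    theta = (p1 * k1 * i1 + p2 * k2 * i2) / (p1 * k1 + p2 * k2)
    di1 = beta * k1 * s1 * theta - gamma * i1   # hubs: SIR (removed stay immune)
    dr1 = gamma * i1
    di2 = beta * k2 * s2 * theta - delta * i2   # lower-degree nodes: SIS
    return [di1, dr1, di2]

sol = solve_ivp(rhs, (0.0, 200.0), [0.01, 0.0, 0.01])
i1, r1, i2 = sol.y[:, -1]
print("final i1 (hubs, SIR)        : %.4f" % i1)
print("final i2 (lower degree, SIS): %.4f" % i2)
print("global infection density    : %.4f" % (p1 * i1 + p2 * i2))
```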

  11. Cortico-striatal language pathways dynamically adjust for syntactic complexity: A computational study.

    PubMed

    Szalisznyó, Krisztina; Silverstein, David; Teichmann, Marc; Duffau, Hugues; Smits, Anja

    2017-01-01

    A growing body of literature supports a key role of fronto-striatal circuits in language perception. It is now known that the striatum plays a role in engaging attentional resources and linguistic rule computation while also serving phonological short-term memory capabilities. The ventral semantic and the dorsal phonological stream dichotomy assumed for spoken language processing also seems to play a role in cortico-striatal perception. Based on recent studies that correlate deep Broca-striatal pathways with complex syntax performance, we used a previously developed computational model of frontal-striatal syntax circuits and hypothesized that different parallel language pathways may contribute separately to canonical and non-canonical sentence comprehension. We modified and further analyzed a thematic role assignment task and the corresponding reservoir computing model of language circuits, as previously developed by Dominey and coworkers. We examined the model's performance under various parameter regimes, by influencing how fast the presented language input decays and altering the temporal dynamics of activated word representations. This enabled us to quantify canonical and non-canonical sentence comprehension abilities. The modeling results suggest that separate cortico-cortical and cortico-striatal circuits may be recruited differently for processing syntactically more difficult and less complicated sentences. Alternatively, a single circuit would need to dynamically and adaptively adjust to syntactic complexity. Copyright © 2016. Published by Elsevier Inc.
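
    Dominey-style language models are built on reservoir computing: a fixed random recurrent network is driven by the input sequence and only a linear readout is trained. The sketch below is a generic echo state network on a toy delayed-recall task, intended only to illustrate that architecture; it is not the thematic role assignment model analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

n_in, n_res, delay = 3, 200, 5
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.normal(0.0, 1.0, (n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))      # scale spectral radius below 1

def run_reservoir(inputs):
    """Drive the fixed random reservoir with a sequence of input vectors."""
    x = np.zeros(n_res)
    states = []
    for u in inputs:
        x = np.tanh(W @ x + W_in @ u)          # reservoir state update
        states.append(x.copy())
    return np.array(states)

# Toy task: reproduce the one-hot input seen `delay` steps earlier.
T = 2000
U = np.eye(n_in)[rng.integers(n_in, size=T)]
targets = np.roll(U, delay, axis=0)
targets[:delay] = 0.0
X = run_reservoir(U)

# Only the linear readout is trained, here by ridge regression.
ridge = 1e-2
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ targets).T

pred = (X @ W_out.T).argmax(axis=1)
acc = (pred[delay:] == targets[delay:].argmax(axis=1)).mean()
print("delayed-recall accuracy on training sequence: %.3f" % acc)
```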

  12. Optimization behavior of brainstem respiratory neurons. A cerebral neural network model.

    PubMed

    Poon, C S

    1991-01-01

    A recent model of respiratory control suggested that the steady-state respiratory responses to CO2 and exercise may be governed by an optimal control law in the brainstem respiratory neurons. It was not certain, however, whether such complex optimization behavior could be accomplished by a realistic biological neural network. To test this hypothesis, we developed a hybrid computer-neural model in which the dynamics of the lung, brain and other tissue compartments were simulated on a digital computer. Mimicking the "controller" was a human subject who pedalled on a bicycle with varying speed (analog of ventilatory output) with a view to minimize an analog signal of the total cost of breathing (chemical and mechanical) which was computed interactively and displayed on an oscilloscope. In this manner, the visuomotor cortex served as a proxy (homolog) of the brainstem respiratory neurons in the model. Results in 4 subjects showed a linear steady-state ventilatory CO2 response to arterial PCO2 during simulated CO2 inhalation and a nearly isocapnic steady-state response during simulated exercise. Thus, neural optimization is a plausible mechanism for respiratory control during exercise and can be achieved by a neural network with cognitive computational ability without the need for an exercise stimulus.

  13. Dynamic remapping of parallel computations with varying resource demands

    NASA Technical Reports Server (NTRS)

    Nicol, D. M.; Saltz, J. H.

    1986-01-01

    A large class of computational problems is characterized by frequent synchronization and computational requirements which change as a function of time. When such a problem must be solved on a message passing multiprocessor machine, the combination of these characteristics leads to system performance which decreases in time. Performance can be improved with periodic redistribution of computational load; however, redistribution can exact a sometimes large delay cost. We study the issue of deciding when to invoke a global load remapping mechanism. Such a decision policy must effectively weigh the costs of remapping against the performance benefits. We treat this problem by constructing two analytic models which exhibit stochastically decreasing performance. One model is quite tractable; we are able to describe the optimal remapping algorithm and the optimal decision policy governing when to invoke that algorithm. However, computational complexity prohibits the use of the optimal remapping decision policy. We then study the performance of a general remapping policy on both analytic models. This policy attempts to minimize a statistic W(n) which measures the system degradation (including the cost of remapping) per computation step over a period of n steps. We show that as a function of time, the expected value of W(n) has at most one minimum, and that when this minimum exists it defines the optimal fixed-interval remapping policy. Our decision policy appeals to this result by remapping when it estimates that W(n) is minimized. Our performance data suggest that this policy effectively finds the natural frequency of remapping. We also use the analytic models to express the relationship between performance and remapping cost, number of processors, and the computation's stochastic activity.
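
    A toy simulation of that decision heuristic: after each step we track W(n), the accumulated degradation plus the remapping cost divided by the number of steps since the last remapping, and remap when W(n) appears to have passed its minimum. The degradation process and the cost value are illustrative assumptions, not the analytic models of the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

REMAP_COST = 40.0

def step_degradation(drift):
    # per-step load-imbalance overhead that tends to grow after each remapping
    return max(0.0, rng.normal(drift, 0.5))

total_work, remaps = 0.0, 0
drift, n, acc, prev_W = 0.0, 0, 0.0, np.inf

for step in range(2000):
    drift += 0.01                       # imbalance slowly accumulates
    d = step_degradation(drift)
    total_work += 1.0 + d               # one unit of useful work plus overhead
    n += 1
    acc += d
    W = (acc + REMAP_COST) / n          # degradation per step if we remapped now
    if W > prev_W:                      # W(n) has passed its (single) minimum
        total_work += REMAP_COST        # pay the remapping delay
        remaps += 1
        drift, n, acc, prev_W = 0.0, 0, 0.0, np.inf
    else:
        prev_W = W

print("remappings performed:", remaps)
print("average cost per step: %.3f" % (total_work / 2000))
```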

  14. Working Memory and Decision-Making in a Frontoparietal Circuit Model

    PubMed Central

    2017-01-01

    Working memory (WM) and decision-making (DM) are fundamental cognitive functions involving a distributed interacting network of brain areas, with the posterior parietal cortex (PPC) and prefrontal cortex (PFC) at the core. However, the shared and distinct roles of these areas and the nature of their coordination in cognitive function remain poorly understood. Biophysically based computational models of cortical circuits have provided insights into the mechanisms supporting these functions, yet they have primarily focused on the local microcircuit level, raising questions about the principles for distributed cognitive computation in multiregional networks. To examine these issues, we developed a distributed circuit model of two reciprocally interacting modules representing PPC and PFC circuits. The circuit architecture includes hierarchical differences in local recurrent structure and implements reciprocal long-range projections. This parsimonious model captures a range of behavioral and neuronal features of frontoparietal circuits across multiple WM and DM paradigms. In the context of WM, both areas exhibit persistent activity, but, in response to intervening distractors, PPC transiently encodes distractors while PFC filters distractors and supports WM robustness. With regard to DM, the PPC module generates graded representations of accumulated evidence supporting target selection, while the PFC module generates more categorical responses related to action or choice. These findings suggest computational principles for distributed, hierarchical processing in cortex during cognitive function and provide a framework for extension to multiregional models. SIGNIFICANCE STATEMENT Working memory and decision-making are fundamental “building blocks” of cognition, and deficits in these functions are associated with neuropsychiatric disorders such as schizophrenia. These cognitive functions engage distributed networks with prefrontal cortex (PFC) and posterior parietal cortex (PPC) at the core. It is not clear, however, what the contributions of PPC and PFC are in light of the computations that subserve working memory and decision-making. We constructed a biophysical model of a reciprocally connected frontoparietal circuit that revealed shared and distinct functions for the PFC and PPC across working memory and decision-making tasks. Our parsimonious model connects circuit-level properties to cognitive functions and suggests novel design principles beyond those of local circuits for cognitive processing in multiregional brain networks. PMID:29114071

  15. Working Memory and Decision-Making in a Frontoparietal Circuit Model.

    PubMed

    Murray, John D; Jaramillo, Jorge; Wang, Xiao-Jing

    2017-12-13

    Working memory (WM) and decision-making (DM) are fundamental cognitive functions involving a distributed interacting network of brain areas, with the posterior parietal cortex (PPC) and prefrontal cortex (PFC) at the core. However, the shared and distinct roles of these areas and the nature of their coordination in cognitive function remain poorly understood. Biophysically based computational models of cortical circuits have provided insights into the mechanisms supporting these functions, yet they have primarily focused on the local microcircuit level, raising questions about the principles for distributed cognitive computation in multiregional networks. To examine these issues, we developed a distributed circuit model of two reciprocally interacting modules representing PPC and PFC circuits. The circuit architecture includes hierarchical differences in local recurrent structure and implements reciprocal long-range projections. This parsimonious model captures a range of behavioral and neuronal features of frontoparietal circuits across multiple WM and DM paradigms. In the context of WM, both areas exhibit persistent activity, but, in response to intervening distractors, PPC transiently encodes distractors while PFC filters distractors and supports WM robustness. With regard to DM, the PPC module generates graded representations of accumulated evidence supporting target selection, while the PFC module generates more categorical responses related to action or choice. These findings suggest computational principles for distributed, hierarchical processing in cortex during cognitive function and provide a framework for extension to multiregional models. SIGNIFICANCE STATEMENT Working memory and decision-making are fundamental "building blocks" of cognition, and deficits in these functions are associated with neuropsychiatric disorders such as schizophrenia. These cognitive functions engage distributed networks with prefrontal cortex (PFC) and posterior parietal cortex (PPC) at the core. It is not clear, however, what the contributions of PPC and PFC are in light of the computations that subserve working memory and decision-making. We constructed a biophysical model of a reciprocally connected frontoparietal circuit that revealed shared and distinct functions for the PFC and PPC across working memory and decision-making tasks. Our parsimonious model connects circuit-level properties to cognitive functions and suggests novel design principles beyond those of local circuits for cognitive processing in multiregional brain networks. Copyright © 2017 the authors 0270-6474/17/3712167-20$15.00/0.

  16. Optimal dynamic remapping of parallel computations

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Reynolds, Paul F., Jr.

    1987-01-01

    A large class of computations is characterized by a sequence of phases, with phase changes occurring unpredictably. The decision problem of remapping workload to processors in a parallel computation was considered for the case in which the utility of remapping and the future behavior of the workload are uncertain: execution requirements are stable during a given phase but may change radically between phases. For these problems a workload assignment generated for one phase may hinder performance during the next phase. This problem is treated formally for a probabilistic model of computation with at most two phases. The fundamental problem of balancing the expected remapping performance gain against the delay cost was addressed. Stochastic dynamic programming is used to show that the remapping decision policy minimizing the expected running time of the computation has an extremely simple structure. Because the gain may not be predictable, the performance of a heuristic policy that does not require estimation of the gain is examined. The heuristic method's feasibility is demonstrated by its use on an adaptive fluid dynamics code on a multiprocessor. The results suggest that, except in extreme cases, the remapping decision problem is essentially that of dynamically determining whether gain can be achieved by remapping after a phase change. The results also suggest that this heuristic is applicable to computations with more than two phases.

  17. Polarizable six-point water models from computational and empirical optimization.

    PubMed

    Tröster, Philipp; Lorenzen, Konstantin; Tavan, Paul

    2014-02-13

    Tröster et al. (J. Phys. Chem B 2013, 117, 9486-9500) recently suggested a mixed computational and empirical approach to the optimization of polarizable molecular mechanics (PMM) water models. In the empirical part the parameters of Buckingham potentials are optimized by PMM molecular dynamics (MD) simulations. The computational part applies hybrid calculations, which combine the quantum mechanical description of a H2O molecule by density functional theory (DFT) with a PMM model of its liquid phase environment generated by MD. While the static dipole moments and polarizabilities of the PMM water models are fixed at the experimental gas phase values, the DFT/PMM calculations are employed to optimize the remaining electrostatic properties. These properties cover the width of a Gaussian inducible dipole positioned at the oxygen and the locations of massless negative charge points within the molecule (the positive charges are attached to the hydrogens). The authors considered the cases of one and two negative charges rendering the PMM four- and five-point models TL4P and TL5P. Here we extend their approach to three negative charges, thus suggesting the PMM six-point model TL6P. As compared to the predecessors and to other PMM models, which also exhibit partial charges at fixed positions, TL6P turned out to predict all studied properties of liquid water at p0 = 1 bar and T0 = 300 K with a remarkable accuracy. These properties cover, for instance, the diffusion constant, viscosity, isobaric heat capacity, isothermal compressibility, dielectric constant, density, and the isobaric thermal expansion coefficient. This success concurrently provides a microscopic physical explanation of corresponding shortcomings of previous models. It uniquely assigns the failures of previous models to substantial inaccuracies in the description of the higher electrostatic multipole moments of liquid phase water molecules. Resulting favorable properties concerning the transferability to other temperatures and conditions like the melting of ice are also discussed.

  18. Computational Prediction of the Heterodimeric and Higher-Order Structure of gpE1/gpE2 Envelope Glycoproteins Encoded by Hepatitis C Virus.

    PubMed

    Freedman, Holly; Logan, Michael R; Hockman, Darren; Koehler Leman, Julia; Law, John Lok Man; Houghton, Michael

    2017-04-15

    Despite the recent success of newly developed direct-acting antivirals against hepatitis C, the disease continues to be a global health threat due to the lack of diagnosis of most carriers and the high cost of treatment. The heterodimer formed by glycoproteins E1 and E2 within the hepatitis C virus (HCV) lipid envelope is a potential vaccine candidate and antiviral target. While the structure of E1/E2 has not yet been resolved, partial crystal structures of the E1 and E2 ectodomains have been determined. The unresolved parts of the structure are within the realm of what can be modeled with current computational modeling tools. Furthermore, a variety of additional experimental data is available to support computational predictions of E1/E2 structure, such as data from antibody binding studies, cryo-electron microscopy (cryo-EM), mutational analyses, peptide binding analysis, linker-scanning mutagenesis, and nuclear magnetic resonance (NMR) studies. In accordance with these rich experimental data, we have built an in silico model of the full-length E1/E2 heterodimer. Our model supports that E1/E2 assembles into a trimer, which was previously suggested from a study by Falson and coworkers (P. Falson, B. Bartosch, K. Alsaleh, B. A. Tews, A. Loquet, Y. Ciczora, L. Riva, C. Montigny, C. Montpellier, G. Duverlie, E. I. Pecheur, M. le Maire, F. L. Cosset, J. Dubuisson, and F. Penin, J. Virol. 89:10333-10346, 2015, https://doi.org/10.1128/JVI.00991-15). Size exclusion chromatography and Western blotting data obtained by using purified recombinant E1/E2 support our hypothesis. Our model suggests that during virus assembly, the trimer of E1/E2 may be further assembled into a pentamer, with 12 pentamers comprising a single HCV virion. We anticipate that this new model will provide a useful framework for HCV envelope structure and the development of antiviral strategies. IMPORTANCE One hundred fifty million people have been estimated to be infected with hepatitis C virus, and many more are at risk for infection. A better understanding of the structure of the HCV envelope, which is responsible for attachment and fusion, could aid in the development of a vaccine and/or new treatments for this disease. We draw upon computational techniques to predict a full-length model of the E1/E2 heterodimer based on the partial crystal structures of the envelope glycoproteins E1 and E2. E1/E2 has been widely studied experimentally, and this provides valuable data, which has assisted us in our modeling. Our proposed structure is used to suggest the organization of the HCV envelope. We also present new experimental data from size exclusion chromatography that support our computational prediction of a trimeric oligomeric state of E1/E2. Copyright © 2017 American Society for Microbiology.

  19. Computational Prediction of the Heterodimeric and Higher-Order Structure of gpE1/gpE2 Envelope Glycoproteins Encoded by Hepatitis C Virus

    PubMed Central

    Logan, Michael R.; Hockman, Darren; Koehler Leman, Julia; Law, John Lok Man

    2017-01-01

    ABSTRACT Despite the recent success of newly developed direct-acting antivirals against hepatitis C, the disease continues to be a global health threat due to the lack of diagnosis of most carriers and the high cost of treatment. The heterodimer formed by glycoproteins E1 and E2 within the hepatitis C virus (HCV) lipid envelope is a potential vaccine candidate and antiviral target. While the structure of E1/E2 has not yet been resolved, partial crystal structures of the E1 and E2 ectodomains have been determined. The unresolved parts of the structure are within the realm of what can be modeled with current computational modeling tools. Furthermore, a variety of additional experimental data is available to support computational predictions of E1/E2 structure, such as data from antibody binding studies, cryo-electron microscopy (cryo-EM), mutational analyses, peptide binding analysis, linker-scanning mutagenesis, and nuclear magnetic resonance (NMR) studies. In accordance with these rich experimental data, we have built an in silico model of the full-length E1/E2 heterodimer. Our model supports that E1/E2 assembles into a trimer, which was previously suggested from a study by Falson and coworkers (P. Falson, B. Bartosch, K. Alsaleh, B. A. Tews, A. Loquet, Y. Ciczora, L. Riva, C. Montigny, C. Montpellier, G. Duverlie, E. I. Pecheur, M. le Maire, F. L. Cosset, J. Dubuisson, and F. Penin, J. Virol. 89:10333–10346, 2015, https://doi.org/10.1128/JVI.00991-15). Size exclusion chromatography and Western blotting data obtained by using purified recombinant E1/E2 support our hypothesis. Our model suggests that during virus assembly, the trimer of E1/E2 may be further assembled into a pentamer, with 12 pentamers comprising a single HCV virion. We anticipate that this new model will provide a useful framework for HCV envelope structure and the development of antiviral strategies. IMPORTANCE One hundred fifty million people have been estimated to be infected with hepatitis C virus, and many more are at risk for infection. A better understanding of the structure of the HCV envelope, which is responsible for attachment and fusion, could aid in the development of a vaccine and/or new treatments for this disease. We draw upon computational techniques to predict a full-length model of the E1/E2 heterodimer based on the partial crystal structures of the envelope glycoproteins E1 and E2. E1/E2 has been widely studied experimentally, and this provides valuable data, which has assisted us in our modeling. Our proposed structure is used to suggest the organization of the HCV envelope. We also present new experimental data from size exclusion chromatography that support our computational prediction of a trimeric oligomeric state of E1/E2. PMID:28148799

  20. Fuzzy logic of Aristotelian forms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perlovsky, L.I.

    1996-12-31

    Model-based approaches to pattern recognition and machine vision have been proposed to overcome the exorbitant training requirements of earlier computational paradigms. However, uncertainties in data were found to lead to a combinatorial explosion of the computational complexity. This issue is related here to the roles of a priori knowledge vs. adaptive learning. What is the a-priori knowledge representation that supports learning? I introduce Modeling Field Theory (MFT), a model-based neural network whose adaptive learning is based on a priori models. These models combine deterministic, fuzzy, and statistical aspects to account for a priori knowledge, its fuzzy nature, and data uncertainties. In the process of learning, a priori fuzzy concepts converge to crisp or probabilistic concepts. The MFT is a convergent dynamical system of only linear computational complexity. Fuzzy logic turns out to be essential for reducing the combinatorial complexity to a linear one. I will discuss the relationship of the new computational paradigm to two theories due to Aristotle: theory of Forms and logic. While theory of Forms argued that the mind cannot be based on ready-made a priori concepts, Aristotelian logic operated with just such concepts. I discuss an interpretation of MFT suggesting that its fuzzy logic, combining a-priority and adaptivity, implements Aristotelian theory of Forms (theory of mind). Thus, 2300 years after Aristotle, a logic is developed suitable for his theory of mind.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Versino, Daniele; Bronkhorst, Curt Allan

    The computational formulation of a micro-mechanical material model for the dynamic failure of ductile metals is presented in this paper. The statistical nature of porosity initiation is accounted for by introducing an arbitrary probability density function which describes the pore nucleation pressures. Each micropore within the representative volume element is modeled as a thick spherical shell made of plastically incompressible material. The treatment of porosity by a distribution of thick-walled spheres also allows for the inclusion of micro-inertia effects under conditions of shock and dynamic loading. The second-order ordinary differential equation governing the microscopic porosity evolution is solved with a robust implicit procedure. A new Chebyshev collocation method is employed to approximate the porosity distribution, and remapping is used to optimize memory usage. The adaptive approximation of the porosity distribution leads to a reduction of computational time and memory usage of up to two orders of magnitude. Moreover, the proposed model affords consistent performance: changing the nucleation pressure probability density function and/or the applied strain rate does not reduce the accuracy or computational efficiency of the material model. The numerical performance of the model and algorithms presented is tested against three problems for high density tantalum: single void, one-dimensional uniaxial strain, and two-dimensional plate impact. Here, the results using the integration and algorithmic advances suggest a significant improvement in computational efficiency and accuracy over previous treatments for dynamic loading conditions.

  2. Designing a parallel evolutionary algorithm for inferring gene networks on the cloud computing environment.

    PubMed

    Lee, Wei-Po; Hsiao, Yu-Ting; Hwang, Wei-Che

    2014-01-16

    To avoid the tedious task of reconstructing gene networks by experimentally testing the possible interactions between genes, it has become common to adopt an automated reverse-engineering procedure instead. Some evolutionary algorithms have been suggested for deriving network parameters. However, to infer large networks with an evolutionary algorithm, two important issues must be addressed: premature convergence and high computational cost. To tackle the former problem and to enhance the performance of traditional evolutionary algorithms, it is advisable to use parallel-model evolutionary algorithms. To overcome the latter and to speed up the computation, cloud computing is a promising solution; the most popular approach is the MapReduce programming model, a fault-tolerant framework for implementing parallel algorithms to infer large gene networks. This work presents a practical framework to infer large gene networks by developing and parallelizing a hybrid GA-PSO optimization method. Our parallel method is extended to work with the Hadoop MapReduce programming model and is executed in different cloud computing environments. To evaluate the proposed approach, we use the well-known open-source software GeneNetWeaver to create several yeast S. cerevisiae sub-networks and use them to produce gene profiles. Experiments have been conducted and the results analyzed. They show that our parallel approach can be successfully used to infer networks with the desired behaviors and that the computation time can be greatly reduced. Parallel population-based algorithms can effectively determine network parameters, and they perform better than the widely used sequential algorithms in gene network inference. These parallel algorithms can be distributed to the cloud computing environment to speed up the computation. By coupling the parallel population-based optimization method with the parallel computational framework, high-quality solutions can be obtained within a relatively short time. This integrated approach is a promising way to infer large networks.
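
    A minimal sketch of the map step such a framework relies on, with Python's multiprocessing standing in for a Hadoop cluster; the toy fitness function, candidate encoding, and population size are illustrative assumptions rather than the authors' GA-PSO implementation.

    ```python
    import numpy as np
    from multiprocessing import Pool

    def fitness(params, observed=np.array([1.0, 0.8, 0.5])):
        """Hypothetical fitness: squared error between simulated and observed levels."""
        simulated = np.tanh(params)             # stand-in for simulating the gene network
        return float(np.sum((simulated - observed) ** 2))

    if __name__ == "__main__":
        population = [np.random.rand(3) for _ in range(100)]   # candidate parameter sets
        with Pool(processes=4) as pool:                        # "map": evaluate in parallel
            scores = pool.map(fitness, population)
        best = population[int(np.argmin(scores))]              # "reduce": keep the best
        print(best, min(scores))
    ```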

  3. Designing a parallel evolutionary algorithm for inferring gene networks on the cloud computing environment

    PubMed Central

    2014-01-01

    Background To avoid the tedious task of reconstructing gene networks by experimentally testing the possible interactions between genes, it has become common to adopt an automated reverse-engineering procedure instead. Some evolutionary algorithms have been suggested for deriving network parameters. However, to infer large networks with an evolutionary algorithm, two important issues must be addressed: premature convergence and high computational cost. To tackle the former problem and to enhance the performance of traditional evolutionary algorithms, it is advisable to use parallel-model evolutionary algorithms. To overcome the latter and to speed up the computation, cloud computing is a promising solution; the most popular approach is the MapReduce programming model, a fault-tolerant framework for implementing parallel algorithms to infer large gene networks. Results This work presents a practical framework to infer large gene networks by developing and parallelizing a hybrid GA-PSO optimization method. Our parallel method is extended to work with the Hadoop MapReduce programming model and is executed in different cloud computing environments. To evaluate the proposed approach, we use the well-known open-source software GeneNetWeaver to create several yeast S. cerevisiae sub-networks and use them to produce gene profiles. Experiments have been conducted and the results analyzed. They show that our parallel approach can be successfully used to infer networks with the desired behaviors and that the computation time can be greatly reduced. Conclusions Parallel population-based algorithms can effectively determine network parameters, and they perform better than the widely used sequential algorithms in gene network inference. These parallel algorithms can be distributed to the cloud computing environment to speed up the computation. By coupling the parallel population-based optimization method with the parallel computational framework, high-quality solutions can be obtained within a relatively short time. This integrated approach is a promising way to infer large networks. PMID:24428926

  4. Computational upscaling of Drucker-Prager plasticity from micro-CT images of synthetic porous rock

    NASA Astrophysics Data System (ADS)

    Liu, Jie; Sarout, Joel; Zhang, Minchao; Dautriat, Jeremie; Veveakis, Emmanouil; Regenauer-Lieb, Klaus

    2018-01-01

    Quantifying rock physical properties is essential for the mining and petroleum industry. Microtomography provides a new way to quantify the relationship between the microstructure and the mechanical and transport properties of a rock. Studies reporting the use of microtomographic images to derive the permeability and elastic moduli of rocks are common; only a few studies have been devoted to yield and failure parameters using this technique. In this study, we simulate the macroscale plastic properties of a synthetic sandstone sample made of calcite-cemented quartz grains using the microscale information obtained from microtomography. The computations rely on the concept of representative volume elements (RVEs). The mechanical RVE is determined using the upper and lower bounds of finite-element computations for elasticity. We present computational upscaling methods from microphysical processes to extract the plasticity parameters of the RVE and compare results to experimental data. The yield stress, cohesion and internal friction angle of the matrix (solid part) of the rock were obtained with reasonable accuracy. Computations of plasticity for a series of models of different volume sizes showed almost overlapping stress-strain curves, suggesting that the mechanical RVE determined by elastic computations is also valid for plastic yielding. Furthermore, a series of models were created by self-similarly inflating/deflating the porous models, that is, keeping a similar structure while achieving different porosity values. The analysis of these models showed that yield stress, cohesion and internal friction angle linearly decrease with increasing porosity in the porosity range between 8 and 28 per cent. The internal friction angle decreases the most significantly, while cohesion remains stable.
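
    For reference, a standard form of the Drucker-Prager yield criterion named in the title is (with one common matching to Mohr-Coulomb parameters; the paper's exact calibration may differ):

    $$
    f(\boldsymbol{\sigma}) = \sqrt{J_2} + \alpha\, I_1 - k \le 0,
    \qquad
    \alpha = \frac{2\sin\phi}{\sqrt{3}\,(3-\sin\phi)},
    \qquad
    k = \frac{6\,c\,\cos\phi}{\sqrt{3}\,(3-\sin\phi)},
    $$

    where \(I_1\) is the first invariant of the stress tensor, \(J_2\) the second invariant of the deviatoric stress, \(c\) the cohesion, and \(\phi\) the internal friction angle; this matching is what links the fitted cohesion and friction angle reported above to a yield surface.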

  5. The effects of geometric uncertainties on computational modelling of knee biomechanics

    NASA Astrophysics Data System (ADS)

    Meng, Qingen; Fisher, John; Wilcox, Ruth

    2017-08-01

    The geometry of the articular components of the knee is an important factor in predicting joint mechanics in computational models. There are a number of uncertainties in the definition of the geometry of cartilage and meniscus, and evaluating the effects of these uncertainties is fundamental to understanding the level of reliability of the models. In this study, the sensitivity of knee mechanics to geometric uncertainties was investigated by comparing polynomial-based and image-based knee models and varying the size of the meniscus. The results suggested that the geometric uncertainties in cartilage and meniscus resulting from the resolution of MRI and the accuracy of segmentation had considerable effects on the predicted knee mechanics. Moreover, even though the mathematical geometric descriptors can be very close to the image-based articular surfaces, the detailed contact pressure distribution produced by the mathematical geometric descriptors was not the same as that of the image-based model. However, the trends predicted by the models based on mathematical geometric descriptors were similar to those of the image-based models.

  6. Particle-Size-Grouping Model of Precipitation Kinetics in Microalloyed Steels

    NASA Astrophysics Data System (ADS)

    Xu, Kun; Thomas, Brian G.

    2012-03-01

    The formation, growth, and size distribution of precipitates greatly affect the microstructure and properties of microalloyed steels. Computational particle-size-grouping (PSG) kinetic models based on population balances are developed to simulate precipitate particle growth resulting from collision and diffusion mechanisms. First, the generalized PSG method for collision is explained clearly and verified. Then, a new PSG method is proposed to model diffusion-controlled precipitate nucleation, growth, and coarsening with complete mass conservation and no fitting parameters. Compared with the original population-balance models, this PSG method saves significant computation and preserves enough accuracy to model a realistic range of particle sizes. Finally, the new PSG method is combined with an equilibrium phase fraction model for plain carbon steels and is applied to simulate the precipitated fraction of aluminum nitride and the size distribution of niobium carbide during isothermal aging processes. Good matches are found with experimental measurements, suggesting that the new PSG method offers a promising framework for the future development of realistic models of precipitation.

  7. Learning of spatial relationships between observed and imitated actions allows invariant inverse computation in the frontal mirror neuron system.

    PubMed

    Oh, Hyuk; Gentili, Rodolphe J; Reggia, James A; Contreras-Vidal, José L

    2011-01-01

    It has been suggested that the human mirror neuron system can facilitate learning by imitation through coupling of observation and action execution. During imitation of observed actions, the functional relationship between and within the inferior frontal cortex, the posterior parietal cortex, and the superior temporal sulcus can be modeled within the internal model framework. The proposed biologically plausible mirror neuron system model extends currently available models by explicitly modeling the intraparietal sulcus and the superior parietal lobule in implementing the function of a frame of reference transformation during imitation. Moreover, the model posits the ventral premotor cortex as performing an inverse computation. The simulations reveal that: i) the transformation system can learn and represent the changes in extrinsic to intrinsic coordinates when an imitator observes a demonstrator; ii) the inverse model of the imitator's frontal mirror neuron system can be trained to provide the motor plans for the imitated actions.
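
    The kind of extrinsic-to-intrinsic transformation and inverse computation referred to above can be illustrated, in a much simpler setting than the neural model, by closed-form inverse kinematics for a planar two-link arm; the link lengths and the target position are illustrative assumptions.

    ```python
    import numpy as np

    def inverse_kinematics(x, y, l1=0.3, l2=0.25):
        """Closed-form inverse model for a planar 2-link arm (elbow-down solution)."""
        c2 = (x**2 + y**2 - l1**2 - l2**2) / (2 * l1 * l2)
        theta2 = np.arccos(np.clip(c2, -1.0, 1.0))             # elbow angle
        k1, k2 = l1 + l2 * np.cos(theta2), l2 * np.sin(theta2)
        theta1 = np.arctan2(y, x) - np.arctan2(k2, k1)          # shoulder angle
        return theta1, theta2

    t1, t2 = inverse_kinematics(0.35, 0.25)
    # Forward check: the recovered joint angles reproduce the observed hand position.
    fx = 0.3 * np.cos(t1) + 0.25 * np.cos(t1 + t2)
    fy = 0.3 * np.sin(t1) + 0.25 * np.sin(t1 + t2)
    print(round(fx, 3), round(fy, 3))
    ```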

  8. Thermal radiation view factor: Methods, accuracy and computer-aided procedures

    NASA Technical Reports Server (NTRS)

    Kadaba, P. V.

    1982-01-01

    Computer-aided thermal analysis programs that predict whether orbiting equipment will remain within a predetermined acceptable temperature range, prior to stationing the equipment in various attitudes with respect to the Sun and the Earth, were examined. The complexity of the surface geometries suggests the use of numerical schemes for the determination of these view factors. Basic definitions and the standard methods that form the basis for various digital-computer and numerical methods are presented. The physical model and the mathematical methods on which a number of available programs are built are summarized. The strengths and weaknesses of the methods employed, the accuracy of the calculations, and the time required for computations are evaluated. The situations where accuracy is important for energy calculations are identified, and methods to save computational time are proposed. A guide to the best use of the available programs at several centers and future choices for the efficient use of digital computers are included in the recommendations.
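
    For reference, the basic definition underlying all of these methods is the view factor between two finite surfaces,

    $$
    F_{1 \to 2} \;=\; \frac{1}{A_1} \int_{A_1}\!\!\int_{A_2}
    \frac{\cos\theta_1 \,\cos\theta_2}{\pi S^2}\; dA_2\, dA_1,
    $$

    where \(\theta_1\) and \(\theta_2\) are the angles between the line of length \(S\) joining the two area elements and their respective surface normals; the numerical schemes discussed above differ mainly in how they evaluate this double integral for complex geometries.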

  9. Reading Emotion From Mouse Cursor Motions: Affective Computing Approach.

    PubMed

    Yamauchi, Takashi; Xiao, Kunchen

    2018-04-01

    Affective computing research has advanced emotion recognition systems using facial expressions, voices, gaits, and physiological signals, yet these methods are often impractical. This study integrates mouse cursor motion analysis into affective computing and investigates the idea that movements of the computer cursor can provide information about the emotion of the computer user. We extracted 16-26 trajectory features during a choice-reaching task and examined the link between emotion and cursor motions. Positive or negative emotions were induced in participants by music, film clips, or emotional pictures, and participants indicated their emotions with questionnaires. Our 10-fold cross-validation analysis shows that statistical models formed from "known" participants (training data) could predict nearly 10%-20% of the variance of positive affect and attentiveness ratings of "unknown" participants, suggesting that cursor movement patterns such as the area under the curve and direction changes help infer emotions of computer users. Copyright © 2017 Cognitive Science Society, Inc.
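
    Two of the cursor-trajectory features mentioned above can be computed in a few lines of code. This is an illustrative sketch, not the authors' 16-26 feature pipeline; the particular definitions of the area-under-curve feature (deviation from the straight start-to-end line) and of direction changes are assumptions of this example.

    ```python
    import numpy as np

    def cursor_features(xs, ys):
        xs, ys = np.asarray(xs, float), np.asarray(ys, float)
        # Signed deviation of each sample from the straight line start -> end.
        p0, p1 = np.array([xs[0], ys[0]]), np.array([xs[-1], ys[-1]])
        d = p1 - p0
        deviation = ((xs - p0[0]) * d[1] - (ys - p0[1]) * d[0]) / np.hypot(d[0], d[1])
        auc = float(np.sum(np.abs(deviation)))      # AUC-style summed deviation
        # Direction changes: sign flips of the horizontal velocity.
        vx = np.diff(xs)
        flips = int(np.sum(np.diff(np.sign(vx[vx != 0])) != 0))
        return {"auc": auc, "x_direction_changes": flips}

    print(cursor_features([0, 1, 3, 2, 4, 6], [0, 2, 3, 5, 6, 8]))
    ```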

  10. Modelling of pathologies of the nervous system by the example of computational and electronic models of elementary nervous systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shumilov, V. N., E-mail: vnshumilov@rambler.ru; Syryamkin, V. I., E-mail: maximus70sir@gmail.com; Syryamkin, M. V., E-mail: maximus70sir@gmail.com

    The paper puts forward principles of action of devices operating similarly to the nervous system and the brain of biological systems. We propose an alternative method of studying diseases of the nervous system, which may significantly influence prevention, medical treatment, or at least retardation of development of these diseases. This alternative is to use computational and electronic models of the nervous system. Within this approach, we represent the brain in the form of a huge electrical circuit composed of active units, namely, neuron-like units and connections between them. As a result, we created computational and electronic models of elementary nervous systems, which are based on the principles of functioning of biological nervous systems that we have put forward. Our models demonstrate reactions to external stimuli and their change similarly to the behavior of simplest biological organisms. The models possess the ability of self-training and retraining in real time without human intervention and switching operation/training modes. In our models, training and memorization take place constantly under the influence of stimuli on the organism. Training is without any interruption and switching operation modes. Training and formation of new reflexes occur by means of formation of new connections between excited neurons, between which formation of connections is physically possible. Connections are formed without external influence. They are formed under the influence of local causes. Connections are formed between outputs and inputs of two neurons, when the difference between output and input potentials of excited neurons exceeds a value sufficient to form a new connection. On these grounds, we suggest that the proposed principles truly reflect mechanisms of functioning of biological nervous systems and the brain. In order to confirm the correspondence of the proposed principles to biological nature, we carry out experiments for the study of processes of formation of connections between neurons in simplest biological objects. Based on the correspondence of function of the created models to function of biological nervous systems we suggest the use of computational and electronic models of the brain for the study of its function under normal and pathological conditions, because operating principles of the models are built on principles imitating the function of biological nervous systems and the brain.

  11. Modelling of pathologies of the nervous system by the example of computational and electronic models of elementary nervous systems

    NASA Astrophysics Data System (ADS)

    Shumilov, V. N.; Syryamkin, V. I.; Syryamkin, M. V.

    2015-11-01

    The paper puts forward principles of action of devices operating similarly to the nervous system and the brain of biological systems. We propose an alternative method of studying diseases of the nervous system, which may significantly influence prevention, medical treatment, or at least retardation of development of these diseases. This alternative is to use computational and electronic models of the nervous system. Within this approach, we represent the brain in the form of a huge electrical circuit composed of active units, namely, neuron-like units and connections between them. As a result, we created computational and electronic models of elementary nervous systems, which are based on the principles of functioning of biological nervous systems that we have put forward. Our models demonstrate reactions to external stimuli and their change similarly to the behavior of simplest biological organisms. The models possess the ability of self-training and retraining in real time without human intervention and switching operation/training modes. In our models, training and memorization take place constantly under the influence of stimuli on the organism. Training is without any interruption and switching operation modes. Training and formation of new reflexes occur by means of formation of new connections between excited neurons, between which formation of connections is physically possible. Connections are formed without external influence. They are formed under the influence of local causes. Connections are formed between outputs and inputs of two neurons, when the difference between output and input potentials of excited neurons exceeds a value sufficient to form a new connection. On these grounds, we suggest that the proposed principles truly reflect mechanisms of functioning of biological nervous systems and the brain. In order to confirm the correspondence of the proposed principles to biological nature, we carry out experiments for the study of processes of formation of connections between neurons in simplest biological objects. Based on the correspondence of function of the created models to function of biological nervous systems we suggest the use of computational and electronic models of the brain for the study of its function under normal and pathological conditions, because operating principles of the models are built on principles imitating the function of biological nervous systems and the brain.

  12. How accurate is automated gap filling of metabolic models?

    PubMed

    Karp, Peter D; Weaver, Daniel; Latendresse, Mario

    2018-06-19

    Reaction gap filling is a computational technique for proposing the addition of reactions to genome-scale metabolic models to permit those models to run correctly. Gap filling completes what are otherwise incomplete models that lack fully connected metabolic networks. The models are incomplete because they are derived from annotated genomes in which not all enzymes have been identified. Here we compare the results of applying an automated likelihood-based gap filler within the Pathway Tools software with the results of manually gap filling the same metabolic model. Both gap-filling exercises were applied to the same genome-derived qualitative metabolic reconstruction for Bifidobacterium longum subsp. longum JCM 1217, and to the same modeling conditions: anaerobic growth under four nutrients producing 53 biomass metabolites. The solution computed by the gap-filling program GenDev contained 12 reactions, but closer examination showed that the solution was not minimal; two of the twelve reactions could be removed to yield a set of ten reactions that enable model growth. The manually curated solution contained 13 reactions, eight of which were shared with the 12-reaction computed solution. Thus, GenDev achieved a recall of 61.5% and a precision of 66.6%. These results suggest that although computational gap fillers are populating metabolic models with significant numbers of correct reactions, automatically gap-filled metabolic models also contain significant numbers of incorrect reactions. Our conclusion is that manual curation of gap-filler results is needed to obtain high-accuracy models. Many of the differences between the manual and automatic solutions resulted from using expert biological knowledge to direct the choice of reactions within the curated solution, such as reactions specific to the anaerobic lifestyle of B. longum.
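
    The quoted recall and precision follow directly from the overlap between the two solutions, treating the 13-reaction manual solution as ground truth and the 12-reaction computed solution as the prediction:

    ```python
    # Worked check of the figures quoted above (8 reactions shared).
    shared, automated, manual = 8, 12, 13
    precision = shared / automated   # 8/12 = 0.666...
    recall = shared / manual         # 8/13 = 0.615...
    print(f"precision={precision:.1%}, recall={recall:.1%}")
    ```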

  13. Computing with Neural Synchrony

    PubMed Central

    Brette, Romain

    2012-01-01

    Neurons communicate primarily with spikes, but most theories of neural computation are based on firing rates. Yet, many experimental observations suggest that the temporal coordination of spikes plays a role in sensory processing. Among potential spike-based codes, synchrony appears as a good candidate because neural firing and plasticity are sensitive to fine input correlations. However, it is unclear what role synchrony may play in neural computation, and what functional advantage it may provide. With a theoretical approach, I show that the computational interest of neural synchrony appears when neurons have heterogeneous properties. In this context, the relationship between stimuli and neural synchrony is captured by the concept of synchrony receptive field, the set of stimuli which induce synchronous responses in a group of neurons. In a heterogeneous neural population, it appears that synchrony patterns represent structure or sensory invariants in stimuli, which can then be detected by postsynaptic neurons. The required neural circuitry can spontaneously emerge with spike-timing-dependent plasticity. Using examples in different sensory modalities, I show that this allows simple neural circuits to extract relevant information from realistic sensory stimuli, for example to identify a fluctuating odor in the presence of distractors. This theory of synchrony-based computation shows that relative spike timing may indeed have computational relevance, and suggests new types of neural network models for sensory processing with appealing computational properties. PMID:22719243

  14. Assessing the Role of Inhibition in Stabilizing Neocortical Networks Requires Large-Scale Perturbation of the Inhibitory Population

    PubMed Central

    Mrsic-Flogel, Thomas D.

    2017-01-01

    Neurons within cortical microcircuits are interconnected with recurrent excitatory synaptic connections that are thought to amplify signals (Douglas and Martin, 2007), form selective subnetworks (Ko et al., 2011), and aid feature discrimination. Strong inhibition (Haider et al., 2013) counterbalances excitation, enabling sensory features to be sharpened and represented by sparse codes (Willmore et al., 2011). This balance between excitation and inhibition makes it difficult to assess the strength, or gain, of recurrent excitatory connections within cortical networks, which is key to understanding their operational regime and the computations that they perform. Networks that combine an unstable high-gain excitatory population with stabilizing inhibitory feedback are known as inhibition-stabilized networks (ISNs) (Tsodyks et al., 1997). Theoretical studies using reduced network models predict that ISNs produce paradoxical responses to perturbation, but experimental perturbations failed to find evidence for ISNs in cortex (Atallah et al., 2012). Here, we reexamined this question by investigating how cortical network models consisting of many neurons behave after perturbations and found that results obtained from reduced network models fail to predict responses to perturbations in more realistic networks. Our models predict that a large proportion of the inhibitory network must be perturbed to reliably detect an ISN regime robustly in cortex. We propose that wide-field optogenetic suppression of inhibition under promoters targeting a large fraction of inhibitory neurons may provide a perturbation of sufficient strength to reveal the operating regime of cortex. Our results suggest that detailed computational models of optogenetic perturbations are necessary to interpret the results of experimental paradigms. SIGNIFICANCE STATEMENT Many useful computational mechanisms proposed for cortex require local excitatory recurrence to be very strong, such that local inhibitory feedback is necessary to avoid epileptiform runaway activity (an “inhibition-stabilized network” or “ISN” regime). However, recent experimental results suggest that this regime may not exist in cortex. We simulated activity perturbations in cortical networks of increasing realism and found that, to detect ISN-like properties in cortex, large proportions of the inhibitory population must be perturbed. Current experimental methods for inhibitory perturbation are unlikely to satisfy this requirement, implying that existing experimental observations are inconclusive about the computational regime of cortex. Our results suggest that new experimental designs targeting a majority of inhibitory neurons may be able to resolve this question. PMID:29074575
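
    A reduced two-population rate model of the kind the reduced-model studies use can reproduce the paradoxical response in a few lines; the weights and drives below are illustrative assumptions chosen to place the model in the inhibition-stabilized regime, not parameters from the paper.

    ```python
    # In an ISN (strong recurrent excitation, w_ee > 1), adding excitatory drive
    # to the inhibitory population paradoxically lowers its steady-state rate.
    import numpy as np

    def steady_state(extra_drive_to_I, w_ee=2.5, w_ei=3.0, w_ie=3.0, w_ii=1.0,
                     e_drive=6.0, i_drive=1.0, tau=10.0, dt=0.1, steps=20000):
        relu = lambda x: np.maximum(x, 0.0)
        E, I = 1.0, 1.0
        for _ in range(steps):
            dE = (-E + relu(w_ee * E - w_ei * I + e_drive)) / tau
            dI = (-I + relu(w_ie * E - w_ii * I + i_drive + extra_drive_to_I)) / tau
            E, I = E + dt * dE, I + dt * dI
        return E, I

    E0, I0 = steady_state(0.0)
    E1, I1 = steady_state(1.0)   # perturb the inhibitory population
    print(f"baseline I={I0:.3f}, perturbed I={I1:.3f}")  # perturbed I is lower in an ISN
    ```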

  15. Effect of the mitral valve on diastolic flow patterns

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seo, Jung Hee; Vedula, Vijay; Mittal, Rajat, E-mail: mittal@jhu.edu

    2014-12-15

    The leaflets of the mitral valve interact with the mitral jet and significantly impact diastolic flow patterns, but the effect of mitral valve morphology and kinematics on diastolic flow and its implications for left ventricular function have not been clearly delineated. In the present study, we employ computational hemodynamic simulations to understand the effect of mitral valve leaflets on diastolic flow. A computational model of the left ventricle is constructed based on a high-resolution contrast computed-tomography scan, and a physiologically inspired model of the mitral valve leaflets is synthesized from morphological and echocardiographic data. Simulations are performed with a diode-type valve model as well as the physiological mitral valve model in order to delineate the effect of mitral-valve leaflets on the intraventricular flow. The study suggests that a normal physiological mitral valve promotes the formation of a circulatory (or "looped") flow pattern in the ventricle. The mitral valve leaflets also increase the strength of the apical flow, thereby enhancing apical washout and mixing of ventricular blood. The implications of these findings on ventricular function as well as ventricular flow models are discussed.

  16. Nonlinear Visco-Elastic Response of Composites via Micro-Mechanical Models

    NASA Technical Reports Server (NTRS)

    Gates, Thomas S.; Sridharan, Srinivasan

    2005-01-01

    Micro-mechanical models for a study of nonlinear visco-elastic response of composite laminae are developed and their performance compared. A single integral constitutive law proposed by Schapery and subsequently generalized to multi-axial states of stress is utilized in the study for the matrix material. This is used in conjunction with a computationally facile scheme in which hereditary strains are computed using a recursive relation suggested by Henriksen. Composite response is studied using two competing micro-models, viz. a simplified Square Cell Model (SSCM) and a Finite Element based self-consistent Cylindrical Model (FECM). The algorithm is developed assuming that the material response computations are carried out in a module attached to a general purpose finite element program used for composite structural analysis. It is shown that the SSCM as used in investigations of material nonlinearity can involve significant errors in the prediction of transverse Young's modulus and shear modulus. The errors in the elastic strains thus predicted are of the same order of magnitude as the creep strains accruing due to visco-elasticity. The FECM on the other hand does appear to perform better both in the prediction of elastic constants and the study of creep response.

  17. Quantum protocols within Spekkens' toy model

    NASA Astrophysics Data System (ADS)

    Disilvestro, Leonardo; Markham, Damian

    2017-05-01

    Quantum mechanics is known to provide significant improvements in information processing tasks when compared to classical models. These advantages range from computational speedups to security improvements. A key question is where these advantages come from. The toy model developed by Spekkens [R. W. Spekkens, Phys. Rev. A 75, 032110 (2007), 10.1103/PhysRevA.75.032110] mimics many of the features of quantum mechanics, such as entanglement and no cloning, regarded as being important in this regard, despite being a local hidden variable theory. In this work, we study several protocols within Spekkens' toy model where we see it can also mimic the advantages and limitations shown in the quantum case. We first provide explicit proofs for the impossibility of toy bit commitment and the existence of a toy error correction protocol and consequent k -threshold secret sharing. Then, defining a toy computational model based on the quantum one-way computer, we prove the existence of blind and verified protocols. Importantly, these two last quantum protocols are known to achieve a better-than-classical security. Our results suggest that such quantum improvements need not arise from any Bell-type nonlocality or contextuality, but rather as a consequence of steering correlations.

  18. Intelligent Context-Aware and Adaptive Interface for Mobile LBS

    PubMed Central

    Liu, Yanhong

    2015-01-01

    Context-aware user interfaces play an important role in many human-computer interaction tasks of location based services. Although spatial models for context-aware systems have been studied extensively, how to locate specific spatial information for users is still not well resolved, which is important in the mobile environment where location based services users are impeded by device limitations. Better context-aware human-computer interaction models of mobile location based services are needed not just to predict performance outcomes, such as whether people will be able to find the information needed to complete a human-computer interaction task, but also to understand the human processes involved in spatial queries, which will in turn inform the detailed design of better user interfaces in mobile location based services. In this study, a context-aware adaptive model for the mobile location based services interface is proposed, which contains three major sections: purpose, adjustment, and adaptation. Based on this model we try to describe clearly the process of user operation and interface adaptation through the dynamic interaction between users and the interface. We then show how the model handles users' demands in a complicated environment and demonstrate its feasibility through experimental results. PMID:26457077

  19. Real-time simulation of biological soft tissues: a PGD approach.

    PubMed

    Niroomandi, S; González, D; Alfaro, I; Bordeu, F; Leygue, A; Cueto, E; Chinesta, F

    2013-05-01

    We introduce here a novel approach for the numerical simulation of nonlinear, hyperelastic soft tissues at the kilohertz feedback rates necessary for haptic rendering. This approach is based upon the use of proper generalized decomposition techniques, a generalization of PODs. Proper generalized decomposition techniques can be considered as a means of a priori model order reduction and provide a physics-based meta-model without the need for prior computer experiments. The suggested strategy is thus composed of an offline phase, in which a general meta-model is computed, and an online evaluation phase in which the results are obtained in real time. Results are provided that show the potential of the proposed technique, together with benchmark tests that show the accuracy of the method. Copyright © 2013 John Wiley & Sons, Ltd.

  20. The role of synergies within generative models of action execution and recognition: A computational perspective. Comment on "Grasping synergies: A motor-control approach to the mirror neuron mechanism" by A. D'Ausilio et al.

    NASA Astrophysics Data System (ADS)

    Pezzulo, Giovanni; Donnarumma, Francesco; Iodice, Pierpaolo; Prevete, Roberto; Dindo, Haris

    2015-03-01

    Controlling the body - given its huge number of degrees of freedom - poses severe computational challenges. Mounting evidence suggests that the brain alleviates this problem by exploiting "synergies", or patterns of muscle activities (and/or movement dynamics and kinematics) that can be combined to control action, rather than controlling individual muscles of joints [1-10].

  1. Parallel and distributed computation for fault-tolerant object recognition

    NASA Technical Reports Server (NTRS)

    Wechsler, Harry

    1988-01-01

    The distributed associative memory (DAM) model is suggested for distributed and fault-tolerant computation as it relates to object recognition tasks. The fault-tolerance is with respect to geometrical distortions (scale and rotation), noisy inputs, occlusion/overlap, and memory faults. An experimental system was developed for fault-tolerant structure recognition which shows the feasibility of such an approach. The approach is further extended to the problem of multisensory data integration and applied successfully to the recognition of colored polyhedral objects.
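
    A minimal sketch of a distributed associative memory in this spirit (a pseudoinverse-based hetero-associative store, with sizes, patterns, and fault rate as illustrative assumptions, not the original DAM implementation): all key-value associations are held in one weight matrix, and recall degrades gradually under noisy keys and zeroed weights rather than failing outright.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    keys = rng.standard_normal((64, 10))      # 10 stored keys, 64-dimensional
    values = rng.standard_normal((32, 10))    # associated 32-dimensional values

    # Store: all associations live in one weight matrix (distributed representation).
    M = values @ np.linalg.pinv(keys)

    # Recall with a clean key, then with a noisy key and 10% of the weights zeroed ("faults").
    clean_err = np.linalg.norm(M @ keys[:, 3] - values[:, 3])
    noisy_key = keys[:, 3] + 0.1 * rng.standard_normal(64)
    faulty_M = M * (rng.random(M.shape) > 0.1)
    degraded_err = np.linalg.norm(faulty_M @ noisy_key - values[:, 3])
    print(clean_err, degraded_err)   # recall degrades gradually rather than failing
    ```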

  2. Using computer simulations to facilitate conceptual understanding of electromagnetic induction

    NASA Astrophysics Data System (ADS)

    Lee, Yu-Fen

    This study investigated the use of computer simulations to facilitate conceptual understanding in physics. The use of computer simulations in the present study was grounded in a conceptual framework drawn from findings related to the use of computer simulations in physics education. To achieve the goal of effective utilization of computers for physics education, I first reviewed studies pertaining to computer simulations in physics education, categorized by three different learning frameworks, and studies comparing the effects of different simulation environments. My intent was to identify the learning context and factors for successful use of computer simulations in past studies and to learn from the studies which did not obtain a significant result. Based on the analysis of the reviewed literature, I proposed effective approaches to integrate computer simulations in physics education. These approaches are consistent with well-established education principles such as those suggested by How People Learn (Bransford, Brown, Cocking, Donovan, & Pellegrino, 2000). These research-based approaches to integrating computer simulations in physics education form a learning framework called Concept Learning with Computer Simulations (CLCS) in the current study. The second component of this study was to examine the CLCS learning framework empirically. The participants were recruited from a public high school in Beijing, China. All participating students were randomly assigned to two groups, the experimental (CLCS) group and the control (TRAD) group. Research-based computer simulations developed by the physics education research group at the University of Colorado at Boulder were used to tackle common conceptual difficulties in learning electromagnetic induction. While interacting with the computer simulations, CLCS students were asked to answer reflective questions designed to stimulate qualitative reasoning and explanation. After receiving model reasoning online, students were asked to submit their revised answers electronically. Students in the TRAD group were not granted access to the CLCS material and followed their normal classroom routine. At the end of the study, both the CLCS and TRAD students took a post-test. Questions on the post-test were divided into "what" questions, "how" questions, and an open response question. Analysis of students' post-test performance showed mixed results. While the TRAD students scored higher on the "what" questions, the CLCS students scored higher on the "how" questions and the open response question. This result suggested that more TRAD students knew what kinds of conditions may or may not cause electromagnetic induction without understanding how electromagnetic induction works. Analysis of the CLCS students' learning also suggested that frequent disruption and technical trouble might pose threats to the effectiveness of the CLCS learning framework. Despite the mixed results of students' post-test performance, the CLCS learning framework revealed some limitations in promoting conceptual understanding in physics. Improvement can be made by providing students with the background knowledge necessary to understand model reasoning and by combining the CLCS learning framework with other learning frameworks to promote integration of various physics concepts. In addition, the reflective questions in the CLCS learning framework may be refined to better address students' difficulties. Limitations of the study, as well as suggestions for future research, are also presented in this study.

  3. Recent developments in rotary-wing aerodynamic theory

    NASA Technical Reports Server (NTRS)

    Johnson, W.

    1986-01-01

    Current progress in the computational analysis of rotary-wing flowfields is surveyed, and some typical results are presented in graphs. Topics examined include potential theory, rotating coordinate systems, lifting-surface theory (moving singularity, fixed wing, and rotary wing), panel methods (surface singularity representations, integral equations, and compressible flows), transonic theory (the small-disturbance equation), wake analysis (hovering rotor-wake models and transonic blade-vortex interaction), limitations on computational aerodynamics, and viscous-flow methods (dynamic-stall theories and lifting-line theory). It is suggested that the present algorithms and advanced computers make it possible to begin working toward the ultimate goal of turbulent Navier-Stokes calculations for an entire rotorcraft.

  4. Computational aspects in mechanical modeling of the articular cartilage tissue.

    PubMed

    Mohammadi, Hadi; Mequanint, Kibret; Herzog, Walter

    2013-04-01

    This review focuses on the modeling of articular cartilage (at the tissue level), chondrocyte mechanobiology (at the cell level) and a combination of both in a multiscale computation scheme. The primary objective is to evaluate the advantages and disadvantages of conventional models implemented to study the mechanics of the articular cartilage tissue and chondrocytes. From monophasic material models as the simplest form to more complicated multiscale theories, these approaches have been frequently used to model articular cartilage and have contributed significantly to modeling joint mechanics, addressing and resolving numerous issues regarding cartilage mechanics and function. It should be noted that care is needed when using different modeling approaches, as the choice of the model limits the applications available. In this review, we discuss the conventional models applicable to some of the mechanical aspects of articular cartilage such as lubrication, swelling pressure and chondrocyte mechanics and address some of the issues associated with the current modeling approaches. We then suggest future pathways for a more realistic modeling strategy as applied to the simulation of the mechanics of the cartilage tissue using multiscale and parallelized finite element methods.

  5. Computer network environment planning and analysis

    NASA Technical Reports Server (NTRS)

    Dalphin, John F.

    1989-01-01

    The GSFC Computer Network Environment provides a broadband RF cable between campus buildings and ethernet spines in buildings for the interlinking of Local Area Networks (LANs). This system provides terminal and computer linkage among host and user systems thereby providing E-mail services, file exchange capability, and certain distributed computing opportunities. The Environment is designed to be transparent and supports multiple protocols. Networking at Goddard has a short history and has been under coordinated control of a Network Steering Committee for slightly more than two years; network growth has been rapid with more than 1500 nodes currently addressed and greater expansion expected. A new RF cable system with a different topology is being installed during summer 1989; consideration of a fiber optics system for the future will begin soon. Summer study was directed toward Network Steering Committee operation and planning plus consideration of Center Network Environment analysis and modeling. Biweekly Steering Committee meetings were attended to learn the background of the network and the concerns of those managing it. Suggestions for historical data gathering have been made to support future planning and modeling. Data Systems Dynamic Simulator, a simulation package developed at NASA and maintained at GSFC, was studied as a possible modeling tool for the network environment. A modeling concept based on a hierarchical model was hypothesized for further development. Such a model would allow input of newly updated parameters and would provide an estimation of the behavior of the network.

  6. Neural modeling and functional neuroimaging.

    PubMed

    Horwitz, B; Sporns, O

    1994-01-01

    Two research areas that so far have had little interaction with one another are functional neuroimaging and computational neuroscience. The application of computational models and techniques to the inherently rich data sets generated by "standard" neurophysiological methods has proven useful for interpreting these data sets and for providing predictions and hypotheses for further experiments. We suggest that both theory- and data-driven computational modeling of neuronal systems can help to interpret data generated by functional neuroimaging methods, especially those used with human subjects. In this article, we point out four sets of questions, addressable by computational neuroscientists, whose answers would be of value and interest to those who perform functional neuroimaging. The first set consists of determining the neurobiological substrate of the signals measured by functional neuroimaging. The second set concerns developing systems-level models of functional neuroimaging data. The third set of questions involves integrating functional neuroimaging data across modalities, with a particular emphasis on relating electromagnetic with hemodynamic data. The last set asks how one can relate systems-level models to those at the neuronal and neural ensemble levels. We feel that there are ample reasons to link functional neuroimaging and neural modeling, and that combining the results from the two disciplines will result in furthering our understanding of the central nervous system. © 1994 Wiley-Liss, Inc. This article is a US Government work and, as such, is in the public domain in the United States of America. Copyright © 1994 Wiley-Liss, Inc.

  7. Hybrid soft computing systems for electromyographic signals analysis: a review.

    PubMed

    Xie, Hong-Bo; Guo, Tianruo; Bai, Siwei; Dokos, Socrates

    2014-02-03

    Electromyographic (EMG) signals are bio-signals collected from human skeletal muscle. Analysis of EMG signals has been widely used to detect human movement intent, control various human-machine interfaces, diagnose neuromuscular diseases, and model the neuromusculoskeletal system. With the advances of artificial intelligence and soft computing, many sophisticated techniques have been proposed for this purpose. Hybrid soft computing systems (HSCS), which integrate these different techniques, aim to further improve the effectiveness, efficiency, and accuracy of EMG analysis. This paper reviews and compares key combinations of neural networks, support vector machines, fuzzy logic, evolutionary computing, and swarm intelligence for EMG analysis. Our suggestions on the possible future development of HSCS in EMG analysis are also given in terms of basic soft computing techniques, further combinations of these techniques, and their other applications in EMG analysis.
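
    One simple pairing of the techniques surveyed above, sketched under illustrative assumptions (synthetic rest-versus-contraction signals and a small classical feature set), is hand-crafted EMG features feeding a support vector machine; it is meant only to show the shape of such a pipeline, not any specific hybrid system from the review.

    ```python
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(1)

    def features(window):
        rms = np.sqrt(np.mean(window ** 2))                     # root mean square
        zero_crossings = np.sum(np.diff(np.sign(window)) != 0)  # sign changes
        waveform_length = np.sum(np.abs(np.diff(window)))       # cumulative change
        return [rms, zero_crossings, waveform_length]

    def synthetic_emg(active, n=256):
        """Hypothetical EMG window: higher amplitude noise during contraction."""
        amplitude = 1.0 if active else 0.2
        return amplitude * rng.standard_normal(n)

    X = [features(synthetic_emg(active)) for active in (0, 1) * 50]
    y = [0, 1] * 50
    clf = SVC(kernel="rbf").fit(X[:80], y[:80])
    print("held-out accuracy:", clf.score(X[80:], y[80:]))
    ```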

  8. Hybrid soft computing systems for electromyographic signals analysis: a review

    PubMed Central

    2014-01-01

    Electromyographic (EMG) signals are bio-signals collected from human skeletal muscle. Analysis of EMG signals has been widely used to detect human movement intent, control various human-machine interfaces, diagnose neuromuscular diseases, and model the neuromusculoskeletal system. With the advances of artificial intelligence and soft computing, many sophisticated techniques have been proposed for this purpose. Hybrid soft computing systems (HSCS), which integrate these different techniques, aim to further improve the effectiveness, efficiency, and accuracy of EMG analysis. This paper reviews and compares key combinations of neural networks, support vector machines, fuzzy logic, evolutionary computing, and swarm intelligence for EMG analysis. Our suggestions on the possible future development of HSCS in EMG analysis are also given in terms of basic soft computing techniques, further combinations of these techniques, and their other applications in EMG analysis. PMID:24490979

  9. Computational neurobiology is a useful tool in translational neurology: the example of ataxia

    PubMed Central

    Brown, Sherry-Ann; McCullough, Louise D.; Loew, Leslie M.

    2014-01-01

    Hereditary ataxia, or motor incoordination, affects approximately 150,000 Americans and hundreds of thousands of individuals worldwide with onset from as early as mid-childhood. Affected individuals exhibit dysarthria, dysmetria, action tremor, and diadochokinesia. In this review, we consider an array of computational studies derived from experimental observations relevant to human neuropathology. A survey of related studies illustrates the impact of integrating clinical evidence with data from mouse models and computational simulations. Results from these studies may help explain findings in mice, and after extensive laboratory study, may ultimately be translated to ataxic individuals. This inquiry lays a foundation for using computation to understand neurobiochemical and electrophysiological pathophysiology of spinocerebellar ataxias and may contribute to development of therapeutics. The interdisciplinary analysis suggests that computational neurobiology can be an important tool for translational neurology. PMID:25653585

  10. Micrometric precision of prosthetic dental crowns obtained by optical scanning and computer-aided designing/computer-aided manufacturing system

    NASA Astrophysics Data System (ADS)

    das Neves, Flávio Domingues; de Almeida Prado Naves Carneiro, Thiago; do Prado, Célio Jesus; Prudente, Marcel Santana; Zancopé, Karla; Davi, Letícia Resende; Mendonça, Gustavo; Soares, Carlos José

    2014-08-01

    The current study evaluated prosthetic dental crowns obtained by optical scanning and a computer-aided designing/computer-aided manufacturing system using micro-computed tomography to compare the marginal fit. The virtual models were obtained with four different scanning surfaces: typodont (T), regular impressions (RI), master casts (MC), and powdered master casts (PMC). Five virtual models were obtained for each group. For each model, a crown was designed on the software and milled from feldspathic ceramic blocks. Micro-CT images were obtained for marginal gap measurements and the data were statistically analyzed by one-way analysis of variance followed by Tukey's test. The mean vertical misfit was T=62.6±65.2 μm; MC=60.4±38.4 μm; PMC=58.1±38.0 μm, and RI=89.8±62.8 μm. Considering a percentage of vertical marginal gap of up to 75 μm, the results were T=71.5%, RI=49.2%, MC=69.6%, and PMC=71.2%. The percentages of horizontal overextension were T=8.5%, RI=0%, MC=0.8%, and PMC=3.8%. Based on the results, virtual model acquisition by scanning the typodont (simulated mouth) or MC, with or without powder, showed acceptable values for the marginal gap. The higher result of marginal gap of the RI group suggests that it is preferable to scan this directly from the mouth or from MC.

  11. Addressing the translational dilemma: dynamic knowledge representation of inflammation using agent-based modeling.

    PubMed

    An, Gary; Christley, Scott

    2012-01-01

    Given the panoply of system-level diseases that result from disordered inflammation, such as sepsis, atherosclerosis, cancer, and autoimmune disorders, understanding and characterizing the inflammatory response is a key target of biomedical research. Untangling the complex behavioral configurations associated with a process as ubiquitous as inflammation represents a prototype of the translational dilemma: the ability to translate mechanistic knowledge into effective therapeutics. A critical failure point in the current research environment is a throughput bottleneck at the level of evaluating hypotheses of mechanistic causality; these hypotheses represent the key step toward the application of knowledge for therapy development and design. Addressing the translational dilemma will require utilizing the ever-increasing power of computers and computational modeling to increase the efficiency of the scientific method in the identification and evaluation of hypotheses of mechanistic causality. More specifically, development needs to focus on facilitating the ability of non-computer trained biomedical researchers to utilize and instantiate their knowledge in dynamic computational models. This is termed "dynamic knowledge representation." Agent-based modeling is an object-oriented, discrete-event, rule-based simulation method that is well suited for biomedical dynamic knowledge representation. Agent-based modeling has been used in the study of inflammation at multiple scales. The ability of agent-based modeling to encompass multiple scales of biological process as well as spatial considerations, coupled with an intuitive modeling paradigm, suggest that this modeling framework is well suited for addressing the translational dilemma. This review describes agent-based modeling, gives examples of its applications in the study of inflammation, and introduces a proposed general expansion of the use of modeling and simulation to augment the generation and evaluation of knowledge by the biomedical research community at large.
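
    A minimal sketch of the rule-based, agent-based style described above, with entirely illustrative rules and parameters (it is not a published inflammation model): immune-cell agents wander a grid and clear a spreading "injury" signal.

    ```python
    import random

    SIZE, STEPS, N_CELLS = 20, 50, 15
    injury = {(10, 10), (10, 11), (11, 10)}                 # initial damaged sites
    cells = [(random.randrange(SIZE), random.randrange(SIZE)) for _ in range(N_CELLS)]

    for _ in range(STEPS):
        # Rule 1: each injured site spreads the injury to one random neighbour.
        for (x, y) in list(injury):
            dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
            injury.add(((x + dx) % SIZE, (y + dy) % SIZE))
        # Rule 2: each immune-cell agent takes a random step, then clears injury it lands on.
        cells = [((x + random.choice([-1, 0, 1])) % SIZE,
                  (y + random.choice([-1, 0, 1])) % SIZE) for (x, y) in cells]
        injury -= set(cells)

    print("injured sites remaining:", len(injury))
    ```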

  12. Mathematical modeling and computational prediction of cancer drug resistance.

    PubMed

    Sun, Xiaoqiang; Hu, Bin

    2017-06-23

    Diverse forms of resistance to anticancer drugs can lead to the failure of chemotherapy. Drug resistance is one of the most intractable issues for successfully treating cancer in current clinical practice. Effective clinical approaches that could counter drug resistance by restoring the sensitivity of tumors to the targeted agents are urgently needed. As numerous experimental results on resistance mechanisms have been obtained and a mass of high-throughput data has been accumulated, mathematical modeling and computational predictions using systematic and quantitative approaches have become increasingly important, as they can potentially provide deeper insights into resistance mechanisms, generate novel hypotheses or suggest promising treatment strategies for future testing. In this review, we first briefly summarize the current progress of experimentally revealed resistance mechanisms of targeted therapy, including genetic mechanisms, epigenetic mechanisms, posttranslational mechanisms, cellular mechanisms, microenvironmental mechanisms and pharmacokinetic mechanisms. Subsequently, we list several currently available databases and Web-based tools related to drug sensitivity and resistance. Then, we focus primarily on introducing some state-of-the-art computational methods used in drug resistance studies, including mechanism-based mathematical modeling approaches (e.g. molecular dynamics simulation, kinetic model of molecular networks, ordinary differential equation model of cellular dynamics, stochastic model, partial differential equation model, agent-based model, pharmacokinetic-pharmacodynamic model, etc.) and data-driven prediction methods (e.g. omics data-based conventional screening approach for node biomarkers, static network approach for edge biomarkers and module biomarkers, dynamic network approach for dynamic network biomarkers and dynamic module network biomarkers, etc.). Finally, we discuss several further questions and future directions for the use of computational methods for studying drug resistance, including inferring drug-induced signaling networks, multiscale modeling, drug combinations and precision medicine. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
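
    As a concrete example of one approach named above, an ordinary differential equation model of cellular dynamics, the following sketch simulates drug-sensitive and drug-resistant tumour subpopulations sharing a carrying capacity, with the drug killing only sensitive cells; all rates and initial values are illustrative assumptions.

    ```python
    import numpy as np

    def simulate(drug_dose, r_s=0.05, r_r=0.04, kill=0.08, K=1e9,
                 s0=1e6, r0=1e3, dt=0.1, days=300):
        """Forward-Euler integration of a two-population logistic growth model."""
        s, r = s0, r0
        for _ in range(int(days / dt)):
            crowding = 1.0 - (s + r) / K
            ds = r_s * s * crowding - kill * drug_dose * s   # drug kills sensitive cells
            dr = r_r * r * crowding                          # resistant cells unaffected
            s, r = max(s + dt * ds, 0.0), max(r + dt * dr, 0.0)
        return s, r

    for dose in (0.0, 1.0):
        s, r = simulate(dose)
        print(f"dose={dose}: sensitive={s:.3g}, resistant={r:.3g}")
    ```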

  13. Mind, Machine, and Creativity: An Artist's Perspective.

    PubMed

    Sundararajan, Louise

    2014-06-01

    Harold Cohen is a renowned painter who has developed a computer program, AARON, to create art. While AARON has been hailed as one of the most creative AI programs, Cohen consistently rejects the claims of machine creativity. Questioning the possibility for AI to model human creativity, Cohen suggests in so many words that the human mind takes a different route to creativity, a route that privileges the relational, rather than the computational, dimension of cognition. This unique perspective on the tangled web of mind, machine, and creativity is explored by an application of three relational models of the mind to an analysis of Cohen's talks and writings, which are available on his website: www.aaronshome.com.

  14. From systems biology to dynamical neuropharmacology: proposal for a new methodology.

    PubMed

    Erdi, P; Kiss, T; Tóth, J; Ujfalussy, B; Zalányi, L

    2006-07-01

    The concepts and methods of systems biology are extended to neuropharmacology in order to test and design drugs for the treatment of neurological and psychiatric disorders. Computational modelling that integrates compartmental neural modelling techniques with detailed kinetic descriptions of the pharmacological modulation of transmitter-receptor interaction is offered as a method to test the electrophysiological and behavioural effects of putative drugs. Moreover, an inverse method is suggested for controlling a neural system so as to realise a prescribed temporal pattern. In particular, as an application of the proposed new methodology, a computational platform is offered to analyse the generation and pharmacological modulation of theta rhythm related to anxiety.

  15. Baryon magnetic moments: Symmetries and relations

    NASA Astrophysics Data System (ADS)

    Parreño, Assumpta; Savage, Martin J.; Tiburzi, Brian C.; Wilhelm, Jonas; Chang, Emmanuel; Detmold, William; Orginos, Kostas

    2018-03-01

    Magnetic moments of the octet baryons are computed using lattice QCD in background magnetic fields, including the first treatment of the magnetically coupled Σ0-Λ system. Although the computations are performed for relatively large values of the up and down quark masses, we gain new insight into the symmetries and relations between magnetic moments by working at a three-flavor mass-symmetric point. While the spin-flavor symmetry in the large Nc limit of QCD is shared by the naïve constituent quark model, we find instances where quark model predictions are considerably favored over those emerging in the large Nc limit. We suggest further calculations that would shed light on the curious patterns of baryon magnetic moments.

  16. Mind, Machine, and Creativity: An Artist's Perspective

    PubMed Central

    Sundararajan, Louise

    2014-01-01

    Harold Cohen is a renowned painter who has developed a computer program, AARON, to create art. While AARON has been hailed as one of the most creative AI programs, Cohen consistently rejects the claims of machine creativity. Questioning the possibility for AI to model human creativity, Cohen suggests in so many words that the human mind takes a different route to creativity, a route that privileges the relational, rather than the computational, dimension of cognition. This unique perspective on the tangled web of mind, machine, and creativity is explored by an application of three relational models of the mind to an analysis of Cohen's talks and writings, which are available on his website: www.aaronshome.com. PMID:25541564

  17. Parameter estimation methods for gene circuit modeling from time-series mRNA data: a comparative study.

    PubMed

    Fan, Ming; Kuwahara, Hiroyuki; Wang, Xiaolei; Wang, Suojin; Gao, Xin

    2015-11-01

    Parameter estimation is a challenging computational problem in the reverse engineering of biological systems. Because advances in biotechnology have facilitated wide availability of time-series gene expression data, systematic parameter estimation of gene circuit models from such time-series mRNA data has become an important method for quantitatively dissecting the regulation of gene expression. By focusing on the modeling of gene circuits, we examine here the performance of three types of state-of-the-art parameter estimation methods: population-based methods, online methods and model-decomposition-based methods. Our results show that certain population-based methods are able to generate high-quality parameter solutions. The performance of these methods, however, is heavily dependent on the size of the parameter search space, and their computational requirements substantially increase as the size of the search space increases. In comparison, online methods and model-decomposition-based methods are computationally faster alternatives and are less dependent on the size of the search space. Among other things, our results show that a hybrid approach that augments computationally fast methods with local search as a subsequent refinement procedure can substantially increase the quality of their parameter estimates, to a level on par with the best solutions obtained from the population-based methods, while maintaining high computational speed. These results suggest that such hybrid methods can be a promising alternative to the more commonly used population-based methods for parameter estimation of gene circuit models when limited prior knowledge about the underlying regulatory mechanisms makes the parameter search space vastly large. © The Author 2015. Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
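
    The two-stage hybrid strategy highlighted in this abstract can be sketched in a few lines: a population-based global search (here SciPy's differential evolution, standing in for the paper's methods) followed by a fast local refinement. The one-species mRNA model, the synthetic data, and the cost function below are hypothetical placeholders, not the circuits studied in the paper.

```python
# Hypothetical sketch of the hybrid strategy: global population-based search
# followed by local refinement. The one-species mRNA model and the synthetic
# "measured" data below are illustrative placeholders, not the paper's setup.
import numpy as np
from scipy.integrate import odeint
from scipy.optimize import differential_evolution, minimize

t = np.linspace(0, 10, 50)

def circuit(x, t, k_syn, k_deg):
    # Minimal mRNA model: constant synthesis, first-order decay.
    return k_syn - k_deg * x

true_params = (2.0, 0.5)
data = odeint(circuit, 0.0, t, args=true_params).ravel()
data += np.random.normal(scale=0.05, size=data.shape)  # pseudo-measurement noise

def cost(p):
    # Sum-of-squares mismatch between simulated and "measured" time series.
    sim = odeint(circuit, 0.0, t, args=tuple(p)).ravel()
    return np.sum((sim - data) ** 2)

bounds = [(0.01, 10.0), (0.01, 5.0)]
# Stage 1: population-based global search (broad but computationally expensive).
coarse = differential_evolution(cost, bounds, maxiter=30, seed=1)
# Stage 2: fast local refinement starting from the best population member.
refined = minimize(cost, coarse.x, method="Nelder-Mead")
print("coarse:", coarse.x, "refined:", refined.x)
```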

  18. Structure and State of Stress of the Chilean Subduction Zone from Terrestrial and Satellite-Derived Gravity and Gravity Gradient Data

    NASA Astrophysics Data System (ADS)

    Gutknecht, B. D.; Götze, H.-J.; Jahr, T.; Jentzsch, G.; Mahatsente, R.; Zeumann, St.

    2014-11-01

    It is well known that the quality of gravity modelling of the Earth's lithosphere is heavily dependent on the limited number of available terrestrial gravity data. More recently, however, interest has grown within the geoscientific community to utilise the homogeneously measured satellite gravity and gravity gradient data for lithospheric scale modelling. Here, we present an interdisciplinary approach to determine the state of stress and rate of deformation in the Central Andean subduction system. We employed gravity data from terrestrial, satellite-based and combined sources using multiple methods to constrain stress, strain and gravitational potential energy (GPE). Well-constrained 3D density models, which were partly optimised using the combined regional gravity model IMOSAGA01C (Hosse et al. in Surv Geophys, 2014, this issue), were used as bases for the computation of stress anomalies on the top of the subducting oceanic Nazca plate and GPE relative to the base of the lithosphere. The geometries and physical parameters of the 3D density models were used for the computation of stresses and uplift rates in the dynamic modelling. The stress distributions, as derived from the static and dynamic modelling, reveal distinct positive anomalies of up to 80 MPa along the coastal Jurassic batholith belt. The anomalies correlate well with major seismicity in the shallow parts of the subduction system. Moreover, the pattern of stress distributions in the Andean convergent zone varies both along the north-south and west-east directions, suggesting that the continental fore-arc is highly segmented. Estimates of GPE show that the high Central Andes might be in a state of horizontal deviatoric tension. Models of gravity gradients from the Gravity field and steady-state Ocean Circulation Explorer (GOCE) satellite mission were used to compute Bouguer-like gradient anomalies at 8 km above sea level. The analysis suggests that data from GOCE add significant value to the interpretation of lithospheric structures, given that the appropriate topographic correction is applied.

  19. How should a speech recognizer work?

    PubMed

    Scharenborg, Odette; Norris, Dennis; Bosch, Louis; McQueen, James M

    2005-11-12

    Although researchers studying human speech recognition (HSR) and automatic speech recognition (ASR) share a common interest in how information processing systems (human or machine) recognize spoken language, there is little communication between the two disciplines. We suggest that this lack of communication follows largely from the fact that research in these related fields has focused on the mechanics of how speech can be recognized. In Marr's (1982) terms, emphasis has been on the algorithmic and implementational levels rather than on the computational level. In this article, we provide a computational-level analysis of the task of speech recognition, which reveals the close parallels between research concerned with HSR and ASR. We illustrate this relation by presenting a new computational model of human spoken-word recognition, built using techniques from the field of ASR, that, in contrast to existing models of HSR, recognizes words from real speech input. 2005 Lawrence Erlbaum Associates, Inc.

  20. Using computer simulation to improve high order thinking skills of physics teacher candidate students in Compton effect

    NASA Astrophysics Data System (ADS)

    Supurwoko; Cari; Sarwanto; Sukarmin; Fauzi, Ahmad; Faradilla, Lisa; Summa Dewi, Tiarasita

    2017-11-01

    Learning and teaching in physics often involve abstract concepts, which are difficult for students to understand and for teachers to teach. One such abstract topic is the Compton effect. The purpose of this research is to evaluate a computer simulation model of the Compton effect used to improve the higher-order thinking skills of physics teacher candidate students. This research is a case study. The subjects are physics education students who have attended Modern Physics lectures. Data were obtained through an essay test measuring students' higher-order thinking skills and questionnaires measuring students' responses. The results indicate that the computer simulation model can improve both students' higher-order thinking skills and their responses. On this basis, it is suggested that audiences use the simulation media in learning.

  1. Attentional Episodes in Visual Perception

    ERIC Educational Resources Information Center

    Wyble, Brad; Potter, Mary C.; Bowman, Howard; Nieuwenstein, Mark

    2011-01-01

    Is one's temporal perception of the world truly as seamless as it appears? This article presents a computationally motivated theory suggesting that visual attention samples information from temporal episodes (episodic simultaneous type/serial token model; Wyble, Bowman, & Nieuwenstein, 2009). Breaks between these episodes are punctuated by periods…

  2. Dendrites of dentate gyrus granule cells contribute to pattern separation by controlling sparsity

    PubMed Central

    Chavlis, Spyridon; Petrantonakis, Panagiotis C.

    2016-01-01

    The hippocampus plays a key role in pattern separation, the process of transforming similar incoming information into highly dissimilar, nonoverlapping representations. Sparse-firing granule cells (GCs) in the dentate gyrus (DG) have been proposed to undertake this computation, but little is known about which of their properties influence pattern separation. Dendritic atrophy has been reported in diseases associated with pattern separation deficits, suggesting a possible role for dendrites in this phenomenon. To investigate whether and how the dendrites of GCs contribute to pattern separation, we build a simplified, biologically relevant, computational model of the DG. Our model suggests that the presence of GC dendrites is associated with high pattern separation efficiency, while their atrophy leads to increased excitability and performance impairments. These impairments can be rescued by restoring GC sparsity to control levels through various manipulations. We predict that dendrites contribute to pattern separation as a mechanism for controlling sparsity. © 2016 The Authors Hippocampus Published by Wiley Periodicals, Inc. PMID:27784124
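
    A toy illustration of the sparsity-based mechanism the model points to: expanding two similar input patterns through a random projection and letting only the top-k "granule cells" fire reduces their overlap. The population sizes, input overlap, and sparsity level k below are arbitrary illustration choices, not the model's parameters.

```python
# Toy sketch of the link between sparsity and pattern separation: two similar
# inputs are projected onto a larger "granule cell" layer and only the top-k
# most driven cells fire (winner-take-all). Output overlap drops below input
# overlap. All sizes and the sparsity level k are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_gc, k = 100, 500, 25              # 25 active GCs => 5% sparsity

a = rng.random(n_in)
b = a.copy()
b[:20] = rng.random(20)                   # two inputs sharing 80% of entries

w = rng.random((n_gc, n_in))              # random input-to-GC weights

def sparse_code(x):
    drive = w @ x
    code = np.zeros(n_gc)
    code[np.argsort(drive)[-k:]] = 1.0    # only the k most driven cells fire
    return code

def overlap(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

print("input overlap :", round(overlap(a, b), 3))
print("output overlap:", round(overlap(sparse_code(a), sparse_code(b)), 3))
```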

  3. Sound texture perception via statistics of the auditory periphery: Evidence from sound synthesis

    PubMed Central

    McDermott, Josh H.; Simoncelli, Eero P.

    2014-01-01

    Rainstorms, insect swarms, and galloping horses produce “sound textures” – the collective result of many similar acoustic events. Sound textures are distinguished by temporal homogeneity, suggesting they could be recognized with time-averaged statistics. To test this hypothesis, we processed real-world textures with an auditory model containing filters tuned for sound frequencies and their modulations, and measured statistics of the resulting decomposition. We then assessed the realism and recognizability of novel sounds synthesized to have matching statistics. Statistics of individual frequency channels, capturing spectral power and sparsity, generally failed to produce compelling synthetic textures. However, combining them with correlations between channels produced identifiable and natural-sounding textures. Synthesis quality declined if statistics were computed from biologically implausible auditory models. The results suggest that sound texture perception is mediated by relatively simple statistics of early auditory representations, presumably computed by downstream neural populations. The synthesis methodology offers a powerful tool for their further investigation. PMID:21903084
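
    A rough sketch of the kind of time-averaged statistics described here, using a plain Butterworth filter bank and Hilbert envelopes in place of the authors' auditory model; the band edges, the signal, and the chosen statistics (channel moments plus cross-channel correlations) are illustrative.

```python
# Illustrative sketch: decompose a sound into frequency channels, extract
# envelopes, and summarize each channel with time-averaged statistics
# (marginal moments plus cross-channel correlations). The Butterworth bank
# below is only a stand-in for the auditory model used in the study.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

fs = 16000
sound = np.random.randn(fs * 2)  # placeholder for a recorded texture

band_edges = [(100, 300), (300, 900), (900, 2700), (2700, 7000)]  # illustrative
envelopes = []
for lo, hi in band_edges:
    sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
    band = sosfiltfilt(sos, sound)
    envelopes.append(np.abs(hilbert(band)))
env = np.array(envelopes)

# Time-averaged marginal statistics per channel: mean, variance, skewness.
mean = env.mean(axis=1)
var = env.var(axis=1)
skew = ((env - mean[:, None]) ** 3).mean(axis=1) / var ** 1.5

# Cross-channel envelope correlations, the class of statistic that made the
# synthesized textures identifiable in the experiments.
corr = np.corrcoef(env)
print(mean.shape, corr.shape)
```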

  4. Initiation and Modification of Reaction by Energy Addition: Kinetic and Transport Phenomena

    DTIC Science & Technology

    1990-10-01

    The ignition-delay time ranges from about 2 to 100 ps. The results of a computer-modeling calculation of the chemical kinetics suggest that the... [abstract truncated; the report's front matter lists sections on research objectives, analysis, and experiment, plus two appendices, "Evaluating a Simple Model for Laminar-Flame-Propagation Rates. I. Planar Geometry" and "II. Spherical Geometry".]

  5. Mathematical neuroscience: from neurons to circuits to systems.

    PubMed

    Gutkin, Boris; Pinto, David; Ermentrout, Bard

    2003-01-01

    Applications of mathematics and computational techniques to our understanding of neuronal systems are provided. Reduction of membrane models to simplified canonical models demonstrates how neuronal spike-time statistics follow from simple properties of neurons. Averaging over space allows one to derive a simple model for the whisker barrel circuit and use this to explain and suggest several experiments. Spatio-temporal pattern formation methods are applied to explain the patterns seen in the early stages of drug-induced visual hallucinations.

  6. The Application of Systems Analysis and Mathematical Models to the Study of Erythropoiesis During Space Flight

    NASA Technical Reports Server (NTRS)

    Leonard, J. I.

    1974-01-01

    Included in the report are: (1) review of the erythropoietic mechanisms; (2) an evaluation of existing models for the control of erythropoiesis; (3) a computer simulation of the model's response to hypoxia; (4) an hypothesis to explain observed decreases in red blood cell mass during weightlessness; (5) suggestions for further research; and (6) an assessment of the role that systems analysis can play in the Skylab hematological program.

  7. Learning and evolution in bacterial taxis: an operational amplifier circuit modeling the computational dynamics of the prokaryotic 'two component system' protein network.

    PubMed

    Di Paola, Vieri; Marijuán, Pedro C; Lahoz-Beltra, Rafael

    2004-01-01

    Adaptive behavior in unicellular organisms (i.e., bacteria) depends on highly organized networks of proteins governing purposefully the myriad of molecular processes occurring within the cellular system. For instance, bacteria are able to explore the environment within which they develop by utilizing the motility of their flagellar system as well as a sophisticated biochemical navigation system that samples the environmental conditions surrounding the cell, searching for nutrients or moving away from toxic substances or dangerous physical conditions. In this paper we discuss how proteins of the intervening signal transduction network could be modeled as artificial neurons, simulating the dynamical aspects of the bacterial taxis. The model is based on the assumption that, in some important aspects, proteins can be considered as processing elements or McCulloch-Pitts artificial neurons that transfer and process information from the bacterium's membrane surface to the flagellar motor. This simulation of bacterial taxis has been carried out on a hardware realization of a McCulloch-Pitts artificial neuron using an operational amplifier. Based on the behavior of the operational amplifier we produce a model of the interaction between CheY and FliM, elements of the prokaryotic two component system controlling chemotaxis, as well as a simulation of learning and evolution processes in bacterial taxis. On the one side, our simulation results indicate that, computationally, these protein 'switches' are similar to McCulloch-Pitts artificial neurons, suggesting a bridge between evolution and learning in dynamical systems at cellular and molecular levels and the evolutive hardware approach. On the other side, important protein 'tactilizing' properties are not tapped by the model, and this suggests further complexity steps to explore in the approach to biological molecular computing.
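
    The McCulloch-Pitts reading of a protein "switch" can be reduced to a single threshold unit, as in the sketch below; the weights, threshold, and CheY-P input levels are hypothetical values chosen only to illustrate the idea, not parameters of the operational-amplifier model.

```python
# Minimal sketch of the McCulloch-Pitts view of a protein "switch": a single
# threshold unit whose inputs stand for phosphorylated CheY binding to FliM.
# Weights, threshold, and input values are hypothetical.
import numpy as np

def mcculloch_pitts(inputs, weights, threshold):
    # Fires (1) when the weighted input sum reaches the threshold, read here
    # as "flagellar motor switches to clockwise rotation" (a tumble).
    return int(np.dot(inputs, weights) >= threshold)

chey_p_levels = np.array([0.2, 0.9])   # hypothetical CheY-P occupancy signals
weights = np.array([1.0, 1.0])
motor_switch = mcculloch_pitts(chey_p_levels, weights, threshold=1.0)
print("tumble" if motor_switch else "run")
```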

  8. Fatigue in isometric contraction in a single muscle fibre: a compartmental calcium ion flow model.

    PubMed

    Kothiyal, K P; Ibramsha, M

    1986-01-01

    Fatigue in muscle is a complex biological phenomenon which has so far eluded a definite explanation. Many biochemical and physiological models have been suggested in the literature to account for the decrement in the ability of muscle to sustain a given level of force for a long time. Some of these models have been critically analysed in this paper and are shown to be not able to explain all the experimental observations. A new compartmental model based on the intracellular calcium ion movement in muscle is proposed to study the mechanical responses of a muscle fibre. Computer simulation is performed to obtain model responses in isometric contraction to an impulse and a train of stimuli of long duration. The simulated curves have been compared with experimentally observed mechanical responses of the semitendinosus muscle fibre of Rana pipiens. The comparison of computed and observed responses indicates that the proposed calcium ion model indeed accounts very well for the muscle fatigue.

  9. Beyond excitation/inhibition imbalance in multidimensional models of neural circuit changes in brain disorders.

    PubMed

    O'Donnell, Cian; Gonçalves, J Tiago; Portera-Cailliau, Carlos; Sejnowski, Terrence J

    2017-10-11

    A leading theory holds that neurodevelopmental brain disorders arise from imbalances in excitatory and inhibitory (E/I) brain circuitry. However, it is unclear whether this one-dimensional model is rich enough to capture the multiple neural circuit alterations underlying brain disorders. Here, we combined computational simulations with analysis of in vivo two-photon Ca2+ imaging data from somatosensory cortex of Fmr1 knock-out (KO) mice, a model of Fragile-X Syndrome, to test the E/I imbalance theory. We found that: (1) The E/I imbalance model cannot account for joint alterations in the observed neural firing rates and correlations; (2) Neural circuit function is vastly more sensitive to changes in some cellular components over others; (3) The direction of circuit alterations in Fmr1 KO mice changes across development. These findings suggest that the basic E/I imbalance model should be updated to higher dimensional models that can better capture the multidimensional computational functions of neural circuits.

  10. Beyond excitation/inhibition imbalance in multidimensional models of neural circuit changes in brain disorders

    PubMed Central

    Gonçalves, J Tiago; Portera-Cailliau, Carlos

    2017-01-01

    A leading theory holds that neurodevelopmental brain disorders arise from imbalances in excitatory and inhibitory (E/I) brain circuitry. However, it is unclear whether this one-dimensional model is rich enough to capture the multiple neural circuit alterations underlying brain disorders. Here, we combined computational simulations with analysis of in vivo two-photon Ca2+ imaging data from somatosensory cortex of Fmr1 knock-out (KO) mice, a model of Fragile-X Syndrome, to test the E/I imbalance theory. We found that: (1) The E/I imbalance model cannot account for joint alterations in the observed neural firing rates and correlations; (2) Neural circuit function is vastly more sensitive to changes in some cellular components over others; (3) The direction of circuit alterations in Fmr1 KO mice changes across development. These findings suggest that the basic E/I imbalance model should be updated to higher dimensional models that can better capture the multidimensional computational functions of neural circuits. PMID:29019321

  11. Inference of sigma factor controlled networks by using numerical modeling applied to microarray time series data of the germinating prokaryote.

    PubMed

    Strakova, Eva; Zikova, Alice; Vohradsky, Jiri

    2014-01-01

    A computational model of gene expression was applied to a novel test set of microarray time series measurements to reveal regulatory interactions between transcriptional regulators represented by 45 sigma factors and the genes expressed during germination of a prokaryote Streptomyces coelicolor. Using microarrays, the first 5.5 h of the process was recorded in 13 time points, which provided a database of gene expression time series on genome-wide scale. The computational modeling of the kinetic relations between the sigma factors, individual genes and genes clustered according to the similarity of their expression kinetics identified kinetically plausible sigma factor-controlled networks. Using genome sequence annotations, functional groups of genes that were predominantly controlled by specific sigma factors were identified. Using external binding data complementing the modeling approach, specific genes involved in the control of the studied process were identified and their function suggested.

  12. Computational model of polarized actin cables and cytokinetic actin ring formation in budding yeast

    PubMed Central

    Tang, Haosu; Bidone, Tamara C.

    2015-01-01

    The budding yeast actin cables and contractile ring are important for polarized growth and division, revealing basic aspects of cytoskeletal function. To study these formin-nucleated structures, we built a 3D computational model with actin filaments represented as beads connected by springs. Polymerization by formins at the bud tip and bud neck, crosslinking, severing, and myosin pulling, are included. Parameter values were estimated from prior experiments. The model generates actin cable structures and dynamics similar to those of wild type and formin deletion mutant cells. Simulations with increased polymerization rate result in long, wavy cables. Simulated pulling by type V myosin stretches actin cables. Increasing the affinity of actin filaments for the bud neck together with reduced myosin V pulling promotes the formation of a bundle of antiparallel filaments at the bud neck, which we suggest as a model for the assembly of actin filaments to the contractile ring. PMID:26538307
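
    A minimal sketch of the bead-and-spring filament representation mentioned above: beads joined by Hookean springs and relaxed with overdamped dynamics. The spring constant, rest length, drag, and time step are hypothetical, and polymerization, crosslinking, severing, and myosin pulling are omitted.

```python
# Toy sketch of a bead-and-spring actin filament: beads connected by Hookean
# springs, relaxed with overdamped dynamics. All parameter values below are
# hypothetical and not taken from the paper's estimates.
import numpy as np

n_beads, rest_len = 20, 0.1          # rest length in microns, illustrative
k_spring, drag, dt = 50.0, 1.0, 1e-3

# Start from a slightly perturbed straight filament in 3D.
pos = np.zeros((n_beads, 3))
pos[:, 0] = np.arange(n_beads) * rest_len
pos += 0.02 * np.random.randn(n_beads, 3)

for _ in range(2000):
    bond = pos[1:] - pos[:-1]
    length = np.linalg.norm(bond, axis=1, keepdims=True)
    # Hookean force along each bond, proportional to extension from rest length.
    f_bond = k_spring * (length - rest_len) * bond / length
    force = np.zeros_like(pos)
    force[:-1] += f_bond      # pull each upstream bead toward its neighbor
    force[1:] -= f_bond       # equal and opposite force on the downstream bead
    pos += dt * force / drag  # overdamped update: velocity proportional to force

print("end-to-end distance:", np.linalg.norm(pos[-1] - pos[0]))
```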

  13. The modeling and simulation of visuospatial working memory

    PubMed Central

    Liang, Lina; Zhang, Zhikang

    2010-01-01

    Camperi and Wang (Comput Neurosci 5:383–405, 1998) presented a network model for working memory that combines intrinsic cellular bistability with the recurrent network architecture of the neocortex. Fall and Rinzel (Comput Neurosci 20:97–107, 2006) later replaced this intrinsic bistability with a biological mechanism, a Ca2+ release subsystem. In this study, we aim to further extend this work. We integrate the traditional firing-rate network with Ca2+ subsystem-induced bistability, amend the synaptic weights, and suggest that the Ca2+ concentration only increases the efficacy of synaptic input but has nothing to do with the external input for the transient cue. We found that our network model maintained persistent activity in response to a brief transient stimulus, like the previous two models, and that working memory performance was resistant to noise and distraction stimuli when the Ca2+ subsystem was tuned to be bistable. PMID:22132045

  14. A computational model unifies apparently contradictory findings concerning phantom pain

    PubMed Central

    Boström, Kim J.; de Lussanet, Marc H. E.; Weiss, Thomas; Puta, Christian; Wagner, Heiko

    2014-01-01

    Amputation often leads to painful phantom sensations, whose pathogenesis is still unclear. Supported by experimental findings, an explanatory model has been proposed that identifies maladaptive reorganization of the primary somatosensory cortex (S1) as a cause of phantom pain. However, it was recently found that BOLD activity during voluntary movements of the phantom positively correlates with phantom pain rating, giving rise to a model of persistent representation. In the present study, we develop a physiologically realistic, computational model to resolve the conflicting findings. Simulations yielded that both the amount of reorganization and the level of cortical activity during phantom movements were enhanced in a scenario with strong phantom pain as compared to a scenario with weak phantom pain. These results suggest that phantom pain, maladaptive reorganization, and persistent representation may all be caused by the same underlying mechanism, which is driven by an abnormally enhanced spontaneous activity of deafferented nociceptive channels. PMID:24931344

  15. A self-taught artificial agent for multi-physics computational model personalization.

    PubMed

    Neumann, Dominik; Mansi, Tommaso; Itu, Lucian; Georgescu, Bogdan; Kayvanpour, Elham; Sedaghat-Hamedani, Farbod; Amr, Ali; Haas, Jan; Katus, Hugo; Meder, Benjamin; Steidl, Stefan; Hornegger, Joachim; Comaniciu, Dorin

    2016-12-01

    Personalization is the process of fitting a model to patient data, a critical step towards the application of multi-physics computational models in clinical practice. Designing robust personalization algorithms is often a tedious, time-consuming, model- and data-specific process. We propose to use artificial intelligence concepts to learn this task, inspired by how human experts manually perform it. The problem is reformulated in terms of reinforcement learning. In an off-line phase, Vito, our self-taught artificial agent, learns a representative decision process model through exploration of the computational model: it learns how the model behaves under changes of parameters. The agent then automatically learns an optimal strategy for on-line personalization. The algorithm is model-independent; applying it to a new model requires only adjusting a few hyper-parameters of the agent and defining the observations to match. Full knowledge of the model itself is not required. Vito was tested in a synthetic scenario, showing that it could learn how to optimize cost functions generically. Vito was then applied to the inverse problem of cardiac electrophysiology and the personalization of a whole-body circulation model. The results obtained suggested that Vito could achieve equivalent, if not better, goodness of fit than standard methods, while being more robust (up to 11% higher success rates) and having a faster (up to seven times) convergence rate. Our artificial intelligence approach could thus make personalization algorithms generalizable and self-adaptable to any patient and any model. Copyright © 2016. Published by Elsevier B.V.
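
    One way to picture the reinforcement-learning framing is a loop in which an agent repeatedly nudges a model parameter and is rewarded when the simulated observable moves closer to the measurement. The sketch below is deliberately crude (a state-free epsilon-greedy Q-learner and a toy forward model) and should not be read as Vito's actual agent or the cardiac models used in the paper.

```python
# Hypothetical sketch of personalization as sequential decision making: an
# agent adjusts one model parameter at a time and is rewarded when the
# simulated observable moves closer to the patient measurement. The forward
# model, action set, and state-free Q-learning here are illustrative only.
import numpy as np

def forward_model(param):
    # Placeholder "computational model": maps a parameter to an observable.
    return 2.0 * param + 1.0

target = 7.0                      # hypothetical patient measurement
actions = [-0.5, -0.1, 0.1, 0.5]  # candidate parameter increments
q = np.zeros(len(actions))        # state-free Q-values: a deliberately crude agent
param, rng = 0.0, np.random.default_rng(0)

for step in range(200):
    a = rng.integers(len(actions)) if rng.random() < 0.2 else int(np.argmax(q))
    old_err = abs(forward_model(param) - target)
    param += actions[a]
    new_err = abs(forward_model(param) - target)
    reward = old_err - new_err            # reward improvement in goodness of fit
    q[a] += 0.1 * (reward - q[a])         # incremental Q-value update

print("personalized parameter:", round(param, 2), "model output:", forward_model(param))
```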

  16. Superimposition of 3-dimensional cone-beam computed tomography models of growing patients

    PubMed Central

    Cevidanes, Lucia H. C.; Heymann, Gavin; Cornelis, Marie A.; DeClerck, Hugo J.; Tulloch, J. F. Camilla

    2009-01-01

    Introduction The objective of this study was to evaluate a new method for superimposition of 3-dimensional (3D) models of growing subjects. Methods Cone-beam computed tomography scans were taken before and after Class III malocclusion orthopedic treatment with miniplates. Three observers independently constructed 18 3D virtual surface models from cone-beam computed tomography scans of 3 patients. Separate 3D models were constructed for soft-tissue, cranial base, maxillary, and mandibular surfaces. The anterior cranial fossa was used to register the 3D models of before and after treatment (about 1 year of follow-up). Results Three-dimensional overlays of superimposed models and 3D color-coded displacement maps allowed visual and quantitative assessment of growth and treatment changes. The range of interobserver errors for each anatomic region was 0.4 mm for the zygomatic process of maxilla, chin, condyles, posterior border of the rami, and lower border of the mandible, and 0.5 mm for the anterior maxilla soft-tissue upper lip. Conclusions Our results suggest that this method is a valid and reproducible assessment of treatment outcomes for growing subjects. This technique can be used to identify maxillary and mandibular positional changes and bone remodeling relative to the anterior cranial fossa. PMID:19577154

  17. Inhibitory competition between shape properties in figure-ground perception.

    PubMed

    Peterson, Mary A; Skow, Emily

    2008-04-01

    Theories of figure-ground perception entail inhibitory competition between either low-level units (edge or feature units) or high-level shape properties. Extant computational models instantiate the 1st type of theory. The authors investigated a prediction of the 2nd type of theory: that shape properties suggested on the ground side of an edge are suppressed when they lose the figure-ground competition. In Experiment 1, the authors present behavioral evidence of the predicted suppression: Object decisions were slower for line drawings that followed silhouettes suggesting portions of objects from the same rather than a different category on their ground sides. In Experiment 2, the authors reversed the silhouette's figure-ground relationships and obtained speeding rather than slowing in the same category condition, thereby demonstrating that the Experiment 1 results reflect suppression of those shape properties that lose the figure-ground competition. These experiments provide the first clear empirical evidence that figure-ground perception entails inhibitory competition between high-level shape properties and demonstrate the need for amendments to existing computational models. Furthermore, these results suggest that figure-ground perception may itself be an instance of biased competition in shape perception. (c) 2008 APA, all rights reserved.

  18. Adaptive allocation of decisionmaking responsibility between human and computer in multitask situations

    NASA Technical Reports Server (NTRS)

    Chu, Y.-Y.; Rouse, W. B.

    1979-01-01

    As human and computer come to have overlapping decisionmaking abilities, a dynamic or adaptive allocation of responsibilities may be the best mode of human-computer interaction. It is suggested that the computer serve as a backup decisionmaker, accepting responsibility when human workload becomes excessive and relinquishing responsibility when workload becomes acceptable. A queueing theory formulation of multitask decisionmaking is used and a threshold policy for turning the computer on/off is proposed. This policy minimizes event-waiting cost subject to human workload constraints. An experiment was conducted with a balanced design of several subject runs within a computer-aided multitask flight management situation with different task demand levels. It was found that computer aiding enhanced subsystem performance as well as subjective ratings. The queueing model appears to be an adequate representation of the multitask decisionmaking situation, and to be capable of predicting system performance in terms of average waiting time and server occupancy. Server occupancy was further found to correlate highly with the subjective effort ratings.
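
    The on/off threshold policy can be pictured with a crude discrete-time simulation: the computer is engaged as backup when the task queue exceeds a threshold and released once the queue drains. The arrival and service probabilities, the threshold, and the hysteresis rule below are hypothetical and much simpler than the paper's queueing-theory formulation.

```python
# Rough sketch of a threshold policy for adaptive task allocation: tasks arrive
# at random, the human serves them, and the computer is switched on whenever
# the queue length exceeds a threshold, then off again once the queue empties.
# All probabilities and the threshold are hypothetical.
import random

random.seed(1)
arrival_p, human_p, computer_p = 0.5, 0.35, 0.35   # per-time-step probabilities
threshold = 3
queue, computer_on, queue_sum = 0, False, 0

for step in range(10_000):
    queue += 1 if random.random() < arrival_p else 0
    if queue and random.random() < human_p:
        queue -= 1                                   # human completes a task
    if computer_on and queue and random.random() < computer_p:
        queue -= 1                                   # computer completes a task
    # Hysteresis: engage above the threshold, release only when the queue is empty.
    computer_on = queue > threshold if not computer_on else queue > 0
    queue_sum += queue

print("mean queue length:", queue_sum / 10_000)
```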

  19. An Improved Computing Method for 3D Mechanical Connectivity Rates Based on a Polyhedral Simulation Model of Discrete Fracture Network in Rock Masses

    NASA Astrophysics Data System (ADS)

    Li, Mingchao; Han, Shuai; Zhou, Sibao; Zhang, Ye

    2018-06-01

    Based on a 3D model of a discrete fracture network (DFN) in a rock mass, an improved projective method for computing the 3D mechanical connectivity rate was proposed. The Monte Carlo simulation method, 2D Poisson process and 3D geological modeling technique were integrated into a polyhedral DFN modeling approach, and the simulation results were verified by numerical tests and graphical inspection. Next, the traditional projective approach for calculating the rock mass connectivity rate was improved using the 3D DFN models by (1) using the polyhedral model to replace the Baecher disk model; (2) taking the real cross section of the rock mass, rather than a part of the cross section, as the test plane; and (3) dynamically searching the joint connectivity rates using different dip directions and dip angles at different elevations to calculate the maximum, minimum and average values of the joint connectivity at each elevation. In a case study, the improved method and the traditional method were used to compute the mechanical connectivity rate of the slope of a dam abutment. The results of the two methods were further used to compute the cohesive force of the rock masses. Finally, a comparison showed that the cohesive force derived from the traditional method had a higher error, whereas the cohesive force derived from the improved method was consistent with the suggested values. According to the comparison, the effectiveness and validity of the improved method were verified indirectly.

  20. SU-D-BRC-05: Effects of Motion and Variable RBE in a Lung Patient Treated with Passively Scattered Protons

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mirkovic, D; Titt, U; Mohan, R

    2016-06-15

    Purpose: To evaluate effects of motion and variable relative biological effectiveness (RBE) in a lung cancer patient treated with passively scattered proton therapy using dose volume histograms associated with patient dose computed using three different methods. Methods: A proton treatment plan of a lung cancer patient optimized using clinical treatment planning system (TPS) was used to construct a detailed Monte Carlo (MC) model of the beam delivery system and the patient specific aperture and compensator. A phase space file containing all particles transported through the beam line was collected at the distal surface of the range compensator and subsequently transported through two different patient models. The first model was based on the average CT used by the TPS and the second model included all 10 phases of the corresponding 4DCT. The physical dose and proton linear energy transfer (LET) were computed in each voxel of two models and used to compute constant and variable RBE MC dose on average CT and 4D CT. The MC computed doses were compared to the TPS dose using dose volume histograms for relevant structures. Results: The results show significant differences in doses to the target and critical structures suggesting the need for more accurate proton dose computation methods. In particular, the 4D dose shows reduced coverage of the target and higher dose to the spinal cord, while variable RBE dose shows higher lung dose. Conclusion: The methodology developed in this pilot study is currently used for the analysis of a cohort of ∼90 lung patients from a clinical trial comparing proton and photon therapy for lung cancer. The results from this study will help us in determining the clinical significance of more accurate dose computation models in proton therapy.

  1. Applying Computer Models to Realize Closed-Loop Neonatal Oxygen Therapy.

    PubMed

    Morozoff, Edmund; Smyth, John A; Saif, Mehrdad

    2017-01-01

    Within the context of automating neonatal oxygen therapy, this article describes the transformation of an idea verified by a computer model into a device actuated by a computer model. Computer modeling of an entire neonatal oxygen therapy system can facilitate the development of closed-loop control algorithms by providing a verification platform and speeding up algorithm development. In this article, we present a method of mathematically modeling the system's components: the oxygen transport within the patient, the oxygen blender, the controller, and the pulse oximeter. Furthermore, within the constraints of engineering a product, an idealized model of the neonatal oxygen transport component may be integrated effectively into the control algorithm of a device, referred to as the adaptive model. Manual and closed-loop oxygen therapy performance were defined in this article by 3 criteria in the following order of importance: percent duration of SpO2 spent in normoxemia (target SpO2 ± 2.5%), hypoxemia (less than normoxemia), and hyperoxemia (more than normoxemia); number of 60-second periods <85% SpO2 and >95% SpO2; and number of manual adjustments. Results from a clinical evaluation that compared the performance of 3 closed-loop control algorithms (state machine, proportional-integral-differential, and adaptive model) with manual oxygen therapy on 7 low-birth-weight ventilated preterm babies, are presented. Compared with manual therapy, all closed-loop control algorithms significantly increased the patients' duration in normoxemia and reduced hyperoxemia (P < 0.05). The number of manual adjustments was also significantly reduced by all of the closed-loop control algorithms (P < 0.05). Although the performance of the 3 control algorithms was equivalent, it is suggested that the adaptive model, with its ease of use, may have the best utility.
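
    Of the three control strategies compared, the proportional-integral-differential (PID) loop is the easiest to sketch: the controller trims FiO2 toward a SpO2 target. The patient response below is a toy first-order model, and all gains and constants are hypothetical, not the values of the clinical algorithms evaluated.

```python
# Minimal sketch of a PID loop for oxygen therapy: the controller adjusts FiO2
# to hold SpO2 at a target. The "patient" is a toy first-order model and all
# gains and constants below are hypothetical.
def simulate_pid(target_spo2=92.5, steps=300, dt=1.0):
    kp, ki, kd = 0.4, 0.02, 0.1        # hypothetical controller gains
    fio2, spo2 = 0.21, 85.0            # start on room air, hypoxemic
    integral, prev_error = 0.0, 0.0
    for _ in range(steps):
        error = target_spo2 - spo2
        integral += error * dt
        derivative = (error - prev_error) / dt
        prev_error = error
        # PID output trims the oxygen blender setting, clamped to 21-100% FiO2.
        adjustment = 0.01 * (kp * error + ki * integral + kd * derivative)
        fio2 = min(1.0, max(0.21, fio2 + adjustment))
        # Toy patient: SpO2 relaxes toward a value set by the delivered FiO2.
        spo2 += dt * 0.1 * (80.0 + 20.0 * fio2 - spo2)
    return fio2, spo2

print(simulate_pid())
```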

  2. Computational study of Drucker-Prager plasticity of rock using microtomography

    NASA Astrophysics Data System (ADS)

    Liu, J.; Sarout, J.; Zhang, M.; Dautriat, J.; Veveakis, M.; Regenauer-Lieb, K.

    2016-12-01

    Understanding the physics of rocks is essential for the mining and petroleum industries. Microtomography provides a new way to quantify the relationship between microstructure and mechanical and transport properties. Transport and elastic properties have been studied widely, while plastic properties are still poorly understood. In this study, we analyse a synthetic sandstone sample for its up-scaled plastic properties from the micro-scale. The computations are based on the representative volume element (RVE). The mechanical RVE was determined by upper- and lower-bound finite element computations of elasticity. By comparing with experimental curves, the parameters of the matrix (solid part), which consists of calcite-cemented quartz grains, were investigated and quite accurate values obtained. The analyses deduced the bulk yield stress, cohesion and angle of friction of the rock with pores. Computations of a series of models with volume sizes from 240-cube to 400-cube showed almost overlapping stress-strain curves, suggesting that the mechanical RVE determined by elastic computations is also valid for plastic yielding. Furthermore, a series of derivative models was created with similar structure but different porosity values. The analyses of these models showed that yield stress, cohesion and the angle of friction decrease linearly with increasing porosity in the range from 8% to 28%. The angle of friction decreases the fastest, while cohesion is the most stable as porosity increases.

  3. Mechanisms of object recognition: what we have learned from pigeons

    PubMed Central

    Soto, Fabian A.; Wasserman, Edward A.

    2014-01-01

    Behavioral studies of object recognition in pigeons have been conducted for 50 years, yielding a large body of data. Recent work has been directed toward synthesizing this evidence and understanding the visual, associative, and cognitive mechanisms that are involved. The outcome is that pigeons are likely to be the non-primate species for which the computational mechanisms of object recognition are best understood. Here, we review this research and suggest that a core set of mechanisms for object recognition might be present in all vertebrates, including pigeons and people, making pigeons an excellent candidate model to study the neural mechanisms of object recognition. Behavioral and computational evidence suggests that error-driven learning participates in object category learning by pigeons and people, and recent neuroscientific research suggests that the basal ganglia, which are homologous in these species, may implement error-driven learning of stimulus-response associations. Furthermore, learning of abstract category representations can be observed in pigeons and other vertebrates. Finally, there is evidence that feedforward visual processing, a central mechanism in models of object recognition in the primate ventral stream, plays a role in object recognition by pigeons. We also highlight differences between pigeons and people in object recognition abilities, and propose candidate adaptive specializations which may explain them, such as holistic face processing and rule-based category learning in primates. From a modern comparative perspective, such specializations are to be expected regardless of the model species under study. The fact that we have a good idea of which aspects of object recognition differ in people and pigeons should be seen as an advantage over other animal models. From this perspective, we suggest that there is much to learn about human object recognition from studying the “simple” brains of pigeons. PMID:25352784

  4. Testing the prospective evaluation of a new healthcare system

    PubMed Central

    Planitz, Birgit; Sanderson, Penelope; Freeman, Clinton; Xiao, Tania; Botea, Adi; Orihuela, Cristina Beltran

    2012-01-01

    Research into health ICT adoption suggests that the failure to understand the clinical workplace has been a major contributing factor to the failure of many computer-based clinical systems. We suggest that clinicians and administrators need methods for envisioning future use when adopting new ICT. This paper presents and evaluates a six-stage “prospective evaluation” model that clinicians can use when assessing the impact of a new electronic patient information system on a Specialist Outpatients Department (SOPD). The prospective evaluation model encompasses normative, descriptive, formative and projective approaches. We show that this combination helped health informaticians to make reasonably accurate predictions for technology adoption at the SOPD. We suggest some refinements, however, to improve the scope and accuracy of predictions. PMID:23304347

  5. Bridge-scour analysis using the water surface profile (WSPRO) model

    USGS Publications Warehouse

    Mueller, David S.; ,

    1993-01-01

    A program was developed to extract hydraulic information required for bridge-scour computations from the Water-Surface Profile computation model (WSPRO). The program is written in compiled BASIC and is menu driven. Using only ground points, the program can compute average ground elevation and cross-sectional area below a specified datum, or create a Drawing Exchange Format (DXF) file of the cross section. Using both ground points and hydraulic information from the equal-conveyance tubes computed by WSPRO, the program can compute hydraulic parameters at a user-specified station or in a user-specified subsection of the cross section. The program can identify the maximum velocity in a cross section and the velocity and depth at a user-specified station. It can also report the average velocity, average depth, average ground elevation, width perpendicular to the flow, cross-sectional area of flow, and discharge in a subsection of the cross section. The program does not offer help or suggestions as to what data should be extracted; therefore, the user must understand the scour equations and associated variables to be able to extract the proper information from the WSPRO output.
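
    For illustration, one of the quantities the program extracts, cross-sectional area below a specified datum, can be computed from surveyed ground points by trapezoidal integration of the depth; the section coordinates and datum below are made up and unrelated to any WSPRO data set.

```python
# Small illustrative helper for one quantity of this kind: cross-sectional
# area below a specified datum, computed from (station, elevation) ground
# points by trapezoidal integration of the clipped depth. Crossings of the
# datum are handled only approximately by the clipping. Example data made up.
import numpy as np

def area_below_datum(stations, elevations, datum):
    depth = np.clip(datum - np.asarray(elevations, dtype=float), 0.0, None)
    return np.trapz(depth, np.asarray(stations, dtype=float))

stations   = [0, 10, 25, 40, 60, 80, 95, 110]         # ft across the section
elevations = [105, 101, 98, 96, 95.5, 97, 100, 104]   # ft, channel bed profile
print("area below 100 ft datum:", area_below_datum(stations, elevations, 100.0), "sq ft")
```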

  6. Dimensional psychiatry: mental disorders as dysfunctions of basic learning mechanisms.

    PubMed

    Heinz, Andreas; Schlagenhauf, Florian; Beck, Anne; Wackerhagen, Carolin

    2016-08-01

    It has been questioned that the more than 300 mental disorders currently listed in international disease classification systems all have a distinct neurobiological correlate. Here, we support the idea that basic dimensions of mental dysfunctions, such as alterations in reinforcement learning, can be identified, which interact with individual vulnerability and psychosocial stress factors and, thus, contribute to syndromes of distress across traditional nosological boundaries. We further suggest that computational modeling of learning behavior can help to identify specific alterations in reinforcement-based decision-making and their associated neurobiological correlates. For example, attribution of salience to drug-related cues associated with dopamine dysfunction in addiction can increase habitual decision-making via promotion of Pavlovian-to-instrumental transfer as indicated by computational modeling of the effect of Pavlovian-conditioned stimuli (here affectively positive or alcohol-related cues) on instrumental approach and avoidance behavior. In schizophrenia, reward prediction errors can be modeled computationally and associated with functional brain activation, thus revealing reduced encoding of such learning signals in the ventral striatum and compensatory activation in the frontal cortex. With respect to negative mood states, it has been shown that both reduced functional activation of the ventral striatum elicited by reward-predicting stimuli and stress-associated activation of the hypothalamic-pituitary-adrenal axis in interaction with reduced serotonin transporter availability and increased amygdala activation by aversive cues contribute to clinical depression; altogether these observations support the notion that basic learning mechanisms, such as Pavlovian and instrumental conditioning and Pavlovian-to-instrumental transfer, represent a basic dimension of mental disorders that can be mechanistically characterized using computational modeling and associated with specific clinical syndromes across established nosological boundaries. Instead of pursuing a narrow focus on single disorders defined by clinical tradition, we suggest that neurobiological research should focus on such basic dimensions, which can be studied in and compared among several mental disorders.

  7. Recurrent V1-V2 interaction in early visual boundary processing.

    PubMed

    Neumann, H; Sepp, W

    1999-11-01

    A majority of cortical areas are connected via feedforward and feedback fiber projections. In feedforward pathways we mainly observe stages of feature detection and integration. The computational role of the descending pathways at different stages of processing remains mainly unknown. Based on empirical findings we suggest that the top-down feedback pathways subserve a context-dependent gain control mechanism. We propose a new computational model for recurrent contour processing in which normalized activities of orientation selective contrast cells are fed forward to the next processing stage. There, the arrangement of input activation is matched against local patterns of contour shape. The resulting activities are subsequently fed back to the previous stage to locally enhance those initial measurements that are consistent with the top-down generated responses. In all, we suggest a computational theory for recurrent processing in the visual cortex in which the significance of local measurements is evaluated on the basis of a broader visual context that is represented in terms of contour code patterns. The model serves as a framework to link physiological with perceptual data gathered in psychophysical experiments. It handles a variety of perceptual phenomena, such as the local grouping of fragmented shape outline, texture surround and density effects, and the interpolation of illusory contours.

  8. From neuro-functional to neuro-computational models. Comment on "The quartet theory of human emotions: An integrative and neurofunctional model" by S. Koelsch et al.

    NASA Astrophysics Data System (ADS)

    Briesemeister, Benny B.

    2015-06-01

    Historically, there has been a strong opposition between psychological theories of human emotion that suggest a limited number of distinct functional categories, such as anger, fear, happiness and so forth (e.g. [1]), and theories that suggest processing along affective dimensions, such as valence and arousal (e.g. [2]). Only few current models acknowledge that both of these perspectives seem to be legitimate [3], and at their core, even fewer models connect these insights with knowledge about neurophysiology [4]. In this regard, the Quartet Theory of Human Emotions (QTHE) [5] makes a very important and useful contribution to the field of emotion research - but in my opinion, there is still at least one more step to go.

  9. Experimental Design for Stochastic Models of Nonlinear Signaling Pathways Using an Interval-Wise Linear Noise Approximation and State Estimation.

    PubMed

    Zimmer, Christoph

    2016-01-01

    Computational modeling is a key technique for analyzing models in systems biology. There are well established methods for the estimation of kinetic parameters in models of ordinary differential equations (ODE). Experimental design techniques aim at devising experiments that maximize the information encoded in the data. For ODE models there are well established approaches for experimental design and even software tools. However, data from single cell experiments on signaling pathways in systems biology often show intrinsic stochastic effects, prompting the development of specialized methods. While simulation methods have been developed for decades and parameter estimation has been targeted in recent years, only very few articles focus on experimental design for stochastic models. The Fisher information matrix is the central measure for experimental design, as it evaluates the information an experiment provides for parameter estimation. This article suggests an approach to calculating a Fisher information matrix for models containing intrinsic stochasticity and high nonlinearity. The approach makes use of a recently suggested multiple shooting for stochastic systems (MSS) objective function. The Fisher information matrix is calculated by evaluating pseudo data with the MSS technique. The performance of the approach is evaluated with simulation studies on an Immigration-Death, a Lotka-Volterra, and a Calcium oscillation model. The Calcium oscillation model is a particularly appropriate case study as it contains the challenges inherent to signaling pathways: high nonlinearity, intrinsic stochasticity, a qualitatively different behavior from an ODE solution, and partial observability. The computational speed of the MSS approach for the Fisher information matrix allows for an application in realistic size models.
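
    A much-simplified sketch of the underlying idea: assemble a Fisher information matrix from finite-difference sensitivities of simulated pseudo-data under an assumed Gaussian observation noise. This stands in for, and is far cruder than, the MSS-based procedure of the article; the immigration-death mean model and noise level are illustrative.

```python
# Simplified sketch of a Fisher information matrix from simulated pseudo-data:
# sensitivities of the expected trajectory to each parameter are approximated
# by central finite differences and combined under assumed independent
# Gaussian observation noise. The model and noise level are illustrative.
import numpy as np

t = np.linspace(0, 10, 25)
sigma = 1.0  # assumed observation noise standard deviation

def mean_trajectory(theta):
    # Immigration-death mean: dx/dt = k_in - k_out * x, with x(0) = 0.
    k_in, k_out = theta
    return (k_in / k_out) * (1.0 - np.exp(-k_out * t))

def fisher_information(theta, h=1e-5):
    theta = np.asarray(theta, dtype=float)
    sens = []
    for i in range(theta.size):
        dp = np.zeros_like(theta)
        dp[i] = h
        # Central finite-difference sensitivity of the trajectory to parameter i.
        sens.append((mean_trajectory(theta + dp) - mean_trajectory(theta - dp)) / (2 * h))
    s = np.array(sens)
    return s @ s.T / sigma**2  # FIM for independent Gaussian observations

fim = fisher_information([5.0, 0.8])
print("parameter variances (Cramer-Rao bound):", np.diag(np.linalg.inv(fim)))
```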

  10. Encoding of Natural Sounds at Multiple Spectral and Temporal Resolutions in the Human Auditory Cortex

    PubMed Central

    Santoro, Roberta; Moerel, Michelle; De Martino, Federico; Goebel, Rainer; Ugurbil, Kamil; Yacoub, Essa; Formisano, Elia

    2014-01-01

    Functional neuroimaging research provides detailed observations of the response patterns that natural sounds (e.g. human voices and speech, animal cries, environmental sounds) evoke in the human brain. The computational and representational mechanisms underlying these observations, however, remain largely unknown. Here we combine high spatial resolution (3 and 7 Tesla) functional magnetic resonance imaging (fMRI) with computational modeling to reveal how natural sounds are represented in the human brain. We compare competing models of sound representations and select the model that most accurately predicts fMRI response patterns to natural sounds. Our results show that the cortical encoding of natural sounds entails the formation of multiple representations of sound spectrograms with different degrees of spectral and temporal resolution. The cortex derives these multi-resolution representations through frequency-specific neural processing channels and through the combined analysis of the spectral and temporal modulations in the spectrogram. Furthermore, our findings suggest that a spectral-temporal resolution trade-off may govern the modulation tuning of neuronal populations throughout the auditory cortex. Specifically, our fMRI results suggest that neuronal populations in posterior/dorsal auditory regions preferably encode coarse spectral information with high temporal precision. Vice-versa, neuronal populations in anterior/ventral auditory regions preferably encode fine-grained spectral information with low temporal precision. We propose that such a multi-resolution analysis may be crucially relevant for flexible and behaviorally-relevant sound processing and may constitute one of the computational underpinnings of functional specialization in auditory cortex. PMID:24391486

  11. Agent-based dynamic knowledge representation of Pseudomonas aeruginosa virulence activation in the stressed gut: Towards characterizing host-pathogen interactions in gut-derived sepsis.

    PubMed

    Seal, John B; Alverdy, John C; Zaborina, Olga; An, Gary

    2011-09-19

    There is a growing realization that alterations in host-pathogen interactions (HPI) can generate disease phenotypes without pathogen invasion. The gut represents a prime region where such HPI can arise and manifest. Under normal conditions intestinal microbial communities maintain a stable, mutually beneficial ecosystem. However, host stress can lead to changes in environmental conditions that shift the nature of the host-microbe dialogue, resulting in escalation of virulence expression, immune activation and ultimately systemic disease. Effective modulation of these dynamics requires the ability to characterize the complexity of the HPI, and dynamic computational modeling can aid in this task. Agent-based modeling is a computational method that is suited to representing spatially diverse, dynamical systems. We propose that dynamic knowledge representation of gut HPI with agent-based modeling will aid in the investigation of the pathogenesis of gut-derived sepsis. An agent-based model (ABM) of virulence regulation in Pseudomonas aeruginosa was developed by translating bacterial and host cell sense-and-response mechanisms into behavioral rules for computational agents and integrated into a virtual environment representing the host-microbe interface in the gut. The resulting gut milieu ABM (GMABM) was used to: 1) investigate a potential clinically relevant laboratory experimental condition not yet developed--i.e. non-lethal transient segmental intestinal ischemia, 2) examine the sufficiency of existing hypotheses to explain experimental data--i.e. lethality in a model of major surgical insult and stress, and 3) produce behavior to potentially guide future experimental design--i.e. suggested sample points for a potential laboratory model of non-lethal transient intestinal ischemia. Furthermore, hypotheses were generated to explain certain discrepancies between the behaviors of the GMABM and biological experiments, and new investigatory avenues proposed to test those hypotheses. Agent-based modeling can account for the spatio-temporal dynamics of an HPI, and, even when carried out with a relatively high degree of abstraction, can be useful in the investigation of system-level consequences of putative mechanisms operating at the individual agent level. We suggest that an integrated and iterative heuristic relationship between computational modeling and more traditional laboratory and clinical investigations, with a focus on identifying useful and sufficient degrees of abstraction, will enhance the efficiency and translational productivity of biomedical research.

  12. Agent-based dynamic knowledge representation of Pseudomonas aeruginosa virulence activation in the stressed gut: Towards characterizing host-pathogen interactions in gut-derived sepsis

    PubMed Central

    2011-01-01

    Background There is a growing realization that alterations in host-pathogen interactions (HPI) can generate disease phenotypes without pathogen invasion. The gut represents a prime region where such HPI can arise and manifest. Under normal conditions intestinal microbial communities maintain a stable, mutually beneficial ecosystem. However, host stress can lead to changes in environmental conditions that shift the nature of the host-microbe dialogue, resulting in escalation of virulence expression, immune activation and ultimately systemic disease. Effective modulation of these dynamics requires the ability to characterize the complexity of the HPI, and dynamic computational modeling can aid in this task. Agent-based modeling is a computational method that is suited to representing spatially diverse, dynamical systems. We propose that dynamic knowledge representation of gut HPI with agent-based modeling will aid in the investigation of the pathogenesis of gut-derived sepsis. Methodology/Principal Findings An agent-based model (ABM) of virulence regulation in Pseudomonas aeruginosa was developed by translating bacterial and host cell sense-and-response mechanisms into behavioral rules for computational agents and integrated into a virtual environment representing the host-microbe interface in the gut. The resulting gut milieu ABM (GMABM) was used to: 1) investigate a potential clinically relevant laboratory experimental condition not yet developed - i.e. non-lethal transient segmental intestinal ischemia, 2) examine the sufficiency of existing hypotheses to explain experimental data - i.e. lethality in a model of major surgical insult and stress, and 3) produce behavior to potentially guide future experimental design - i.e. suggested sample points for a potential laboratory model of non-lethal transient intestinal ischemia. Furthermore, hypotheses were generated to explain certain discrepancies between the behaviors of the GMABM and biological experiments, and new investigatory avenues proposed to test those hypotheses. Conclusions/Significance Agent-based modeling can account for the spatio-temporal dynamics of an HPI, and, even when carried out with a relatively high degree of abstraction, can be useful in the investigation of system-level consequences of putative mechanisms operating at the individual agent level. We suggest that an integrated and iterative heuristic relationship between computational modeling and more traditional laboratory and clinical investigations, with a focus on identifying useful and sufficient degrees of abstraction, will enhance the efficiency and translational productivity of biomedical research. PMID:21929759

  13. An Object Oriented Extensible Architecture for Affordable Aerospace Propulsion Systems

    NASA Technical Reports Server (NTRS)

    Follen, Gregory J.; Lytle, John K. (Technical Monitor)

    2002-01-01

    Driven by a need to explore and develop propulsion systems that exceeded current computing capabilities, NASA Glenn embarked on a novel strategy leading to the development of an architecture that enables propulsion simulations never thought possible before. Full engine 3 Dimensional Computational Fluid Dynamic propulsion system simulations were deemed impossible due to the impracticality of the hardware and software computing systems required. However, with a software paradigm shift and an embracing of parallel and distributed processing, an architecture was designed to meet the needs of future propulsion system modeling. The author suggests that the architecture designed at the NASA Glenn Research Center for propulsion system modeling has potential for impacting the direction of development of affordable weapons systems currently under consideration by the Applied Vehicle Technology Panel (AVT). This paper discusses the salient features of the NPSS Architecture including its interface layer, object layer, implementation for accessing legacy codes, numerical zooming infrastructure and its computing layer. The computing layer focuses on the use and deployment of these propulsion simulations on parallel and distributed computing platforms which has been the focus of NASA Ames. Additional features of the object oriented architecture that support MultiDisciplinary (MD) Coupling, computer aided design (CAD) access and MD coupling objects will be discussed. Included will be a discussion of the successes, challenges and benefits of implementing this architecture.

  14. Auditory-Motor Interactions in Pediatric Motor Speech Disorders: Neurocomputational Modeling of Disordered Development

    PubMed Central

    Terband, H.; Maassen, B.; Guenther, F.H.; Brumberg, J.

    2014-01-01

    Background/Purpose: Differentiating the symptom complex due to phonological-level disorders, speech delay and pediatric motor speech disorders is a controversial issue in the field of pediatric speech and language pathology. The present study investigated the developmental interaction between neurological deficits in auditory and motor processes using computational modeling with the DIVA model.

    Method: In a series of computer simulations, we investigated the effect of a motor processing deficit alone (MPD), and the effect of a motor processing deficit in combination with an auditory processing deficit (MPD+APD) on the trajectory and endpoint of speech motor development in the DIVA model.

    Results: Simulation results showed that a motor programming deficit predominantly leads to deterioration on the phonological level (phonemic mappings) when auditory self-monitoring is intact, and on the systemic level (systemic mapping) if auditory self-monitoring is impaired.

    Conclusions: These findings suggest a close relation between quality of auditory self-monitoring and the involvement of phonological vs. motor processes in children with pediatric motor speech disorders. It is suggested that MPD+APD might be involved in typically apraxic speech output disorders and MPD in pediatric motor speech disorders that also have a phonological component. Possibilities to verify these hypotheses using empirical data collected from human subjects are discussed. PMID:24491630

  15. Rheological Models in the Time-Domain Modeling of Seismic Motion

    NASA Astrophysics Data System (ADS)

    Moczo, P.; Kristek, J.

    2004-12-01

    The time-domain stress-strain relation in a viscoelastic medium takes the form of a convolution integral, which is numerically intractable. This is why oversimplified models of attenuation were long used in time-domain modeling of seismic wave propagation and earthquake motion. In their pioneering work, Day and Minster (1984) showed how to convert the integral into a numerically tractable differential form in the case of a general viscoelastic modulus. In response to the work by Day and Minster, Emmerich and Korn (1987) suggested using the rheology of their generalized Maxwell body (GMB) while Carcione et al. (1988) suggested using the generalized Zener body (GZB). The viscoelastic moduli of both rheological models take the form of a rational function of frequency, and thus the differential form of the stress-strain relation is rather easy to obtain. After the papers by Emmerich and Korn and Carcione et al., numerical modelers opted for either the GMB or GZB rheology and developed 'non-communicating' algorithms. In the many papers that followed, the authors using the GMB never commented on the GZB rheology and the corresponding algorithms, and the authors using the GZB never related their methods to the GMB rheology and algorithms. We analyze and compare both rheologies and the corresponding incorporations of realistic attenuation into time-domain computations. We then focus on the most recent staggered-grid finite-difference modeling, mainly on accounting for material heterogeneity in viscoelastic media, and on the computational efficiency of the finite-difference algorithms.
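
    For orientation, the following is a hedged sketch of the standard forms referred to above; the exact coefficient conventions differ between the GMB and GZB literatures, so this is one common convention rather than either paper's formulation. The intractable convolution and a typical memory-variable (differential) replacement read

        \sigma(t) = \int_{-\infty}^{t} M(t-\tau)\,\dot{\varepsilon}(\tau)\,d\tau
        \;\;\longrightarrow\;\;
        \sigma(t) = M_U\Big[\varepsilon(t) - \sum_{l=1}^{n} Y_l\,\zeta_l(t)\Big],
        \qquad
        \dot{\zeta}_l(t) + \omega_l\,\zeta_l(t) = \omega_l\,\varepsilon(t),

    where the relaxation frequencies \omega_l and anelastic coefficients Y_l are fitted so that the rational-function modulus M(\omega) reproduces the desired quality factor Q(\omega).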

  16. Computational Models of Neuron-Astrocyte Interactions Lead to Improved Efficacy in the Performance of Neural Networks

    PubMed Central

    Alvarellos-González, Alberto; Pazos, Alejandro; Porto-Pazos, Ana B.

    2012-01-01

    The importance of astrocytes, one part of the glial system, for information processing in the brain has recently been demonstrated. Regarding information processing in multilayer connectionist systems, it has been shown that systems which include artificial neurons and astrocytes (Artificial Neuron-Glia Networks) have well-known advantages over identical systems including only artificial neurons. Since the actual impact of astrocytes in neural network function is unknown, we have investigated, using computational models, different astrocyte-neuron interactions for information processing; different neuron-glia algorithms have been implemented for training and validation of multilayer Artificial Neuron-Glia Networks oriented toward classification problem resolution. The results of the tests performed suggest that all the algorithms modelling astrocyte-induced synaptic potentiation improved artificial neural network performance, but their efficacy depended on the complexity of the problem. PMID:22649480

  17. Scenarios for Evolving Seismic Crises: Possible Communication Strategies

    NASA Astrophysics Data System (ADS)

    Steacy, S.

    2015-12-01

    Recent advances in operational earthquake forecasting mean that we are very close to being able to confidently compute changes in earthquake probability as seismic crises develop. For instance, we now have statistical models such as ETAS and STEP which demonstrate considerable skill in forecasting earthquake rates, and recent advances in Coulomb-based models are also showing much promise. Communicating changes in earthquake probability is likely to be very difficult, however, as the absolute probability of a damaging event is likely to remain quite small despite a significant increase in the relative value. Here, we use a hybrid Coulomb/statistical model to compute probability changes for a series of earthquake scenarios in New Zealand. We discuss the strengths and limitations of the forecasts and suggest a number of possible mechanisms that might be used to communicate results in an actual developing seismic crisis.
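
    To make the "small absolute probability, large relative increase" point concrete, here is a minimal sketch (not the hybrid Coulomb/statistical model itself) that converts a forecast rate of damaging events into a probability over a forecast window under a Poisson assumption; the rates are invented for illustration.

        import math

        def prob_of_event(rate_per_day, window_days):
            """Poisson probability of at least one event in the window."""
            return 1.0 - math.exp(-rate_per_day * window_days)

        background_rate = 1e-5      # hypothetical damaging-event rate per day
        crisis_rate = 1e-3          # hypothetical rate after a triggering event

        p0 = prob_of_event(background_rate, 7)
        p1 = prob_of_event(crisis_rate, 7)
        print(f"7-day probability: {p0:.5f} -> {p1:.5f} ({p1 / p0:.0f}x relative gain)")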

  18. A common stochastic accumulator with effector-dependent noise can explain eye-hand coordination

    PubMed Central

    Gopal, Atul; Viswanathan, Pooja

    2015-01-01

    The computational architecture that enables the flexible coupling between otherwise independent eye and hand effector systems is not understood. By using a drift diffusion framework, in which variability of the reaction time (RT) distribution scales with mean RT, we tested the ability of a common stochastic accumulator to explain eye-hand coordination. Using a combination of behavior, computational modeling and electromyography, we show how a single stochastic accumulator to threshold, followed by noisy effector-dependent delays, explains eye-hand RT distributions and their correlation, while an alternate independent, interactive eye and hand accumulator model does not. Interestingly, the common accumulator model did not explain the RT distributions of the same subjects when they made eye and hand movements in isolation. Taken together, these data suggest that a dedicated circuit underlies coordinated eye-hand planning. PMID:25568161
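
    A minimal sketch of the shared-accumulator idea (with invented parameters, not the authors' fitted model): a single noisy drift-to-threshold process generates one decision time per trial, and independent effector-dependent delays are added to produce correlated eye and hand reaction times.

        import numpy as np

        rng = np.random.default_rng(0)

        def common_accumulator_rts(n_trials, drift=1.0, threshold=30.0, noise=1.0, dt=1.0):
            """One stochastic accumulator per trial; eye/hand RTs share its finishing time."""
            rts_eye, rts_hand = [], []
            for _ in range(n_trials):
                x, t = 0.0, 0.0
                while x < threshold:
                    x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
                    t += dt
                # effector-dependent noisy output delays (hypothetical values)
                rts_eye.append(t + rng.normal(80, 10))
                rts_hand.append(t + rng.normal(150, 25))
            return np.array(rts_eye), np.array(rts_hand)

        eye, hand = common_accumulator_rts(2000)
        print("eye-hand RT correlation:", np.corrcoef(eye, hand)[0, 1].round(2))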

  19. Computational models of neuron-astrocyte interactions lead to improved efficacy in the performance of neural networks.

    PubMed

    Alvarellos-González, Alberto; Pazos, Alejandro; Porto-Pazos, Ana B

    2012-01-01

    The importance of astrocytes, one part of the glial system, for information processing in the brain has recently been demonstrated. Regarding information processing in multilayer connectionist systems, it has been shown that systems which include artificial neurons and astrocytes (Artificial Neuron-Glia Networks) have well-known advantages over identical systems including only artificial neurons. Since the actual impact of astrocytes in neural network function is unknown, we have investigated, using computational models, different astrocyte-neuron interactions for information processing; different neuron-glia algorithms have been implemented for training and validation of multilayer Artificial Neuron-Glia Networks oriented toward classification problem resolution. The results of the tests performed suggest that all the algorithms modelling astrocyte-induced synaptic potentiation improved artificial neural network performance, but their efficacy depended on the complexity of the problem.

  20. A computational fluid dynamics simulation of a supersonic chemical oxygen-iodine laser

    NASA Astrophysics Data System (ADS)

    Waichman, K.; Rybalkin, V.; Katz, A.; Dahan, Z.; Barmashenko, B. D.; Rosenwaks, S.

    2007-05-01

    The dissociation of I2 molecules at the optical axis of a supersonic chemical oxygen-iodine laser (COIL) was studied via detailed measurements and three-dimensional computational fluid dynamics calculations. Comparing the measurements and the calculations enabled critical examination of previously proposed dissociation mechanisms and suggestion of a mechanism consistent with the experimental and theoretical results. The gain, I2 dissociation fraction and temperature at the optical axis, calculated using Heidner's model (R.F. Heidner III et al., J. Phys. Chem. 87, 2348 (1983)), are much lower than those measured experimentally. Agreement with the experimental results was reached by using Heidner's model supplemented by Azyazov-Heaven's model (V.N. Azyazov and M.C. Heaven, AIAA J. 44, 1593 (2006)) where I2(A') and vibrationally excited O2(a1Δ) are significant dissociation intermediates.

  1. Understanding the sensitivity of nucleation free energies: The role of supersaturation and temperature

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keasler, Samuel J., E-mail: samuel.keasler@vcsu.edu; Department of Science, Valley City State University, 101 College Street SW, Valley City, North Dakota 58072; Siepmann, J. Ilja

    2015-10-28

    Simulations are used to investigate the vapor-to-liquid nucleation of water for several different force fields at various sets of physical conditions. The nucleation free energy barrier is found to be extremely sensitive to the force field at the same absolute conditions. However, when the results are compared at the same supersaturation and reduced temperature or the same metastability parameter and reduced temperature, then the differences in the nucleation free energies of the different models are dramatically reduced. This finding suggests that comparisons of experimental data and computational predictions are most meaningful at the same relative conditions and emphasizes the importance of knowing the phase diagram of a given computational model, but such information is usually not available for models where the interaction energy is determined directly from electronic structure calculations.
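
    As context for why supersaturation is a natural comparison variable, classical nucleation theory (a standard textbook expression, not taken from the simulations above) writes the barrier as

        \Delta G^{*} = \frac{16\pi\,\gamma^{3}\,v^{2}}{3\,(k_{B} T \ln S)^{2}},
        \qquad
        S = \frac{p}{p_{\mathrm{sat}}(T)},

    where \gamma is the surface tension and v the molecular volume of the liquid; force fields with different \gamma and p_sat give very different barriers at the same absolute (p, T), but much closer barriers when compared at the same S and reduced temperature.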

  2. SoftWAXS: a computational tool for modeling wide-angle X-ray solution scattering from biomolecules.

    PubMed

    Bardhan, Jaydeep; Park, Sanghyun; Makowski, Lee

    2009-10-01

    This paper describes a computational approach to estimating wide-angle X-ray solution scattering (WAXS) from proteins, which has been implemented in a computer program called SoftWAXS. The accuracy and efficiency of SoftWAXS are analyzed for analytically solvable model problems as well as for proteins. Key features of the approach include a numerical procedure for performing the required spherical averaging and explicit representation of the solute-solvent boundary and the surface of the hydration layer. These features allow the Fourier transform of the excluded volume and hydration layer to be computed directly and with high accuracy. This approach will allow future investigation of different treatments of the electron density in the hydration shell. Numerical results illustrate the differences between this approach to modeling the excluded volume and a widely used model that treats the excluded-volume function as a sum of Gaussians representing the individual atomic excluded volumes. Comparison of the results obtained here with those from explicit-solvent molecular dynamics clarifies shortcomings inherent to the representation of solvent as a time-averaged electron-density profile. In addition, an assessment is made of how the calculated scattering patterns depend on input parameters such as the solute-atom radii, the width of the hydration shell and the hydration-layer contrast. These results suggest that obtaining predictive calculations of high-resolution WAXS patterns may require sophisticated treatments of solvent.
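
    For readers unfamiliar with solution-scattering calculations, the sketch below evaluates the orientationally averaged (Debye) scattering from a set of point scatterers. This is a generic illustration of the spherical-averaging step, not the SoftWAXS algorithm, and the coordinates and form factors are invented.

        import numpy as np

        def debye_intensity(q, coords, f):
            """Orientationally averaged intensity I(q) = sum_ij f_i f_j sin(q r_ij)/(q r_ij)."""
            d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
            out = []
            for qi in q:
                x = qi * d
                s = np.where(x > 1e-12, np.sin(x) / np.maximum(x, 1e-12), 1.0)
                out.append(np.sum(np.outer(f, f) * s))
            return np.array(out)

        coords = np.random.default_rng(1).normal(scale=5.0, size=(50, 3))  # toy "protein"
        f = np.ones(50)                                                    # unit form factors
        q = np.linspace(0.01, 1.0, 20)                                     # in 1/Angstrom
        print(debye_intensity(q, coords, f)[:3])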

  3. Human Environmental Disease Network: A computational model to assess toxicology of contaminants.

    PubMed

    Taboureau, Olivier; Audouze, Karine

    2017-01-01

    During the past decades, many epidemiological, toxicological and biological studies have been performed to assess the role of environmental chemicals as potential toxicants associated with diverse human disorders. However, the relationships between diseases based on chemical exposure rarely have been studied by computational biology. We developed a human environmental disease network (EDN) to explore and suggest novel disease-disease and chemical-disease relationships. The presented scored EDN model is built upon the integration of systems biology and chemical toxicology using information on chemical contaminants and their disease relationships reported in the TDDB database. The resulting human EDN takes into consideration the level of evidence of the toxicant-disease relationships, allowing inclusion of some degrees of significance in the disease-disease associations. Such a network can be used to identify uncharacterized connections between diseases. Examples are discussed for type 2 diabetes (T2D). Additionally, this computational model allows confirmation of already known links between chemicals and diseases (e.g., between bisphenol A and behavioral disorders) and also reveals unexpected associations between chemicals and diseases (e.g., between chlordane and olfactory alteration), thus predicting which chemicals may be risk factors to human health. The proposed human EDN model allows exploration of common biological mechanisms of diseases associated with chemical exposure, helping us to gain insight into disease etiology and comorbidity. This computational approach is an alternative to animal testing supporting the 3R concept.
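
    One generic way to build such a disease-disease network from chemical-disease links is a weighted bipartite projection, sketched below with the networkx library and a made-up toy edge list (not the actual EDN data or its scoring scheme).

        import networkx as nx
        from itertools import combinations

        # toy chemical-disease associations with evidence weights (hypothetical values)
        links = [("bisphenol A", "behavioral disorder", 0.9),
                 ("bisphenol A", "type 2 diabetes", 0.6),
                 ("chlordane", "olfactory alteration", 0.4),
                 ("chlordane", "type 2 diabetes", 0.5)]

        B = nx.Graph()
        for chem, dis, w in links:
            B.add_node(chem, kind="chemical")
            B.add_node(dis, kind="disease")
            B.add_edge(chem, dis, weight=w)

        # project onto diseases: two diseases are linked if they share a chemical,
        # scored here by summing the smaller of the two evidence weights
        D = nx.Graph()
        for chem in (n for n, d in B.nodes(data=True) if d["kind"] == "chemical"):
            for d1, d2 in combinations(B[chem], 2):
                w = min(B[chem][d1]["weight"], B[chem][d2]["weight"])
                D.add_edge(d1, d2, weight=D.get_edge_data(d1, d2, {"weight": 0})["weight"] + w)

        print(list(D.edges(data=True)))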

  4. A new model of sensorimotor coupling in the development of speech.

    PubMed

    Westermann, Gert; Reck Miranda, Eduardo

    2004-05-01

    We present a computational model that learns a coupling between motor parameters and their sensory consequences in vocal production during a babbling phase. Based on the coupling, preferred motor parameters and prototypically perceived sounds develop concurrently. Exposure to an ambient language modifies perception to coincide with the sounds from the language. The model develops motor mirror neurons that are active when an external sound is perceived. An extension to visual mirror neurons for oral gestures is suggested.

  5. Modelling digital thunder

    NASA Astrophysics Data System (ADS)

    Blanco, Francesco; La Rocca, Paola; Petta, Catia; Riggi, Francesco

    2009-01-01

    An educational model simulation of the sound produced by lightning in the sky has been employed to demonstrate realistic signatures of thunder and its connection to the particular structure of the lightning channel. Algorithms used in the past have been revisited and implemented, making use of current computer techniques. The basic properties of the mathematical model, together with typical results and suggestions for additional developments, are discussed. The paper is intended as a teaching aid for students and teachers in the context of introductory physics courses at university level.

  6. A Simulation Study of Mutations in the Genetic Regulatory Hierarchy for Butterfly Eyespot Focus Determination

    PubMed Central

    Marcus, Jeffrey M.; Evans, Travis M.

    2008-01-01

    The color patterns on the wings of butterflies have been an important model system in evolutionary developmental biology. A recent computational model tested genetic regulatory hierarchies hypothesized to underlie the formation of butterfly eyespot foci (Evans and Marcus, 2006). The computational model demonstrated that one proposed hierarchy was incapable of reproducing the known patterns of gene expression associated with eyespot focus determination in wild-type butterflies, but that two slightly modified alternative hierarchies were capable of reproducing all of the known gene expression patterns. Here we extend the computational models previously implemented in Delphi 2.0 to two mutants derived from the squinting bush brown butterfly (Bicyclus anynana). These two mutants, comet and Cyclops, have aberrantly shaped eyespot foci that are produced by different mechanisms. The comet mutation appears to produce a modified interaction between the wing margin and the eyespot focus that results in a series of comet-shaped eyespot foci. The Cyclops mutation causes the failure of wing vein formation between two adjacent wing-cells and the fusion of two adjacent eyespot foci to form a single large elongated focus in their place. The computational approach to modeling pattern formation in these mutants allows us to make predictions about patterns of gene expression, which are largely unstudied in butterfly mutants. It also suggests a critical experiment that will allow us to distinguish between two hypothesized genetic regulatory hierarchies that may underlie all butterfly eyespot foci. PMID:18586070

  7. Retrospective revaluation in sequential decision making: a tale of two systems.

    PubMed

    Gershman, Samuel J; Markman, Arthur B; Otto, A Ross

    2014-02-01

    Recent computational theories of decision making in humans and animals have portrayed 2 systems locked in a battle for control of behavior. One system--variously termed model-free or habitual--favors actions that have previously led to reward, whereas a second--called the model-based or goal-directed system--favors actions that causally lead to reward according to the agent's internal model of the environment. Some evidence suggests that control can be shifted between these systems using neural or behavioral manipulations, but other evidence suggests that the systems are more intertwined than a competitive account would imply. In 4 behavioral experiments, using a retrospective revaluation design and a cognitive load manipulation, we show that human decisions are more consistent with a cooperative architecture in which the model-free system controls behavior, whereas the model-based system trains the model-free system by replaying and simulating experience.
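
    The cooperative architecture described here is in the spirit of Dyna-style learning, where a learned world model replays simulated transitions to train model-free values. The sketch below is a generic Dyna-Q illustration with an invented two-choice task, not the authors' specific model or task design.

        import random
        from collections import defaultdict

        random.seed(0)
        Q = defaultdict(float)          # model-free action values
        model = {}                      # model-based: (state, action) -> (next_state, reward)
        alpha, gamma = 0.1, 0.95

        def q_update(s, a, r, s2, actions=("left", "right")):
            best_next = max(Q[(s2, a2)] for a2 in actions)
            Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])

        def step(s, a):
            """Toy environment: from 'start', only 'right' leads to reward."""
            if s == "start":
                return ("goal", 1.0) if a == "right" else ("dead_end", 0.0)
            return ("start", 0.0)

        for episode in range(200):
            s = "start"
            a = random.choice(("left", "right"))
            s2, r = step(s, a)
            q_update(s, a, r, s2)                 # direct model-free learning
            model[(s, a)] = (s2, r)               # update the internal model
            for s_sim, a_sim in random.sample(list(model), k=min(5, len(model))):
                s2_sim, r_sim = model[(s_sim, a_sim)]
                q_update(s_sim, a_sim, r_sim, s2_sim)   # replayed experience trains model-free values

        print({k: round(v, 2) for k, v in Q.items()})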

  8. Calculation of thermodynamic functions of aluminum plasma for high-energy-density systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shumaev, V. V., E-mail: shumaev@student.bmstu.ru

    The results of calculating the degree of ionization, the pressure, and the specific internal energy of aluminum plasma in a wide temperature range are presented. The TERMAG computational code based on the Thomas–Fermi model was used at temperatures T > 10^5 K, and the ionization equilibrium model (Saha model) was applied at lower temperatures. Quantitatively similar results were obtained in the temperature range where both models are applicable. This suggests that the obtained data may be joined to produce a wide-range equation of state.
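
    For reference, the ionization-equilibrium (Saha) relation used at the lower temperatures has the standard textbook form (not the TERMAG implementation itself):

        \frac{n_{i+1}\, n_e}{n_i}
        = \frac{2\, g_{i+1}}{g_i}
          \left( \frac{2\pi m_e k_B T}{h^{2}} \right)^{3/2}
          \exp\!\left( -\frac{E_i}{k_B T} \right),

    where n_i and n_{i+1} are the number densities of successive ionization stages, g_i and g_{i+1} their statistical weights, and E_i the ionization energy of stage i.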

  9. The quest for solvable multistate Landau-Zener models

    DOE PAGES

    Sinitsyn, Nikolai A.; Chernyak, Vladimir Y.

    2017-05-24

    Recently, integrability conditions (ICs) in multistate Landau-Zener (MLZ) theory were proposed. They describe common properties of all known solved systems with linearly time-dependent Hamiltonians. Here we show that ICs enable an efficient computer-assisted search for new solvable MLZ models that span a complexity range from several interacting states to mesoscopic systems with many-body dynamics and combinatorially large phase space. This diversity suggests that nontrivial solvable MLZ models are numerous. Additionally, we refine the formulation of ICs and extend the class of solvable systems to models with points of multiple diabatic level crossing.

  10. A border-ownership model based on computational electromagnetism.

    PubMed

    Zainal, Zaem Arif; Satoh, Shunji

    2018-03-01

    The mathematical relation between a vector electric field and its corresponding scalar potential field is useful to formulate computational problems of lower/middle-order visual processing, specifically related to the assignment of borders to the side of the object: so-called border ownership (BO). BO coding is a key process for extracting objects from the background, allowing one to organize a cluttered scene. We propose that the problem is solvable simultaneously by application of a theorem of electromagnetism, i.e., "conservative vector fields have zero rotation, or curl." We hypothesize that (i) the BO signal is definable as a vector electric field with arrowheads pointing to the inner side of perceived objects, and (ii) its corresponding scalar field carries information related to the perceived order in depth of occluding/occluded objects. A simple model was developed based on this computational theory. Model results qualitatively agree with the object-side selectivity of BO-coding neurons, and with perceptions of object order. The model update rule can be reproduced as a plausible neural network that presents new interpretations of existing physiological results. Results of this study also suggest that T-junction detectors are unnecessary to calculate depth order. Copyright © 2017 Elsevier Ltd. All rights reserved.
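
    The electromagnetic analogy rests on the standard identity that a field derived from a scalar potential is irrotational (a textbook fact, independent of the specific BO model):

        \mathbf{E} = -\nabla \phi
        \quad \Longrightarrow \quad
        \nabla \times \mathbf{E} = -\nabla \times \nabla \phi = \mathbf{0},

    so a BO vector field constrained to have zero curl can always be integrated to a scalar field \phi, which in this framework is read out as relative depth order.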

  11. A Test of the Validity of Inviscid Wall-Modeled LES

    NASA Astrophysics Data System (ADS)

    Redman, Andrew; Craft, Kyle; Aikens, Kurt

    2015-11-01

    Computational expense is one of the main deterrents to more widespread use of large eddy simulations (LES). As such, it is important to reduce computational costs whenever possible. In this vein, it may be reasonable to assume that high Reynolds number flows with turbulent boundary layers are inviscid when using a wall model. This assumption relies on the grid being too coarse to resolve either the viscous length scales in the outer flow or those near walls. We are not aware of other studies that have suggested or examined the validity of this approach. The inviscid wall-modeled LES assumption is tested here for supersonic flow over a flat plate on three different grids. Inviscid and viscous results are compared to those of another wall-modeled LES as well as experimental data - the results appear promising. Furthermore, the inviscid assumption reduces simulation costs by about 25% and 39% for supersonic and subsonic flows, respectively, with the current LES application. Recommendations are presented as are future areas of research. This work used the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation grant number ACI-1053575. Computational resources on TACC Stampede were provided under XSEDE allocation ENG150001.

  12. Radar Model of Asteroid 216 Kleopatra

    NASA Technical Reports Server (NTRS)

    2000-01-01

    These images show several views from a radar-based computer model of asteroid 216 Kleopatra. The object, located in the main asteroid belt between Mars and Jupiter, is about 217 kilometers (135 miles) long and about 94 kilometers (58 miles) wide, or about the size of New Jersey.

    This dog bone-shaped asteroid is an apparent leftover from an ancient, violent cosmic collision. Kleopatra is one of several dozen asteroids whose coloring suggests they contain metal.

    A team of astronomers observing Kleopatra used the 305-meter (1,000-foot) telescope of the Arecibo Observatory in Puerto Rico to bounce encoded radio signals off Kleopatra. Using sophisticated computer analysis techniques, they decoded the echoes, transformed them into images, and assembled a computer model of the asteroid's shape.

    The images were obtained when Kleopatra was about 171 million kilometers (106 million miles) from Earth. This model is accurate to within about 15 kilometers (9 miles).

    The Arecibo Observatory is part of the National Astronomy and Ionosphere Center, operated by Cornell University, Ithaca, N.Y., for the National Science Foundation. The Kleopatra radar observations were supported by NASA's Office of Space Science, Washington, DC. JPL is managed for NASA by the California Institute of Technology in Pasadena.

  13. Towards a Computational Framework for Modeling the Impact of Aortic Coarctations Upon Left Ventricular Load

    PubMed Central

    Karabelas, Elias; Gsell, Matthias A. F.; Augustin, Christoph M.; Marx, Laura; Neic, Aurel; Prassl, Anton J.; Goubergrits, Leonid; Kuehne, Titus; Plank, Gernot

    2018-01-01

    Computational fluid dynamics (CFD) models of blood flow in the left ventricle (LV) and aorta are important tools for analyzing the mechanistic links between myocardial deformation and flow patterns. Typically, the use of image-based kinematic CFD models prevails in applications such as predicting the acute response to interventions which alter LV afterload conditions. However, such models are limited in their ability to analyze any impacts upon LV load or key biomarkers known to be implicated in driving remodeling processes as LV function is not accounted for in a mechanistic sense. This study addresses these limitations by reporting on progress made toward a novel electro-mechano-fluidic (EMF) model that represents the entire physics of LV electromechanics (EM) based on first principles. A biophysically detailed finite element (FE) model of LV EM was coupled with a FE-based CFD solver for moving domains using an arbitrary Eulerian-Lagrangian (ALE) formulation. Two clinical cases of patients suffering from aortic coarctations (CoA) were built and parameterized based on clinical data under pre-treatment conditions. For one patient case simulations under post-treatment conditions after geometric repair of CoA by a virtual stenting procedure were compared against pre-treatment results. Numerical stability of the approach was demonstrated by analyzing mesh quality and solver performance under the significantly large deformations of the LV blood pool. Further, computational tractability and compatibility with clinical time scales were investigated by performing strong scaling benchmarks up to 1536 compute cores. The overall cost of the entire workflow for building, fitting and executing EMF simulations was comparable to those reported for image-based kinematic models, suggesting that EMF models show potential of evolving into a viable clinical research tool. PMID:29892227

  14. Modeling choice and reaction time during arbitrary visuomotor learning through the coordination of adaptive working memory and reinforcement learning

    PubMed Central

    Viejo, Guillaume; Khamassi, Mehdi; Brovelli, Andrea; Girard, Benoît

    2015-01-01

    Current learning theory provides a comprehensive description of how humans and other animals learn, and places behavioral flexibility and automaticity at the heart of adaptive behaviors. However, the computations supporting the interactions between goal-directed and habitual decision-making systems are still poorly understood. Previous functional magnetic resonance imaging (fMRI) results suggest that the brain hosts complementary computations that may differentially support goal-directed and habitual processes in the form of a dynamical interplay rather than a serial recruitment of strategies. To better elucidate the computations underlying flexible behavior, we develop a dual-system computational model that can predict both performance (i.e., participants' choices) and modulations in reaction times during learning of a stimulus–response association task. The habitual system is modeled with a simple Q-Learning algorithm (QL). For the goal-directed system, we propose a new Bayesian Working Memory (BWM) model that searches for information in the history of previous trials in order to minimize Shannon entropy. We propose a model for QL and BWM coordination such that the expensive memory manipulation is under the control of, among other factors, the level of convergence of the habitual learning. We test the ability of QL or BWM alone to explain human behavior, and compare them with the performance of model combinations, to highlight the need for such combinations to explain behavior. Two of the tested combination models are derived from the literature, and the last is our new proposal. In conclusion, all subjects were better explained by model combinations, and the majority of them are explained by our new coordination proposal. PMID:26379518
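
    The two named building blocks have compact textbook forms; the snippet below shows a generic delta-rule Q-Learning update and the Shannon entropy of a posterior over candidate stimulus-response mappings, purely as an illustration of the quantities involved (the coordination rule itself is not reproduced here).

        import numpy as np

        def q_learning_update(q, reward, alpha=0.1):
            """Habitual system: incremental delta-rule update of an action value."""
            return q + alpha * (reward - q)

        def shannon_entropy(posterior):
            """Goal-directed system: uncertainty (in bits) over candidate S-R mappings."""
            p = np.asarray(posterior, dtype=float)
            p = p[p > 0]
            return float(-(p * np.log2(p)).sum())

        q = 0.0
        for r in [1, 1, 0, 1]:              # toy reward sequence
            q = q_learning_update(q, r)
        print("Q value:", round(q, 3))
        print("entropy of a flat posterior over 4 mappings:", shannon_entropy([0.25] * 4))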

  15. Simple Kinematic Pathway Approach (KPA) to Catchment-scale Travel Time and Water Age Distributions

    NASA Astrophysics Data System (ADS)

    Soltani, S. S.; Cvetkovic, V.; Destouni, G.

    2017-12-01

    The distribution of catchment-scale water travel times is strongly influenced by morphological dispersion and is partitioned between hillslope and larger, regional scales. We explore whether hillslope travel times are predictable using a simple semi-analytical "kinematic pathway approach" (KPA) that accounts for dispersion on two levels of morphological and macro-dispersion. The study gives new insights to shallow (hillslope) and deep (regional) groundwater travel times by comparing numerical simulations of travel time distributions, referred to as "dynamic model", with corresponding KPA computations for three different real catchment case studies in Sweden. KPA uses basic structural and hydrological data to compute transient water travel time (forward mode) and age (backward mode) distributions at the catchment outlet. Longitudinal and morphological dispersion components are reflected in KPA computations by assuming an effective Peclet number and topographically driven pathway length distributions, respectively. Numerical simulations of advective travel times are obtained by means of particle tracking using the fully-integrated flow model MIKE SHE. The comparison of computed cumulative distribution functions of travel times shows significant influence of morphological dispersion and groundwater recharge rate on the compatibility of the "kinematic pathway" and "dynamic" models. Zones of high recharge rate in "dynamic" models are associated with topographically driven groundwater flow paths to adjacent discharge zones, e.g. rivers and lakes, through relatively shallow pathway compartments. These zones exhibit more compatible behavior between "dynamic" and "kinematic pathway" models than the zones of low recharge rate. Interestingly, the travel time distributions of hillslope compartments remain almost unchanged with increasing recharge rates in the "dynamic" models. This robust "dynamic" model behavior suggests that flow path lengths and travel times in shallow hillslope compartments are controlled by topography, and therefore application and further development of the simple "kinematic pathway" approach is promising for their modeling.

  16. Computational discovery and in vivo validation of hnf4 as a regulatory gene in planarian regeneration.

    PubMed

    Lobo, Daniel; Morokuma, Junji; Levin, Michael

    2016-09-01

    Automated computational methods can infer dynamic regulatory network models directly from temporal and spatial experimental data, such as genetic perturbations and their resultant morphologies. Recently, a computational method was able to reverse-engineer the first mechanistic model of planarian regeneration that can recapitulate the main anterior-posterior patterning experiments published in the literature. Validating this comprehensive regulatory model via novel experiments that had not yet been performed would add to our understanding of the remarkable regeneration capacity of planarian worms and demonstrate the power of this automated methodology. Using the Michigan Molecular Interactions and STRING databases and the MoCha software tool, we characterized as hnf4 an unknown regulatory gene predicted to exist by the reverse-engineered dynamic model of planarian regeneration. Then, we used the dynamic model to predict the morphological outcomes under different single and multiple knock-downs (RNA interference) of hnf4 and its predicted gene pathway interactors β-catenin and hh. Interestingly, the model predicted that RNAi of hnf4 would rescue the abnormal regenerated phenotype (tailless) of RNAi of hh in amputated trunk fragments. Finally, we validated these predictions in vivo by performing the same surgical and genetic experiments with planarian worms, obtaining the same phenotypic outcomes predicted by the reverse-engineered model. These results suggest that hnf4 is a regulatory gene in planarian regeneration, validate the computational predictions of the reverse-engineered dynamic model, and demonstrate the automated methodology for the discovery of novel genes, pathways and experimental phenotypes. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  17. Stratifying Parkinson's Patients With STN-DBS Into High-Frequency or 60 Hz-Frequency Modulation Using a Computational Model.

    PubMed

    Khojandi, Anahita; Shylo, Oleg; Mannini, Lucia; Kopell, Brian H; Ramdhani, Ritesh A

    2017-07-01

    High frequency stimulation (HFS) of the subthalamic nucleus (STN) is a well-established therapy for Parkinson's disease (PD), particularly for the cardinal motor symptoms and levodopa-induced motor complications. Recent studies have suggested a possible role of 60 Hz stimulation in STN-deep brain stimulation (DBS) for patients with gait disorder. The objective of this study was to develop a computational model which stratifies patients a priori, based on symptomatology, into different frequency settings (i.e., high frequency or 60 Hz). We retrospectively analyzed preoperative MDS-Unified Parkinson's Disease Rating Scale III scores (32 indicators) collected from 20 PD patients implanted with STN-DBS at Mount Sinai Medical Center on either 60 Hz stimulation (ten patients) or HFS (130-185 Hz) (ten patients) for an average of 12 months. Predictive models using the Random Forest classification algorithm were built to associate patient/disease characteristics at surgery with the stimulation frequency. These models were evaluated objectively using a leave-one-out cross-validation approach. The computational models produced stratified patients into 60 Hz or HFS (130-185 Hz) with 95% accuracy. The best models relied on two or three predictors out of the 32 analyzed for classification. Across all predictors, gait and rest tremor of the right hand were consistently the most important. Computational models were developed using preoperative clinical indicators in PD patients treated with STN-DBS. These models were able to accurately stratify PD patients into 60 Hz stimulation or HFS (130-185 Hz) groups a priori, offering a unique potential to enhance the utilization of this therapy based on clinical subtypes. © 2017 International Neuromodulation Society.
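
    A minimal sketch of this kind of analysis pipeline with scikit-learn, using randomly generated stand-in data (the real predictors are the 32 preoperative MDS-UPDRS III indicators, which are not reproduced here):

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import LeaveOneOut, cross_val_score

        rng = np.random.default_rng(0)
        X = rng.normal(size=(20, 32))            # 20 patients x 32 preoperative indicators (fake)
        y = rng.integers(0, 2, size=20)          # 0 = 60 Hz group, 1 = HFS group (fake labels)

        clf = RandomForestClassifier(n_estimators=200, random_state=0)
        scores = cross_val_score(clf, X, y, cv=LeaveOneOut())
        print("leave-one-out accuracy:", scores.mean())

        clf.fit(X, y)
        top = np.argsort(clf.feature_importances_)[::-1][:3]
        print("most important predictors (indices):", top)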

  18. A Computational Modeling Approach for Investigating Soft Tissue Balancing in Bicruciate Retaining Knee Arthroplasty

    PubMed Central

    Amiri, Shahram; Wilson, David R.

    2012-01-01

    Bicruciate retaining knee arthroplasty, although it has shown improved function and patient satisfaction compared to other designs of total knee replacement, remains a technically demanding option for treating severe cases of arthritic knees. One of the main challenges in bicruciate retaining arthroplasty is proper balancing of the soft tissue during the surgery. In this study, the biomechanics of soft tissue balancing was investigated using a validated computational model of the knee joint with high-fidelity definitions of the soft tissue structures, along with a Taguchi method for design of experiments. The model was used to simulate intraoperative balancing of soft tissue structures following the combinations suggested by an orthogonal array design. The results were used to quantify the corresponding effects on the laxity of the joint under anterior-posterior, internal-external, and varus-valgus loads. These effects were ranked for each ligament bundle to identify the components of laxity which were most sensitive to the corresponding surgical modifications. The resulting map of sensitivity for all the ligament bundles determined the components of laxity most suitable for examination during intraoperative balancing of the soft tissue. Ultimately, a sequence for intraoperative soft tissue balancing was suggested for a bicruciate retaining knee arthroplasty. PMID:23082090

  19. Comparative Evaluation of a Four-Implant-Supported Polyetherketoneketone Framework Prosthesis: A Three-Dimensional Finite Element Analysis Based on Cone Beam Computed Tomography and Computer-Aided Design.

    PubMed

    Lee, Ki-Sun; Shin, Sang-Wan; Lee, Sang-Pyo; Kim, Jong-Eun; Kim, Jee-Hwan; Lee, Jeong-Yol

    The purpose of this pilot study was to evaluate and compare polyetherketoneketone (PEKK) with different framework materials for implant-supported prostheses by means of a three-dimensional finite element analysis (3D-FEA) based on cone beam computed tomography (CBCT) and computer-aided design (CAD) data. A geometric model that consisted of four maxillary implants supporting a prosthesis framework was constructed from CBCT and CAD data of a treated patient. Three different materials (zirconia, titanium, and PEKK) were selected, and their material properties were simulated using FEA software in the generated geometric model. In the PEKK framework (ie, low elastic modulus) group, the stress transferred to the implant and simulated adjacent tissue was reduced when compressive stress was dominant, but increased when tensile stress was dominant. This study suggests that the shock-absorbing effects of a resilient implant-supported framework are limited in some areas and that rigid framework material shows a favorable stress distribution and safety of overall components of the prosthesis.

  20. a Study of the Reconstruction of Accidents and Crime Scenes Through Computational Experiments

    NASA Astrophysics Data System (ADS)

    Park, S. J.; Chae, S. W.; Kim, S. H.; Yang, K. M.; Chung, H. S.

    Recently, with an increase in the number of studies of the safety of both pedestrians and passengers, computer software, such as MADYMO, Pam-crash, and LS-dyna, has been providing human models for computer simulation. Although such programs have been applied to make machines beneficial for humans, studies that analyze the reconstruction of accidents or crime scenes are rare. Therefore, through computational experiments, the present study presents reconstructions of two questionable accidents. In the first case, a car fell off the road and the driver was separated from it. The accident investigator was very confused because some circumstantial evidence suggested the possibility that the driver was murdered. In the second case, a woman died in her house and the police suspected foul play with her boyfriend as a suspect. These two cases were reconstructed using the human model in MADYMO software. The first case was eventually confirmed as a traffic accident in which the driver bounced out of the car when the car fell off, and the second case was proved to be suicide rather than homicide.

  1. Rosen's (M,R) system in process algebra.

    PubMed

    Gatherer, Derek; Galpin, Vashti

    2013-11-17

    Robert Rosen's Metabolism-Replacement, or (M,R), system can be represented as a compact network structure with a single source and three products derived from that source in three consecutive reactions. (M,R) has been claimed to be non-reducible to its components and algorithmically non-computable, in the sense of not being evaluable as a function by a Turing machine. If (M,R)-like structures are present in real biological networks, this suggests that many biological networks will be non-computable, with implications for those branches of systems biology that rely on in silico modelling for predictive purposes. We instantiate (M,R) using the process algebra Bio-PEPA, and discuss the extent to which our model represents a true realization of (M,R). We observe that under some starting conditions and parameter values, stable states can be achieved. Although formal demonstration of algorithmic computability remains elusive for (M,R), we discuss the extent to which our Bio-PEPA representation of (M,R) allows us to sidestep Rosen's fundamental objections to computational systems biology. We argue that the behaviour of (M,R) in Bio-PEPA shows life-like properties.

  2. Computer simulation of two-dimensional unsteady flows in estuaries and embayments by the method of characteristics : basic theory and the formulation of the numerical method

    USGS Publications Warehouse

    Lai, Chintu

    1977-01-01

    Two-dimensional unsteady flows of homogeneous density in estuaries and embayments can be described by hyperbolic, quasi-linear partial differential equations involving three dependent and three independent variables. A linear combination of these equations leads to a parametric equation of characteristic form, which consists of two parts: total differentiation along the bicharacteristics and partial differentiation in space. For its numerical solution, the specified-time-interval scheme has been used. The unknown partial space-derivative terms can be eliminated first by suitable combinations of difference equations, converted from the corresponding differential forms and written along four selected bicharacteristics and a streamline. Other unknowns are thus made solvable from the known variables on the current time plane. The computation is carried to second-order accuracy by using the trapezoidal rule of integration. Means to handle complex boundary conditions are developed for practical application. Computer programs have been written and a mathematical model has been constructed for flow simulation. The favorable computer outputs suggest that further exploration and development of the model are worthwhile. (Woodard-USGS)
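
    In generic notation (a schematic form for orientation, not Lai's specific equations), such a quasi-linear hyperbolic system can be written as

        \frac{\partial \mathbf{U}}{\partial t}
        + \mathbf{A}(\mathbf{U})\,\frac{\partial \mathbf{U}}{\partial x}
        + \mathbf{B}(\mathbf{U})\,\frac{\partial \mathbf{U}}{\partial y}
        = \mathbf{C}(\mathbf{U}),

    where U collects the three dependent variables (for free-surface flow, typically the water depth and the two depth-averaged velocity components); the characteristic form used for the bicharacteristic solution is obtained from linear combinations of its component equations.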

  3. Minimal model of a cell connecting amoebic motion and adaptive transport networks.

    PubMed

    Gunji, Yukio-Pegio; Shirakawa, Tomohiro; Niizato, Takayuki; Haruna, Taichi

    2008-08-21

    A cell is a minimal self-sustaining system that can move and compute. Previous work has shown that a unicellular slime mold, Physarum, can be utilized as a biological computer based on cytoplasmic flow encapsulated by a membrane. Although the interplay between the modification of the boundary of a cell and the cytoplasmic flow surrounded by the boundary plays a key role in Physarum computing, no model of a cell has been developed to describe this interplay. Here we propose a toy model of a cell that shows amoebic motion and can solve a maze, the Steiner minimum tree problem and a spanning tree problem. Only by assuming that cytoplasm hardens after external matter (or a softened part) passes through the cell can the shape of the cell and the cytoplasmic flow be changed. Without cytoplasm hardening, a cell is easily destroyed. This suggests that cytoplasmic hardening and/or sol-gel transformation caused by external perturbation can keep a cell in a critical state leading to a wide variety of shapes and motion.

  4. Anticipatory dynamics of biological systems: from molecular quantum states to evolution

    NASA Astrophysics Data System (ADS)

    Igamberdiev, Abir U.

    2015-08-01

    Living systems possess anticipatory behaviour that is based on the flexibility of internal models generated by the system's embedded description. The idea was suggested by Aristotle and was explicitly introduced into theoretical biology by Rosen. The possibility of holding the embedded internal model is grounded in the principle of stable non-equilibrium (Bauer). From the quantum mechanical view, this principle aims to minimize energy dissipation at the expense of long relaxation times. The ideas of stable non-equilibrium were developed by Liberman, who viewed living systems as subdivided into the quantum regulator and the molecular computer supporting coherence of the regulator's internal quantum state. The computational power of the cell's molecular computer is based on the possibility of molecular rearrangements according to molecular addresses. In evolution, the anticipatory strategies are realized both as a precession of phylogenesis by ontogenesis (Berg) and as the anticipatory search for genetic fixation of adaptive changes that incorporates them into the internal model of the genetic system. We discuss how the fundamental ideas of anticipation can be introduced into the basic foundations of theoretical biology.

  5. Reconstructing constructivism: causal models, Bayesian learning mechanisms, and the theory theory.

    PubMed

    Gopnik, Alison; Wellman, Henry M

    2012-11-01

    We propose a new version of the "theory theory" grounded in the computational framework of probabilistic causal models and Bayesian learning. Probabilistic models allow a constructivist but rigorous and detailed approach to cognitive development. They also explain the learning of both more specific causal hypotheses and more abstract framework theories. We outline the new theoretical ideas, explain the computational framework in an intuitive and nontechnical way, and review an extensive but relatively recent body of empirical results that supports these ideas. These include new studies of the mechanisms of learning. Children infer causal structure from statistical information, through their own actions on the world and through observations of the actions of others. Studies demonstrate these learning mechanisms in children from 16 months to 4 years old and include research on causal statistical learning, informal experimentation through play, and imitation and informal pedagogy. They also include studies of the variability and progressive character of intuitive theory change, particularly theory of mind. These studies investigate both the physical and the psychological and social domains. We conclude with suggestions for further collaborative projects between developmental and computational cognitive scientists.

  6. The Perspective Structure of Visual Space

    PubMed Central

    2015-01-01

    Luneburg’s model has been the reference for experimental studies of visual space for almost seventy years. His claim for a curved visual space has been a source of inspiration for visual scientists as well as philosophers. The conclusion of many experimental studies has been that Luneburg’s model does not describe visual space in various tasks and conditions. Remarkably, no alternative model has been suggested. The current study explores perspective transformations of Euclidean space as a model for visual space. Computations show that the geometry of perspective spaces is considerably different from that of Euclidean space. Collinearity but not parallelism is preserved in perspective space, and angles are not invariant under translation and rotation. Similar relationships have been shown to be properties of visual space. Alley experiments performed early in the twentieth century have been instrumental in hypothesizing curved visual spaces. Alleys were computed in perspective space and compared with reconstructed alleys of Blumenfeld. Parallel alleys were accurately described by perspective geometry. Accurate distance alleys were derived from parallel alleys by adjusting the interstimulus distances according to the size-distance invariance hypothesis. Agreement between computed and experimental alleys and accommodation of experimental results that rejected Luneburg’s model show that perspective space is an appropriate model for how we perceive orientations and angles. The model is also appropriate for perceived distance ratios between stimuli but fails to predict perceived distances. PMID:27648222
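
    To illustrate the geometric claim (collinearity preserved, parallelism not), the sketch below applies a generic projective (perspective) map to points on two parallel lines; the particular matrix is arbitrary and is not the paper's fitted model of visual space.

        import numpy as np

        H = np.array([[1.0, 0.0, 0.0],      # an arbitrary perspective (projective) map
                      [0.0, 1.0, 0.0],
                      [0.2, 0.0, 1.0]])

        def project(points):
            """Apply H in homogeneous coordinates and renormalize."""
            hom = np.hstack([points, np.ones((len(points), 1))])
            out = hom @ H.T
            return out[:, :2] / out[:, 2:3]

        line_a = np.array([[x, 1.0] for x in range(5)])    # two parallel horizontal lines
        line_b = np.array([[x, 2.0] for x in range(5)])
        pa, pb = project(line_a), project(line_b)

        # collinearity of each image line is preserved: 2-D cross products vanish ...
        v = pa[1:] - pa[0]
        cross = v[1:, 0] * v[0, 1] - v[1:, 1] * v[0, 0]
        print("collinear:", np.allclose(cross, 0))

        # ... but the images of the two originally parallel lines are no longer parallel
        print("direction a:", (pa[-1] - pa[0]) / np.linalg.norm(pa[-1] - pa[0]))
        print("direction b:", (pb[-1] - pb[0]) / np.linalg.norm(pb[-1] - pb[0]))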

  7. Neural correlates of auditory scene analysis and perception

    PubMed Central

    Cohen, Yale E.

    2014-01-01

    The auditory system is designed to transform acoustic information from low-level sensory representations into perceptual representations. These perceptual representations are the computational result of the auditory system's ability to group and segregate spectral, spatial and temporal regularities in the acoustic environment into stable perceptual units (i.e., sounds or auditory objects). Current evidence suggests that the cortex--specifically, the ventral auditory pathway--is responsible for the computations most closely related to perceptual representations. Here, we discuss how the transformations along the ventral auditory pathway relate to auditory percepts, with special attention paid to the processing of vocalizations and categorization, and explore recent models of how these areas may carry out these computations. PMID:24681354

  8. Requirements and principles for the implementation and construction of large-scale geographic information systems

    NASA Technical Reports Server (NTRS)

    Smith, Terence R.; Menon, Sudhakar; Star, Jeffrey L.; Estes, John E.

    1987-01-01

    This paper provides a brief survey of the history, structure and functions of 'traditional' geographic information systems (GIS), and then suggests a set of requirements that large-scale GIS should satisfy, together with a set of principles for their satisfaction. These principles, which include the systematic application of techniques from several subfields of computer science to the design and implementation of GIS and the integration of techniques from computer vision and image processing into standard GIS technology, are discussed in some detail. In particular, the paper provides a detailed discussion of questions relating to appropriate data models, data structures and computational procedures for the efficient storage, retrieval and analysis of spatially-indexed data.

  9. Discrete particle model for cement infiltration within open-cell structures: Prevention of osteoporotic fracture.

    PubMed

    Ramos-Infante, Samuel Jesús; Ten-Esteve, Amadeo; Alberich-Bayarri, Angel; Pérez, María Angeles

    2018-01-01

    This paper proposes a discrete particle model based on the random-walk theory for simulating cement infiltration within open-cell structures to prevent osteoporotic proximal femur fractures. Model parameters consider the cement viscosity (high and low) and the desired direction of injection (vertical and diagonal). In vitro and in silico characterizations of augmented open-cell structures validated the computational model and quantified the improved mechanical properties (Young's modulus) of the augmented specimens. The cement injection pattern was successfully predicted in all the simulated cases. All the augmented specimens exhibited enhanced mechanical properties computationally and experimentally (maximum improvements of 237.95 ± 12.91% and 246.85 ± 35.57%, respectively). The open-cell structures with high porosity fraction showed a considerable increase in mechanical properties. Cement augmentation in low porosity fraction specimens resulted in a lesser increase in mechanical properties. The results suggest that the proposed discrete particle model is adequate for use as a femoroplasty planning framework.
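
    A toy illustration of the discrete particle idea (not the paper's calibrated model): particles released at an injection face take biased random-walk steps on a lattice and deposit when they meet solid voxels, with the bias direction standing in for the desired injection direction; all parameters are invented.

        import random

        random.seed(1)
        N = 30
        # toy open-cell structure: 1 = solid strut, 0 = pore space
        solid = [[1 if random.random() < 0.3 else 0 for _ in range(N)] for _ in range(N)]
        cement = [[0] * N for _ in range(N)]

        def inject(n_particles, bias=(1, 0), stickiness=0.5):
            """Release particles at the left face; biased walk until they stick."""
            for _ in range(n_particles):
                x, y = 0, random.randrange(N)
                for _ in range(200):
                    dx, dy = random.choice([bias, (0, 1), (0, -1), (-1, 0), bias])
                    nx, ny = min(max(x + dx, 0), N - 1), min(max(y + dy, 0), N - 1)
                    if solid[nx][ny] and random.random() < stickiness:
                        cement[x][y] = 1            # particle deposits next to a strut
                        break
                    x, y = nx, ny

        inject(500)
        filled = sum(map(sum, cement))
        print(f"cemented pore voxels: {filled} of {N * N}")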

  10. Representational Similarity Analysis – Connecting the Branches of Systems Neuroscience

    PubMed Central

    Kriegeskorte, Nikolaus; Mur, Marieke; Bandettini, Peter

    2008-01-01

    A fundamental challenge for systems neuroscience is to quantitatively relate its three major branches of research: brain-activity measurement, behavioral measurement, and computational modeling. Using measured brain-activity patterns to evaluate computational network models is complicated by the need to define the correspondency between the units of the model and the channels of the brain-activity data, e.g., single-cell recordings or voxels from functional magnetic resonance imaging (fMRI). Similar correspondency problems complicate relating activity patterns between different modalities of brain-activity measurement (e.g., fMRI and invasive or scalp electrophysiology), and between subjects and species. In order to bridge these divides, we suggest abstracting from the activity patterns themselves and computing representational dissimilarity matrices (RDMs), which characterize the information carried by a given representation in a brain or model. Building on a rich psychological and mathematical literature on similarity analysis, we propose a new experimental and data-analytical framework called representational similarity analysis (RSA), in which multi-channel measures of neural activity are quantitatively related to each other and to computational theory and behavior by comparing RDMs. We demonstrate RSA by relating representations of visual objects as measured with fMRI in early visual cortex and the fusiform face area to computational models spanning a wide range of complexities. The RDMs are simultaneously related via second-level application of multidimensional scaling and tested using randomization and bootstrap techniques. We discuss the broad potential of RSA, including novel approaches to experimental design, and argue that these ideas, which have deep roots in psychology and neuroscience, will allow the integrated quantitative analysis of data from all three branches, thus contributing to a more unified systems neuroscience. PMID:19104670
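
    The core RDM computation is simple to express; here is a generic sketch (correlation distance per stimulus pair, then a Spearman comparison between two RDMs) using random stand-in activity patterns, not the paper's fMRI data or model set.

        import numpy as np
        from scipy.stats import spearmanr
        from scipy.spatial.distance import pdist, squareform

        rng = np.random.default_rng(0)
        brain = rng.normal(size=(12, 100))       # 12 stimuli x 100 voxels (stand-in data)
        model = brain + rng.normal(scale=2.0, size=brain.shape)   # a noisy "model" representation

        rdm_brain = squareform(pdist(brain, metric="correlation"))   # 1 - Pearson r per pair
        rdm_model = squareform(pdist(model, metric="correlation"))

        iu = np.triu_indices(12, k=1)            # compare upper triangles only
        rho, p = spearmanr(rdm_brain[iu], rdm_model[iu])
        print(f"RDM similarity (Spearman rho): {rho:.2f}")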

  11. Theoretical modeling of magnesium ion imprints in the Raman scattering of water.

    PubMed

    Kapitán, Josef; Dracínský, Martin; Kaminský, Jakub; Benda, Ladislav; Bour, Petr

    2010-03-18

    Hydration envelopes of metallic ions significantly influence their chemical properties and biological functioning. Previous computational studies, nuclear magnetic resonance (NMR), and vibrational spectra indicated a strong affinity of the Mg(2+) cation to water. We find it interesting that, although monatomic ions do not vibrate themselves, they cause notable changes in the water Raman signal. Therefore, in this study, we used a combination of Raman spectroscopy and computer modeling to analyze the magnesium hydration shell and origin of the signal. In the measured spectra of several salts (LiCl, NaCl, KCl, MgCl(2), CaCl(2), MgBr(2), and MgI(2) water solutions), only the spectroscopic imprint of the hydrated Mg(2+) cation could clearly be identified as an exceptionally distinct peak at approximately 355 cm(-1). The assignment of this band to the Mg-O stretching motion could be confirmed on the basis of several models involving quantum chemical computations on metal/water clusters. Minor Raman spectral features could also be explained. Ab initio and Fourier transform (FT) techniques coupled with the Car-Parrinello molecular dynamics were adapted to provide the spectra from dynamical trajectories. The results suggest that even in concentrated solutions magnesium preferentially forms a [Mg(H(2)O)(6)](2+) complex of a nearly octahedral symmetry; nevertheless, the Raman signal is primarily associated with the relatively strong metal-H(2)O bond. Partially covalent character of the Mg-O bond was confirmed by a natural bond orbital analysis. Computations on hydrated chlorine anion did not provide a specific signal. The FT techniques gave good spectral profiles in the high-frequency region, whereas the lowest-wavenumber vibrations were better reproduced by the cluster models. Both dynamical and cluster computational models provided a useful link between spectral shapes and specific ion-water interactions.

  12. Toying with Technology.

    ERIC Educational Resources Information Center

    Foster, Patrick; Kirkwood, James

    1993-01-01

    Suggests that technology education is much more than simply computer literacy and must emphasize real-world problem solving and hands-on learning. Provides examples of activities, such as the construction of a model city out of scrap wood, that can be carried out with students in grades one through four to develop problem-solving skills. (MDM)

  13. Web-Based Training in Corporations: Organizational Considerations

    ERIC Educational Resources Information Center

    Chamers, Terri; Lee, Doris

    2004-01-01

    Advances in technology offer the possibility of new methods for delivering instruction. Learning via the Internet is being heralded by many as the new pedagogical model for training. Recent issues of training, computer, and management magazines all suggest that web-based training (WBT) is the best way to reach geographically dispersed employees…

  14. Introductory Programming Subject in European Higher Education

    ERIC Educational Resources Information Center

    Aleksic, Veljko; Ivanovic, Mirjana

    2016-01-01

    Programming is one of the basic subjects in most informatics, computer science, mathematics, and technical faculties' curricula. An integrated overview of models for teaching programming, problems in teaching, and suggested solutions is presented in this paper. The research covered the current state of 1019 programming subjects in 715 study programmes at…

  15. Making a Connection between Computational Modeling and Educational Research.

    ERIC Educational Resources Information Center

    Carbonaro, Michael

    2003-01-01

    Bruner, Goodnow, and Austin's (1956) research on concept development is reexamined from a connectionist perspective. A neural network was constructed which associates positive and negative instances of a concept with corresponding attribute values. Results suggest the simultaneous learning of attributes guided the network in constructing a faster…

  16. Pore-Scale Determination of Gas Relative Permeability in Hydrate-Bearing Sediments Using X-Ray Computed Micro-Tomography and Lattice Boltzmann Method

    NASA Astrophysics Data System (ADS)

    Chen, Xiongyu; Verma, Rahul; Espinoza, D. Nicolas; Prodanović, Maša.

    2018-01-01

    This work uses X-ray computed micro-tomography (μCT) to monitor xenon hydrate growth in a sandpack under the excess gas condition. The μCT images give pore-scale hydrate distribution and pore habit in space and time. We use the lattice Boltzmann method to calculate gas relative permeability (krg) as a function of hydrate saturation (Shyd) in the pore structure of the experimental hydrate-bearing sand retrieved from μCT data. The results suggest that the krg-Shyd data are better fit by a new model, krg = (1-Shyd)·exp(-4.95·Shyd), than by the simple Corey model. In addition, we calculate krg-Shyd curves using digital models of hydrate-bearing sand based on idealized grain-attaching, coarse pore-filling, and dispersed pore-filling hydrate habits. Our pore-scale measurements and modeling show that the krg-Shyd curves are similar regardless of whether hydrate crystals develop grain-attaching or coarse pore-filling habits. The dispersed pore-filling habit exhibits much lower gas relative permeability than the other two, but it is not observed in the experiment and not compatible with Ostwald ripening mechanisms. We find that a single grain-shape factor can be used in the Carman-Kozeny equation to calculate krg-Shyd data with known porosity and average grain diameter, suggesting it is a useful model for hydrate-bearing sand.
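
    The exponential relative-permeability model quoted in the abstract can be evaluated directly; the sketch below compares it with a generic Corey-type curve, where the Corey exponent is an illustrative assumption rather than a value from the record:

```python
import numpy as np

def krg_exponential(s_hyd, a=4.95):
    """Gas relative permeability model reported in the abstract:
    krg = (1 - Shyd) * exp(-a * Shyd)."""
    return (1.0 - s_hyd) * np.exp(-a * s_hyd)

def krg_corey(s_hyd, n=3.0):
    """Simple Corey-type curve for comparison (exponent n is illustrative)."""
    return (1.0 - s_hyd) ** n

s = np.linspace(0.0, 0.8, 9)
for si, ke, kc in zip(s, krg_exponential(s), krg_corey(s)):
    print(f"Shyd={si:.1f}  exp-model={ke:.3f}  Corey={kc:.3f}")
```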

  17. Reconfiguration of the pontomedullary respiratory network: a computational modeling study with coordinated in vivo experiments.

    PubMed

    Rybak, I A; O'Connor, R; Ross, A; Shevtsova, N A; Nuding, S C; Segers, L S; Shannon, R; Dick, T E; Dunin-Barkowski, W L; Orem, J M; Solomon, I C; Morris, K F; Lindsey, B G

    2008-10-01

    A large body of data suggests that the pontine respiratory group (PRG) is involved in respiratory phase-switching and the reconfiguration of the brain stem respiratory network. However, connectivity between the PRG and ventral respiratory column (VRC) in computational models has been largely ad hoc. We developed a network model with PRG-VRC connectivity inferred from coordinated in vivo experiments. Neurons were modeled in the "integrate-and-fire" style; some neurons had pacemaker properties derived from the model of Breen et al. We recapitulated earlier modeling results, including reproduction of activity profiles of different respiratory neurons and motor outputs, and their changes under different conditions (vagotomy, pontine lesions, etc.). The model also reproduced characteristic changes in neuronal and motor patterns observed in vivo during fictive cough and during hypoxia in non-rapid eye movement sleep. Our simulations suggested possible mechanisms for respiratory pattern reorganization during these behaviors. The model predicted that network- and pacemaker-generated rhythms could be co-expressed during the transition from gasping to eupnea, producing a combined "burst-ramp" pattern of phrenic discharges. To test this prediction, phrenic activity and multiple single neuron spike trains were monitored in vagotomized, decerebrate, immobilized, thoracotomized, and artificially ventilated cats during hypoxia and recovery. In most experiments, phrenic discharge patterns during recovery from hypoxia were similar to those predicted by the model. We conclude that under certain conditions, e.g., during recovery from severe brain hypoxia, components of a distributed network activity present during eupnea can be co-expressed with gasp patterns generated by a distinct, functionally "simplified" mechanism.

  18. Complex Instruction Set Quantum Computing

    NASA Astrophysics Data System (ADS)

    Sanders, G. D.; Kim, K. W.; Holton, W. C.

    1998-03-01

    In proposed quantum computers, electromagnetic pulses are used to implement logic gates on quantum bits (qubits). Gates are unitary transformations applied to coherent qubit wavefunctions and a universal computer can be created using a minimal set of gates. By applying many elementary gates in sequence, desired quantum computations can be performed. This reduced instruction set approach to quantum computing (RISC QC) is characterized by serial application of a few basic pulse shapes and a long coherence time. However, the unitary matrix of the overall computation is ultimately a unitary matrix of the same size as any of the elementary matrices. This suggests that we might replace a sequence of reduced instructions with a single complex instruction using an optimally tailored pulse. We refer to this approach as complex instruction set quantum computing (CISC QC). One trades the requirement for long coherence times for the ability to design and generate potentially more complex pulses. We consider a model system of coupled qubits interacting through nearest neighbor coupling and show that CISC QC can reduce the time required to perform quantum computations.
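
    The key observation, that a gate sequence collapses to a single unitary of the same size, can be illustrated in a few lines of Python; the Hadamard/T gate sequence below is purely illustrative and not taken from the record:

```python
import numpy as np

# Elementary single-qubit gates (2x2 unitaries).
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard
T = np.diag([1, np.exp(1j * np.pi / 4)])       # pi/8 gate

# "RISC": apply a sequence of elementary gates one after another.
sequence = [H, T, H, T, H]

# "CISC": pre-multiply the sequence into one composite unitary, which a
# single tailored pulse would ideally implement in one step.
U_composite = np.eye(2, dtype=complex)
for gate in sequence:
    U_composite = gate @ U_composite

psi0 = np.array([1.0, 0.0], dtype=complex)      # |0>
psi_seq = psi0
for gate in sequence:
    psi_seq = gate @ psi_seq

assert np.allclose(psi_seq, U_composite @ psi0)  # same final state
print(np.round(U_composite, 3))
```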

  19. Development of a Computational Chemical Vapor Deposition Model: Applications to Indium Nitride and Dicyanovinylaniline

    NASA Technical Reports Server (NTRS)

    Cardelino, Carlos

    1999-01-01

    A computational chemical vapor deposition (CVD) model is presented that couples chemical reaction mechanisms with fluid dynamic simulations for vapor deposition experiments. The chemical properties of the systems under investigation are evaluated using quantum, molecular and statistical mechanics models. The fluid dynamic computations are performed using the CFD-ACE program, which can simulate multispecies transport, heat and mass transfer, gas phase chemistry, chemistry of adsorbed species, pulsed reactant flow and variable gravity conditions. Two experimental setups are being studied in order to fabricate films of: (a) indium nitride (InN) from the gas or surface phase reaction of trimethylindium and ammonia; and (b) 4-(1,1)dicyanovinyl-dimethylaminoaniline (DCVA) by vapor deposition. Modeling of these setups requires knowledge of three groups of properties: thermodynamic properties (heat capacity), transport properties (diffusion, viscosity, and thermal conductivity), and kinetic properties (rate constants for all possible elementary chemical reactions). These properties are evaluated using computational methods whenever experimental data is not available for the species or for the elementary reactions. The chemical vapor deposition model is applied to InN and DCVA. Several possible InN mechanisms are proposed and analyzed. The CVD model simulations of InN show that the deposition rate of InN is more efficient when pulsing chemistry is used under conditions of high pressure and microgravity. An analysis of the chemical properties of DCVA shows that DCVA dimers may form under certain conditions of physical vapor transport. CVD simulations of the DCVA system suggest that deposition of the DCVA dimer may play a small role in the film and crystal growth processes.

  20. Computational Fluid Dynamics of Choanoflagellate Filter-Feeding

    NASA Astrophysics Data System (ADS)

    Asadzadeh, Seyed Saeed; Walther, Jens; Nielsen, Lasse Tore; Kiorboe, Thomas; Dolger, Julia; Andersen, Anders

    2017-11-01

    Choanoflagellates are unicellular aquatic organisms with a single flagellum that drives a feeding current through a funnel-shaped collar filter on which bacteria-sized prey are caught. Using computational fluid dynamics (CFD) we model the beating flagellum and the complex filter flow of the choanoflagellate Diaphanoeca grandis. Our CFD simulations based on the current understanding of the morphology underestimate the experimentally observed clearance rate by more than an order of magnitude: The beating flagellum is simply unable to draw enough water through the fine filter. Our observations motivate us to suggest a radically different filtration mechanism that requires a flagellar vane (sheet), and addition of a wide vane in our CFD model allows us to correctly predict the observed clearance rate.

  1. An Integrative Account of Constraints on Cross-Situational Learning

    PubMed Central

    Yurovsky, Daniel; Frank, Michael C.

    2015-01-01

    Word-object co-occurrence statistics are a powerful information source for vocabulary learning, but there is considerable debate about how learners actually use them. While some theories hold that learners accumulate graded, statistical evidence about multiple referents for each word, others suggest that they track only a single candidate referent. In two large-scale experiments, we show that neither account is sufficient: Cross-situational learning involves elements of both. Further, the empirical data are captured by a computational model that formalizes how memory and attention interact with co-occurrence tracking. Together, the data and model unify opposing positions in a complex debate and underscore the value of understanding the interaction between computational and algorithmic levels of explanation. PMID:26302052

  2. Attractor Dynamics and Semantic Neighborhood Density: Processing Is Slowed by Near Neighbors and Speeded by Distant Neighbors

    PubMed Central

    Mirman, Daniel; Magnuson, James S.

    2008-01-01

    The authors investigated semantic neighborhood density effects on visual word processing to examine the dynamics of activation and competition among semantic representations. Experiment 1 validated feature-based semantic representations as a basis for computing semantic neighborhood density and suggested that near and distant neighbors have opposite effects on word processing. Experiment 2 confirmed these results: Word processing was slower for dense near neighborhoods and faster for dense distant neighborhoods. Analysis of a computational model showed that attractor dynamics can produce this pattern of neighborhood effects. The authors argue for reconsideration of traditional models of neighborhood effects in terms of attractor dynamics, which allow both inhibitory and facilitative effects to emerge. PMID:18194055

  3. Anomalous transport and holographic momentum relaxation

    NASA Astrophysics Data System (ADS)

    Copetti, Christian; Fernández-Pendás, Jorge; Landsteiner, Karl; Megías, Eugenio

    2017-09-01

    The chiral magnetic and vortical effects denote the generation of dissipationless currents due to magnetic fields or rotation. They can be studied in holographic models with Chern-Simons couplings dual to anomalies in field theory. We study a holographic model with translation symmetry breaking based on linear massless scalar field backgrounds. We compute the electric DC conductivity and find that it can vanish for certain values of the translation symmetry breaking couplings. Then we compute the chiral magnetic and chiral vortical conductivities. They are completely independent of the holographic disorder couplings and take the usual values in terms of chemical potential and temperature. To arrive at this result we suggest a new definition of the energy-momentum tensor in the presence of the gravitational Chern-Simons coupling.

  4. Computational modeling identifies key gene regulatory interactions underlying phenobarbital-mediated tumor promotion

    PubMed Central

    Luisier, Raphaëlle; Unterberger, Elif B.; Goodman, Jay I.; Schwarz, Michael; Moggs, Jonathan; Terranova, Rémi; van Nimwegen, Erik

    2014-01-01

    Gene regulatory interactions underlying the early stages of non-genotoxic carcinogenesis are poorly understood. Here, we have identified key candidate regulators of phenobarbital (PB)-mediated mouse liver tumorigenesis, a well-characterized model of non-genotoxic carcinogenesis, by applying a new computational modeling approach to a comprehensive collection of in vivo gene expression studies. We have combined our previously developed motif activity response analysis (MARA), which models gene expression patterns in terms of computationally predicted transcription factor binding sites with singular value decomposition (SVD) of the inferred motif activities, to disentangle the roles that different transcriptional regulators play in specific biological pathways of tumor promotion. Furthermore, transgenic mouse models enabled us to identify which of these regulatory activities was downstream of constitutive androstane receptor and β-catenin signaling, both crucial components of PB-mediated liver tumorigenesis. We propose novel roles for E2F and ZFP161 in PB-mediated hepatocyte proliferation and suggest that PB-mediated suppression of ESR1 activity contributes to the development of a tumor-prone environment. Our study shows that combining MARA with SVD allows for automated identification of independent transcription regulatory programs within a complex in vivo tissue environment and provides novel mechanistic insights into PB-mediated hepatocarcinogenesis. PMID:24464994

  5. Cone beam computed tomography of plastinated hearts for instruction of radiological anatomy.

    PubMed

    Chang, Chih-Wei; Atkinson, Gregory; Gandhi, Niket; Farrell, Michael L; Labrash, Steven; Smith, Alice B; Norton, Neil S; Matsui, Takashi; Lozanoff, Scott

    2016-09-01

    Radiological anatomy education is an important aspect of the medical curriculum. The purpose of this study was to establish and demonstrate the use of plastinated anatomical specimens, specifically human hearts, in radiological anatomy education. Four human hearts were processed with routine plastination procedures at room temperature. Specimens were subjected to cone beam computed tomography and a graphics program (ER3D) was applied to generate 3D cardiac models. A comparison was conducted between plastinated hearts and their corresponding computer models based on a list of morphological cardiac features commonly studied in the gross anatomy laboratory. Results showed significant correspondence between plastinations and CBCT-generated 3D models (98 %; p < .01) for external structures and 100 % for internal cardiac features, while 85 % correspondence was achieved between plastinations and 2D CBCT slices. Complete correspondence (100 %) was achieved between key observations on the plastinations and internal radiological findings typically required of medical students. All pathologic features seen on the plastinated hearts were also visualized internally with the CBCT-generated models and 2D slices. These results suggest that CBCT-derived slices and models can be successfully generated from plastinated material and provide accurate representations for radiological anatomy education.

  6. Multi-phase models for water and thermal management of proton exchange membrane fuel cell: A review

    NASA Astrophysics Data System (ADS)

    Zhang, Guobin; Jiao, Kui

    2018-07-01

    The 3D (three-dimensional) multi-phase CFD (computational fluid dynamics) model is widely utilized in optimizing water and thermal management of PEM (proton exchange membrane) fuel cells. However, a satisfactory 3D multi-phase CFD model that can simulate the detailed gas and liquid two-phase flow in channels and precisely reflect its effect on performance has still not been developed, due to coupling difficulties and computational cost. Meanwhile, the agglomerate model of the CL (catalyst layer) should also be added to the 3D CFD model so as to better reflect the concentration loss and optimize the CL structure at the macroscopic scale. Besides, the effect of thermal management is perhaps underestimated in current 3D multi-phase CFD simulations due to the lack of coolant channels in the computation domain and the use of constant-temperature boundary conditions. Therefore, 3D CFD simulations at the cell and stack levels with convection boundary conditions are suggested to simulate water and thermal management more accurately. Nevertheless, with the rapid development of PEM fuel cells, current 3D CFD simulations fall short of practical demands, especially at high current density, at low to zero humidity, and for recently developed novel designs such as metal foam flow fields, 3D fine mesh flow fields, and anode circulation.

  7. A computational feedforward model predicts categorization of masked emotional body language for longer, but not for shorter, latencies.

    PubMed

    Stienen, Bernard M C; Schindler, Konrad; de Gelder, Beatrice

    2012-07-01

    Given the presence of massive feedback loops in brain networks, it is difficult to disentangle the contribution of feedforward and feedback processing to the recognition of visual stimuli, in this case, of emotional body expressions. The aim of the work presented in this letter is to shed light on how well feedforward processing explains rapid categorization of this important class of stimuli. By means of parametric masking, it may be possible to control the contribution of feedback activity in human participants. A close comparison is presented between human recognition performance and the performance of a computational neural model that exclusively modeled feedforward processing and was engineered to fulfill the computational requirements of recognition. Results show that the longer the stimulus onset asynchrony (SOA), the closer the performance of the human participants was to the values predicted by the model, with an optimum at an SOA of 100 ms. At short SOA latencies, human performance deteriorated, but the categorization of the emotional expressions was still above baseline. The data suggest that, although theoretically, feedback arising from inferotemporal cortex is likely to be blocked when the SOA is 100 ms, human participants still seem to rely on more local visual feedback processing to equal the model's performance.

  8. The process group approach to reliable distributed computing

    NASA Technical Reports Server (NTRS)

    Birman, Kenneth P.

    1991-01-01

    The difficulty of developing reliable distributed software is an impediment to applying distributed computing technology in many settings. Experience with the ISIS system suggests that a structured approach based on virtually synchronous process groups yields systems which are substantially easier to develop, fault-tolerant, and self-managing. Six years of research on ISIS are reviewed, describing the model, the types of applications to which ISIS was applied, and some of the reasoning that underlies a recent effort to redesign and reimplement ISIS as a much smaller, lightweight system.

  9. The demand for consumer health information.

    PubMed

    Wagner, T H; Hu, T W; Hibbard, J H

    2001-11-01

    Using data from an evaluation of a community-wide informational intervention, we modeled the demand for medical reference books, telephone advice nurses, and computers for health information. Data were gathered from random household surveys in Boise, ID (experimental site), Billings, MT, and Eugene, OR (control sites). Conditional difference-in-differences show that the intervention increased the use of medical reference books, advice nurses, and computers for health information by approximately 15, 6, and 4%, respectively. The results also suggest that the intervention was associated with a decreased reliance on health professionals for information.

  10. A Structured-Inquiry Approach to Teaching Neurophysiology Using Computer Simulation

    PubMed Central

    Crisp, Kevin M.

    2012-01-01

    Computer simulation is a valuable tool for teaching the fundamentals of neurophysiology in undergraduate laboratories where time and equipment limitations restrict the amount of course content that can be delivered through hands-on interaction. However, students often find such exercises to be tedious and unstimulating. In an effort to engage students in the use of computational modeling while developing a deeper understanding of neurophysiology, an attempt was made to use an educational neurosimulation environment as the basis for a novel, inquiry-based research project. During the semester, students in the class wrote a research proposal, used the Neurodynamix II simulator to generate a large data set, analyzed their modeling results statistically, and presented their findings at the Midbrains Neuroscience Consortium undergraduate poster session. Learning was assessed in the form of a series of short term papers and two 10-min in-class writing responses to the open-ended question, “How do ion channels influence neuronal firing?”, which they completed on weeks 6 and 15 of the semester. Students’ answers to this question showed a deeper understanding of neuronal excitability after the project; their term papers revealed evidence of critical thinking about computational modeling and neuronal excitability. Suggestions for the adaptation of this structured-inquiry approach into shorter term lab experiences are discussed. PMID:23494064

  11. Fractional Steps methods for transient problems on commodity computer architectures

    NASA Astrophysics Data System (ADS)

    Krotkiewski, M.; Dabrowski, M.; Podladchikov, Y. Y.

    2008-12-01

    Fractional Steps methods are suitable for modeling transient processes that are central to many geological applications. Low memory requirements and modest computational complexity facilitate calculations on high-resolution three-dimensional models. An efficient implementation of Alternating Direction Implicit/Locally One-Dimensional schemes for an Opteron-based shared memory system is presented. The memory bandwidth usage, the main bottleneck on modern computer architectures, is specially addressed. High efficiency of above 2 GFlops per CPU is sustained for problems of 1 billion degrees of freedom. The optimized sequential implementation of all 1D sweeps is comparable in execution time to copying the used data in the memory. Scalability of the parallel implementation on up to 8 CPUs is close to perfect. Performing one timestep of the Locally One-Dimensional scheme on a system of 1000³ unknowns on 8 CPUs takes only 11 s. We validate the LOD scheme using a computational model of an isolated inclusion subject to a constant far field flux. Next, we study numerically the evolution of a diffusion front and the effective thermal conductivity of composites consisting of multiple inclusions and compare the results with predictions based on the differential effective medium approach. Finally, application of the developed parabolic solver is suggested for a real-world problem of fluid transport and reactions inside a reservoir.
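
    For readers unfamiliar with Locally One-Dimensional splitting, the following small Python sketch shows the idea behind one LOD timestep for heat diffusion: an implicit tridiagonal (Thomas) solve along each coordinate direction in turn. Grid size, boundary treatment, and parameter values are illustrative assumptions and unrelated to the optimized Opteron implementation described in the record:

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system: a is the sub-diagonal, b the diagonal,
    c the super-diagonal (a[0] and c[-1] are unused)."""
    n = len(d)
    cp = np.empty(n); dp = np.empty(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def lod_step(u, kappa, dt, dx):
    """One Locally One-Dimensional timestep for the heat equation:
    an implicit 1D diffusion sweep along each axis in turn
    (zero values assumed outside the domain)."""
    r = kappa * dt / dx**2
    for axis in range(u.ndim):
        u = np.ascontiguousarray(np.moveaxis(u, axis, -1))
        n = u.shape[-1]
        a = np.full(n, -r); b = np.full(n, 1.0 + 2.0 * r); c = np.full(n, -r)
        lines = u.reshape(-1, n)          # each row is one grid line along this axis
        for k in range(lines.shape[0]):
            lines[k] = thomas(a, b, c, lines[k])
        u = np.moveaxis(u, -1, axis)
    return u

# Toy 64^3 field; production runs use far larger grids and optimized sweeps.
u = np.zeros((64, 64, 64)); u[32, 32, 32] = 1.0
u = lod_step(u, kappa=1.0, dt=0.1, dx=1.0)
print(u.sum())
```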

  12. Using Hybrid Techniques for Generating Watershed-scale Flood Models in an Integrated Modeling Framework

    NASA Astrophysics Data System (ADS)

    Saksena, S.; Merwade, V.; Singhofen, P.

    2017-12-01

    There is an increasing global trend towards developing large-scale flood models that account for spatial heterogeneity at watershed scales to drive future flood risk planning. Integrated surface water-groundwater modeling procedures can elucidate all the hydrologic processes occurring during a flood event to provide accurate flood outputs. Even though the advantages of using integrated modeling are widely acknowledged, the complexity of integrated process representation, computation time and number of input parameters required have deterred its application to flood inundation mapping, especially for large watersheds. This study presents a faster approach for creating watershed scale flood models using a hybrid design that breaks down the watershed into multiple regions of variable spatial resolution by prioritizing higher order streams. The methodology involves creating a hybrid model for the Upper Wabash River Basin in Indiana using Interconnected Channel and Pond Routing (ICPR) and comparing the performance with a fully-integrated 2D hydrodynamic model. The hybrid approach involves simplification procedures such as 1D channel-2D floodplain coupling; hydrologic basin (HUC-12) integration with 2D groundwater for rainfall-runoff routing; and varying spatial resolution of 2D overland flow based on stream order. The results for a 50-year return period storm event show that the hybrid model's performance (NSE=0.87) is similar to that of the 2D integrated model (NSE=0.88), while the computational time is halved. The results suggest that significant computational efficiency can be obtained while maintaining model accuracy for large-scale flood models by using hybrid approaches for model creation.
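
    The NSE values quoted above are Nash-Sutcliffe efficiencies; a minimal sketch of the metric, with made-up hydrograph values for illustration:

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 means the model is
    no better than simply predicting the mean of the observations."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum(
        (observed - observed.mean()) ** 2)

# Illustrative hydrographs (m^3/s); real use compares gauged vs. modeled flows.
obs = np.array([120, 340, 560, 480, 300, 180])
sim = np.array([110, 360, 530, 500, 310, 200])
print(round(nash_sutcliffe(obs, sim), 2))
```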

  13. Prediction of ball and roller bearing thermal and kinematic performance by computer analysis

    NASA Technical Reports Server (NTRS)

    Pirvics, J.; Kleckner, R. J.

    1983-01-01

    Characteristics of good computerized analysis software are suggested. These general remarks and an overview of representative software precede a more detailed discussion of load support system analysis program structure. Particular attention is directed at a recent cylindrical roller bearing analysis as an example of the available design tools. Selected software modules are then examined to reveal the detail inherent in contemporary analysis. This leads to a brief section on current design computation which seeks to suggest when and why computerized analysis is warranted. An example concludes the argument offered for such design methodology. Finally, remarks are made concerning needs for model development to address effects which are now considered to be secondary but are anticipated to emerge to primary status in the near future.

  14. Electromagnetic Modelling of MMIC CPWs for High Frequency Applications

    NASA Astrophysics Data System (ADS)

    Sinulingga, E. P.; Kyabaggu, P. B. K.; Rezazadeh, A. A.

    2018-02-01

    Realising the theoretical electrical characteristics of components through modelling can be carried out using computer-aided design (CAD) simulation tools. If the simulation model provides the expected characteristics, the fabrication process of Monolithic Microwave Integrated Circuit (MMIC) can be performed for experimental verification purposes. Therefore improvements can be suggested before mass fabrication takes place. This research concentrates on development of MMIC technology by providing accurate predictions of the characteristics of MMIC components using an improved Electromagnetic (EM) modelling technique. The knowledge acquired from the modelling and characterisation process in this work can be adopted by circuit designers for various high frequency applications.

  15. Evaluating North American Electric Grid Reliability Using the Barabasi-Albert Network Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chassin, David P.; Posse, Christian

    2005-09-15

    The reliability of electric transmission systems is examined using a scale-free model of network topology and failure propagation. The topologies of the North American eastern and western electric grids are analyzed to estimate their reliability based on the Barabási-Albert network model. A commonly used power system reliability index is computed using a simple failure propagation model. The results are compared to the values of power system reliability indices previously obtained using other methods and they suggest that scale-free network models are usable to estimate aggregate electric grid reliability.

  16. Evaluating North American Electric Grid Reliability Using the Barabasi-Albert Network Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chassin, David P.; Posse, Christian

    2005-09-15

    The reliability of electric transmission systems is examined using a scale-free model of network topology and failure propagation. The topologies of the North American eastern and western electric grids are analyzed to estimate their reliability based on the Barabasi-Albert network model. A commonly used power system reliability index is computed using a simple failure propagation model. The results are compared to the values of power system reliability indices previously obtained using standard power engineering methods, and they suggest that scale-free network models are usable to estimate aggregate electric grid reliability.
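
    A toy illustration of the general approach, assuming networkx is available: grow a Barabási-Albert graph as a stand-in for a transmission grid and score a crude connectivity-based reliability proxy under random single-node failures. The graph size and the index used below are illustrative assumptions, not the power system reliability index or failure propagation model used in the records above:

```python
import random
import networkx as nx

def connectivity_index(n_nodes=1000, attach=2, n_failures=50, seed=1):
    """Mean fraction of nodes remaining in the largest connected component
    after a single random node failure (a crude aggregate reliability proxy)."""
    rng = random.Random(seed)
    g = nx.barabasi_albert_graph(n_nodes, attach, seed=seed)
    fractions = []
    for _ in range(n_failures):
        h = g.copy()
        h.remove_node(rng.choice(list(h.nodes)))
        giant = max(nx.connected_components(h), key=len)
        fractions.append(len(giant) / h.number_of_nodes())
    return sum(fractions) / len(fractions)

print(connectivity_index())
```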

  17. Using field inversion to quantify functional errors in turbulence closures

    NASA Astrophysics Data System (ADS)

    Singh, Anand Pratap; Duraisamy, Karthik

    2016-04-01

    A data-informed approach is presented with the objective of quantifying errors and uncertainties in the functional forms of turbulence closure models. The approach creates modeling information from higher-fidelity simulations and experimental data. Specifically, a Bayesian formalism is adopted to infer discrepancies in the source terms of transport equations. A key enabling idea is the transformation of the functional inversion procedure (which is inherently infinite-dimensional) into a finite-dimensional problem in which the distribution of the unknown function is estimated at discrete mesh locations in the computational domain. This allows for the use of an efficient adjoint-driven inversion procedure. The output of the inversion is a full-field of discrepancy that provides hitherto inaccessible modeling information. The utility of the approach is demonstrated by applying it to a number of problems including channel flow, shock-boundary layer interactions, and flows with curvature and separation. In all these cases, the posterior model correlates well with the data. Furthermore, it is shown that even if limited data (such as surface pressures) are used, the accuracy of the inferred solution is improved over the entire computational domain. The results suggest that, by directly addressing the connection between physical data and model discrepancies, the field inversion approach materially enhances the value of computational and experimental data for model improvement. The resulting information can be used by the modeler as a guiding tool to design more accurate model forms, or serve as input to machine learning algorithms to directly replace deficient modeling terms.

  18. An improved shuffled frog leaping algorithm based evolutionary framework for currency exchange rate prediction

    NASA Astrophysics Data System (ADS)

    Dash, Rajashree

    2017-11-01

    Forecasting the purchasing power of one currency with respect to another currency is always an interesting topic in the field of financial time series prediction. Despite the existence of several traditional and computational models for currency exchange rate forecasting, there is always a need for simpler and more efficient models with better prediction capability. In this paper, an evolutionary framework is proposed by using an improved shuffled frog leaping (ISFL) algorithm with a computationally efficient functional link artificial neural network (CEFLANN) for currency exchange rate prediction. The model is validated by observing the monthly prediction measures obtained for three currency exchange data sets (USD/CAD, USD/CHF, and USD/JPY) accumulated over the same period of time. The model performance is also compared with two other evolutionary learning techniques, the shuffled frog leaping algorithm and the particle swarm optimization algorithm. Practical analysis of the results suggests that the proposed model, developed using the ISFL algorithm with the CEFLANN network, is a promising predictor for currency exchange rates compared to the other models included in the study.

  19. The effects of geometric uncertainties on computational modelling of knee biomechanics

    PubMed Central

    Fisher, John; Wilcox, Ruth

    2017-01-01

    The geometry of the articular components of the knee is an important factor in predicting joint mechanics in computational models. There are a number of uncertainties in the definition of the geometry of cartilage and meniscus, and evaluating the effects of these uncertainties is fundamental to understanding the level of reliability of the models. In this study, the sensitivity of knee mechanics to geometric uncertainties was investigated by comparing polynomial-based and image-based knee models and varying the size of the meniscus. The results suggested that the geometric uncertainties in cartilage and meniscus resulting from the resolution of MRI and the accuracy of segmentation had considerable effects on the predicted knee mechanics. Moreover, even if the mathematical geometric descriptors can be very close to the image-based articular surfaces, the detailed contact pressure distribution produced by the mathematical geometric descriptors was not the same as that of the image-based model. However, the trends predicted by the models based on mathematical geometric descriptors were similar to those of the image-based models. PMID:28879008

  20. Computational Pathology to Discriminate Benign from Malignant Intraductal Proliferations of the Breast

    PubMed Central

    Oh, Eun-Yeong; Lerwill, Melinda F.; Brachtel, Elena F.; Jones, Nicholas C.; Knoblauch, Nicholas W.; Montaser-Kouhsari, Laleh; Johnson, Nicole B.; Rao, Luigi K. F.; Faulkner-Jones, Beverly; Wilbur, David C.; Schnitt, Stuart J.; Beck, Andrew H.

    2014-01-01

    The categorization of intraductal proliferative lesions of the breast based on routine light microscopic examination of histopathologic sections is in many cases challenging, even for experienced pathologists. The development of computational tools to aid pathologists in the characterization of these lesions would have great diagnostic and clinical value. As a first step to address this issue, we evaluated the ability of computational image analysis to accurately classify DCIS and UDH and to stratify nuclear grade within DCIS. Using 116 breast biopsies diagnosed as DCIS or UDH from the Massachusetts General Hospital (MGH), we developed a computational method to extract 392 features corresponding to the mean and standard deviation in nuclear size and shape, intensity, and texture across 8 color channels. We used L1-regularized logistic regression to build classification models to discriminate DCIS from UDH. The top-performing model contained 22 active features and achieved an AUC of 0.95 in cross-validation on the MGH data-set. We applied this model to an external validation set of 51 breast biopsies diagnosed as DCIS or UDH from the Beth Israel Deaconess Medical Center, and the model achieved an AUC of 0.86. The top-performing model contained active features from all color-spaces and from the three classes of features (morphology, intensity, and texture), suggesting the value of each for prediction. We built models to stratify grade within DCIS and obtained strong performance for stratifying low nuclear grade vs. high nuclear grade DCIS (AUC = 0.98 in cross-validation) with only moderate performance for discriminating low nuclear grade vs. intermediate nuclear grade and intermediate nuclear grade vs. high nuclear grade DCIS (AUC = 0.83 and 0.69, respectively). These data show that computational pathology models can robustly discriminate benign from malignant intraductal proliferative lesions of the breast and may aid pathologists in the diagnosis and classification of these lesions. PMID:25490766
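
    A minimal sketch of the modeling step described (L1-regularized logistic regression with cross-validated AUC), using synthetic stand-in features since the image feature extraction itself is not shown; scikit-learn is assumed and all parameter values are illustrative:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Stand-in for the 392 nuclear morphology/intensity/texture features per biopsy.
X, y = make_classification(n_samples=116, n_features=392, n_informative=25,
                           random_state=0)

clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()
clf.fit(X, y)
n_active = int(np.sum(clf.coef_ != 0))   # sparse set of "active" features
print(f"cross-validated AUC = {auc:.2f}, active features = {n_active}")
```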

  1. A new computational growth model for sea urchin skeletons.

    PubMed

    Zachos, Louis G

    2009-08-07

    A new computational model has been developed to simulate growth of regular sea urchin skeletons. The model incorporates the processes of plate addition and individual plate growth into a composite model of whole-body (somatic) growth. A simple developmental model based on hypothetical morphogens underlies the assumptions used to define the simulated growth processes. The data model is based on a Delaunay triangulation of plate growth center points, using the dual Voronoi polygons to define plate topologies. A spherical frame of reference is used for growth calculations, with affine deformation of the sphere (based on a Young-Laplace membrane model) to result in an urchin-like three-dimensional form. The model verifies that the patterns of coronal plates in general meet the criteria of Voronoi polygonalization, that a morphogen/threshold inhibition model for plate addition results in the alternating plate addition pattern characteristic of sea urchins, and that application of the Bertalanffy growth model to individual plates results in simulated somatic growth that approximates that seen in living urchins. The model suggests avenues of research that could explain some of the distinctions between modern sea urchins and the much more disparate groups of forms that characterized the Paleozoic Era.
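
    The Bertalanffy growth model applied to individual plates has a standard closed form; the sketch below uses illustrative parameter values, not values from the record:

```python
import numpy as np

def von_bertalanffy(t, l_inf=60.0, k=0.25, t0=0.0):
    """Von Bertalanffy growth curve: size approaches the asymptote l_inf
    at rate k (all parameter values illustrative)."""
    return l_inf * (1.0 - np.exp(-k * (t - t0)))

ages = np.arange(0, 11)
for age, size in zip(ages, von_bertalanffy(ages)):
    print(f"age {age:2d}: plate size {size:5.1f} (arbitrary units)")
```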

  2. Computational Modeling of Blood Flow in the TrapEase Inferior Vena Cava Filter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Singer, M A; Henshaw, W D; Wang, S L

    To evaluate the flow hemodynamics of the TrapEase vena cava filter using three dimensional computational fluid dynamics, including simulated thrombi of multiple shapes, sizes, and trapping positions. The study was performed to identify potential areas of recirculation and stagnation and areas in which trapped thrombi may influence intrafilter thrombosis. Computer models of the TrapEase filter, thrombi (volumes ranging from 0.25 mL to 2 mL, 3 different shapes), and a 23 mm diameter cava were constructed. The hemodynamics of steady-state flow at Reynolds number 600 was examined for the unoccluded and partially occluded filter. Axial velocity contours and wall shear stresses were computed. Flow in the unoccluded TrapEase filter experienced minimal disruption, except near the superior and inferior tips where low velocity flow was observed. For spherical thrombi in the superior trapping position, stagnant and recirculating flow was observed downstream of the thrombus; the volume of stagnant flow and the peak wall shear stress increased monotonically with thrombus volume. For inferiorly trapped spherical thrombi, marked disruption to the flow was observed along the cava wall ipsilateral to the thrombus and in the interior of the filter. Spherically shaped thrombus produced a lower peak wall shear stress than conically shaped thrombus and a larger peak stress than ellipsoidal thrombus. We have designed and constructed a computer model of the flow hemodynamics of the TrapEase IVC filter with varying shapes, sizes, and positions of thrombi. The computer model offers several advantages over in vitro techniques including: improved resolution, ease of evaluating different thrombus sizes and shapes, and easy adaptation for new filter designs and flow parameters. Results from the model also support a previously reported finding from photochromic experiments that suggest the inferior trapping position of the TrapEase IVC filter leads to an intra-filter region of recirculating/stagnant flow with very low shear stress that may be thrombogenic.

  3. Evidence of common and separate eye and hand accumulators underlying flexible eye-hand coordination

    PubMed Central

    Jana, Sumitash; Gopal, Atul

    2016-01-01

    Eye and hand movements are initiated by anatomically separate regions in the brain, and yet these movements can be flexibly coupled and decoupled, depending on the need. The computational architecture that enables this flexible coupling of independent effectors is not understood. Here, we studied the computational architecture that enables flexible eye-hand coordination using a drift diffusion framework, which predicts that the variability of the reaction time (RT) distribution scales with its mean. We show that a common stochastic accumulator to threshold, followed by a noisy effector-dependent delay, explains eye-hand RT distributions and their correlation in a visual search task that required decision-making, while an interactive eye and hand accumulator model did not. In contrast, in an eye-hand dual task, an interactive model better predicted the observed correlations and RT distributions than a common accumulator model. Notably, these two models could only be distinguished on the basis of the variability and not the means of the predicted RT distributions. Additionally, signatures of separate initiation signals were also observed in a small fraction of trials in the visual search task, implying that these distinct computational architectures were not a manifestation of the task design per se. Taken together, our results suggest two unique computational architectures for eye-hand coordination, with task context biasing the brain toward instantiating one of the two architectures. NEW & NOTEWORTHY Previous studies on eye-hand coordination have considered mainly the means of eye and hand reaction time (RT) distributions. Here, we leverage the approximately linear relationship between the mean and standard deviation of RT distributions, as predicted by the drift-diffusion model, to propose the existence of two distinct computational architectures underlying coordinated eye-hand movements. These architectures, for the first time, provide a computational basis for the flexible coupling between eye and hand movements. PMID:27784809
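
    The drift-diffusion property the authors exploit, that the spread of the reaction-time distribution grows with its mean, can be reproduced with a toy accumulator simulation; all parameter values below are illustrative assumptions:

```python
import numpy as np

def simulate_rts(mean_drift, drift_sd=0.3, threshold=1.0, noise=1.0,
                 dt=0.002, n_trials=500, seed=0):
    """First-passage times of a noisy accumulator rising to a single bound,
    with trial-to-trial variability in drift rate."""
    rng = np.random.default_rng(seed)
    rts = np.empty(n_trials)
    for i in range(n_trials):
        drift = max(rng.normal(mean_drift, drift_sd), 0.05)  # keep drift positive
        x, t = 0.0, 0.0
        while x < threshold:
            x += drift * dt + noise * np.sqrt(dt) * rng.normal()
            t += dt
        rts[i] = t
    return rts

# Weaker evidence (lower mean drift) lengthens RTs and widens their spread,
# so the standard deviation of the RT distribution grows with its mean.
for mean_drift in (4.0, 2.0, 1.0):
    rts = simulate_rts(mean_drift)
    print(f"drift={mean_drift:.1f}  mean RT={rts.mean():.2f}s  SD={rts.std():.2f}s")
```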

  4. Computational assessment of hemodynamics-based diagnostic tools using a database of virtual subjects: Application to three case studies.

    PubMed

    Willemet, Marie; Vennin, Samuel; Alastruey, Jordi

    2016-12-08

    Many physiological indexes and algorithms based on pulse wave analysis have been suggested in order to better assess cardiovascular function. Because these tools are often computed from in-vivo hemodynamic measurements, their validation is time-consuming, challenging, and biased by measurement errors. Recently, a new methodology has been suggested to theoretically assess these computed tools: a database of virtual subjects generated using numerical 1D-0D modeling of arterial hemodynamics. The generated set of simulations encloses a wide selection of healthy cases that could be encountered in a clinical study. We applied this new methodology to three different case studies that demonstrate the potential of our new tool, and illustrated each of them with a clinically relevant example: (i) we assessed the accuracy of indexes estimating pulse wave velocity; (ii) we validated and refined an algorithm that computes central blood pressure; and (iii) we investigated theoretical mechanisms behind the augmentation index. Our database of virtual subjects is a new tool to assist the clinician: it provides insight into the physical mechanisms underlying the correlations observed in clinical practice. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  5. Automatic prediction of facial trait judgments: appearance vs. structural models.

    PubMed

    Rojas, Mario; Masip, David; Todorov, Alexander; Vitria, Jordi

    2011-01-01

    Evaluating other individuals with respect to personality characteristics plays a crucial role in human relations and it is the focus of attention for research in diverse fields such as psychology and interactive computer systems. In psychology, face perception has been recognized as a key component of this evaluation system. Multiple studies suggest that observers use face information to infer personality characteristics. Interactive computer systems are trying to take advantage of these findings and apply them to increase the natural aspect of interaction and to improve the performance of interactive computer systems. Here, we experimentally test whether the automatic prediction of facial trait judgments (e.g. dominance) can be made by using the full appearance information of the face and whether a reduced representation of its structure is sufficient. We evaluate two separate approaches: a holistic representation model using the facial appearance information and a structural model constructed from the relations among facial salient points. State of the art machine learning methods are applied to a) derive a facial trait judgment model from training data and b) predict a facial trait value for any face. Furthermore, we address the issue of whether there are specific structural relations among facial points that predict perception of facial traits. Experimental results over a set of labeled data (9 different trait evaluations) and classification rules (4 rules) suggest that a) prediction of perception of facial traits is learnable by both holistic and structural approaches; b) the most reliable prediction of facial trait judgments is obtained by certain type of holistic descriptions of the face appearance; and c) for some traits such as attractiveness and extroversion, there are relationships between specific structural features and social perceptions.

  6. Fast nonlinear gravity inversion in spherical coordinates with application to the South American Moho

    NASA Astrophysics Data System (ADS)

    Uieda, Leonardo; Barbosa, Valéria C. F.

    2017-01-01

    Estimating the relief of the Moho from gravity data is a computationally intensive nonlinear inverse problem. What is more, the modelling must take the Earth's curvature into account when the study area is of regional scale or greater. We present a regularized nonlinear gravity inversion method that has a low computational footprint and employs a spherical Earth approximation. To achieve this, we combine the highly efficient Bott's method with smoothness regularization and a discretization of the anomalous Moho into tesseroids (spherical prisms). The computational efficiency of our method is attained by harnessing the fact that all matrices involved are sparse. The inversion results are controlled by three hyperparameters: the regularization parameter, the anomalous Moho density-contrast, and the reference Moho depth. We estimate the regularization parameter using the method of hold-out cross-validation. Additionally, we estimate the density-contrast and the reference depth using knowledge of the Moho depth at certain points. We apply the proposed method to estimate the Moho depth for the South American continent using satellite gravity data and seismological data. The final Moho model is in accordance with previous gravity-derived models and seismological data. The misfit to the gravity and seismological data is worst in the Andes and best in oceanic areas, central Brazil and Patagonia, and along the Atlantic coast. Similarly to previous results, the model suggests a thinner crust of 30-35 km under the Andean foreland basins. Discrepancies with the seismological data are greatest in the Guyana Shield, the central Solimões and Amazonas Basins, the Paraná Basin, and the Borborema province. These differences suggest the existence of crustal or mantle density anomalies that were unaccounted for during gravity data processing.

  7. Assessment of uncertainty in the numerical simulation of solar irradiance over inclined PV panels: New algorithms using measurements and modeling tools

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xie, Yu; Sengupta, Manajit; Dooraghi, Mike

    Development of accurate transposition models to simulate plane-of-array (POA) irradiance from horizontal measurements or simulations is a complex process mainly because of the anisotropic distribution of diffuse solar radiation in the atmosphere. The limited availability of reliable POA measurements at large temporal and spatial scales leads to difficulties in the comprehensive evaluation of transposition models. This paper proposes new algorithms to assess the uncertainty of transposition models using both surface-based observations and modeling tools. We reviewed the analytical derivation of POA irradiance and the approximation of isotropic diffuse radiation that simplifies the computation. Two transposition models are evaluated against the computation by the rigorous analytical solution. We proposed a new algorithm to evaluate transposition models using the clear-sky measurements at the National Renewable Energy Laboratory's (NREL's) Solar Radiation Research Laboratory (SRRL) and a radiative transfer model that integrates diffuse radiances of various sky-viewing angles. We found that the radiative transfer model and a transposition model based on empirical regressions are superior to the isotropic models when compared to measurements. We further compared the radiative transfer model to the transposition models under an extensive range of idealized conditions. Our results suggest that the empirical transposition model has slightly higher cloudy-sky POA irradiance than the radiative transfer model, but performs better than the isotropic models under clear-sky conditions. Significantly smaller POA irradiances computed by the transposition models are observed when the photovoltaics (PV) panel deviates from the azimuthal direction of the sun. The new algorithms developed in the current study have opened the door to a more comprehensive evaluation of transposition models for various atmospheric conditions and solar and PV orientations.

  8. Assessment of uncertainty in the numerical simulation of solar irradiance over inclined PV panels: New algorithms using measurements and modeling tools

    DOE PAGES

    Xie, Yu; Sengupta, Manajit; Dooraghi, Mike

    2018-03-20

    Development of accurate transposition models to simulate plane-of-array (POA) irradiance from horizontal measurements or simulations is a complex process mainly because of the anisotropic distribution of diffuse solar radiation in the atmosphere. The limited availability of reliable POA measurements at large temporal and spatial scales leads to difficulties in the comprehensive evaluation of transposition models. This paper proposes new algorithms to assess the uncertainty of transposition models using both surface-based observations and modeling tools. We reviewed the analytical derivation of POA irradiance and the approximation of isotropic diffuse radiation that simplifies the computation. Two transposition models are evaluated against the computation by the rigorous analytical solution. We proposed a new algorithm to evaluate transposition models using the clear-sky measurements at the National Renewable Energy Laboratory's (NREL's) Solar Radiation Research Laboratory (SRRL) and a radiative transfer model that integrates diffuse radiances of various sky-viewing angles. We found that the radiative transfer model and a transposition model based on empirical regressions are superior to the isotropic models when compared to measurements. We further compared the radiative transfer model to the transposition models under an extensive range of idealized conditions. Our results suggest that the empirical transposition model has slightly higher cloudy-sky POA irradiance than the radiative transfer model, but performs better than the isotropic models under clear-sky conditions. Significantly smaller POA irradiances computed by the transposition models are observed when the photovoltaics (PV) panel deviates from the azimuthal direction of the sun. The new algorithms developed in the current study have opened the door to a more comprehensive evaluation of transposition models for various atmospheric conditions and solar and PV orientations.
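
    The isotropic-sky transposition model that serves as a baseline in this work has a standard closed form; a sketch with illustrative irradiance and geometry values, not taken from the record:

```python
import numpy as np

def poa_isotropic(dni, dhi, ghi, aoi_deg, tilt_deg, albedo=0.2):
    """Plane-of-array irradiance under the isotropic-sky assumption:
    beam on the panel + isotropic sky diffuse + ground-reflected diffuse."""
    beam = dni * max(np.cos(np.radians(aoi_deg)), 0.0)
    sky = dhi * (1.0 + np.cos(np.radians(tilt_deg))) / 2.0
    ground = ghi * albedo * (1.0 - np.cos(np.radians(tilt_deg))) / 2.0
    return beam + sky + ground

# Illustrative clear-sky values (W/m^2) for a 30-degree tilted panel.
print(round(poa_isotropic(dni=850.0, dhi=100.0, ghi=700.0,
                          aoi_deg=25.0, tilt_deg=30.0), 1))
```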

  9. Facilitating higher-fidelity simulations of axial compressor instability and other turbomachinery flow conditions

    NASA Astrophysics Data System (ADS)

    Herrick, Gregory Paul

    The quest to accurately capture flow phenomena with length-scales both short and long and to accurately represent complex flow phenomena within disparately sized geometry inspires a need for an efficient, high-fidelity, multi-block structured computational fluid dynamics (CFD) parallel computational scheme. This research presents and demonstrates a more efficient computational method by which to perform multi-block structured CFD parallel computational simulations, thus facilitating higher-fidelity solutions of complicated geometries (due to the inclusion of grids for "small" flow areas which are often merely modeled) and their associated flows. This computational framework offers greater flexibility and user-control in allocating the resource balance between process count and wall-clock computation time. The principal modifications implemented in this revision consist of a "multiple grid block per processing core" software infrastructure and an analytic computation of viscous flux Jacobians. The development of this scheme is largely motivated by the desire to simulate axial compressor stall inception with more complete gridding of the flow passages (including rotor tip clearance regions) than has been previously done while maintaining high computational efficiency (i.e., minimal consumption of computational resources), and thus this paradigm shall be demonstrated with an examination of instability in a transonic axial compressor. However, the paradigm presented herein facilitates CFD simulation of myriad previously impractical geometries and flows and is not limited to detailed analyses of axial compressor flows. While the simulations presented herein were technically possible under the previous structure of the subject software, they were much less computationally efficient and thus not pragmatically feasible; the previous research using this software to perform three-dimensional, full-annulus, time-accurate, unsteady, full-stage (with sliding-interface) simulations of rotating stall inception in axial compressors utilized tip clearance periodic models, while the scheme here is demonstrated by a simulation of axial compressor stall inception utilizing gridded rotor tip clearance regions. As will be discussed, much previous research (experimental, theoretical, and computational) has suggested that understanding clearance flow behavior is critical to understanding stall inception, and previous computational research efforts which have used tip clearance models have begged the question, "What about the clearance flows?"

  10. Heuristics as Bayesian inference under extreme priors.

    PubMed

    Parpart, Paula; Jones, Matt; Love, Bradley C

    2018-05-01

    Simple heuristics are often regarded as tractable decision strategies because they ignore a great deal of information in the input data. One puzzle is why heuristics can outperform full-information models, such as linear regression, which make full use of the available information. These "less-is-more" effects, in which a relatively simpler model outperforms a more complex model, are prevalent throughout cognitive science, and are frequently argued to demonstrate an inherent advantage of simplifying computation or ignoring information. In contrast, we show at the computational level (where algorithmic restrictions are set aside) that it is never optimal to discard information. Through a formal Bayesian analysis, we prove that popular heuristics, such as tallying and take-the-best, are formally equivalent to Bayesian inference under the limit of infinitely strong priors. Varying the strength of the prior yields a continuum of Bayesian models with the heuristics at one end and ordinary regression at the other. Critically, intermediate models perform better across all our simulations, suggesting that down-weighting information with the appropriate prior is preferable to entirely ignoring it. Rather than because of their simplicity, our analyses suggest heuristics perform well because they implement strong priors that approximate the actual structure of the environment. We end by considering how new heuristics could be derived by infinitely strengthening the priors of other Bayesian models. These formal results have implications for work in psychology, machine learning and economics. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
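
    The continuum between ordinary regression and strongly regularized models can be illustrated with ridge regression, where the penalty strength plays the role of the prior; the sweep below, on synthetic data with scikit-learn assumed, typically shows an intermediate strength generalizing best, echoing the abstract's point. It is only a sketch of the regularization continuum, not the paper's exact formalism linking specific priors to tallying or take-the-best:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n, p = 40, 8                      # small, noisy training sample
X = rng.normal(size=(n, p))
true_w = rng.normal(size=p)
y = X @ true_w + rng.normal(scale=3.0, size=n)

# Sweep prior strength: alpha ~ 0 approximates ordinary regression,
# very large alpha approximates an extremely strong (heuristic-like) prior.
for alpha in [1e-4, 1e-2, 1.0, 10.0, 100.0, 1e4]:
    score = cross_val_score(Ridge(alpha=alpha), X, y, cv=5,
                            scoring="neg_mean_squared_error").mean()
    print(f"alpha={alpha:>8g}  CV MSE={-score:6.2f}")
```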

  11. Integration of computational modeling with membrane transport studies reveals new insights into amino acid exchange transport mechanisms

    PubMed Central

    Widdows, Kate L.; Panitchob, Nuttanont; Crocker, Ian P.; Please, Colin P.; Hanson, Mark A.; Sibley, Colin P.; Johnstone, Edward D.; Sengers, Bram G.; Lewis, Rohan M.; Glazier, Jocelyn D.

    2015-01-01

    Uptake of system L amino acid substrates into isolated placental plasma membrane vesicles in the absence of opposing side amino acid (zero-trans uptake) is incompatible with the concept of obligatory exchange, where influx of amino acid is coupled to efflux. We therefore hypothesized that system L amino acid exchange transporters are not fully obligatory and/or that amino acids are initially present inside the vesicles. To address this, we combined computational modeling with vesicle transport assays and transporter localization studies to investigate the mechanisms mediating [14C]l-serine (a system L substrate) transport into human placental microvillous plasma membrane (MVM) vesicles. The carrier model provided a quantitative framework to test the 2 hypotheses that l-serine transport occurs by either obligate exchange or nonobligate exchange coupled with facilitated transport (mixed transport model). The computational model could only account for experimental [14C]l-serine uptake data when the transporter was not exclusively in exchange mode, best described by the mixed transport model. MVM vesicle isolates contained endogenous amino acids allowing for potential contribution to zero-trans uptake. Both L-type amino acid transporter (LAT)1 and LAT2 subtypes of system L were distributed to MVM, with l-serine transport attributed to LAT2. These findings suggest that exchange transporters do not function exclusively as obligate exchangers.—Widdows, K. L., Panitchob, N., Crocker, I. P., Please, C. P., Hanson, M. A., Sibley, C. P., Johnstone, E. D., Sengers, B. G., Lewis, R. M., Glazier, J. D. Integration of computational modeling with membrane transport studies reveals new insights into amino acid exchange transport mechanisms. PMID:25761365
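
    A toy illustration of the distinction being tested (not the authors' carrier model; the rate constants, concentrations, and small endogenous internal pool are all hypothetical) is to compare zero-trans tracer uptake under pure obligate exchange with uptake under a mixed model that adds a facilitated, non-exchange pathway:

        import numpy as np
        from scipy.integrate import odeint

        def uptake(y, t, k_ex, k_fac, s_out, a_out):
            s_in, a_in = y                      # tracer and endogenous amino acid inside
            j_ex = k_ex * (s_out * a_in - s_in * a_out)   # 1:1 exchange flux
            j_fac = k_fac * (s_out - s_in)                # facilitated (uniport) flux
            return [j_ex + j_fac, -j_ex]

        t = np.linspace(0.0, 60.0, 200)
        y0 = [0.0, 0.05]                        # no tracer inside, small internal pool
        obligate = odeint(uptake, y0, t, args=(1.0, 0.0, 1.0, 0.0))
        mixed = odeint(uptake, y0, t, args=(1.0, 0.2, 1.0, 0.0))
        # Under nominally zero-trans conditions, obligate exchange proceeds only until
        # the small endogenous internal pool (a_in) is consumed, whereas the mixed
        # model keeps accumulating tracer via the facilitated pathway.
        print(obligate[-1, 0], mixed[-1, 0])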

  12. A Graphic Overlay Method for Selection of Osteotomy Site in Chronic Radial Head Dislocation: An Evaluation of 3D-printed Bone Models.

    PubMed

    Kim, Hui Taek; Ahn, Tae Young; Jang, Jae Hoon; Kim, Kang Hee; Lee, Sung Jae; Jung, Duk Young

    2017-03-01

    Three-dimensional (3D) computed tomography imaging is now being used to generate 3D models for planning orthopaedic surgery, but the process remains time-consuming and expensive. For chronic radial head dislocation, we have designed a graphic overlay approach that employs selected 3D computer images and widely available software to simplify the process of osteotomy site selection. We studied 5 patients (2 traumatic and 3 congenital) with unilateral radial head dislocation. These patients were treated with surgery based on traditional radiographs, but they also had full sets of 3D CT imaging done both before and after their surgery: these 3D CT images form the basis for this study. From the 3D CT images, 3 sets of 3D-printed bone models were generated for each patient: 2 copies of the preoperative condition, and 1 copy of the postoperative condition. One set of the preoperative models was then actually osteotomized and fixed in the manner suggested by our graphic technique. Arcs of rotation of the 3 sets of 3D-printed bone models were then compared. Arcs of rotation of the 3 groups of bone models were significantly different, with the models osteotomized according to our graphic technique having the widest arcs. For chronic radial head dislocation, our graphic overlay approach simplifies the selection of the osteotomy site(s). Three-dimensional-printed bone models suggest that this approach could improve range of motion of the forearm in actual surgical practice. Level IV-therapeutic study.

  13. Conflict effects without conflict in anterior cingulate cortex: multiple response effects and context specific representations

    PubMed Central

    Brown, Joshua W.

    2009-01-01

    The error likelihood computational model of anterior cingulate cortex (ACC) (Brown & Braver, 2005) has successfully predicted error likelihood effects, risk prediction effects, and how individual differences in conflict and error likelihood effects vary with trait differences in risk aversion. The same computational model now makes a further prediction that apparent conflict effects in ACC may result in part from an increasing number of simultaneously active responses, regardless of whether or not the cued responses are mutually incompatible. In Experiment 1, the model prediction was tested with a modification of the Eriksen flanker task, in which some task conditions require two otherwise mutually incompatible responses to be generated simultaneously. In that case, the two response processes are no longer in conflict with each other. The results showed small but significant medial PFC effects in the incongruent vs. congruent contrast, despite the absence of response conflict, consistent with model predictions. This is the multiple response effect. Nonetheless, actual response conflict led to greater ACC activation, suggesting that conflict effects are specific to particular task contexts. In Experiment 2, results from a change signal task suggested that the context dependence of conflict signals does not depend on error likelihood effects. Instead, inputs to ACC may reflect complex and task specific representations of motor acts, such as bimanual responses. Overall, the results suggest the existence of a richer set of motor signals monitored by medial PFC and are consistent with distinct effects of multiple responses, conflict, and error likelihood in medial PFC. PMID:19375509

  14. Typical use of inverse dynamics in perceiving motion in autistic adults: Exploring computational principles of perception and action.

    PubMed

    Takamuku, Shinya; Forbes, Paul A G; Hamilton, Antonia F de C; Gomi, Hiroaki

    2018-05-07

    There is increasing evidence for motor difficulties in many people with autism spectrum condition (ASC). These difficulties could be linked to differences in the use of internal models which represent relations between motions and forces/efforts. The use of these internal models may be dependent on the cerebellum, which has been shown to be abnormal in autism. Several studies have examined internal computations of forward dynamics (motion from force information) in autism, but few have tested the inverse dynamics computation, that is, the determination of force-related information from motion information. Here, we examined this ability in autistic adults by measuring two perceptual biases which depend on the inverse computation. First, we asked participants whether they experienced a feeling of resistance when moving a delayed cursor, which corresponds to the inertial force of the cursor implied by its motion; both typical and ASC participants reported similar feelings of resistance. Second, participants completed a psychophysical task in which they judged the velocity of a moving hand with or without a visual cue implying inertial force. Both typical and ASC participants perceived the hand moving with the inertial cue to be slower than the hand without it. In both cases, the magnitude of the effects did not differ between the two groups. Our results suggest that the neural systems engaged in the inverse dynamics computation are preserved in ASC, at least in the observed conditions. Autism Res 2018. © 2018 International Society for Autism Research, Wiley Periodicals, Inc. We tested the ability to estimate force information from motion information, which arises from a specific "inverse dynamics" computation. Autistic adults and a matched control group reported feeling a resistive sensation when moving a delayed cursor and also judged a moving hand to be slower when it was pulling a load. These findings both suggest that the ability to estimate force information from motion information is intact in autism.

  15. Research and development activities in unified control-structure modeling and design

    NASA Technical Reports Server (NTRS)

    Nayak, A. P.

    1985-01-01

    Results of work sponsored by JPL and other organizations to develop a unified control/structures modeling and design capability for large space structures are presented. Recent analytical results are presented to demonstrate the significant interdependence between structural and control properties. A new design methodology is suggested in which the structure, material properties, dynamic model and control design are all optimized simultaneously. The development of a methodology for global design optimization is recommended as a long-term goal. It is suggested that this methodology should be incorporated into computer-aided engineering programs, which eventually will be supplemented by an expert system to aid design optimization. Recommendations are also presented for near-term research activities at JPL. The key recommendation is to continue the development of integrated dynamic modeling/control design techniques, with special attention given to the development of structural models specially tailored to support design.

  16. A hydrological emulator for global applications - HE v1.0.0

    NASA Astrophysics Data System (ADS)

    Liu, Yaling; Hejazi, Mohamad; Li, Hongyi; Zhang, Xuesong; Leng, Guoyong

    2018-03-01

    While global hydrological models (GHMs) are very useful in exploring water resources and interactions between the Earth and human systems, their use often requires numerous model inputs, complex model calibration, and high computational costs. To overcome these challenges, we construct an efficient open-source and ready-to-use hydrological emulator (HE) that can mimic complex GHMs at a range of spatial scales (e.g., basin, region, globe). More specifically, we construct both a lumped and a distributed scheme of the HE based on the monthly abcd model to explore the tradeoff between computational cost and model fidelity. Model predictability and computational efficiency are evaluated in simulating global runoff from 1971 to 2010 with both the lumped and distributed schemes. The results are compared against the runoff product from the widely used Variable Infiltration Capacity (VIC) model. Our evaluation indicates that the lumped and distributed schemes present comparable results regarding annual total quantity, spatial pattern, and temporal variation of the major water fluxes (e.g., total runoff, evapotranspiration) across 235 global basins (e.g., the correlation coefficient r between the annual total runoff from either of these two schemes and the VIC is > 0.96), except for several cold (e.g., Arctic, interior Tibet), dry (e.g., North Africa) and mountainous (e.g., Argentina) regions. Compared against the monthly total runoff product from the VIC (aggregated from daily runoff), the global mean Kling-Gupta efficiencies are 0.75 and 0.79 for the lumped and distributed schemes, respectively, with the distributed scheme better capturing spatial heterogeneity. Notably, the computational efficiency of the lumped scheme is 2 orders of magnitude higher than that of the distributed scheme and 7 orders of magnitude higher than that of the VIC model. A case study of uncertainty analysis for the 16 world basins with the largest annual streamflow is conducted using 100 000 model simulations, and it demonstrates the lumped scheme's extraordinary advantage in computational efficiency. Our results suggest that the revised lumped abcd model can serve as an efficient and reasonable HE for complex GHMs and is suitable for broad practical use, and the distributed scheme is also an efficient alternative if spatial heterogeneity is of more interest.
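
    For context, a minimal sketch of the classic four-parameter monthly abcd water-balance formulation underlying the emulator is given below. The parameter values and synthetic forcing are hypothetical, and this is not the HE v1.0.0 code itself.

        import numpy as np

        def abcd(precip, pet, a=0.98, b=250.0, c=0.4, d=0.3, s0=100.0, g0=10.0):
            """Monthly total runoff (direct runoff + baseflow) in the units of precip."""
            s, g = s0, g0                       # soil-moisture and groundwater storages
            runoff = []
            for p, e in zip(precip, pet):
                w = p + s                       # available water
                wb = (w + b) / (2.0 * a)
                y = wb - np.sqrt(wb ** 2 - w * b / a)   # evapotranspiration opportunity
                s = y * np.exp(-e / b)          # end-of-month soil moisture
                avail = w - y                   # water leaving the soil column
                g = (g + c * avail) / (1.0 + d) # groundwater store after recharge
                runoff.append((1.0 - c) * avail + d * g)
            return np.array(runoff)

        # Twelve months of synthetic forcing (mm/month)
        precip = np.array([80, 70, 90, 60, 40, 20, 10, 15, 30, 50, 70, 85], dtype=float)
        pet = np.array([20, 25, 40, 60, 90, 110, 120, 110, 80, 50, 30, 20], dtype=float)
        print(abcd(precip, pet).round(1))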

  17. MARS approach for global sensitivity analysis of differential equation models with applications to dynamics of influenza infection.

    PubMed

    Lee, Yeonok; Wu, Hulin

    2012-01-01

    Differential equation models are widely used for the study of natural phenomena in many fields. The study usually involves unknown factors such as initial conditions and/or parameters. It is important to investigate the impact of unknown factors (parameters and initial conditions) on model outputs in order to better understand the system the model represents. Apportioning the uncertainty (variation) of output variables of a model according to the input factors is referred to as sensitivity analysis. In this paper, we focus on the global sensitivity analysis of ordinary differential equation (ODE) models over a time period using the multivariate adaptive regression spline (MARS) as a metamodel, based on the concept of the variance of conditional expectation (VCE). We suggest evaluating the VCE analytically using the MARS model structure of tensor products of univariate functions, which is more computationally efficient. Our simulation studies show that the MARS model approach performs very well and helps to significantly reduce the computational cost. We present an application example of sensitivity analysis of ODE models for influenza infection to further illustrate the usefulness of the proposed method.
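
    The quantity at the heart of this approach, Var(E[Y | X_i]) / Var(Y), can also be estimated without a metamodel by brute-force nested Monte Carlo. The sketch below does this for a toy ODE (a logistic growth model with hypothetical parameter ranges) and is meant only to make the VCE concept concrete, not to reproduce the MARS-based analytic evaluation.

        import numpy as np
        from scipy.integrate import odeint

        def model(y, t, r, K):
            return r * y * (1.0 - y / K)        # toy logistic-growth ODE

        def output(r, K, y0=1.0):
            t = np.linspace(0.0, 10.0, 50)
            return odeint(model, y0, t, args=(r, K))[-1, 0]   # state at final time

        rng = np.random.default_rng(0)
        n_outer, n_inner = 100, 100
        r_samples = rng.uniform(0.1, 1.0, n_outer)    # uncertain input of interest
        cond_means, all_y = np.empty(n_outer), []
        for i, r in enumerate(r_samples):
            K_inner = rng.uniform(50.0, 150.0, n_inner)   # remaining uncertain input
            y = np.array([output(r, K) for K in K_inner])
            cond_means[i] = y.mean()            # E[Y | r]
            all_y.append(y)
        vce_r = cond_means.var()                # Var_r( E[Y | r] )
        print("first-order sensitivity of r:", vce_r / np.concatenate(all_y).var())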

  18. Untangling the complexity of blood coagulation network: use of computational modelling in pharmacology and diagnostics.

    PubMed

    Shibeko, Alexey M; Panteleev, Mikhail A

    2016-05-01

    Blood coagulation is a complex biochemical network that plays critical roles in haemostasis (a physiological process that stops bleeding on injury) and thrombosis (pathological vessel occlusion). Both up- and down-regulation of coagulation remain a major challenge for modern medicine, with the ultimate goal of correcting haemostasis without causing thrombosis and vice versa. Mathematical/computational modelling is potentially an important tool for understanding blood coagulation disorders and their treatment. It can save a huge amount of time and resources, and provide a valuable alternative or supplement when clinical studies are limited, unethical, or technically impossible. This article reviews the contemporary state of the art in the modelling of blood coagulation for practical purposes: to reveal the molecular basis of a disease, to understand mechanisms of drug action, to predict pharmacodynamics and drug-drug interactions, to suggest potential drug targets or to improve the quality of diagnostics. Different model types and designs used for this are discussed. Functional mechanisms of procoagulant bypassing agents and investigations of coagulation inhibitors were the two particularly popular applications of computational modelling that gave non-trivial results. Yet, like any other tool, modelling has its limitations, mainly determined by insufficient knowledge of the system and by the uncertainty and unreliability of complex models. We show how this can be overcome to some extent and discuss what can be expected from the mathematical modelling of coagulation in the not-so-distant future. © The Author 2015. Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  19. Modeling the Dynamics of Disease States in Depression

    PubMed Central

    Demic, Selver; Cheng, Sen

    2014-01-01

    Major depressive disorder (MDD) is a common and costly disorder associated with considerable morbidity, disability, and risk for suicide. The disorder is clinically and etiologically heterogeneous. Despite intense research efforts, the response rates of antidepressant treatments are relatively low and the etiology and progression of MDD remain poorly understood. Here we use computational modeling to advance our understanding of MDD. First, we propose a systematic and comprehensive definition of disease states, which is based on a type of mathematical model called a finite-state machine. Second, we propose a dynamical systems model for the progression, or dynamics, of MDD. The model is abstract and combines several major factors (mechanisms) that influence the dynamics of MDD. We study under what conditions the model can account for the occurrence and recurrence of depressive episodes and how we can model the effects of antidepressant treatments and cognitive behavioral therapy within the same dynamical systems model by changing a small subset of parameters. Our computational modeling suggests several predictions about MDD. Patients who suffer from depression can be divided into two sub-populations: a high-risk sub-population that has a high risk of developing chronic depression and a low-risk sub-population, in which patients develop depression stochastically with low probability. The success of antidepressant treatment is stochastic, leading to widely different times-to-remission in otherwise identical patients. While the specific details of our model may be subject to criticism and revision, our approach shows the potential power of computational modeling of depression and the need for different types of quantitative data for understanding depression. PMID:25330102
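
    As a purely illustrative sketch of the kind of state-based formulation described above (not the authors' model; all transition probabilities are hypothetical), even a two-state stochastic finite-state machine captures onset, remission, and recurrence, with treatment modeled as a change in a single parameter:

        import numpy as np

        def simulate(months=120, p_onset=0.02, p_remit=0.15, seed=0):
            """Monthly two-state ('well'/'depressed') Markov simulation."""
            rng = np.random.default_rng(seed)
            state, history = "well", []
            for _ in range(months):
                if state == "well" and rng.random() < p_onset:
                    state = "depressed"
                elif state == "depressed" and rng.random() < p_remit:
                    state = "well"
                history.append(state)
            return history

        untreated = simulate(p_remit=0.10)
        treated = simulate(p_remit=0.30)   # treatment as a higher remission probability
        print("depressed months, untreated:", untreated.count("depressed"))
        print("depressed months, treated:  ", treated.count("depressed"))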

  20. A Model-Based Approach to Trial-By-Trial P300 Amplitude Fluctuations

    PubMed Central

    Kolossa, Antonio; Fingscheidt, Tim; Wessel, Karl; Kopp, Bruno

    2013-01-01

    It has long been recognized that the amplitude of the P300 component of event-related brain potentials is sensitive to the degree to which eliciting stimuli are surprising to the observers (Donchin, 1981). While Squires et al. (1976) showed and modeled the dependence of P300 amplitudes on observed stimuli over various time scales, Mars et al. (2008) proposed a computational model keeping track of stimulus probabilities on a long-term time scale. Here we suggest a computational model that integrates prior information with short-term, long-term, and alternation-based experiential influences on P300 amplitude fluctuations. To evaluate the new model, we measured trial-by-trial P300 amplitude fluctuations in a simple two-choice response time task, and tested the computational models of trial-by-trial P300 amplitudes using Bayesian model evaluation. The results reveal that the new digital filtering (DIF) model provides a superior account of the trial-by-trial P300 amplitudes when compared to both Squires et al.’s (1976) and Mars et al.’s (2008) models. We show that the P300-generating system can be described as two parallel first-order infinite impulse response (IIR) low-pass filters and an additional fourth-order finite impulse response (FIR) high-pass filter. Implications of the acquired data are discussed with regard to the neurobiological distinction between short-term, long-term, and working memory, as well as from the point of view of predictive coding models and Bayesian learning theories of cortical function. PMID:23404628
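
    The filtering description above can be made concrete with a small sketch (hypothetical filter constants and weights; not the published DIF model): two first-order IIR low-pass filters track stimulus probability on short and long time scales, and the predicted single-trial amplitude grows with the surprise of each stimulus under the combined estimate.

        import numpy as np

        rng = np.random.default_rng(1)
        stimuli = rng.binomial(1, 0.3, size=200)   # binary stimulus sequence, p(1) = 0.3

        def leaky_prob(x, alpha, p0=0.5):
            """First-order IIR low-pass filter: p_t = alpha * p_{t-1} + (1 - alpha) * x_t."""
            p, prev = np.empty(len(x)), p0
            for t, xt in enumerate(x):
                prev = alpha * prev + (1.0 - alpha) * xt
                p[t] = prev
            return p

        p_short = leaky_prob(stimuli, alpha=0.6)   # fast, short-term estimate
        p_long = leaky_prob(stimuli, alpha=0.95)   # slow, long-term estimate
        p_mix = 0.5 * p_short + 0.5 * p_long

        # Probability assigned to each stimulus *before* it arrives, and its surprise.
        p_prev = np.concatenate(([0.5], p_mix[:-1]))
        p_obs = np.where(stimuli == 1, p_prev, 1.0 - p_prev)
        predicted_p300 = -np.log(p_obs)            # larger on more surprising trials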
