Sample records for computational modeling revealed

  1. REVEAL: An Extensible Reduced Order Model Builder for Simulation and Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Agarwal, Khushbu; Sharma, Poorva; Ma, Jinliang

    2013-04-30

    Many science domains need to build computationally efficient and accurate representations of high-fidelity, computationally expensive simulations. These computationally efficient versions are known as reduced-order models. This paper presents the design and implementation of a novel reduced-order model (ROM) builder, the REVEAL toolset. This toolset generates ROMs based on science- and engineering-domain-specific simulations executed on high performance computing (HPC) platforms. The toolset encompasses a range of sampling and regression methods that can be used to generate a ROM, automatically quantifies the ROM accuracy, and provides support for an iterative approach to improve ROM accuracy. REVEAL is designed to be extensible in order to utilize the core functionality with any simulator that has published input and output formats. It also defines programmatic interfaces to include new sampling and regression techniques so that users can ‘mix and match’ mathematical techniques to best suit the characteristics of their model. In this paper, we describe the architecture of REVEAL and demonstrate its usage with a computational fluid dynamics model used in carbon capture.
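
    The sample-fit-validate loop that REVEAL automates can be illustrated with a short sketch. The snippet below is a minimal illustration, not REVEAL code: a cheap analytic function stands in for the HPC simulation, random sampling and polynomial regression stand in for the toolset's pluggable sampling and regression techniques, and the design grows until a held-out accuracy target (or a sample budget) is reached.

        import numpy as np

        rng = np.random.default_rng(0)

        def expensive_simulation(x):
            # Stand-in for a high-fidelity HPC run (assumption: scalar input).
            return np.sin(3 * x) + 0.5 * x**2

        def build_rom(n_samples, degree=9):
            # Sample the input space, run the "simulator", fit a regression model.
            x = rng.uniform(-2, 2, n_samples)
            return np.polynomial.Polynomial.fit(x, expensive_simulation(x), degree)

        # Iterative refinement: double the design until held-out error is acceptable.
        x_test = np.linspace(-2, 2, 200)
        y_test = expensive_simulation(x_test)
        n = 20
        while True:
            rom = build_rom(n)
            rmse = np.sqrt(np.mean((rom(x_test) - y_test) ** 2))
            print(f"{n:4d} samples -> held-out RMSE {rmse:.4f}")
            if rmse < 0.01 or n >= 320:
                break
            n *= 2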

  2. Mirror neurons and imitation: a computationally guided review.

    PubMed

    Oztop, Erhan; Kawato, Mitsuo; Arbib, Michael

    2006-04-01

    Neurophysiology reveals the properties of individual mirror neurons in the macaque while brain imaging reveals the presence of 'mirror systems' (not individual neurons) in the human. Current conceptual models attribute high level functions such as action understanding, imitation, and language to mirror neurons. However, only the first of these three functions is well-developed in monkeys. We thus distinguish current opinions (conceptual models) on mirror neuron function from more detailed computational models. We assess the strengths and weaknesses of current computational models in addressing the data and speculations on mirror neurons (macaque) and mirror systems (human). In particular, our mirror neuron system (MNS), mental state inference (MSI) and modular selection and identification for control (MOSAIC) models are analyzed in more detail. Conceptual models often overlook the computational requirements for posited functions, while too many computational models adopt the erroneous hypothesis that mirror neurons are interchangeable with imitation ability. Our meta-analysis underlines the gap between conceptual and computational models and points out the research effort required from both sides to reduce this gap.

  3. Investigating College and Graduate Students' Multivariable Reasoning in Computational Modeling

    ERIC Educational Resources Information Center

    Wu, Hsin-Kai; Wu, Pai-Hsing; Zhang, Wen-Xin; Hsu, Ying-Shao

    2013-01-01

    Drawing upon the literature in computational modeling, multivariable reasoning, and causal attribution, this study aims at characterizing multivariable reasoning practices in computational modeling and revealing the nature of understanding about multivariable causality. We recruited two freshmen, two sophomores, two juniors, two seniors, four…

  4. LiPISC: A Lightweight and Flexible Method for Privacy-Aware Intersection Set Computation

    PubMed Central

    Huang, Shiyong; Ren, Yi; Choo, Kim-Kwang Raymond

    2016-01-01

    Privacy-aware intersection set computation (PISC) can be modeled as secure multi-party computation. The basic idea is to compute the intersection of input sets without leaking privacy. Furthermore, PISC should be sufficiently flexible to recommend approximate intersection items. In this paper, we reveal two previously unpublished attacks against PISC, which can be used to reveal and link one input set to another input set, resulting in privacy leakage. We coin these as Set Linkage Attack and Set Reveal Attack. We then present a lightweight and flexible PISC scheme (LiPISC) and prove its security (including against Set Linkage Attack and Set Reveal Attack). PMID:27326763

  5. LiPISC: A Lightweight and Flexible Method for Privacy-Aware Intersection Set Computation.

    PubMed

    Ren, Wei; Huang, Shiyong; Ren, Yi; Choo, Kim-Kwang Raymond

    2016-01-01

    Privacy-aware intersection set computation (PISC) can be modeled as secure multi-party computation. The basic idea is to compute the intersection of input sets without leaking privacy. Furthermore, PISC should be sufficiently flexible to recommend approximate intersection items. In this paper, we reveal two previously unpublished attacks against PISC, which can be used to reveal and link one input set to another input set, resulting in privacy leakage. We coin these as Set Linkage Attack and Set Reveal Attack. We then present a lightweight and flexible PISC scheme (LiPISC) and prove its security (including against Set Linkage Attack and Set Reveal Attack).

  6. Computational Modeling and Treatment Identification in the Myelodysplastic Syndromes.

    PubMed

    Drusbosky, Leylah M; Cogle, Christopher R

    2017-10-01

    This review discusses the need for computational modeling in myelodysplastic syndromes (MDS) and early test results. As our evolving understanding of MDS reveals a molecularly complicated disease, the need for sophisticated computer analytics is required to keep track of the number and complex interplay among the molecular abnormalities. Computational modeling and digital drug simulations using whole exome sequencing data input have produced early results showing high accuracy in predicting treatment response to standard of care drugs. Furthermore, the computational MDS models serve as clinically relevant MDS cell lines for pre-clinical assays of investigational agents. MDS is an ideal disease for computational modeling and digital drug simulations. Current research is focused on establishing the prediction value of computational modeling. Future research will test the clinical advantage of computer-informed therapy in MDS.

  7. Application of Game Theory to Improve the Defense of the Smart Grid

    DTIC Science & Technology

    2012-03-01

    ...In this environment, developers assumed deterministic communications mediums rather than the "best effort" models provided in most modern... models or computational models to validate the SPSs design. Finally, the study reveals concerns about the performance of load rejection schemes.

  8. Computational foundations of the visual number sense.

    PubMed

    Stoianov, Ivilin Peev; Zorzi, Marco

    2017-01-01

    We provide an emergentist perspective on the computational mechanism underlying numerosity perception, its development, and the role of inhibition, based on our deep neural network model. We argue that the influence of continuous visual properties does not challenge the notion of number sense, but reveals limit conditions for the computation that yields invariance in numerosity perception. Alternative accounts should be formalized in a computational model.

  9. Computing by physical interaction in neurons.

    PubMed

    Aur, Dorian; Jog, Mandar; Poznanski, Roman R

    2011-12-01

    The electrodynamics of action potentials represents the fundamental level where information is integrated and processed in neurons. The Hodgkin-Huxley model cannot explain the non-stereotyped spatial charge density dynamics that occur during action potential propagation. Revealed in experiments as spike directivity, the non-uniform charge density dynamics within neurons carry meaningful information and suggest that fragments of information regarding our memories are endogenously stored in structural patterns at a molecular level and are revealed only during spiking activity. The main conceptual idea is that under the influence of electric fields, efficient computation by interaction occurs between charge densities embedded within molecular structures and the transient developed flow of electrical charges. This process of computation underlying electrical interactions and molecular mechanisms at the subcellular level is dissimilar from spiking neuron models that are completely devoid of physical interactions. Computation by interaction describes a more powerful continuous model of computation than the one that consists of discrete steps as represented in Turing machines.

  10. Cloud-based simulations on Google Exacycle reveal ligand modulation of GPCR activation pathways

    NASA Astrophysics Data System (ADS)

    Kohlhoff, Kai J.; Shukla, Diwakar; Lawrenz, Morgan; Bowman, Gregory R.; Konerding, David E.; Belov, Dan; Altman, Russ B.; Pande, Vijay S.

    2014-01-01

    Simulations can provide tremendous insight into the atomistic details of biological mechanisms, but micro- to millisecond timescales have historically been accessible only on dedicated supercomputers. We demonstrate that cloud computing is a viable alternative that brings long-timescale processes within reach of a broader community. We used Google's Exacycle cloud-computing platform to simulate two milliseconds of dynamics of a major drug target, the G-protein-coupled receptor β2AR. Markov state models aggregate independent simulations into a single statistical model that is validated by previous computational and experimental results. Moreover, our models provide an atomistic description of the activation of a G-protein-coupled receptor and reveal multiple activation pathways. Agonists and inverse agonists interact differentially with these pathways, with profound implications for drug design.
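
    The Markov state model (MSM) machinery referenced here is easy to sketch: many short, independent trajectories are discretized into conformational states, transition counts are accumulated at a fixed lag time, and the row-normalized count matrix yields a transition matrix whose leading eigenvector gives equilibrium populations. The toy below assumes synthetic three-state trajectories rather than real β2AR data.

        import numpy as np

        def build_msm(trajectories, n_states, lag=1):
            # Estimate an MSM transition matrix from discrete state trajectories.
            counts = np.zeros((n_states, n_states))
            for traj in trajectories:
                for i, j in zip(traj[:-lag], traj[lag:]):
                    counts[i, j] += 1
            return counts / counts.sum(axis=1, keepdims=True)

        rng = np.random.default_rng(1)
        true_T = np.array([[0.90, 0.08, 0.02],
                           [0.10, 0.85, 0.05],
                           [0.05, 0.15, 0.80]])

        def simulate(T, length, start=0):
            states = [start]
            for _ in range(length - 1):
                states.append(rng.choice(3, p=T[states[-1]]))
            return states

        # Aggregate many short, independent simulations into one statistical model.
        trajs = [simulate(true_T, 200, start=rng.integers(3)) for _ in range(50)]
        T_hat = build_msm(trajs, 3)

        # Equilibrium populations: left eigenvector of T with eigenvalue 1.
        evals, evecs = np.linalg.eig(T_hat.T)
        pi = np.real(evecs[:, np.argmax(np.real(evals))])
        print("stationary distribution:", pi / pi.sum())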

  11. Modeling the Contribution of Phonotactic Cues to the Problem of Word Segmentation

    ERIC Educational Resources Information Center

    Blanchard, Daniel; Heinz, Jeffrey; Golinkoff, Roberta

    2010-01-01

    How do infants find the words in the speech stream? Computational models help us understand this feat by revealing the advantages and disadvantages of different strategies that infants might use. Here, we outline a computational model of word segmentation that aims both to incorporate cues proposed by language acquisition researchers and to…

  12. Computational Modeling for the Flow Over a Multi-Element Airfoil

    NASA Technical Reports Server (NTRS)

    Liou, William W.; Liu, Feng-Jun

    1999-01-01

    The flow over a multi-element airfoil is computed using two two-equation turbulence models. The computations are performed using the INS2D Navier-Stokes code for two angles of attack. Overset grids are used for the three-element airfoil. The computed results are compared with experimental data for the surface pressure, skin friction coefficient, and velocity magnitude. The computed surface quantities generally agree well with the measurements. The computed results reveal the possible existence of a mixing-layer-like region of flow next to the suction surface of the slat for both angles of attack.

  13. Model-Invariant Hybrid Computations of Separated Flows for RCA Standard Test Cases

    NASA Technical Reports Server (NTRS)

    Woodruff, Stephen

    2016-01-01

    NASA's Revolutionary Computational Aerosciences (RCA) subproject has identified several smooth-body separated flows as standard test cases to emphasize the challenge these flows present for computational methods and their importance to the aerospace community. Results of computations of two of these test cases, the NASA hump and the FAITH experiment, are presented. The computations were performed with the model-invariant hybrid LES-RANS formulation, implemented in the NASA code VULCAN-CFD. The model-invariant formulation employs gradual LES-RANS transitions and compensation for model variation to provide more accurate and efficient hybrid computations. Comparisons revealed that the LES-RANS transitions employed in these computations were sufficiently gradual that the compensating terms were unnecessary. Agreement with experiment was achieved only after reducing the turbulent viscosity to mitigate the effect of numerical dissipation. The stream-wise evolution of peak Reynolds shear stress was employed as a measure of turbulence dynamics in separated flows useful for evaluating computations.

  14. Software for Brain Network Simulations: A Comparative Study

    PubMed Central

    Tikidji-Hamburyan, Ruben A.; Narayana, Vikram; Bozkus, Zeki; El-Ghazawi, Tarek A.

    2017-01-01

    Numerical simulations of brain networks are a critical part of our efforts in understanding brain functions under pathological and normal conditions. For several decades, the community has developed many software packages and simulators to accelerate research in computational neuroscience. In this article, we select the three most popular simulators, as determined by the number of models in the ModelDB database (NEURON, GENESIS, and BRIAN), and perform an independent evaluation of these simulators. In addition, we study NEST, one of the lead simulators of the Human Brain Project. First, we study them based on one of the most important characteristics, the range of supported models. Our investigation reveals that brain network simulators may be biased toward supporting a specific set of models. However, all simulators tend to expand the supported range of models by providing a universal environment for the computational study of individual neurons and brain networks. Next, our investigations on the characteristics of computational architecture and efficiency indicate that all simulators compile the most computationally intensive procedures into binary code, with the aim of maximizing their computational performance. However, not all simulators provide the simplest method for module development and/or guarantee efficient binary code. Third, a study of their amenability to high-performance computing reveals that NEST can almost transparently map an existing model onto a cluster or multicore computer, while NEURON requires code modification if a model developed for a single computer has to be mapped onto a computational cluster. Interestingly, parallelization is the weakest characteristic of BRIAN, which provides no support for cluster computations and limited support for multicore computers. Fourth, we identify the level of user support and frequency of usage for all simulators. Finally, we carry out an evaluation using two case studies: a large network with simplified neural and synaptic models and a small network with detailed models. These two case studies allow us to avoid any bias toward a particular software package. The results indicate that BRIAN provides the most concise language for both cases considered. Furthermore, as expected, NEST mostly favors large network models, while NEURON is better suited for detailed models. Overall, the case studies reinforce our general observation that simulators have a bias in computational performance toward specific types of brain network models. PMID:28775687

  15. A computational modeling of semantic knowledge in reading comprehension: Integrating the landscape model with latent semantic analysis.

    PubMed

    Yeari, Menahem; van den Broek, Paul

    2016-09-01

    It is a well-accepted view that the prior semantic (general) knowledge that readers possess plays a central role in reading comprehension. Nevertheless, computational models of reading comprehension have not integrated the simulation of semantic knowledge and online comprehension processes under a unified mathematical algorithm. The present article introduces a computational model that integrates the landscape model of comprehension processes with latent semantic analysis representation of semantic knowledge. In three sets of simulations of previous behavioral findings, the integrated model successfully simulated the activation and attenuation of predictive and bridging inferences during reading, as well as centrality estimations and recall of textual information after reading. Analyses of the computational results revealed new theoretical insights regarding the underlying mechanisms of the various comprehension phenomena.
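
    A toy version of the integration described here can make the mechanics concrete. In the sketch below (an illustration under made-up toy vectors, not the authors' implementation), each concept carries an activation value that decays between reading cycles, and activation spreads between concepts in proportion to the cosine similarity of their semantic vectors, which is the role latent semantic analysis plays in the integrated model.

        import numpy as np

        # Toy "LSA" vectors for four concepts (assumption: 3-D instead of ~300-D).
        vectors = {
            "knight": np.array([0.9, 0.1, 0.0]),
            "dragon": np.array([0.7, 0.3, 0.1]),
            "fight":  np.array([0.6, 0.6, 0.2]),
            "castle": np.array([0.1, 0.2, 0.9]),
        }
        names = list(vectors)
        V = np.stack([vectors[n] for n in names])
        V = V / np.linalg.norm(V, axis=1, keepdims=True)
        S = V @ V.T                      # cosine similarity between concepts

        decay, spread = 0.4, 0.3
        activation = np.zeros(len(names))

        # One reading cycle per clause; concepts explicitly mentioned get full input.
        cycles = [["knight"], ["dragon", "fight"], ["castle"]]
        for cycle in cycles:
            inputs = np.array([1.0 if n in cycle else 0.0 for n in names])
            # Carry-over decay + similarity-based spreading + new textual input.
            activation = decay * activation + spread * S @ activation + inputs
            activation = np.clip(activation / max(activation.max(), 1.0), 0, 1)
            print(dict(zip(names, activation.round(2))))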

  16. Telepresence: A "Real" Component in a Model to Make Human-Computer Interface Factors Meaningful in the Virtual Learning Environment

    ERIC Educational Resources Information Center

    Selverian, Melissa E. Markaridian; Lombard, Matthew

    2009-01-01

    A thorough review of the research relating to Human-Computer Interface (HCI) form and content factors in the education, communication and computer science disciplines reveals strong associations of meaningful perceptual "illusions" with enhanced learning and satisfaction in the evolving classroom. Specifically, associations emerge…

  17. A computational model for epidural electrical stimulation of spinal sensorimotor circuits.

    PubMed

    Capogrosso, Marco; Wenger, Nikolaus; Raspopovic, Stanisa; Musienko, Pavel; Beauparlant, Janine; Bassi Luciani, Lorenzo; Courtine, Grégoire; Micera, Silvestro

    2013-12-04

    Epidural electrical stimulation (EES) of lumbosacral segments can restore a range of movements after spinal cord injury. However, the mechanisms and neural structures through which EES facilitates movement execution remain unclear. Here, we designed a computational model and performed in vivo experiments to investigate the type of fibers, neurons, and circuits recruited in response to EES. We first developed a realistic finite element computer model of rat lumbosacral segments to identify the currents generated by EES. To evaluate the impact of these currents on sensorimotor circuits, we coupled this model with an anatomically realistic axon-cable model of motoneurons, interneurons, and myelinated afferent fibers for antagonistic ankle muscles. Comparisons between computer simulations and experiments revealed the ability of the model to predict EES-evoked motor responses over multiple intensities and locations. Analysis of the recruited neural structures revealed the lack of direct influence of EES on motoneurons and interneurons. Simulations and pharmacological experiments demonstrated that EES engages spinal circuits trans-synaptically through the recruitment of myelinated afferent fibers. The model also predicted the capacity of spatially distinct EES to modulate side-specific limb movements and, to a lesser extent, extension versus flexion. These predictions were confirmed during standing and walking enabled by EES in spinal rats. These combined results provide a mechanistic framework for the design of spinal neuroprosthetic systems to improve standing and walking after neurological disorders.

  18. Teachers and Students' Conceptions of Computer-Based Models in the Context of High School Chemistry: Elicitations at the Pre-intervention Stage

    NASA Astrophysics Data System (ADS)

    Waight, Noemi; Gillmeister, Kristina

    2014-04-01

    This study examined teachers' and students' initial conceptions of computer-based models—Flash and NetLogo models—and documented how teachers and students reconciled notions of multiple representations featuring macroscopic, submicroscopic and symbolic representations prior to actual intervention in eight high school chemistry classrooms. Individual in-depth interviews were conducted with 32 students and 6 teachers. Findings revealed an interplay of complex factors that functioned as opportunities and obstacles in the implementation of technologies in science classrooms. Students revealed preferences for the Flash models as opposed to the open-ended NetLogo models. Altogether, due to lack of content and modeling background knowledge, students experienced difficulties articulating coherent and blended understandings of multiple representations. Concurrently, while the aesthetic and interactive features of the models were of great value, they did not sustain students' initial curiosity and opportunities to improve understandings about chemistry phenomena. Most teachers recognized direct alignment of the Flash model with their existing curriculum; however, the benefits were relegated to existing procedural and passive classroom practices. The findings have implications for pedagogical approaches that address the implementation of computer-based models, function of models, models as multiple representations and the role of background knowledge and cognitive load, and the role of teacher vision and classroom practices.

  19. In vitro protease cleavage and computer simulations reveal the HIV-1 capsid maturation pathway

    NASA Astrophysics Data System (ADS)

    Ning, Jiying; Erdemci-Tandogan, Gonca; Yufenyuy, Ernest L.; Wagner, Jef; Himes, Benjamin A.; Zhao, Gongpu; Aiken, Christopher; Zandi, Roya; Zhang, Peijun

    2016-12-01

    HIV-1 virions assemble as immature particles containing Gag polyproteins that are processed by the viral protease into individual components, resulting in the formation of mature infectious particles. There are two competing models for the process of forming the mature HIV-1 core: the disassembly and de novo reassembly model and the non-diffusional displacive model. To study the maturation pathway, we simulate HIV-1 maturation in vitro by digesting immature particles and assembled virus-like particles with recombinant HIV-1 protease and monitor the process with biochemical assays and cryoEM structural analysis in parallel. Processing of Gag in vitro is accurate and efficient and results in both soluble capsid protein and conical or tubular capsid assemblies, seemingly converted from immature Gag particles. Computer simulations further reveal probable assembly pathways of HIV-1 capsid formation. Combining the experimental data and computer simulations, our results suggest a sequential combination of both displacive and disassembly/reassembly processes for HIV-1 maturation.

  20. Application of SLURM, BOINC, and GlusterFS as Software System for Sustainable Modeling and Data Analytics

    NASA Astrophysics Data System (ADS)

    Kashansky, Vladislav V.; Kaftannikov, Igor L.

    2018-02-01

    Modern numerical modeling experiments and data analytics problems in various fields of science and technology reveal a wide variety of serious requirements for distributed computing systems. Many scientific computing projects sometimes exceed the available resource pool limits, requiring extra scalability and sustainability. In this paper we share our experience and findings on combining the power of SLURM, BOINC, and GlusterFS as a software system for scientific computing. In particular, we suggest a complete architecture and highlight important aspects of systems integration.

  1. Open solutions to distributed control in ground tracking stations

    NASA Technical Reports Server (NTRS)

    Heuser, William Randy

    1994-01-01

    The advent of high speed local area networks has made it possible to interconnect small, powerful computers to function together as a single large computer. Today, distributed computer systems are the new paradigm for large scale computing systems. However, the communications provided by the local area network are only one part of the solution. The services and protocols used by the application programs to communicate across the network are as indispensable as the local area network. The selection of services and protocols that do not match the system requirements will limit the capabilities, performance, and expansion of the system. Proprietary solutions are available but are usually limited to a select set of equipment. However, there are two solutions based on 'open' standards. The question that must be answered is 'which one is the best one for my job?' This paper examines a model for tracking stations and their requirements for interprocessor communications in the next century. The model and requirements are matched with the model and services provided by the five different software architectures and supporting protocol solutions. Several key services are examined in detail to determine which services and protocols most closely match the requirements for the tracking station environment. The study reveals that the protocols are tailored to the problem domains for which they were originally designed. Further, the study reveals that the process control model is the closest match to the tracking station model.

  2. Computer simulation for integrated pest management of spruce budworms

    Treesearch

    Carroll B. Williams; Patrick J. Shea

    1982-01-01

    Some field studies of the effects of various insecticides on the spruce budworm (Choristoneura sp.) and their parasites have shown severe suppression of host (budworm) populations and increased parasitism after treatment. Computer simulation using hypothetical models of spruce budworm-parasite systems based on these field data revealed that (1)...

  3. Scalable Parameter Estimation for Genome-Scale Biochemical Reaction Networks

    PubMed Central

    Kaltenbacher, Barbara; Hasenauer, Jan

    2017-01-01

    Mechanistic mathematical modeling of biochemical reaction networks using ordinary differential equation (ODE) models has improved our understanding of small- and medium-scale biological processes. While the same should in principle hold for large- and genome-scale processes, the computational methods for the analysis of ODE models which describe hundreds or thousands of biochemical species and reactions are missing so far. While individual simulations are feasible, the inference of the model parameters from experimental data is computationally too intensive. In this manuscript, we evaluate adjoint sensitivity analysis for parameter estimation in large scale biochemical reaction networks. We present the approach for time-discrete measurement and compare it to state-of-the-art methods used in systems and computational biology. Our comparison reveals a significantly improved computational efficiency and a superior scalability of adjoint sensitivity analysis. The computational complexity is effectively independent of the number of parameters, enabling the analysis of large- and genome-scale models. Our study of a comprehensive kinetic model of ErbB signaling shows that parameter estimation using adjoint sensitivity analysis requires a fraction of the computation time of established methods. The proposed method will facilitate mechanistic modeling of genome-scale cellular processes, as required in the age of omics. PMID:28114351
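
    The key property reported here, gradient cost that is effectively independent of the number of parameters, comes from solving a single backward (adjoint) ODE instead of one forward sensitivity system per parameter. The sketch below demonstrates the idea on a deliberately tiny problem, exponential decay dx/dt = -p*x with one end-point measurement, and checks the adjoint gradient against a finite difference; it is an illustration of the method, not the authors' solver.

        import numpy as np

        x0, p, T, N = 1.0, 0.7, 2.0, 20000
        dt = T / N
        y_obs = 0.30                        # assumed single measurement of x(T)

        # Forward pass: dx/dt = -p*x (explicit Euler), trajectory stored for the adjoint.
        x = np.empty(N + 1)
        x[0] = x0
        for k in range(N):
            x[k + 1] = x[k] + dt * (-p * x[k])

        # Cost J = 0.5*(x(T) - y)^2. Adjoint ODE: dl/dt = -(df/dx)*l = p*l,
        # integrated backward in time from l(T) = x(T) - y.
        lam = np.empty(N + 1)
        lam[N] = x[N] - y_obs
        for k in range(N, 0, -1):
            lam[k - 1] = lam[k] - dt * (p * lam[k])

        # Gradient dJ/dp = integral of l(t)*(df/dp) dt, with df/dp = -x(t).
        # One backward solve yields the whole gradient, however many parameters f has.
        integrand = lam * (-x)
        grad_adjoint = dt * (integrand.sum() - 0.5 * (integrand[0] + integrand[-1]))

        def cost(pv):                       # re-solve forward for a perturbed parameter
            xv = x0
            for _ in range(N):
                xv += dt * (-pv * xv)
            return 0.5 * (xv - y_obs) ** 2

        eps = 1e-6                          # finite differences: one solve per parameter
        grad_fd = (cost(p + eps) - cost(p - eps)) / (2 * eps)
        print(f"adjoint gradient: {grad_adjoint:.6f}   finite-difference: {grad_fd:.6f}")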

  4. A Trans-omics Mathematical Analysis Reveals Novel Functions of the Ornithine Metabolic Pathway in Cancer Stem Cells

    NASA Astrophysics Data System (ADS)

    Koseki, Jun; Matsui, Hidetoshi; Konno, Masamitsu; Nishida, Naohiro; Kawamoto, Koichi; Kano, Yoshihiro; Mori, Masaki; Doki, Yuichiro; Ishii, Hideshi

    2016-02-01

    Bioinformatics and computational modelling are expected to offer innovative approaches in human medical science. In the present study, we performed computational analyses and made predictions using transcriptome and metabolome datasets obtained from fluorescence-based visualisations of chemotherapy-resistant cancer stem cells (CSCs) in the human oesophagus. This approach revealed an uncharacterized role for the ornithine metabolic pathway in the survival of chemotherapy-resistant CSCs. The present study strengthens the rationale for further characterisation that may lead to the discovery of innovative drugs against robust CSCs.

  5. Fathead Minnow Steroidogenesis: In Silico Analyses Reveals Tradeoffs Between Nominal Target Efficacy and Robustness to Cross-talk

    EPA Science Inventory

    This paper presents the formulation and evaluation of a mechanistic mathematical model of fathead minnow ovarian steroidogenesis. The model presented in the present study was adapted from other models developed as part of an integrated, multi-disciplinary computational toxicolog...

  6. Molecular deconstruction, detection, and computational prediction of microenvironment-modulated cellular responses to cancer therapeutics

    PubMed Central

    LaBarge, Mark A; Parvin, Bahram; Lorens, James B

    2014-01-01

    The field of bioengineering has pioneered the application of new precision fabrication technologies to model the different geometric, physical or molecular components of tissue microenvironments on solid-state substrata. Tissue engineering approaches building on these advances are used to assemble multicellular mimetic-tissues where cells reside within defined spatial contexts. The functional responses of cells in fabricated microenvironments have revealed a rich interplay between the genome and extracellular effectors in determining cellular phenotypes, and in a number of cases have revealed the dominance of microenvironment over genotype. Precision bioengineered substrata are limited to a few aspects, whereas cell/tissue-derived microenvironments have many undefined components. Thus, introducing a computational module may serve to integrate these types of platforms to create reasonable models of drug responses in human tissues. This review discusses how combinatorial microenvironment microarrays and other biomimetic microenvironments have revealed emergent properties of cells in particular microenvironmental contexts, the platforms that can measure phenotypic changes within those contexts, and the computational tools that can unify the microenvironment-imposed functional phenotypes with underlying constellations of proteins and genes. Ultimately we propose that a merger of these technologies will enable more accurate pre-clinical drug discovery. PMID:24582543

  7. The structure and timescales of heat perception in larval zebrafish.

    PubMed

    Haesemeyer, Martin; Robson, Drew N; Li, Jennifer M; Schier, Alexander F; Engert, Florian

    2015-11-25

    Avoiding temperatures outside the physiological range is critical for animal survival, but how temperature dynamics are transformed into behavioral output is largely not understood. Here, we used an infrared laser to challenge freely swimming larval zebrafish with "white-noise" heat stimuli and built quantitative models relating external sensory information and internal state to behavioral output. These models revealed that larval zebrafish integrate temperature information over a time-window of 400 ms preceding a swim bout and that swimming is suppressed right after the end of a bout. Our results suggest that larval zebrafish compute both an integral and a derivative across heat in time to guide their next movement. Our models put important constraints on the type of computations that occur in the nervous system and reveal principles of how somatosensory temperature information is processed to guide behavioral decisions such as sensitivity to both absolute levels and changes in stimulation.
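
    The two quantities the fish appear to compute, an integral and a derivative of temperature over the ~400 ms preceding a swim bout, are straightforward to extract from a stimulus trace. The fragment below is a schematic reconstruction under assumed sampling parameters and made-up weights, not the authors' fitted model.

        import numpy as np

        fs = 250                             # assumed sampling rate, Hz
        window = int(0.4 * fs)               # 400 ms integration window
        t = np.arange(0, 10, 1 / fs)
        temp = 26 + 2 * np.sin(0.5 * t) + 0.1 * np.random.default_rng(2).normal(size=t.size)

        def bout_features(temp, k, window, fs):
            # Integral and derivative of temperature over the window before sample k.
            seg = temp[k - window:k]
            integral = seg.sum() / fs                        # deg C * s
            derivative = (seg[-1] - seg[0]) / (window / fs)  # deg C / s
            return integral, derivative

        # Toy decision rule: absolute level and rate of change both drive bout probability.
        w_int, w_der, bias = 0.8, 1.5, -9.0
        for k in range(window, t.size, fs):                  # evaluate once per second
            integ, deriv = bout_features(temp, k, window, fs)
            p_bout = 1 / (1 + np.exp(-(w_int * integ + w_der * deriv + bias)))
            print(f"t={t[k]:4.1f}s  integral={integ:5.2f}  deriv={deriv:+5.2f}  P(bout)={p_bout:.2f}")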

  8. Computational analysis of the Phanerochaete chrysosporium v2.0 genome database and mass spectrometry identification of peptides in ligninolytic cultures reveal complex mixtures of secreted proteins

    Treesearch

    Amber Vanden Wymelenberg; Patrick Minges; Grzegorz Sabat; Diego Martinez; Andrea Aerts; Asaf Salamov; Igor Grigoriev; Harris Shapiro; Nik Putnam; Paula Belinky; Carlos Dosoretz; Jill Gaskell; Phil Kersten; Dan Cullen

    2006-01-01

    The white-rot basidiomycete Phanerochaete chrysosporium employs extracellular enzymes to completely degrade the major polymers of wood: cellulose, hemicellulose, and lignin. Analysis of a total of 10,048 v2.1 gene models predicts 769 secreted proteins, a substantial increase over the 268 models identified in the earlier database (v1.0). Within the v2.1 ‘computational...

  9. Catalytic ignition model in a monolithic reactor with in-depth reaction

    NASA Technical Reports Server (NTRS)

    Tien, Ta-Ching; Tien, James S.

    1990-01-01

    Two transient models have been developed to study the catalytic ignition in a monolithic catalytic reactor. The special feature in these models is the inclusion of thermal and species structures in the porous catalytic layer. There are many time scales involved in the catalytic ignition problem, and these two models are developed with different time scales. In the full transient model, the equations are non-dimensionalized by the shortest time scale (mass diffusion across the catalytic layer). It is therefore accurate but is computationally costly. In the energy-integral model, only the slowest process (solid heat-up) is taken as nonsteady. It is approximate but computationally efficient. In the computations performed, the catalyst is platinum and the reactants are rich mixtures of hydrogen and oxygen. One-step global chemical reaction rates are used for both gas-phase homogeneous reaction and catalytic heterogeneous reaction. The computed results reveal the transient ignition processes in detail, including the structure variation with time in the reactive catalytic layer. An ignition map using reactor length and catalyst loading is constructed. The comparison of computed results between the two transient models verifies the applicability of the energy-integral model when the time is greater than the second largest time scale of the system. It also suggests that a proper combined use of the two models can catch all the transient phenomena while minimizing the computational cost.

  10. Mapping nonlinear receptive field structure in primate retina at single cone resolution

    PubMed Central

    Li, Peter H; Greschner, Martin; Gunning, Deborah E; Mathieson, Keith; Sher, Alexander; Litke, Alan M; Paninski, Liam

    2015-01-01

    The function of a neural circuit is shaped by the computations performed by its interneurons, which in many cases are not easily accessible to experimental investigation. Here, we elucidate the transformation of visual signals flowing from the input to the output of the primate retina, using a combination of large-scale multi-electrode recordings from an identified ganglion cell type, visual stimulation targeted at individual cone photoreceptors, and a hierarchical computational model. The results reveal nonlinear subunits in the circuitry of OFF midget ganglion cells, which subserve high-resolution vision. The model explains light responses to a variety of stimuli more accurately than a linear model, including stimuli targeted to cones within and across subunits. The recovered model components are consistent with known anatomical organization of midget bipolar interneurons. These results reveal the spatial structure of linear and nonlinear encoding, at the resolution of single cells and at the scale of complete circuits. DOI: http://dx.doi.org/10.7554/eLife.05241.001 PMID:26517879
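
    The distinction between linear and subunit (nonlinear) encoding that drives these results can be sketched compactly: a linear model sums cone inputs directly, whereas a subunit model rectifies each bipolar-cell pool before summation, so contrast-balanced cone pairs within one subunit and across two subunits produce different responses. The example below is a schematic with made-up weights, not the fitted hierarchical model from the paper.

        import numpy as np

        relu = lambda v: np.maximum(v, 0.0)

        # Six cones feeding three subunits (two cones each), equal weights.
        subunits = [(0, 1), (2, 3), (4, 5)]
        w = np.ones(6)

        def linear_response(x):
            return float(w @ x)

        def subunit_response(x):
            # Rectify each subunit's pooled input before summing at the ganglion cell.
            return float(sum(relu(x[list(s)] @ w[list(s)]) for s in subunits))

        # Contrast-balanced pair: one cone bright (+1), one dark (-1).
        within = np.array([+1.0, -1.0, 0, 0, 0, 0])   # both cones in subunit 0
        across = np.array([+1.0, 0, -1.0, 0, 0, 0])   # cones in different subunits

        for name, stim in [("within-subunit pair", within), ("across-subunit pair", across)]:
            print(f"{name}: linear={linear_response(stim):+.1f}  subunit={subunit_response(stim):+.1f}")
        # Linear model: 0 for both. Subunit model: 0 within, +1 across,
        # the signature of nonlinear spatial summation.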

  11. CFD: A Castle in the Sand?

    NASA Technical Reports Server (NTRS)

    Kleb, William L.; Wood, William A.

    2004-01-01

    The computational simulation community is not routinely publishing independently verifiable tests to accompany new models or algorithms. A survey reveals that only 22% of new models published are accompanied by tests suitable for independently verifying the new model. As the community develops larger codes with increased functionality, and hence increased complexity in terms of the number of building block components and their interactions, it becomes prohibitively expensive for each development group to derive the appropriate tests for each component. Therefore, the computational simulation community is building its collective castle on a very shaky foundation of components with unpublished and unrepeatable verification tests. The computational simulation community needs to begin publishing component level verification tests before the tide of complexity undermines its foundation.
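
    The kind of independently verifiable test the authors call for is often an order-of-accuracy check: run a component on successively refined grids against an exact (manufactured) solution and confirm the error shrinks at the advertised rate. A minimal, generic example, not drawn from any particular CFD code, follows.

        import numpy as np

        def central_diff(f, x, h):
            # Second-order central difference approximation of f'(x).
            return (f(x + h) - f(x - h)) / (2 * h)

        # Manufactured solution with a known derivative.
        f = np.sin
        f_exact = np.cos(1.0)

        print("    h        error      observed order")
        prev_err = None
        for h in [0.1 / 2**k for k in range(5)]:
            err = abs(central_diff(f, 1.0, h) - f_exact)
            order = np.log2(prev_err / err) if prev_err else float("nan")
            print(f"{h:8.5f}  {err:.3e}   {order:5.2f}")
            prev_err = err
        # The component "passes" if the observed order matches the scheme's design order (2).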

  12. Product placement of computer games in cyberspace.

    PubMed

    Yang, Heng-Li; Wang, Cheng-Shu

    2008-08-01

    Computer games are considered an emerging media and are even regarded as an advertising channel. By a three-phase experiment, this study investigated the advertising effectiveness of computer games for different product placement forms, product types, and their combinations. As the statistical results revealed, computer games are appropriate for placement advertising. Additionally, different product types and placement forms produced different advertising effectiveness. Optimum combinations of product types and placement forms existed. An advertisement design model is proposed for use in game design environments. Some suggestions are given for advertisers and game companies respectively.

  13. MetaboTools: A comprehensive toolbox for analysis of genome-scale metabolic models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aurich, Maike K.; Fleming, Ronan M. T.; Thiele, Ines

    Metabolomic data sets provide a direct read-out of cellular phenotypes and are increasingly generated to study biological questions. Previous work, by us and others, revealed the potential of analyzing extracellular metabolomic data in the context of the metabolic model using constraint-based modeling. With the MetaboTools, we make our methods available to the broader scientific community. The MetaboTools consist of a protocol, a toolbox, and tutorials of two use cases. The protocol describes, in a step-wise manner, the workflow of data integration and computational analysis. The MetaboTools comprise the Matlab code required to complete the workflow described in the protocol. Tutorials explain the computational steps for integration of two different data sets and demonstrate a comprehensive set of methods for the computational analysis of metabolic models and stratification thereof into different phenotypes. The presented workflow supports integrative analysis of multiple omics data sets. Importantly, all analysis tools can be applied to metabolic models without performing the entire workflow. Taken together, the MetaboTools constitute a comprehensive guide to the intra-model analysis of extracellular metabolomic data from microbial, plant, or human cells. In conclusion, this computational modeling resource offers a broad set of computational analysis tools for a wide biomedical and non-biomedical research community.

  14. Radiotherapy and chemotherapy change vessel tree geometry and metastatic spread in a small cell lung cancer xenograft mouse tumor model

    PubMed Central

    Bethge, Anja; Schumacher, Udo

    2017-01-01

    Background Tumor vasculature is critical for tumor growth, formation of distant metastases and efficiency of radio- and chemotherapy treatments. However, how the vasculature itself is affected during cancer treatment with regard to metastatic behavior has not been thoroughly investigated. Therefore, the aim of this study was to analyze the influence of hypofractionated radiotherapy and cisplatin chemotherapy on vessel tree geometry and metastasis formation in a small cell lung cancer xenograft mouse tumor model to investigate the spread of malignant cells during different treatment modalities. Methods The biological data gained during these experiments were fed into our previously developed computer model "Cancer and Treatment Simulation Tool" (CaTSiT) to model the growth of the primary tumor, its metastatic deposits, and the influence of different therapies. Furthermore, we performed quantitative histology analyses to verify our predictions in the xenograft mouse tumor model. Results According to the computer simulation, the number of cells engrafting must vary considerably to explain the different weights of the primary tumor at the end of the experiment. Once a primary tumor is established, the fractal dimension of its vasculature correlates with the tumor size. Furthermore, the fractal dimension of the tumor vasculature changes during treatment, indicating that the therapy affects the blood vessels' geometry. We corroborated these findings with a quantitative histological analysis showing that the blood vessel density is depleted during radiotherapy and cisplatin chemotherapy. The CaTSiT computer model reveals that chemotherapy influences the tumor's therapeutic susceptibility and its metastatic spreading behavior. Conclusion Using a systems biology approach in combination with xenograft models and computer simulations revealed that the usage of chemotherapy and radiation therapy determines the spreading behavior by changing the blood vessel geometry of the primary tumor. PMID:29107953

  15. Integration of computational modeling with membrane transport studies reveals new insights into amino acid exchange transport mechanisms

    PubMed Central

    Widdows, Kate L.; Panitchob, Nuttanont; Crocker, Ian P.; Please, Colin P.; Hanson, Mark A.; Sibley, Colin P.; Johnstone, Edward D.; Sengers, Bram G.; Lewis, Rohan M.; Glazier, Jocelyn D.

    2015-01-01

    Uptake of system L amino acid substrates into isolated placental plasma membrane vesicles in the absence of opposing side amino acid (zero-trans uptake) is incompatible with the concept of obligatory exchange, where influx of amino acid is coupled to efflux. We therefore hypothesized that system L amino acid exchange transporters are not fully obligatory and/or that amino acids are initially present inside the vesicles. To address this, we combined computational modeling with vesicle transport assays and transporter localization studies to investigate the mechanisms mediating [14C]l-serine (a system L substrate) transport into human placental microvillous plasma membrane (MVM) vesicles. The carrier model provided a quantitative framework to test the 2 hypotheses that l-serine transport occurs by either obligate exchange or nonobligate exchange coupled with facilitated transport (mixed transport model). The computational model could only account for experimental [14C]l-serine uptake data when the transporter was not exclusively in exchange mode, best described by the mixed transport model. MVM vesicle isolates contained endogenous amino acids allowing for potential contribution to zero-trans uptake. Both L-type amino acid transporter (LAT)1 and LAT2 subtypes of system L were distributed to MVM, with l-serine transport attributed to LAT2. These findings suggest that exchange transporters do not function exclusively as obligate exchangers. PMID:25761365

  16. Using genetic information while protecting the privacy of the soul.

    PubMed

    Moor, J H

    1999-01-01

    Computing plays an important role in genetics (and vice versa). Theoretically, computing provides a conceptual model for the function and malfunction of our genetic machinery. Practically, contemporary computers and robots equipped with advanced algorithms make the revelation of the complete human genome imminent--computers are about to reveal our genetic souls for the first time. Ethically, computers help protect privacy by restricting access in sophisticated ways to genetic information. But the inexorable fact that computers will increasingly collect, analyze, and disseminate abundant amounts of genetic information made available through the genetic revolution, not to mention that inexpensive computing devices will make genetic information gathering easier, underscores the need for strong and immediate privacy legislation.

  17. Understanding Islamist political violence through computational social simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Watkins, Jennifer H; Mackerrow, Edward P; Patelli, Paolo G

    Understanding the process that enables political violence is of great value in reducing the future demand for and support of violent opposition groups. Methods are needed that allow alternative scenarios and counterfactuals to be scientifically researched. Computational social simulation shows promise in developing 'computer experiments' that would be unfeasible or unethical in the real world. Additionally, the process of modeling and simulation reveals and challenges assumptions that may not be noted in theories, exposes areas where data is not available, and provides a rigorous, repeatable, and transparent framework for analyzing the complex dynamics of political violence. This paper demonstrates the computational modeling process using two simulation techniques: system dynamics and agent-based modeling. The benefits and drawbacks of both techniques are discussed. In developing these social simulations, we discovered that the social science concepts and theories needed to accurately simulate the associated psychological and social phenomena were lacking.

  18. Flow and Turbulence Modeling and Computation of Shock Buffet Onset for Conventional and Supercritical Airfoils

    NASA Technical Reports Server (NTRS)

    Bartels, Robert E.

    1998-01-01

    Flow and turbulence models applied to the problem of shock buffet onset are studied. The accuracy of the interactive boundary layer and the thin-layer Navier-Stokes equations solved with recent upwind techniques using similar transport field equation turbulence models is assessed for standard steady test cases, including conditions having significant shock separation. The two methods are found to compare well in the shock buffet onset region of a supercritical airfoil that involves strong trailing-edge separation. A computational analysis using the interactive boundary layer has revealed a Reynolds scaling effect in the shock buffet onset of the supercritical airfoil, which compares well with experiment. The methods are next applied to a conventional airfoil. Steady shock-separated computations of the conventional airfoil with the two methods compare well with experiment. Although the interactive boundary layer computations in the shock buffet region compare well with experiment for the conventional airfoil, the thin-layer Navier-Stokes computations do not. These findings are discussed in connection with possible mechanisms important in the onset of shock buffet and the constraints imposed by current numerical modeling techniques.

  19. Use of a statistical model of the whole femur in a large scale, multi-model study of femoral neck fracture risk.

    PubMed

    Bryan, Rebecca; Nair, Prasanth B; Taylor, Mark

    2009-09-18

    Interpatient variability is often overlooked in orthopaedic computational studies due to the substantial challenges involved in sourcing and generating large numbers of bone models. A statistical model of the whole femur incorporating both geometric and material property variation was developed as a potential solution to this problem. The statistical model was constructed using principal component analysis, applied to 21 individual computed tomography scans. To test the ability of the statistical model to generate realistic, unique, finite element (FE) femur models it was used as a source of 1000 femurs to drive a study on femoral neck fracture risk. The study simulated the impact of an oblique fall to the side, a scenario known to account for a large proportion of hip fractures in the elderly and have a lower fracture load than alternative loading approaches. FE model generation, application of subject specific loading and boundary conditions, FE processing and post processing of the solutions were completed automatically. The generated models were within the bounds of the training data used to create the statistical model with a high mesh quality, able to be used directly by the FE solver without remeshing. The results indicated that 28 of the 1000 femurs were at highest risk of fracture. Closer analysis revealed the percentage of cortical bone in the proximal femur to be a crucial differentiator between the failed and non-failed groups. The likely fracture location was indicated to be intertrochanteric. Comparison to previous computational, clinical and experimental work revealed support for these findings.
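
    The statistical-model machinery described, principal component analysis over a training set followed by sampling of mode weights to generate unlimited new instances, reduces to a few lines of linear algebra. The sketch below uses random synthetic "shapes" in place of the 21 segmented CT scans, and caps mode weights at ±3 standard deviations, a common (assumed) plausibility bound.

        import numpy as np

        rng = np.random.default_rng(3)

        # Training data: 21 subjects, each a flattened vector of shape/material values.
        n_train, n_dof = 21, 300
        training = rng.normal(size=(n_train, n_dof)) * np.linspace(3, 0.1, n_dof)

        mean = training.mean(axis=0)
        X = training - mean
        # PCA via SVD; at most n_train-1 meaningful modes.
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        modes = Vt                          # principal directions
        stdevs = s / np.sqrt(n_train - 1)   # per-mode standard deviations

        def sample_instance(n_modes=10):
            # Generate a new, unique instance within the training distribution.
            b = np.clip(rng.normal(size=n_modes), -3, 3)   # bounded mode weights
            return mean + (b * stdevs[:n_modes]) @ modes[:n_modes]

        population = np.array([sample_instance() for _ in range(1000)])
        print("generated population:", population.shape)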

  20. Molecular deconstruction, detection, and computational prediction of microenvironment-modulated cellular responses to cancer therapeutics.

    PubMed

    Labarge, Mark A; Parvin, Bahram; Lorens, James B

    2014-04-01

    The field of bioengineering has pioneered the application of new precision fabrication technologies to model the different geometric, physical or molecular components of tissue microenvironments on solid-state substrata. Tissue engineering approaches building on these advances are used to assemble multicellular mimetic-tissues where cells reside within defined spatial contexts. The functional responses of cells in fabricated microenvironments have revealed a rich interplay between the genome and extracellular effectors in determining cellular phenotypes and in a number of cases have revealed the dominance of microenvironment over genotype. Precision bioengineered substrata are limited to a few aspects, whereas cell/tissue-derived microenvironments have many undefined components. Thus, introducing a computational module may serve to integrate these types of platforms to create reasonable models of drug responses in human tissues. This review discusses how combinatorial microenvironment microarrays and other biomimetic microenvironments have revealed emergent properties of cells in particular microenvironmental contexts, the platforms that can measure phenotypic changes within those contexts, and the computational tools that can unify the microenvironment-imposed functional phenotypes with underlying constellations of proteins and genes. Ultimately we propose that a merger of these technologies will enable more accurate pre-clinical drug discovery. Copyright © 2014 Elsevier B.V. All rights reserved.

  1. Chiral phosphoric acid catalysis: from numbers to insights.

    PubMed

    Maji, Rajat; Mallojjala, Sharath Chandra; Wheeler, Steven E

    2018-02-19

    Chiral phosphoric acids (CPAs) have emerged as powerful organocatalysts for asymmetric reactions, and applications of computational quantum chemistry have revealed important insights into the activity and selectivity of these catalysts. In this tutorial review, we provide an overview of computational tools at the disposal of computational organic chemists and demonstrate their application to a wide array of CPA catalysed reactions. Predictive models of the stereochemical outcome of these reactions are discussed along with specific examples of representative reactions and an outlook on remaining challenges in this area.

  2. Bayesian Mapping Reveals That Attention Boosts Neural Responses to Predicted and Unpredicted Stimuli.

    PubMed

    Garrido, Marta I; Rowe, Elise G; Halász, Veronika; Mattingley, Jason B

    2018-05-01

    Predictive coding posits that the human brain continually monitors the environment for regularities and detects inconsistencies. It is unclear, however, what effect attention has on expectation processes, as there have been relatively few studies and the results of these have yielded contradictory findings. Here, we employed Bayesian model comparison to adjudicate between 2 alternative computational models. The "Opposition" model states that attention boosts neural responses equally to predicted and unpredicted stimuli, whereas the "Interaction" model assumes that attentional boosting of neural signals depends on the level of predictability. We designed a novel, audiospatial attention task that orthogonally manipulated attention and prediction by playing oddball sequences in either the attended or unattended ear. We observed sensory prediction error responses, with electroencephalography, across all attentional manipulations. Crucially, posterior probability maps revealed that, overall, the Opposition model better explained scalp and source data, suggesting that attention boosts responses to predicted and unpredicted stimuli equally. Furthermore, Dynamic Causal Modeling showed that these Opposition effects were expressed in plastic changes within the mismatch negativity network. Our findings provide empirical evidence for a computational model of the opposing interplay of attention and expectation in the brain.
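
    Adjudicating between the Opposition and Interaction hypotheses amounts to comparing the evidence for a model with additive attention and prediction effects against one with an attention-by-prediction interaction term. The authors used posterior probability maps over scalp and source data; the toy below illustrates the same logic on simulated responses using the BIC approximation to log model evidence (an illustration, not their pipeline).

        import numpy as np

        rng = np.random.default_rng(4)
        n = 400
        attended = rng.integers(0, 2, n).astype(float)
        predicted = rng.integers(0, 2, n).astype(float)

        # Ground truth follows the Opposition model: additive effects, no interaction.
        response = 1.0 + 0.8 * attended - 0.5 * predicted + rng.normal(0, 0.5, n)

        def bic(X, y):
            beta, res, *_ = np.linalg.lstsq(X, y, rcond=None)
            rss = float(res[0]) if res.size else float(((y - X @ beta) ** 2).sum())
            return n * np.log(rss / n) + X.shape[1] * np.log(n)

        ones = np.ones(n)
        X_opposition = np.column_stack([ones, attended, predicted])
        X_interaction = np.column_stack([ones, attended, predicted, attended * predicted])

        # Lower BIC approximates higher model evidence; the difference approximates
        # twice the log Bayes factor between the two models.
        b_opp, b_int = bic(X_opposition, response), bic(X_interaction, response)
        print(f"BIC opposition={b_opp:.1f}  interaction={b_int:.1f}")
        print("preferred:", "Opposition" if b_opp < b_int else "Interaction")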

  3. MetaboTools: A comprehensive toolbox for analysis of genome-scale metabolic models

    DOE PAGES

    Aurich, Maike K.; Fleming, Ronan M. T.; Thiele, Ines

    2016-08-03

    Metabolomic data sets provide a direct read-out of cellular phenotypes and are increasingly generated to study biological questions. Previous work, by us and others, revealed the potential of analyzing extracellular metabolomic data in the context of the metabolic model using constraint-based modeling. With the MetaboTools, we make our methods available to the broader scientific community. The MetaboTools consist of a protocol, a toolbox, and tutorials of two use cases. The protocol describes, in a step-wise manner, the workflow of data integration and computational analysis. The MetaboTools comprise the Matlab code required to complete the workflow described in the protocol. Tutorials explain the computational steps for integration of two different data sets and demonstrate a comprehensive set of methods for the computational analysis of metabolic models and stratification thereof into different phenotypes. The presented workflow supports integrative analysis of multiple omics data sets. Importantly, all analysis tools can be applied to metabolic models without performing the entire workflow. Taken together, the MetaboTools constitute a comprehensive guide to the intra-model analysis of extracellular metabolomic data from microbial, plant, or human cells. In conclusion, this computational modeling resource offers a broad set of computational analysis tools for a wide biomedical and non-biomedical research community.
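
    The constraint-based analyses that MetaboTools supports (as Matlab code) reduce to linear programs over a stoichiometric matrix: maximize an objective flux subject to steady-state mass balance S·v = 0 and flux bounds, with measured extracellular exchange rates tightening the bounds. A language-agnostic toy in Python follows (not MetaboTools code; the network is invented).

        import numpy as np
        from scipy.optimize import linprog

        # Toy network: A_ext -> A -> B -> biomass, with exchange reactions.
        # Rows: metabolites A, B; columns: v_uptake, v1 (A->B), v_biomass (B->).
        S = np.array([[1, -1,  0],
                      [0,  1, -1]])

        # Flux bounds; an extracellular measurement caps uptake at 5 units (assumed).
        bounds = [(0, 5), (0, 1000), (0, 1000)]

        # Maximize biomass flux (linprog minimizes, so negate the objective).
        c = np.array([0, 0, -1])
        sol = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")

        print("optimal fluxes:", sol.x)        # all fluxes pinned to the uptake limit
        print("max biomass flux:", -sol.fun)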

  4. Computational analysis of an aortic valve jet

    NASA Astrophysics Data System (ADS)

    Shadden, Shawn C.; Astorino, Matteo; Gerbeau, Jean-Frédéric

    2009-11-01

    In this work we employ a coupled FSI scheme using an immersed boundary method to simulate flow through a realistic deformable, 3D aortic valve model. These data were used to compute Lagrangian coherent structures (LCS), which revealed flow separation from the valve leaflets during systole, and correspondingly, the boundary between the jet of ejected fluid and the regions of separated, recirculating flow. Advantages of computing LCS in multi-dimensional FSI models of the aortic valve are twofold. For one, the quality and effectiveness of existing clinical indices used to measure aortic jet size can be tested by taking advantage of the accurate measure of the jet area derived from LCS. Secondly, as an ultimate goal, a reliable computational framework for the assessment of aortic valve stenosis could be developed.

  5. Computational Aerodynamic Modeling of Small Quadcopter Vehicles

    NASA Technical Reports Server (NTRS)

    Yoon, Seokkwan; Ventura Diaz, Patricia; Boyd, D. Douglas; Chan, William M.; Theodore, Colin R.

    2017-01-01

    High-fidelity computational simulations have been performed which focus on rotor-fuselage and rotor-rotor aerodynamic interactions of small quad-rotor vehicle systems. The three-dimensional unsteady Navier-Stokes equations are solved on overset grids using high-order accurate schemes, dual-time stepping, low Mach number preconditioning, and hybrid turbulence modeling. Computational results for isolated rotors are shown to compare well with available experimental data. Computational results in hover reveal the differences between a conventional configuration where the rotors are mounted above the fuselage and an unconventional configuration where the rotors are mounted below the fuselage. Complex flow physics in forward flight is investigated. The goal of this work is to demonstrate that understanding of interactional aerodynamics can be an important factor in design decisions regarding rotor and fuselage placement for next-generation multi-rotor drones.

  6. Computational Modeling and Real-Time Control of Patient-Specific Laser Treatment of Cancer

    PubMed Central

    Fuentes, D.; Oden, J. T.; Diller, K. R.; Hazle, J. D.; Elliott, A.; Shetty, A.; Stafford, R. J.

    2014-01-01

    An adaptive feedback control system is presented which employs a computational model of bioheat transfer in living tissue to guide, in real-time, laser treatments of prostate cancer monitored by magnetic resonance thermal imaging (MRTI). The system is built on what can be referred to as cyberinfrastructure - a complex structure of high-speed network, large-scale parallel computing devices, laser optics, imaging, visualizations, inverse-analysis algorithms, mesh generation, and control systems that guide laser therapy to optimally control the ablation of cancerous tissue. The computational system has been successfully tested on in vivo canine prostate. Over the course of an 18 minute laser induced thermal therapy (LITT) performed at M.D. Anderson Cancer Center (MDACC) in Houston, Texas, the computational models were calibrated to intra-operative real-time thermal imaging treatment data and the calibrated models controlled the bioheat transfer to within 5°C of the predetermined treatment plan. The computational arena is in Austin, Texas and managed at the Institute for Computational Engineering and Sciences (ICES). The system is designed to control the bioheat transfer remotely while simultaneously providing real-time remote visualization of the on-going treatment. Post-operative histology of the canine prostate reveals that the damage region was within the targeted 1.2 cm diameter treatment objective. PMID:19148754

  7. Computational modeling and real-time control of patient-specific laser treatment of cancer.

    PubMed

    Fuentes, D; Oden, J T; Diller, K R; Hazle, J D; Elliott, A; Shetty, A; Stafford, R J

    2009-04-01

    An adaptive feedback control system is presented which employs a computational model of bioheat transfer in living tissue to guide, in real time, laser treatments of prostate cancer monitored by magnetic resonance thermal imaging. The system is built on what can be referred to as cyberinfrastructure: a complex structure of high-speed network, large-scale parallel computing devices, laser optics, imaging, visualizations, inverse-analysis algorithms, mesh generation, and control systems that guide laser therapy to optimally control the ablation of cancerous tissue. The computational system has been successfully tested on in vivo canine prostate. Over the course of an 18 min laser-induced thermal therapy performed at M.D. Anderson Cancer Center (MDACC) in Houston, Texas, the computational models were calibrated to intra-operative real-time thermal imaging treatment data, and the calibrated models controlled the bioheat transfer to within 5 degrees C of the predetermined treatment plan. The computational arena is in Austin, Texas, and managed at the Institute for Computational Engineering and Sciences (ICES). The system is designed to control the bioheat transfer remotely while simultaneously providing real-time remote visualization of the ongoing treatment. Post-operative histology of the canine prostate revealed that the damage region was within the targeted 1.2 cm diameter treatment objective.

  8. A cost-utility analysis of the use of preoperative computed tomographic angiography in abdomen-based perforator flap breast reconstruction.

    PubMed

    Offodile, Anaeze C; Chatterjee, Abhishek; Vallejo, Sergio; Fisher, Carla S; Tchou, Julia C; Guo, Lifei

    2015-04-01

    Computed tomographic angiography is a diagnostic tool increasingly used for preoperative vascular mapping in abdomen-based perforator flap breast reconstruction. This study compared the use of computed tomographic angiography with the conventional practice of Doppler ultrasonography only in postmastectomy reconstruction using a cost-utility model. Following a comprehensive literature review, a decision analytic model was created using the three most clinically relevant health outcomes in free autologous breast reconstruction with computed tomographic angiography versus Doppler ultrasonography only. Cost and utility estimates for each health outcome were used to derive the quality-adjusted life-years and the incremental cost-utility ratio. One-way sensitivity analysis was performed to scrutinize the robustness of the authors' results. Six studies and 782 patients were identified. Cost-utility analysis revealed a baseline cost saving of $3179 and a gain of 0.25 quality-adjusted life-years. This yielded an incremental cost-utility ratio of -$12,716, implying a dominant choice favoring preoperative computed tomographic angiography. Sensitivity analysis revealed that computed tomographic angiography was costlier when the operative time difference between the two techniques was less than 21.3 minutes. However, the clinical advantage of computed tomographic angiography over Doppler ultrasonography only meant that computed tomographic angiography would remain the cost-effective option even if it offered no operating-time advantage. The authors' results show that computed tomographic angiography is a cost-effective technology for identifying lower abdominal perforators for autologous breast reconstruction. Although the perfect study would be a randomized controlled trial of the two approaches with true cost accrual, the authors' results represent the best available evidence.
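
    The incremental cost-utility ratio follows directly from the reported numbers as the incremental cost divided by the incremental effectiveness; a one-line check in Python:

    ```python
    # Incremental cost-utility ratio (ICUR) from the reported values:
    # CTA saves $3,179 and gains 0.25 QALYs relative to Doppler ultrasonography alone.
    delta_cost = -3179.0   # negative: a cost saving (USD)
    delta_qaly = 0.25      # QALYs gained

    icur = delta_cost / delta_qaly
    print(f"ICUR = ${icur:,.0f} per QALY")   # -> ICUR = $-12,716 per QALY (dominant)
    ```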

  9. Self-assembly of micelles in organic solutions of lecithin and bile salt: Mesoscale computer simulation

    NASA Astrophysics Data System (ADS)

    Markina, A.; Ivanov, V.; Komarov, P.; Khokhlov, A.; Tung, S.-H.

    2016-11-01

    We propose a coarse-grained model for studying the effects of adding bile salt to lecithin organosols by means of computer simulation. This model allows us to reveal the mechanisms behind the experimentally observed increase in viscosity with increasing bile salt concentration. We show that increasing the bile salt to lecithin molar ratio induces the growth of elongated micelles of ellipsoidal and cylindrical shape due to the incorporation of disklike bile salt molecules. These wormlike micelles can entangle into a transient network displaying perceptible viscoelastic properties.

  10. Computation material science of structural-phase transformation in casting aluminium alloys

    NASA Astrophysics Data System (ADS)

    Golod, V. M.; Dobosh, L. Yu

    2017-04-01

    Successive stages of the computer simulation of casting microstructure formation under non-equilibrium crystallization conditions in multicomponent aluminum alloys are presented. On the basis of computational thermodynamics and of heat transfer during solidification of macroscale shaped castings, the boundary conditions of local heat exchange are specified for mesoscale modeling of non-equilibrium solid-phase formation and of component redistribution between phases during coalescence of secondary dendrite branches. Computer analysis of structural-phase transitions is based on the principle of additive physico-chemical effects of the alloy components during the diffusional-capillary morphological evolution of the dendrite structure and of the local dendrite heterogeneity, whose stochastic nature and extent are revealed by metallographic study and by Monte Carlo modeling. These integrated computational materials science tools are focused on, and implemented for, the analysis of the multifactor system of casting processes and the prediction of casting microstructure.

  11. Radar Detection Models in Computer Supported Naval War Games

    DTIC Science & Technology

    1979-06-08

    revealed a requirement for the effective centralized management of computer supported war game development and employment in the U.S. Navy. A ... considerations and supports the requirement for centralized management of computerized war game development. Therefore it is recommended that a central ... managerial and fiscal authority be established for computerized tactical war game development. This central authority should ensure that new games

  12. Development of a computer-simulation model for a plant-nematode system.

    PubMed

    Ferris, H

    1976-07-01

    A computer-simulation model (MELSIM) of a Meloidogyne-grapevine system is developed. The objective is to attempt a holistic approach to the study of nematode population dynamics by using experimental data from controlled environmental conditions. A simulator with predictive ability would be useful in considering pest management alternatives and in teaching. Rates of flow and interaction between the components of the system are governed by environmental conditions. Equations for these rates are determined by fitting curves to data from controlled environment studies. Development of the model and trial simulations have revealed deficiencies in understanding of the system and identified areas where further research is necessary.

  13. Exploring Classroom Interaction with Dynamic Social Network Analysis

    ERIC Educational Resources Information Center

    Bokhove, Christian

    2018-01-01

    This article reports on an exploratory project in which technology and dynamic social network analysis (SNA) are used for modelling classroom interaction. SNA focuses on the links between social actors, draws on graphic imagery to reveal and display the patterning of those links, and develops mathematical and computational models to describe and…

  14. A novel pH-responsive interpolyelectrolyte hydrogel complex for the oral delivery of levodopa. Part I. IPEC modeling and synthesis.

    PubMed

    Ngwuluka, Ndidi C; Choonara, Yahya E; Kumar, Pradeep; du Toit, Lisa C; Khan, Riaz A; Pillay, Viness

    2015-03-01

    This study was undertaken to synthesize an interpolyelectrolyte complex (IPEC) of polymethacrylate (E100) and sodium carboxymethylcellulose (NaCMC) to form a polymeric hydrogel material for application in specialized oral drug delivery of sensitive levodopa. Computational modeling was employed to proffer insight into the interactions between the polymers. In addition, the reactional profile of NaCMC and polymethacrylate was elucidated using molecular mechanics energy relationships (MMER) and molecular dynamics simulations (MDS) by exploring the spatial disposition of NaCMC and E100 with respect to each other. Computational modeling revealed that the formation of the IPEC was due to strong ionic associations, hydrogen bonding, and hydrophilic interactions. The computational results corroborated well with the experimental and the analytical data. © 2014 Wiley Periodicals, Inc.

  15. Gradient gravitational search: An efficient metaheuristic algorithm for global optimization.

    PubMed

    Dash, Tirtharaj; Sahu, Prabhat K

    2015-05-30

    The adaptation of novel techniques developed in the field of computational chemistry to problems involving large and flexible molecules is taking center stage with regard to algorithmic efficiency, computational cost, and accuracy. In this article, the gradient-based gravitational search (GGS) algorithm, which uses analytical gradients for fast minimization to the next local minimum, is reported. Its efficiency as a metaheuristic approach has been compared with Gradient Tabu Search and with other algorithms for global optimization: Gravitational Search, Cuckoo Search, and Back Tracking Search. Moreover, the GGS approach has been applied to computational chemistry problems, namely finding the minimum potential energy of two-dimensional and three-dimensional off-lattice protein models. The simulation results reveal the relative stability and physical accuracy of the protein models at an efficient computational cost. © 2015 Wiley Periodicals, Inc.
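
    A schematic of the hybrid idea, population-based gravitational search combined with a gradient refinement step, is sketched below in Python; the test function, update constants, and step sizes are illustrative assumptions rather than the published GGS formulation.

    ```python
    import numpy as np

    # Schematic gradient-assisted gravitational search on a smooth test function.
    # Constants (G schedule, step sizes) are illustrative, not the published GGS settings.
    rng = np.random.default_rng(0)

    def f(x):                                   # Rosenbrock test objective
        return (1 - x[0])**2 + 100.0 * (x[1] - x[0]**2)**2

    def grad(x, h=1e-6):                        # central-difference gradient
        g = np.zeros_like(x)
        for k in range(x.size):
            e = np.zeros_like(x); e[k] = h
            g[k] = (f(x + e) - f(x - e)) / (2 * h)
        return g

    n, dim, iters = 20, 2, 300
    X = rng.uniform(-2.0, 2.0, (n, dim))        # agent positions
    V = np.zeros((n, dim))

    for t in range(iters):
        fit = np.array([f(x) for x in X])
        G = 10.0 * np.exp(-5.0 * t / iters)     # decaying gravitational constant
        m = (fit.max() - fit + 1e-12) / (fit.max() - fit.min() + 1e-12)
        M = m / m.sum()                         # heavier mass = better fitness
        A = np.zeros_like(X)
        for i in range(n):
            for j in range(n):
                if i != j:
                    d = np.linalg.norm(X[j] - X[i]) + 1e-12
                    A[i] += rng.random() * G * M[j] * (X[j] - X[i]) / d
        V = rng.random((n, dim)) * V + A
        X = X + V
        # gradient refinement: a damped descent step toward the nearby local minimum
        X = X - 1e-3 * np.clip(np.array([grad(x) for x in X]), -50.0, 50.0)

    best = min(X, key=f)
    print("best point:", best, "f =", f(best))
    ```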

  16. Swinging Atwood's Machine

    NASA Astrophysics Data System (ADS)

    Tufillaro, Nicholas B.; Abbott, Tyler A.; Griffiths, David J.

    1984-10-01

    We examine the motion of an Atwood's Machine in which one of the masses is allowed to swing in a plane. Computer studies reveal a rich variety of trajectories. The orbits are classified (bounded, periodic, singular, and terminating), and formulas for the critical mass ratios are developed. Perturbative techniques yield good approximations to the computer-generated trajectories. The model constitutes a simple example of a nonlinear dynamical system with two degrees of freedom.
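
    The planar equations of motion follow from the Lagrangian of the system (hanging mass M, swinging mass m on a string of variable length r). A short Python integration, with an illustrative mass ratio and initial condition, reproduces the kinds of bounded and terminating orbits classified in the paper:

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Swinging Atwood's machine: M hangs, m swings with polar coordinates (r, th).
    #   (M + m) r'' = m r th'^2 - g (M - m cos th)
    #   r th''      = -2 r' th' - g sin th
    g, m = 9.81, 1.0
    mu = 3.0                 # mass ratio M/m (illustrative)
    M = mu * m

    def rhs(t, y):
        r, rdot, th, thdot = y
        return [rdot,
                (m * r * thdot**2 - g * (M - m * np.cos(th))) / (M + m),
                thdot,
                (-2.0 * rdot * thdot - g * np.sin(th)) / r]

    def hits_pulley(t, y):   # stop "terminating" orbits before r -> 0 blows up
        return y[0] - 1e-3
    hits_pulley.terminal = True

    y0 = [1.0, 0.0, np.pi / 2, 0.0]           # released from rest, arm horizontal
    sol = solve_ivp(rhs, (0.0, 20.0), y0, events=hits_pulley, rtol=1e-9, atol=1e-9)
    x, y = sol.y[0] * np.sin(sol.y[2]), -sol.y[0] * np.cos(sol.y[2])   # trajectory of m
    print("terminated at pulley" if sol.status == 1 else "ran full interval")
    ```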

  17. A study of modelling simplifications in ground vibration predictions for railway traffic at grade

    NASA Astrophysics Data System (ADS)

    Germonpré, M.; Degrande, G.; Lombaert, G.

    2017-10-01

    Accurate computational models are required to predict ground-borne vibration due to railway traffic. Such models generally require a substantial computational effort. Therefore, much research has focused on developing computationally efficient methods, by either exploiting the regularity of the problem geometry in the direction along the track or assuming a simplified track structure. This paper investigates the modelling errors caused by commonly made simplifications of the track geometry. A case study is presented investigating a ballasted track in an excavation. The soil underneath the ballast is stiffened by a lime treatment. First, periodic track models with different cross sections are analyzed, revealing that a prediction of the rail receptance only requires an accurate representation of the soil layering directly underneath the ballast. A much more detailed representation of the cross sectional geometry is required, however, to calculate vibration transfer from track to free field. Second, simplifications in the longitudinal track direction are investigated by comparing 2.5D and periodic track models. This comparison shows that the 2.5D model slightly overestimates the track stiffness, while the transfer functions between track and free field are well predicted. Using a 2.5D model to predict the response during a train passage leads to an overestimation of both train-track interaction forces and free field vibrations. A combined periodic/2.5D approach is therefore proposed in this paper. First, the dynamic axle loads are computed by solving the train-track interaction problem with a periodic model. Next, the vibration transfer to the free field is computed with a 2.5D model. This combined periodic/2.5D approach only introduces small modelling errors compared to an approach in which a periodic model is used in both steps, while significantly reducing the computational cost.

  18. Hill Problem Analytical Theory to the Order Four. Application to the Computation of Frozen Orbits around Planetary Satellites

    NASA Technical Reports Server (NTRS)

    Lara, Martin; Palacian, Jesus F.

    2007-01-01

    Frozen orbits of the Hill problem are determined in the doubly averaged problem, where short- and long-period terms are removed by means of Lie transforms. The computation of initial conditions of the corresponding quasi-periodic solutions in the non-averaged problem is straightforward, for the perturbation method used provides the explicit equations of the transformation that connects the averaged and non-averaged models. A fourth-order analytical theory proves necessary for the accurate computation of quasi-periodic, frozen orbits.

  19. Large-scale exact diagonalizations reveal low-momentum scales of nuclei

    NASA Astrophysics Data System (ADS)

    Forssén, C.; Carlsson, B. D.; Johansson, H. T.; Sääf, D.; Bansal, A.; Hagen, G.; Papenbrock, T.

    2018-03-01

    Ab initio methods aim to solve the nuclear many-body problem with controlled approximations. Virtually exact numerical solutions for realistic interactions can only be obtained for certain special cases such as few-nucleon systems. Here we extend the reach of exact diagonalization methods to handle model spaces with dimension exceeding 10^10 on a single compute node. This allows us to perform no-core shell model (NCSM) calculations for 6Li in model spaces up to Nmax = 22 and to reveal the 4He + d halo structure of this nucleus. Still, the use of a finite harmonic-oscillator basis implies truncations in both the infrared (IR) and ultraviolet (UV) length scales. These truncations impose finite-size corrections on observables computed in this basis. We perform IR extrapolations of energies and radii computed in the NCSM and with the coupled-cluster method at several fixed UV cutoffs. It is shown that this strategy enables information gain also from data that are not fully UV converged. IR extrapolations improve the accuracy of relevant bound-state observables for a range of UV cutoffs, thus making them profitable tools. We relate the momentum scale that governs the exponential IR convergence to the threshold energy for the first open decay channel. Using large-scale NCSM calculations we numerically verify this small-momentum scale of finite nuclei.
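
    The IR extrapolations referred to here commonly use an exponential ansatz in the effective box size L, of the form E(L) = E_inf + a0 * exp(-2 * k_inf * L), where k_inf is the momentum scale discussed in the abstract. A hedged sketch with synthetic (not published) energies:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Schematic infrared (IR) extrapolation: ground-state energies computed at a fixed UV
    # cutoff for several effective box sizes L are fit to E(L) = E_inf + a0*exp(-2*k_inf*L).
    # The data below are synthetic stand-ins, not values from the paper.
    def E_of_L(L, E_inf, a0, k_inf):
        return E_inf + a0 * np.exp(-2.0 * k_inf * L)

    L = np.array([8.0, 9.0, 10.0, 11.0, 12.0, 14.0])          # fm
    E = np.array([-27.9, -29.3, -30.3, -30.9, -31.3, -31.7])  # MeV (synthetic)

    popt, pcov = curve_fit(E_of_L, L, E, p0=(-32.0, 100.0, 0.25))
    E_inf, a0, k_inf = popt
    print(f"extrapolated E_inf = {E_inf:.2f} MeV, k_inf = {k_inf:.3f} fm^-1")
    ```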

  20. Study of basic computer competence among public health nurses in Taiwan.

    PubMed

    Yang, Kuei-Feng; Yu, Shu; Lin, Ming-Sheng; Hsu, Chia-Ling

    2004-03-01

    Rapid advances in information technology and media have made distance learning on the Internet possible. This new model of learning allows greater efficiency and flexibility in knowledge acquisition. Since basic computer competence is a prerequisite for this new learning model, this study was conducted to examine the basic computer competence of public health nurses in Taiwan and to explore factors influencing computer competence. A national cross-sectional randomized study was conducted with 329 public health nurses. A questionnaire was used to collect data and was delivered by mail. Results indicate that the basic computer competence of public health nurses in Taiwan still needs improvement (mean = 57.57 ± 2.83; possible total scores range from 26 to 130). Among the five most frequently used software programs, nurses were most knowledgeable about Word and least knowledgeable about PowerPoint. Stepwise multiple regression analysis revealed eight variables (weekly number of hours spent online at home, weekly amount of time spent online at work, weekly frequency of computer use at work, previous computer training, computer at workplace and Internet access, job position, education level, and age) that significantly influenced computer competence and together accounted for 39.0% of the variance. In conclusion, greater computer competence, broader educational programs regarding computer technology, and a greater emphasis on computers at work are necessary to increase the usefulness of distance learning via the Internet in Taiwan. Building a user-friendly environment is important in developing this new media model of learning for the future.

  1. Modelling NOX concentrations through CFD-RANS in an urban hot-spot using high resolution traffic emissions and meteorology from a mesoscale model

    NASA Astrophysics Data System (ADS)

    Sanchez, Beatriz; Santiago, Jose Luis; Martilli, Alberto; Martin, Fernando; Borge, Rafael; Quaassdorff, Christina; de la Paz, David

    2017-08-01

    Air quality management requires more detailed studies of air pollution at the urban and local scale over long periods of time. This work focuses on obtaining the spatial distribution of NOx concentration averaged over several days in a heavily trafficked urban area in Madrid (Spain) using a computational fluid dynamics (CFD) model. A methodology based on a weighted average of CFD simulations is applied, computing the time evolution of NOx dispersion as a sequence of steady-state scenarios that take into account the actual atmospheric conditions. The emission inputs are estimated from a traffic emission model, and the meteorological information is derived from a mesoscale model. Finally, the computed concentration map correlates well with measurements from 72 passive samplers deployed in the research area. This work reveals the potential of combining urban mesoscale simulations with detailed traffic emissions to provide accurate maps of pollutant concentration at the microscale using CFD simulations.
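
    The weighted-average methodology can be summarized in a few lines: each steady-state CFD map is weighted by how often its meteorological scenario occurred over the averaging period. An illustrative numpy sketch, with random arrays standing in for real simulation output:

    ```python
    import numpy as np

    # Schematic of the weighted-average methodology: each steady-state CFD run corresponds
    # to one meteorological/emission scenario; the long-term mean NOx map is the average of
    # the scenario maps weighted by scenario frequency. All arrays here are illustrative.
    n_scenarios, ny, nx = 16, 100, 100
    rng = np.random.default_rng(1)
    conc_maps = rng.lognormal(mean=3.0, sigma=0.4, size=(n_scenarios, ny, nx))  # ug/m^3
    hours = rng.integers(5, 60, size=n_scenarios)        # hours each scenario occurred

    weights = hours / hours.sum()
    mean_map = np.tensordot(weights, conc_maps, axes=1)  # time-weighted mean NOx field
    print(mean_map.shape, mean_map.mean())
    ```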

  2. Applying Mathematical Optimization Methods to an ACT-R Instance-Based Learning Model.

    PubMed

    Said, Nadia; Engelhart, Michael; Kirches, Christian; Körkel, Stefan; Holt, Daniel V

    2016-01-01

    Computational models of cognition provide an interface to connect advanced mathematical tools and methods to empirically supported theories of behavior in psychology, cognitive science, and neuroscience. In this article, we consider a computational model of instance-based learning, implemented in the ACT-R cognitive architecture. We propose an approach for obtaining mathematical reformulations of such cognitive models that improve their computational tractability. For the well-established Sugar Factory dynamic decision making task, we conduct a simulation study to analyze central model parameters. We show how mathematical optimization techniques can be applied to efficiently identify optimal parameter values with respect to different optimization goals. Beyond these methodological contributions, our analysis reveals the sensitivity of this particular task with respect to initial settings and yields new insights into how average human performance deviates from potential optimal performance. We conclude by discussing possible extensions of our approach as well as future steps towards applying more powerful derivative-based optimization methods.
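
    Conceptually, the parameter identification described here amounts to minimizing the discrepancy between model output and observed behavior over the parameter space. A toy sketch of that workflow follows; the simulate() function is a hypothetical smooth stand-in, not the ACT-R model:

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Sketch of fitting cognitive-model parameters by optimization. simulate() is a
    # hypothetical stand-in for a (reformulated, smooth) model of task performance.
    target = 0.72                                   # observed success rate (illustrative)

    def simulate(params):
        noise, decay = params
        # toy smooth mapping from parameters to predicted success rate
        return 1.0 / (1.0 + np.exp(-(1.5 - 2.0 * noise - 1.0 * decay)))

    def loss(params):
        return (simulate(params) - target) ** 2

    res = minimize(loss, x0=[0.5, 0.5], bounds=[(0.0, 2.0), (0.0, 2.0)],
                   method="L-BFGS-B")
    print("fitted (noise, decay):", res.x, "loss:", res.fun)
    ```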

  3. Continuous attractor network models of grid cell firing based on excitatory–inhibitory interactions

    PubMed Central

    Shipston‐Sharman, Oliver; Solanka, Lukas

    2016-01-01

    Neurons in the medial entorhinal cortex encode location through spatial firing fields that have a grid‐like organisation. The challenge of identifying mechanisms for grid firing has been addressed through experimental and theoretical investigations of medial entorhinal circuits. Here, we discuss evidence for continuous attractor network models that account for grid firing by synaptic interactions between excitatory and inhibitory cells. These models assume that grid‐like firing patterns are the result of computation of location from velocity inputs, with additional spatial input required to oppose drift in the attractor state. We focus on properties of continuous attractor networks that are revealed by explicitly considering excitatory and inhibitory neurons, their connectivity and their membrane potential dynamics. Models at this level of detail can account for theta‐nested gamma oscillations as well as grid firing, predict spatial firing of interneurons as well as excitatory cells, show how gamma oscillations can be modulated independently from spatial computations, reveal critical roles for neuronal noise, and demonstrate that only a subset of excitatory cells in a network need have grid‐like firing fields. Evaluating experimental data against predictions from detailed network models will be important for establishing the mechanisms mediating grid firing. PMID:27870120

  4. Challenges in reducing the computational time of QSTS simulations for distribution system analysis.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deboever, Jeremiah; Zhang, Xiaochen; Reno, Matthew J.

    The rapid increase in penetration of distributed energy resources on the electric power distribution system has created a need for more comprehensive interconnection modelling and impact analysis. Unlike conventional scenario-based studies, quasi-static time-series (QSTS) simulations can realistically model time-dependent voltage controllers and the diversity of potential impacts that can occur at different times of year. However, to accurately model a distribution system with all its controllable devices, a yearlong simulation at 1-second resolution is often required, which could take conventional computers a computational time of 10 to 120 hours when an actual unbalanced distribution feeder is modeled. This computational burden is a clear limitation to the adoption of QSTS simulations in interconnection studies and for determining optimal control solutions for utility operations. Our ongoing research to improve the speed of QSTS simulation has revealed many unique aspects of distribution system modelling and sequential power flow analysis that make fast QSTS a very difficult problem to solve. In this report, the most relevant challenges in reducing the computational time of QSTS simulations are presented: number of power flows to solve, circuit complexity, time dependence between time steps, multiple valid power flow solutions, controllable element interactions, and extensive accurate simulation analysis.

  5. Growth Dynamics of Information Search Services

    ERIC Educational Resources Information Center

    Lindquist, Mats G.

    1978-01-01

    An analysis of information search services (ISSs) from a systems viewpoint, using a continuous simulation model to reveal the growth and stagnation of a typical system, is presented, along with an analysis of decision making for an ISS. (Author/MBR)

  6. Early Childhood Teacher Preparation: A Tale of Authors and Multimedia, A Model of Technology Integration Described.

    ERIC Educational Resources Information Center

    Wetzel, Keith; McLean, S. V.

    1997-01-01

    Describes collaboration of two teacher educators, one in early childhood language arts and one in computers in education. Discusses advantages and disadvantages and extensions of this model, including how a college-wide survey revealed that students in teamed courses are better prepared to teach and learn with technology. (DR)

  7. Estimation and Identifiability of Model Parameters in Human Nociceptive Processing Using Yes-No Detection Responses to Electrocutaneous Stimulation.

    PubMed

    Yang, Huan; Meijer, Hil G E; Buitenweg, Jan R; van Gils, Stephan A

    2016-01-01

    Healthy or pathological states of nociceptive subsystems determine the different stimulus-response relations measured by quantitative sensory testing. In turn, stimulus-response measurements may be used to assess these states. In a recently developed computational model, six model parameters characterize the activation of nerve endings and spinal neurons. However, both the model's nonlinearity and the limited information in yes-no detection responses to electrocutaneous stimuli make it challenging to estimate the model parameters. Here, we address the question of whether and how one can overcome these difficulties for reliable parameter estimation. First, we fit the computational model to experimental stimulus-response pairs by maximizing the likelihood. To evaluate the balance between model fit and complexity, i.e., the number of model parameters, we evaluate the Bayesian Information Criterion. We find that the computational model is better than a conventional logistic model in this regard. Second, our theoretical analysis suggests varying the pulse width among applied stimuli as a necessary condition to prevent structural non-identifiability. In addition, the numerically implemented profile likelihood approach reveals both structural and practical non-identifiability. Our model-based approach, with its integration of psychophysical measurements, can be useful for a reliable assessment of states of the nociceptive system.
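
    The fit-complexity balance is scored with the Bayesian Information Criterion, BIC = k ln(n) - 2 ln(L_hat), where k is the number of parameters, n the number of responses, and L_hat the maximized likelihood. A sketch with placeholder likelihood values (not the study's numbers):

    ```python
    import numpy as np

    # Model comparison via the Bayesian Information Criterion: BIC = k*ln(n) - 2*ln(L_hat).
    # Log-likelihood values below are illustrative placeholders.
    def bic(log_lik, n_params, n_obs):
        return n_params * np.log(n_obs) - 2.0 * log_lik

    n_obs = 300                     # number of yes/no detection responses
    bic_logistic = bic(log_lik=-170.0, n_params=2, n_obs=n_obs)
    bic_mechanistic = bic(log_lik=-150.0, n_params=6, n_obs=n_obs)

    print(f"logistic: {bic_logistic:.1f}, mechanistic: {bic_mechanistic:.1f}")
    # lower BIC wins: extra parameters are justified only if the fit improves enough
    ```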

  8. The nature of the (visualization) game: Challenges and opportunities from computational geophysics

    NASA Astrophysics Data System (ADS)

    Kellogg, L. H.

    2016-12-01

    As the geosciences enter the era of big data, modeling and visualization become increasingly vital tools for discovery, understanding, education, and communication. Here, we focus on modeling and visualization of the structure and dynamics of the Earth's surface and interior. The past decade has seen accelerated data acquisition, including higher resolution imaging and modeling of Earth's deep interior, complex models of geodynamics, and high resolution topographic imaging of the changing surface, with an associated acceleration of computational modeling through better scientific software, increased computing capability, and the use of innovative methods of scientific visualization. The role of modeling is to describe a system, answer scientific questions, and test hypotheses; the term "model" encompasses mathematical models, computational models, physical models, conceptual models, statistical models, and visual models of a structure or process. These different uses of the term require thoughtful communication to avoid confusion. Scientific visualization is integral to every aspect of modeling. Not merely a means of communicating results, the best uses of visualization enable scientists to interact with their data, revealing the characteristics of the data and models to enable better interpretation and inform the direction of future investigation. Innovative immersive technologies such as virtual reality, augmented reality, and remote collaboration techniques are being adopted more widely and are a magnet for students. Time-varying or transient phenomena are especially challenging to model and to visualize; researchers and students may need to investigate the role of initial conditions in driving phenomena, while nonlinearities in the governing equations of many Earth systems make the computations and resulting visualization especially challenging. Training students how to use, design, build, and interpret scientific modeling and visualization tools prepares them to better understand the nature of complex, multiscale geoscience data.

  9. Role of a computer-generated three-dimensional laryngeal model in anatomy teaching for advanced learners.

    PubMed

    Tan, S; Hu, A; Wilson, T; Ladak, H; Haase, P; Fung, K

    2012-04-01

    (1) To investigate the efficacy of a computer-generated three-dimensional laryngeal model for laryngeal anatomy teaching; (2) to explore the relationship between students' spatial ability and acquisition of anatomical knowledge; and (3) to assess participants' opinion of the computerised model. Forty junior doctors were randomised to undertake laryngeal anatomy study supplemented by either a three-dimensional computer model or two-dimensional images. Outcome measurements comprised a laryngeal anatomy test, the modified Vandenberg and Kuse mental rotation test, and an opinion survey. Mean scores ± standard deviations for the anatomy test were 15.7 ± 2.0 for the 'three dimensions' group and 15.5 ± 2.3 for the 'standard' group (p = 0.7222). Pearson's correlation between the rotation test scores and the scores for the spatial ability questions in the anatomy test was 0.4791 (p = 0.086, n = 29). Opinion survey answers revealed significant differences in respondents' perceptions of the clarity and 'user friendliness' of, and their preferences for, the three-dimensional model as regards anatomical study. The three-dimensional computer model was equivalent to standard two-dimensional images, for the purpose of laryngeal anatomy teaching. There was no association between students' spatial ability and functional anatomy learning. However, students preferred to use the three-dimensional model.

  10. Computing Critical Properties with Yang-Yang Anomalies

    NASA Astrophysics Data System (ADS)

    Orkoulas, Gerassimos; Cerdeirina, Claudio; Fisher, Michael

    2017-01-01

    Computation of the thermodynamics of fluids in the critical region is a challenging task owing to the divergence of the correlation length and the lack of the particle-hole symmetries found in Ising or lattice-gas models. In addition, analysis of experiments and simulations reveals a Yang-Yang (YY) anomaly, which entails sharing of the specific heat singularity between the pressure and the chemical potential. The size of the YY anomaly is measured by the YY ratio Rμ = Cμ/CV of the amplitude of Cμ = -T d²μ/dT² to that of the total specific heat CV. A ``complete scaling'' theory, in which the pressure mixes into the scaling fields, accounts for the YY anomaly. In Phys. Rev. Lett. 116, 040601 (2016), compressible cell gas (CCG) models, which exhibit YY and singular diameter anomalies, were advanced for near-critical fluids. In such models, the individual cell volumes are allowed to fluctuate. The thermodynamics of CCGs can be computed by mapping onto the Ising model via the seldom-used great grand canonical ensemble. The computations indicate that local free-volume fluctuations are the origin of the YY effects. Furthermore, local energy-volume coupling (to model water) is another crucial factor underlying the phenomena.

  11. Modeling the dynamics of chromosomal alteration progression in cervical cancer: A computational model

    PubMed Central

    2017-01-01

    Computational modeling has been applied to simulate the heterogeneity of cancer behavior. The development of Cervical Cancer (CC) is a process in which the cell acquires dynamic behavior from non-deleterious and deleterious mutations, exhibiting chromosomal alterations as a manifestation of this dynamic. To further determine the progression of chromosomal alterations in precursor lesions and CC, we introduce a computational model to study the dynamics of deleterious and non-deleterious mutations as an outcome of tumor progression. The analysis of chromosomal alterations mediated by our model reveals that multiple deleterious mutations are more frequent in precursor lesions than in CC. Cells with lethal deleterious mutations would be eliminated, which would mitigate cancer progression; on the other hand, cells with non-deleterious mutations would become dominant, which could predispose them to cancer progression. The study of somatic alterations through computer simulations of cancer progression provides a feasible pathway for insights into the transformation of cell mechanisms in humans. During cancer progression, tumors may acquire new phenotype traits, such as the ability to invade and metastasize or to become clinically important when they develop drug resistance. Non-deleterious chromosomal alterations contribute to this progression. PMID:28723940
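
    The selection logic described, in which lethal deleterious loads are purged while non-deleterious mutations accumulate in surviving lineages, can be illustrated with a toy stochastic simulation; all rates, thresholds, and population sizes below are illustrative assumptions, not the paper's calibrated model:

    ```python
    import numpy as np

    # Toy stochastic simulation of the core idea: cells accumulate deleterious and
    # non-deleterious mutations; cells whose deleterious load crosses a lethal threshold
    # are removed, so surviving lineages become enriched in non-deleterious alterations.
    rng = np.random.default_rng(42)
    n_cells, n_gen = 1000, 60
    p_del, p_neu, lethal_load = 0.02, 0.05, 4     # illustrative per-generation rates

    del_load = np.zeros(n_cells, dtype=int)
    neu_load = np.zeros(n_cells, dtype=int)
    for gen in range(n_gen):
        del_load += rng.random(n_cells) < p_del
        neu_load += rng.random(n_cells) < p_neu
        alive = del_load < lethal_load            # lethal deleterious loads are purged
        # dead cells are replaced by copies of random survivors (constant population)
        idx = np.where(alive)[0]
        repl = rng.choice(idx, size=n_cells - idx.size)
        del_load[~alive] = del_load[repl]
        neu_load[~alive] = neu_load[repl]

    print("mean deleterious load:", del_load.mean(),
          "mean non-deleterious load:", neu_load.mean())
    ```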

  12. Computational Modeling of Inflammation and Wound Healing

    PubMed Central

    Ziraldo, Cordelia; Mi, Qi; An, Gary; Vodovotz, Yoram

    2013-01-01

    Objective: Inflammation is both central to proper wound healing and a key driver of chronic tissue injury via a positive-feedback loop incited by incidental cell damage. We seek to derive actionable insights into the role of inflammation in wound healing in order to improve outcomes for individual patients. Approach: To date, dynamic computational models have been used to study the time evolution of inflammation in wound healing. Emerging clinical data on histopathological and macroscopic images of evolving wounds, as well as noninvasive measures of blood flow, suggested the need for tissue-realistic, agent-based, and hybrid mechanistic computational simulations of inflammation and wound healing. Innovation: We developed a computational modeling system, Simple Platform for Agent-based Representation of Knowledge, to facilitate the construction of tissue-realistic models. Results: A hybrid equation/agent-based model (ABM) of pressure ulcer formation in both spinal cord-injured and -uninjured patients was used to identify control points that reduce stress caused by tissue ischemia/reperfusion. An ABM of arterial restenosis revealed new dynamics of cell migration during neointimal hyperplasia that match histological features, but contradict the currently prevailing mechanistic hypothesis. ABMs of vocal fold inflammation were used to predict inflammatory trajectories in individuals, possibly allowing for personalized treatment. Conclusions: The intertwined inflammatory and wound healing responses can be modeled computationally to make predictions in individuals, simulate therapies, and gain mechanistic insights. PMID:24527362

  13. Prediction of Surface and pH-Specific Binding of Peptides to Metal and Oxide Nanoparticles

    NASA Astrophysics Data System (ADS)

    Heinz, Hendrik; Lin, Tzu-Jen; Emami, Fateme Sadat; Ramezani-Dakhel, Hadi; Naik, Rajesh; Knecht, Marc; Perry, Carole C.; Huang, Yu

    2015-03-01

    The mechanism of specific peptide adsorption onto metallic and oxidic nanostructures has been elucidated at atomic resolution using novel force fields and surface models in comparison to measurements. As an example, variations in peptide adsorption on Pd and Pt nanoparticles depending on the shape and size of the particles and the location of peptides on specific bounding facets are explained. Accurate computational predictions of reaction rates in C-C coupling reactions, using particle models derived from HE-XRD and PDF data, illustrate the utility of computational methods for the rational design of new catalysts. On oxidic nanoparticles such as silica and apatites, it is revealed how changes in pH lead to similarity scores of attracted peptides lower than 20%, supported by appropriate model surfaces and data from adsorption isotherms. The results demonstrate how new computational methods can support the design of nanoparticle carriers for drug release and the understanding of calcification mechanisms in the human body.

  14. Beauty and the beast: Some perspectives on efficient model analysis, surrogate models, and the future of modeling

    NASA Astrophysics Data System (ADS)

    Hill, M. C.; Jakeman, J.; Razavi, S.; Tolson, B.

    2015-12-01

    For many environmental systems, model runtimes have remained very long as more capable computers have been used to add more processes and finer time and space discretization. Scientists have also added more parameters and kinds of observations, and many model runs are needed to explore the models. Computational demand equals run time multiplied by the number of model runs, divided by parallelization opportunities. Model exploration is conducted using sensitivity analysis, optimization, and uncertainty quantification. Sensitivity analysis is used to reveal the consequences of what may be very complex simulated relations, optimization is used to identify parameter values that fit the data best, or at least better, and uncertainty quantification is used to evaluate the precision of simulated results. The long execution times make such analyses a challenge. Methods for addressing this challenge include computationally frugal analysis of the demanding original model and a number of ingenious surrogate modeling methods. Both commonly use about 50-100 runs of the demanding original model. In this talk we consider the tradeoffs between (1) original model development decisions, (2) computationally frugal analysis of the original model, and (3) using many model runs of the fast surrogate model. Some questions of interest are as follows. If the added processes and discretization invested in (1) are compared with the restrictions and approximations in model analysis produced by long model execution times, is there a net benefit relative to the goals of the model? Are there changes to the numerical methods that could reduce the computational demands while giving up less fidelity than is compromised by using computationally frugal methods or surrogate models for model analysis? Both the computationally frugal methods and surrogate models require that the solution of interest be a smooth function of the parameters of interest. How does the information obtained from the local methods typical of (2) compare with that from the globally averaged methods typical of (3) for typical systems? The discussion will use examples of the response of the Greenland glacier to global warming and of surface water and groundwater modeling.

  15. Finding your inner modeler: An NSF-sponsored workshop to introduce cell biologists to modeling/computational approaches.

    PubMed

    Stone, David E; Haswell, Elizabeth S; Sztul, Elizabeth

    2017-01-01

    In classical Cell Biology, fundamental cellular processes are revealed empirically, one experiment at a time. While this approach has been enormously fruitful, our understanding of cells is far from complete. In fact, the more we know, the more keenly we perceive our ignorance of the profoundly complex and dynamic molecular systems that underlie cell structure and function. Thus, it has become apparent to many cell biologists that experimentation alone is unlikely to yield major new paradigms, and that empiricism must be combined with theory and computational approaches to yield major new discoveries. To facilitate those discoveries, three workshops will convene annually for one day in three successive summers (2017-2019) to promote the use of computational modeling by cell biologists currently unconvinced of its utility or unsure how to apply it. The first of these workshops was held at the University of Illinois, Chicago in July 2017. Organized to facilitate interactions between traditional cell biologists and computational modelers, it provided a unique educational opportunity: a primer on how cell biologists with little or no relevant experience can incorporate computational modeling into their research. Here, we report on the workshop and describe how it addressed key issues that cell biologists face when considering modeling including: (1) Is my project appropriate for modeling? (2) What kind of data do I need to model my process? (3) How do I find a modeler to help me in integrating modeling approaches into my work? And, perhaps most importantly, (4) why should I bother?

  16. Geoscience in the Big Data Era: Are models obsolete?

    NASA Astrophysics Data System (ADS)

    Yuen, D. A.; Zheng, L.; Stark, P. B.; Morra, G.; Knepley, M.; Wang, X.

    2016-12-01

    In the last few decades, the velocity, volume, and variety of geophysical data have increased, while the development of the Internet and distributed computing has led to the emergence of "data science." Fitting and running numerical models, especially those based on PDEs, is the main consumer of flops in geoscience. Can large amounts of diverse data supplant modeling? Without the ability to conduct randomized, controlled experiments, causal inference requires understanding the physics. It is sometimes possible to predict well without understanding the system, if (1) the system is predictable, (2) data on "important" variables are available, and (3) the system changes slowly enough. And sometimes even a crude model can help the data "speak for themselves" much more clearly. For example, Shearer (1991) used a 1-dimensional velocity model to stack long-period seismograms, revealing upper mantle discontinuities. This was a "big data" approach: the main use of computing was in the data processing, rather than in modeling, yet the "signal" became clear. In contrast, modelers tend to use all available computing power to fit even more complex models, resulting in a cycle where uncertainty quantification (UQ) is never possible: even if realistic UQ required only 1,000 model evaluations, it is never in reach. To make more reliable inferences requires better data analysis and statistics, not more complex models. Geoscientists need to learn new skills and tools: sound software engineering practices; open programming languages suitable for big data; parallel and distributed computing; data visualization; and basic nonparametric, computationally based statistical inference, such as permutation tests. They should work reproducibly, scripting all analyses and avoiding point-and-click tools.

  17. Nasal conchae function as aerodynamic baffles: Experimental computational fluid dynamic analysis in a turkey nose (Aves: Galliformes).

    PubMed

    Bourke, Jason M; Witmer, Lawrence M

    2016-12-01

    We tested the aerodynamic function of nasal conchae in birds using CT data from an adult male wild turkey (Meleagris gallopavo) to construct 3D models of its nasal passage. A series of digital "turbinectomies" was performed on these models, and computational fluid dynamic analyses were performed to simulate resting inspiration. Models with turbinates removed were compared to the original, unmodified control airway. Results revealed that the four conchae found in turkeys, along with the crista nasalis, alter the flow of inspired air in ways that can be considered baffle-like. However, these baffle-like functions were remarkably limited in their areal extent, indicating that avian conchae are more functionally independent than originally hypothesized. Our analysis revealed that the conchae of birds are efficient baffles that, along with potential heat and moisture transfer, serve to efficiently move air to specific regions of the nasal passage. This alternate function of conchae has implications for their evolution in birds and other amniotes. Copyright © 2016 Elsevier B.V. All rights reserved.

  18. Spontaneous Speech Events in Two Speech Databases of Human-Computer and Human-Human Dialogs in Spanish

    ERIC Educational Resources Information Center

    Rodriguez, Luis J.; Torres, M. Ines

    2006-01-01

    Previous works in English have revealed that disfluencies follow regular patterns and that incorporating them into the language model of a speech recognizer leads to lower perplexities and sometimes to a better performance. Although work on disfluency modeling has been applied outside the English community (e.g., in Japanese), as far as we know…

  19. Deep recurrent neural network reveals a hierarchy of process memory during dynamic natural vision.

    PubMed

    Shi, Junxing; Wen, Haiguang; Zhang, Yizhen; Han, Kuan; Liu, Zhongming

    2018-05-01

    The human visual cortex extracts both spatial and temporal visual features to support perception and guide behavior. Deep convolutional neural networks (CNNs) provide a computational framework to model cortical representation and organization for spatial visual processing, but they are unable to explain how the brain processes temporal information. To overcome this limitation, we extended a CNN by adding recurrent connections to different layers of the CNN, allowing spatial representations to be remembered and accumulated over time. The extended model, a recurrent neural network (RNN), embodied a hierarchical and distributed model of process memory as an integral part of visual processing. Unlike the CNN, the RNN learned spatiotemporal features from videos to enable action recognition. The RNN better predicted cortical responses to natural movie stimuli than the CNN at all visual areas, especially those along the dorsal stream. As a fully observable model of visual processing, the RNN also revealed a cortical hierarchy of temporal receptive windows, the dynamics of process memory, and spatiotemporal representations. These results support the hypothesis of process memory and demonstrate the potential of using the RNN for an in-depth computational understanding of dynamic natural vision. © 2018 Wiley Periodicals, Inc.
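
    The architectural idea, a recurrent state layered on top of per-frame CNN features so that spatial representations accumulate over time, reduces to a simple update rule. A minimal numpy sketch with random placeholder weights and features (the actual model adds recurrence at every CNN layer and is trained on video):

    ```python
    import numpy as np

    # Minimal sketch of the idea: a recurrent state on top of per-frame CNN features,
    # so spatial representations are remembered and accumulated over time.
    # Shapes and weights are random placeholders, not the trained model.
    rng = np.random.default_rng(0)
    T, d_feat, d_hid = 30, 512, 256              # frames, CNN feature dim, recurrent units

    W_x = rng.normal(0, 0.05, (d_hid, d_feat))   # input (feedforward) weights
    W_h = rng.normal(0, 0.05, (d_hid, d_hid))    # recurrent weights
    cnn_features = rng.normal(size=(T, d_feat))  # stand-in for per-frame CNN activations

    h = np.zeros(d_hid)
    states = []
    for t in range(T):
        # Elman-style update: the hidden state integrates the current frame with history
        h = np.tanh(W_x @ cnn_features[t] + W_h @ h)
        states.append(h)
    states = np.array(states)                    # (T, d_hid): a process-memory trajectory
    ```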

  20. Local spatio-temporal analysis in vision systems

    NASA Astrophysics Data System (ADS)

    Geisler, Wilson S.; Bovik, Alan; Cormack, Lawrence; Ghosh, Joydeep; Gildeen, David

    1994-07-01

    The aims of this project are the following: (1) develop a physiologically and psychophysically based model of low-level human visual processing (a key component of which are local frequency coding mechanisms); (2) develop image models and image-processing methods based upon local frequency coding; (3) develop algorithms for performing certain complex visual tasks based upon local frequency representations, (4) develop models of human performance in certain complex tasks based upon our understanding of low-level processing; and (5) develop a computational testbed for implementing, evaluating and visualizing the proposed models and algorithms, using a massively parallel computer. Progress has been substantial on all aims. The highlights include the following: (1) completion of a number of psychophysical and physiological experiments revealing new, systematic and exciting properties of the primate (human and monkey) visual system; (2) further development of image models that can accurately represent the local frequency structure in complex images; (3) near completion in the construction of the Texas Active Vision Testbed; (4) development and testing of several new computer vision algorithms dealing with shape-from-texture, shape-from-stereo, and depth-from-focus; (5) implementation and evaluation of several new models of human visual performance; and (6) evaluation, purchase and installation of a MasPar parallel computer.

  1. Analysis of vibrational-translational energy transfer using the direct simulation Monte Carlo method

    NASA Technical Reports Server (NTRS)

    Boyd, Iain D.

    1991-01-01

    A new model is proposed for energy transfer between the vibrational and translational modes for use in the direct simulation Monte Carlo method (DSMC). The model modifies the Landau-Teller theory for a harmonic oscillator and the rate transition is related to an experimental correlation for the vibrational relaxation time. Assessment of the model is made with respect to three different computations: relaxation in a heat bath, a one-dimensional shock wave, and hypersonic flow over a two-dimensional wedge. These studies verify that the model achieves detailed balance, and excellent agreement with experimental data is obtained in the shock wave calculation. The wedge flow computation reveals that the usual phenomenological method for simulating vibrational nonequilibrium in the DSMC technique predicts much higher vibrational temperatures in the wake region.
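
    The heat-bath relaxation used to assess the model follows the Landau-Teller form dE_v/dt = (E_v_eq(T) - E_v)/tau_v, with the harmonic-oscillator equilibrium energy. A small Python sketch with illustrative (N2-like) parameters:

    ```python
    import numpy as np

    # Landau-Teller relaxation of vibrational energy toward equilibrium in a heat bath:
    #   dE_v/dt = (E_v_eq(T) - E_v) / tau_v
    # with the harmonic-oscillator equilibrium energy. Parameters are illustrative.
    theta_v = 3371.0            # characteristic vibrational temperature (K), N2-like
    T_bath = 8000.0             # translational temperature of the bath (K)
    tau_v = 1.0e-6              # vibrational relaxation time (s), illustrative

    def E_eq(T):                # mean vibrational energy per molecule, in units of k_B (K)
        return theta_v / np.expm1(theta_v / T)

    dt, n_steps = 1.0e-8, 500
    E_v = E_eq(300.0)           # start cold
    for _ in range(n_steps):
        E_v += dt * (E_eq(T_bath) - E_v) / tau_v

    T_v = theta_v / np.log(1.0 + theta_v / E_v)   # invert E_eq for vibrational temperature
    print(f"vibrational temperature after {n_steps*dt:.1e} s: {T_v:.0f} K")
    ```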

  2. Conceptual model of iCAL4LA: Proposing the components using comparative analysis

    NASA Astrophysics Data System (ADS)

    Ahmad, Siti Zulaiha; Mutalib, Ariffin Abdul

    2016-08-01

    This paper discusses an ongoing study that begins the process of determining the common components for a conceptual model of interactive computer-assisted learning specifically designed for low-achieving children. This group of children needs specific learning support that can be used as an alternative learning material in their learning environment. In order to develop the conceptual model, this study extracts the common components from 15 strongly justified computer-assisted learning studies. A comparative analysis was conducted to determine the most appropriate components, using a classification of specific indications to prioritize applicability. The results of the extraction process reveal 17 common components for consideration. Based on scientific justifications, 16 of them were then selected as the proposed components for the model.

  3. Dissecting Embryonic Stem Cell Self-Renewal and Differentiation Commitment from Quantitative Models.

    PubMed

    Hu, Rong; Dai, Xianhua; Dai, Zhiming; Xiang, Qian; Cai, Yanning

    2016-10-01

    To quantitatively model embryonic stem cell (ESC) self-renewal and differentiation by computational approaches, we developed a unified mathematical model for the gene expression involved in cell fate choices. Our quantitative model comprised ESC master regulators and lineage-specific pivotal genes. It took the factors of multiple pathways as input and computed expression as a function of intrinsic transcription factors, extrinsic cues, epigenetic modifications, and antagonism between ESC master regulators and lineage-specific pivotal genes. In the model, differential equations for the expression of the genes involved in cell fate choices were established from the regulatory relationships, according to the transcription and degradation rates. We applied this model to murine ESC self-renewal and differentiation commitment and found that it reproduced the expression patterns with good accuracy. Our model analysis revealed that the murine ESC state is an attractor in culture and that differentiation is predominantly caused by antagonism between ESC master regulators and lineage-specific pivotal genes. Moreover, antagonism among lineages plays a critical role in lineage reprogramming. Our results also uncovered that the ordered alteration of ESC master regulator expression over time has a central role in determining ESC differentiation fates. Our computational framework is generally applicable to most cell-type maintenance and lineage reprogramming problems.
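
    The production-minus-degradation structure with mutual antagonism described here can be illustrated by a minimal two-gene toggle ODE; the genes, parameters, and inputs below are illustrative stand-ins for the full model:

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Minimal sketch of the modeling style described: an ESC master regulator (x) and a
    # lineage-specific gene (y) follow production-minus-degradation ODEs with mutual
    # antagonism (a two-gene toggle). Parameters and inputs are illustrative.
    k, n, d = 1.0, 2, 0.8        # max production, Hill coefficient, degradation rate
    s_lif, s_diff = 0.4, 0.0     # extrinsic cues: self-renewal vs differentiation signals

    def rhs(t, z):
        x, y = z
        dx = s_lif + k / (1.0 + y**n) - d * x    # master regulator repressed by lineage gene
        dy = s_diff + k / (1.0 + x**n) - d * y   # lineage gene repressed by master regulator
        return [dx, dy]

    sol = solve_ivp(rhs, (0, 50), [1.2, 0.2], rtol=1e-8)
    x_end, y_end = sol.y[0, -1], sol.y[1, -1]
    print(f"steady state: master = {x_end:.2f}, lineage = {y_end:.2f}")
    # with s_lif > 0 the ESC-like state (high x, low y) behaves as an attractor
    ```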

  4. Large-scale scour of the sea floor and the effect of natural armouring processes, land reclamation Maasvlakte 2, port of Rotterdam

    USGS Publications Warehouse

    Boer, S.; Elias, E.; Aarninkhof, S.; Roelvink, D.; Vellinga, T.

    2007-01-01

    Morphological model computations based on uniform (non-graded) sediment revealed an unrealistically strong scour of the sea floor in the immediate vicinity to the west of Maasvlakte 2. By means of a state-of-the-art graded sediment transport model, the effect of natural armouring and sorting of bed material on the scour process has been examined. Sensitivity computations confirm that the development of the scour hole is strongly reduced by the incorporation of armouring processes, suggesting an approximately 30% decrease in the erosion area below the -20 m depth contour. © 2007 ASCE.

  5. Evaluation of synthetic linear motor-molecule actuation energetics

    PubMed Central

    Brough, Branden; Northrop, Brian H.; Schmidt, Jacob J.; Tseng, Hsian-Rong; Houk, Kendall N.; Stoddart, J. Fraser; Ho, Chih-Ming

    2006-01-01

    By applying atomic force microscope (AFM)-based force spectroscopy together with computational modeling in the form of molecular force-field simulations, we have quantitatively determined the actuation energetics of a synthetic motor-molecule. This multidisciplinary approach was performed on specifically designed, bistable, redox-controllable [2]rotaxanes to probe the steric and electrostatic interactions that dictate their mechanical switching at the single-molecule level. The fusion of experimental force spectroscopy and theoretical computational modeling has revealed that the repulsive electrostatic interaction, which is responsible for the molecular actuation, is as high as 65 kcal·mol^-1, a result that is supported by ab initio calculations. PMID:16735470

  6. Multi-dimensional rheology-based two-phase model for sediment transport and applications to sheet flow and pipeline scour

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Cheng-Hsien; Department of Water Resources and Environmental Engineering, Tamkang University, New Taipei City 25137, Taiwan; Low, Ying Min, E-mail: ceelowym@nus.edu.sg

    2016-05-15

    Sediment transport is fundamentally a two-phase phenomenon involving fluid and sediments; however, many existing numerical models are one-phase approaches, which are unable to capture the complex fluid-particle and inter-particle interactions. In the last decade, two-phase models have gained traction; however, there are still many limitations in these models. For example, several existing two-phase models are confined to one-dimensional problems; in addition, the existing two-dimensional models simulate only the region outside the sand bed. This paper develops a new three-dimensional two-phase model for simulating sediment transport in the sheet flow condition, incorporating recently published rheological characteristics of sediments. The enduring-contact, inertial, and fluid viscosity effects are considered in determining sediment pressure and stresses, enabling the model to be applicable to a wide range of particle Reynolds numbers. A k-ε turbulence model is adopted to compute the Reynolds stresses. In addition, a novel numerical scheme is proposed, avoiding numerical instability caused by high sediment concentration and allowing the sediment dynamics to be computed both within and outside the sand bed. The present model is applied to two classical problems, namely, sheet flow and scour under a pipeline, with favorable results. For sheet flow, the computed velocity is consistent with measured data reported in the literature. For pipeline scour, the computed scour rate beneath the pipeline agrees with previous experimental observations. However, the present model is unable to capture vortex shedding; consequently, the sediment deposition behind the pipeline is overestimated. Sensitivity analyses reveal that the model parameters associated with turbulence have a strong influence on the computed results.

  7. Dynamical analysis of Parkinsonian state emulated by hybrid Izhikevich neuron models

    NASA Astrophysics Data System (ADS)

    Liu, Chen; Wang, Jiang; Yu, Haitao; Deng, Bin; Wei, Xile; Li, Huiyan; Loparo, Kenneth A.; Fietkiewicz, Chris

    2015-11-01

    Computational models play a significant role in exploring novel theories to complement the findings of physiological experiments. Various computational models have been developed to reveal the mechanisms underlying brain functions. In particular, in the development of therapies to modulate behavioral and pathological abnormalities, computational models provide the basic foundations for exhibiting transitions between physiological and pathological conditions. Considering the significant roles of the intrinsic properties of the globus pallidus and of the coupling connections between neurons in determining the firing patterns and dynamical activities of the basal ganglia neuronal network, we propose the hypothesis that pathological behaviors under the Parkinsonian state may originate from the combined effects of the intrinsic properties of globus pallidus neurons and the synaptic conductances in the whole neuronal network. In order to establish a computationally efficient network model, the hybrid Izhikevich neuron model is used due to its capacity to capture the dynamical characteristics of biological neuronal activities. Detailed analysis of the individual Izhikevich neuron model assists in understanding the roles of the model parameters, which then facilitates the establishment of the basal ganglia-thalamic network model and contributes to further exploration of the mechanisms underlying the Parkinsonian state. Simulation results show that the hybrid Izhikevich neuron model is capable of capturing many of the dynamical properties of the basal ganglia-thalamic neuronal network, such as variations of the firing rates and the emergence of synchronous oscillations under the Parkinsonian condition, despite the simplicity of the two-dimensional neuronal model. This suggests that the computationally efficient hybrid Izhikevich neuron model can be used to explore normal and abnormal basal ganglia functions. In particular, it provides an efficient way of emulating large-scale neuronal networks and potentially contributes to the development of improved therapies for neurological disorders such as Parkinson's disease.
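
    The Izhikevich model that serves as the network building block is itself compact: two differential equations plus a reset rule. A minimal single-neuron simulation in Python follows; the parameter set and input current are illustrative, not the paper's tuned values:

    ```python
    import numpy as np

    # Izhikevich neuron: two ODEs plus a reset rule.
    #   v' = 0.04 v^2 + 5 v + 140 - u + I
    #   u' = a (b v - u);  if v >= 30 mV: v <- c, u <- u + d
    # (a, b, c, d) below give regular spiking; the paper tunes such parameters (plus
    # synaptic conductances) to reproduce basal ganglia firing patterns.
    a, b, c, d = 0.02, 0.2, -65.0, 8.0
    dt, t_end, I = 0.1, 1000.0, 10.0          # ms, ms, input current (illustrative)

    v, u = -65.0, b * -65.0
    spike_times = []
    for step in range(int(t_end / dt)):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:                          # spike: record and reset
            spike_times.append(step * dt)
            v, u = c, u + d

    print(f"firing rate: {1000.0 * len(spike_times) / t_end:.1f} Hz")
    ```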

  8. Influence of neural adaptation on dynamics and equilibrium state of neural activities in a ring neural network

    NASA Astrophysics Data System (ADS)

    Takiyama, Ken

    2017-12-01

    How neural adaptation affects neural information processing (i.e. the dynamics and equilibrium state of neural activities) is a central question in computational neuroscience. In my previous works, I analytically clarified the dynamics and equilibrium state of neural activities in a ring-type neural network model that is widely used to model the visual cortex, motor cortex, and several other brain regions. The neural dynamics and the equilibrium state in the neural network model corresponded to a Bayesian computation and statistically optimal multiple information integration, respectively, under a biologically inspired condition. These results were revealed in an analytically tractable manner; however, adaptation effects were not considered. Here, I analytically reveal how the dynamics and equilibrium state of neural activities in a ring neural network are influenced by spike-frequency adaptation (SFA). SFA is an adaptation that causes gradual inhibition of neural activity when a sustained stimulus is applied, and the strength of this inhibition depends on neural activities. I reveal that SFA plays three roles: (1) SFA amplifies the influence of external input in neural dynamics; (2) SFA allows the history of the external input to affect neural dynamics; and (3) the equilibrium state corresponds to the statistically optimal integration of multiple information sources under biologically inspired conditions, independent of the existence of SFA.
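
    The qualitative effect of roles (1) and (2) can be illustrated with a toy rate model on a ring: each unit integrates recurrent and external input while an adaptation variable tracks, and subtracts from, its own activity. This is an illustrative sketch under simplified assumptions, not the analytically tractable model of the paper:

      import numpy as np

      N = 64                                   # units on the ring
      theta = np.linspace(-np.pi, np.pi, N, endpoint=False)
      W = (0.9 * np.cos(theta[:, None] - theta[None, :]) + 0.1) / N

      def simulate(g_a, T=300.0, dt=0.1, tau=10.0, tau_a=100.0):
          """tau dr/dt = -r + relu(W r + I_ext - g_a a); tau_a da/dt = -a + r."""
          r = np.zeros(N)
          a = np.zeros(N)
          I_ext = np.exp(np.cos(theta))        # bump-shaped external input
          for _ in range(int(T / dt)):
              drive = W @ r + I_ext - g_a * a
              r += dt / tau * (-r + np.maximum(drive, 0.0))
              a += dt / tau_a * (-a + r)       # spike-frequency adaptation
          return r

      print("peak rate with SFA:", simulate(g_a=0.5).max())
      print("peak rate without SFA:", simulate(g_a=0.0).max())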

  9. Models of protein–ligand crystal structures: trust, but verify

    PubMed Central

    Deller, Marc C.

    2015-01-01

    X-ray crystallography provides the most accurate models of protein–ligand structures. These models serve as the foundation of many computational methods including structure prediction, molecular modelling, and structure-based drug design. The success of these computational methods ultimately depends on the quality of the underlying protein–ligand models. X-ray crystallography offers the unparalleled advantage of a clear mathematical formalism relating the experimental data to the protein–ligand model. In the case of X-ray crystallography, the primary experimental evidence is the electron density of the molecules forming the crystal. The first step in the generation of an accurate and precise crystallographic model is the interpretation of the electron density of the crystal, typically carried out by construction of an atomic model. The atomic model must then be validated for fit to the experimental electron density and also for agreement with prior expectations of stereochemistry. Stringent validation of protein–ligand models has become possible as a result of the mandatory deposition of primary diffraction data, and many computational tools are now available to aid in the validation process. Validation of protein–ligand complexes has revealed some instances of overenthusiastic interpretation of ligand density. Fundamental concepts and metrics of protein–ligand quality validation are discussed and we highlight software tools to assist in this process. It is essential that end users select high quality protein–ligand models for their computational and biological studies, and we provide an overview of how this can be achieved. PMID:25665575

  10. Models of protein-ligand crystal structures: trust, but verify.

    PubMed

    Deller, Marc C; Rupp, Bernhard

    2015-09-01

    X-ray crystallography provides the most accurate models of protein-ligand structures. These models serve as the foundation of many computational methods including structure prediction, molecular modelling, and structure-based drug design. The success of these computational methods ultimately depends on the quality of the underlying protein-ligand models. X-ray crystallography offers the unparalleled advantage of a clear mathematical formalism relating the experimental data to the protein-ligand model. In the case of X-ray crystallography, the primary experimental evidence is the electron density of the molecules forming the crystal. The first step in the generation of an accurate and precise crystallographic model is the interpretation of the electron density of the crystal, typically carried out by construction of an atomic model. The atomic model must then be validated for fit to the experimental electron density and also for agreement with prior expectations of stereochemistry. Stringent validation of protein-ligand models has become possible as a result of the mandatory deposition of primary diffraction data, and many computational tools are now available to aid in the validation process. Validation of protein-ligand complexes has revealed some instances of overenthusiastic interpretation of ligand density. Fundamental concepts and metrics of protein-ligand quality validation are discussed and we highlight software tools to assist in this process. It is essential that end users select high quality protein-ligand models for their computational and biological studies, and we provide an overview of how this can be achieved.

  11. Combined changes in Wnt signaling response and contact inhibition induce altered proliferation in radiation-treated intestinal crypts

    PubMed Central

    Dunn, S.-J.; Osborne, J. M.; Appleton, P. L.; Näthke, I.

    2016-01-01

    Curative intervention is possible if colorectal cancer is identified early, underscoring the need to detect the earliest stages of malignant transformation. A candidate biomarker is the expanded proliferative zone observed in crypts before adenoma formation, also found in irradiated crypts. However, the underlying driving mechanism for this is not known. Wnt signaling is a key regulator of proliferation, and elevated Wnt signaling is implicated in cancer. Nonetheless, how cells differentiate Wnt signals of varying strengths is not understood. We use computational modeling to compare alternative hypotheses about how Wnt signaling and contact inhibition affect proliferation. Direct comparison of simulations with published experimental data revealed that the model that best reproduces proliferation patterns in normal crypts stipulates that proliferative fate and cell cycle duration are set by the Wnt stimulus experienced at birth. The model also showed that the broadened proliferation zone induced by tumorigenic radiation can be attributed to cells responding to lower Wnt concentrations and dividing at smaller volumes. Application of the model to data from irradiated crypts after an extended recovery period permitted deductions about the extent of the initial insult. Application of computational modeling to experimental data revealed how mechanisms that control cell dynamics are altered at the earliest stages of carcinogenesis. PMID:27053661

  12. Predicting the optimal geometry of microneedles and their array for dermal vaccination using a computational model.

    PubMed

    Römgens, Anne M; Bader, Dan L; Bouwstra, Joke A; Oomens, Cees W J

    2016-11-01

    Microneedle arrays have been developed to deliver a range of biomolecules including vaccines into the skin. These microneedles have been designed with a wide range of geometries and arrangements within an array. However, little is known about the effect of the geometry on the potency of the induced immune response. The aim of this study was to develop a computational model to predict the optimal design of the microneedles and their arrangement within an array. The three-dimensional finite element model described the diffusion and kinetics in the skin following antigen delivery with a microneedle array. The results revealed an optimum distance between microneedles based on the number of activated antigen presenting cells, which was assumed to be related to the induced immune response. This optimum depends on the delivered dose. In addition, the microneedle length affects the number of cells that will be involved in either the epidermis or dermis. By contrast, the radius at the base of the microneedle and release rate only minimally influenced the number of cells that were activated. The model revealed the importance of various geometric parameters to enhance the induced immune response. The model can be developed further to determine the optimal design of an array by adjusting its various parameters to a specific situation.
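
    The transport step in such a model is ordinary diffusion of antigen away from the deposition depth. A toy one-dimensional explicit finite-difference sketch conveys the idea (all parameter values are assumed for illustration; the paper's model is a three-dimensional finite element model with cell-activation kinetics, which this does not reproduce):

      import numpy as np

      D = 1e-10                      # antigen diffusivity in skin, m^2/s (assumed)
      dx = 1e-5                      # 10 micron grid spacing
      dt = 0.4 * dx**2 / (2 * D)     # obeys the 1D stability limit dt <= dx^2/(2D)

      depth = np.arange(0.0, 1e-3, dx)            # 1 mm of tissue
      c = np.zeros_like(depth)
      c[np.argmin(np.abs(depth - 3e-4))] = 1.0    # bolus at a 300 micron needle tip

      for _ in range(500):                        # absorbing ends stay at zero
          c[1:-1] += D * dt / dx**2 * (c[2:] - 2 * c[1:-1] + c[:-2])

      spread = np.sqrt(np.sum(c * (depth - 3e-4) ** 2) / c.sum())
      print("concentration spread after", round(500 * dt), "s:",
            round(spread * 1e6), "microns")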

  13. Using Computational Cognitive Modeling to Diagnose Possible Sources of Aviation Error

    NASA Technical Reports Server (NTRS)

    Byrne, M. D.; Kirlik, Alex

    2003-01-01

    We present a computational model of a closed-loop, pilot-aircraft-visual scene-taxiway system created to shed light on possible sources of taxi error. Creating the cognitive aspects of the model using ACT-R required us to conduct studies with subject matter experts to identify experiential adaptations pilots bring to taxiing. Five decision strategies were found, ranging from cognitively intensive but precise to fast and frugal but robust. We provide evidence for the model by comparing its behavior to a NASA Ames Research Center simulation of Chicago O'Hare surface operations. Decision horizons were highly variable; the model selected the most accurate strategy given the time available. Errors in the simulation data occurred most frequently at atypical taxiway geometries or clearance routes, a signature of the use of globally robust heuristics to cope with short decision horizons. These data provided empirical support for the model.

  14. Evaluation of the capability of local helioseismology to discern between monolithic and spaghetti sunspot models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Felipe, T.; Crouch, A. D.; Birch, A. C., E-mail: tobias@nwra.com

    2014-06-20

    The helioseismic properties of the wave scattering generated by monolithic and spaghetti sunspots are analyzed by means of numerical simulations. In these computations, an incident f- or p1-mode travels through the sunspot model, which produces absorption and phase shift of the waves. The scattering is studied by inspecting the wavefield, computing travel-time shifts, and performing Fourier-Hankel analysis. The comparison between the results obtained for both sunspot models reveals that the differences in the absorption coefficient can be detected above the noise level. The spaghetti model produces a steep increase of the phase shift with the degree of the mode at short wavelengths, while mode mixing is more efficient for the monolithic model. These results provide a clue for what to look for in solar observations to discern the constitution of sunspots between the proposed monolithic and spaghetti models.

  15. Revealing Neurocomputational Mechanisms of Reinforcement Learning and Decision-Making With the hBayesDM Package

    PubMed Central

    Ahn, Woo-Young; Haines, Nathaniel; Zhang, Lei

    2017-01-01

    Reinforcement learning and decision-making (RLDM) provide a quantitative framework and computational theories with which we can disentangle psychiatric conditions into the basic dimensions of neurocognitive functioning. RLDM offer a novel approach to assessing and potentially diagnosing psychiatric patients, and there is growing enthusiasm for both RLDM and computational psychiatry among clinical researchers. Such a framework can also provide insights into the brain substrates of particular RLDM processes, as exemplified by model-based analysis of data from functional magnetic resonance imaging (fMRI) or electroencephalography (EEG). However, researchers often find the approach too technical and have difficulty adopting it for their research. Thus, a critical need remains to develop a user-friendly tool for the wide dissemination of computational psychiatric methods. We introduce an R package called hBayesDM (hierarchical Bayesian modeling of Decision-Making tasks), which offers computational modeling of an array of RLDM tasks and social exchange games. The hBayesDM package offers state-of-the-art hierarchical Bayesian modeling, in which both individual and group parameters (i.e., posterior distributions) are estimated simultaneously in a mutually constraining fashion. At the same time, the package is extremely user-friendly: users can perform computational modeling, output visualization, and Bayesian model comparisons, each with a single line of coding. Users can also extract the trial-by-trial latent variables (e.g., prediction errors) required for model-based fMRI/EEG. With the hBayesDM package, we anticipate that anyone with minimal knowledge of programming can take advantage of cutting-edge computational-modeling approaches to investigate the underlying processes of and interactions between multiple decision-making (e.g., goal-directed, habitual, and Pavlovian) systems. In this way, we expect that the hBayesDM package will contribute to the dissemination of advanced modeling approaches and enable a wide range of researchers to easily perform computational psychiatric research within different populations. PMID:29601060

  16. Learning-based computing techniques in geoid modeling for precise height transformation

    NASA Astrophysics Data System (ADS)

    Erol, B.; Erol, S.

    2013-03-01

    Precise determination of the local geoid is of particular importance for establishing height control in geodetic GNSS applications, since the classical leveling technique is too laborious. A geoid model can be accurately obtained from properly distributed benchmarks having both GNSS and leveling observations, using an appropriate computing algorithm. Besides the classical multivariable polynomial regression equations (MPRE), this study evaluates learning-based computing algorithms: artificial neural networks (ANNs), the adaptive network-based fuzzy inference system (ANFIS), and especially the wavelet neural network (WNN) approach in geoid surface approximation. These algorithms were developed in parallel with advances in computer technologies and have recently been used for solving complex nonlinear problems in many applications. However, they are rather new to the problem of precisely modeling the Earth's gravity field. In the scope of the study, these methods were applied to Istanbul GPS Triangulation Network data. The performances of the methods were assessed using the validation results of the geoid models at the observation points. In conclusion, ANFIS and WNN revealed higher prediction accuracies than the ANN and MPRE methods. Besides their prediction capabilities, these methods are also compared and discussed from a practical point of view in the conclusions.
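
    The MPRE baseline referred to above amounts to fitting the geoid undulation N = h(GNSS) − H(leveling) as a low-order polynomial surface in latitude and longitude by least squares. A minimal sketch on synthetic benchmarks (the quadratic order and all numerical values are illustrative):

      import numpy as np

      rng = np.random.default_rng(0)
      lat = rng.uniform(40.8, 41.3, 200)              # synthetic benchmark positions
      lon = rng.uniform(28.5, 29.5, 200)
      N_geoid = 36.0 + 0.8 * lat - 0.3 * lon + 0.05 * lat * lon \
                + rng.normal(0.0, 0.02, 200)          # synthetic undulations, m

      def design(lat, lon):
          """Second-order bivariate polynomial design matrix."""
          return np.column_stack([np.ones_like(lat), lat, lon,
                                  lat**2, lon**2, lat * lon])

      coef, *_ = np.linalg.lstsq(design(lat, lon), N_geoid, rcond=None)
      resid = design(lat, lon) @ coef - N_geoid
      print("fit RMSE:", round(float(np.sqrt(np.mean(resid**2))) * 100, 1), "cm")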

  17. The Antarctic Ice.

    ERIC Educational Resources Information Center

    Radok, Uwe

    1985-01-01

    The International Antarctic Glaciological Project has collected information on the East Antarctic ice sheet since 1969. Analysis of ice cores revealed climatic history, and radar soundings helped map bedrock of the continent. Computer models of the ice sheet and its changes over time will aid in predicting the future. (DH)

  18. A combined three-dimensional in vitro–in silico approach to modelling bubble dynamics in decompression sickness

    PubMed Central

    Stride, E.; Cheema, U.

    2017-01-01

    The growth of bubbles within the body is widely believed to be the cause of decompression sickness (DCS). Dive computer algorithms that aim to prevent DCS by mathematically modelling bubble dynamics and tissue gas kinetics are challenging to validate. This is due to a lack of understanding regarding the mechanism(s) leading from bubble formation to DCS. In this work, a biomimetic in vitro tissue phantom and a three-dimensional computational model, comprising a hyperelastic strain-energy density function to model tissue elasticity, were combined to investigate key areas of bubble dynamics. A sensitivity analysis indicated that the diffusion coefficient was the most influential material parameter. Comparison of computational and experimental data revealed the diffusion coefficient at the bubble surface to be 30 times smaller than that in the bulk tissue and dependent on the bubble's surface area. The initial size, size distribution, and proximity of bubbles within the tissue phantom were also shown to influence their subsequent dynamics, highlighting the importance of modelling bubble nucleation and bubble–bubble interactions in order to develop more accurate dive algorithms. PMID:29263127

  19. Selective updating of working memory content modulates meso-cortico-striatal activity.

    PubMed

    Murty, Vishnu P; Sambataro, Fabio; Radulescu, Eugenia; Altamura, Mario; Iudicello, Jennifer; Zoltick, Bradley; Weinberger, Daniel R; Goldberg, Terry E; Mattay, Venkata S

    2011-08-01

    Accumulating evidence from non-human primates and computational modeling suggests that dopaminergic signals arising from the midbrain (substantia nigra/ventral tegmental area) mediate striatal gating of the prefrontal cortex during the selective updating of working memory. Using event-related functional magnetic resonance imaging, we explored the neural mechanisms underlying the selective updating of information stored in working memory. Participants were scanned during a novel working memory task that parses the neurophysiology underlying working memory maintenance, overwriting, and selective updating. Analyses revealed a functionally coupled network consisting of a midbrain region encompassing the substantia nigra/ventral tegmental area, caudate, and dorsolateral prefrontal cortex that was selectively engaged during working memory updating compared to the overwriting and maintenance of working memory content. Further analysis revealed differential midbrain-dorsolateral prefrontal interactions during selective updating between low-performing and high-performing individuals. These findings highlight the role of this meso-cortico-striatal circuitry during the selective updating of working memory in humans, which complements previous research in behavioral neuroscience and computational modeling. Published by Elsevier Inc.

  20. Relationship of the interplanetary electric field to the high-latitude ionospheric electric field and currents Observations and model simulation

    NASA Technical Reports Server (NTRS)

    Clauer, C. R.; Banks, P. M.

    1986-01-01

    The electrical coupling between the solar wind, magnetosphere, and ionosphere is studied. The coupling is analyzed using observations of high-latitude ion convection measured by the Sondre Stromfjord radar in Greenland and a computer simulation. The computer simulation calculates the ionospheric electric potential distribution for a given configuration of field-aligned currents and conductivity distribution. The technique for measuring F-region ion velocities at high time resolution over a large range of latitudes is described. The effects of variations in the currents on ionospheric plasma convection are examined using a model of field-aligned currents linking the solar wind with the dayside, high-latitude ionosphere. The data reveal that high-latitude ionospheric convection patterns, electric fields, and field-aligned currents depend on IMF orientation; it is observed that the electric field, which drives the F-region plasma convection, responds within about 14 minutes to IMF variations at the magnetopause. Comparisons of the simulated plasma convection with the ion velocity measurements reveal good correlation with the data.

  1. Sentence-Based Attentional Mechanisms in Word Learning: Evidence from a Computational Model

    PubMed Central

    Alishahi, Afra; Fazly, Afsaneh; Koehne, Judith; Crocker, Matthew W.

    2012-01-01

    When looking for the referents of novel nouns, adults and young children are sensitive to cross-situational statistics (Yu and Smith, 2007; Smith and Yu, 2008). In addition, the linguistic context that a word appears in has been shown to act as a powerful attention mechanism for guiding sentence processing and word learning (Landau and Gleitman, 1985; Altmann and Kamide, 1999; Kako and Trueswell, 2000). Koehne and Crocker (2010, 2011) investigate the interaction between cross-situational evidence and guidance from the sentential context in an adult language learning scenario. Their studies reveal that these learning mechanisms interact in a complex manner: they can be used in a complementary way when context helps reduce referential uncertainty; they influence word learning about equally strongly when cross-situational and contextual evidence are in conflict; and contextual cues block aspects of cross-situational learning when both mechanisms are independently applicable. To address this complex pattern of findings, we present a probabilistic computational model of word learning which extends a previous cross-situational model (Fazly et al., 2010) with an attention mechanism based on sentential cues. Our model uses a framework that seamlessly combines the two sources of evidence in order to study their emerging pattern of interaction during the process of word learning. Simulations of the experiments of Koehne and Crocker (2010, 2011) reveal an overall pattern of results that is in line with their findings. Importantly, we demonstrate that our model does not need to explicitly assign priority to either source of evidence in order to produce these results: learning patterns emerge as a result of a probabilistic interaction between the two cue types. Moreover, using a computational model allows us to examine the developmental trajectory of the differential roles of cross-situational and sentential cues in word learning. PMID:22783211
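
    The cross-situational core of such a model can be sketched compactly: word-referent association scores accumulate across scenes, and a word's meaning probabilities are the normalized scores. A toy version (the sentential attention mechanism that the full model adds is omitted here):

      from collections import defaultdict

      assoc = defaultdict(float)        # (word, referent) -> accumulated evidence

      def observe(words, referents):
          """One scene: every word co-occurs with every visible referent."""
          for w in words:
              for r in referents:
                  assoc[(w, r)] += 1.0 / len(referents)   # split the credit

      def meaning(word, all_referents):
          scores = {r: assoc[(word, r)] for r in all_referents}
          total = sum(scores.values()) or 1.0
          return {r: s / total for r, s in scores.items()}

      for words, refs in [({"ball", "dog"}, {"BALL", "DOG"}),
                          ({"ball", "cat"}, {"BALL", "CAT"}),
                          ({"dog"}, {"DOG"})]:
          observe(words, refs)

      print(meaning("ball", {"BALL", "DOG", "CAT"}))      # BALL should dominate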

  2. Methods of comparing associative models and an application to retrospective revaluation.

    PubMed

    Witnauer, James E; Hutchings, Ryan; Miller, Ralph R

    2017-11-01

    Contemporary theories of associative learning are increasingly complex, which necessitates the use of computational methods to reveal predictions of these models. We argue that comparisons across multiple models in terms of goodness of fit to empirical data from experiments often reveal more about the actual mechanisms of learning and behavior than do simulations of only a single model. Such comparisons are best made when the values of free parameters are discovered through some optimization procedure based on the specific data being fit (e.g., hill climbing), so that the comparisons hinge on the psychological mechanisms assumed by each model rather than being biased by using parameters that differ in quality across models with respect to the data being fit. Statistics like the Bayesian information criterion facilitate comparisons among models that have different numbers of free parameters. These issues are examined using retrospective revaluation data. Copyright © 2017 Elsevier B.V. All rights reserved.
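
    Of the statistics mentioned, the Bayesian information criterion is the simplest to state: it penalizes a model's maximized log-likelihood by its parameter count, BIC = k ln n − 2 ln L̂, with lower values preferred. A minimal sketch comparing a parsimonious and an overparameterized regression on synthetic data (Gaussian residuals assumed; the noise variance is left out of k for brevity):

      import numpy as np

      def bic(y, y_hat, k):
          """BIC = k ln(n) - 2 ln(L_max) under i.i.d. Gaussian residuals."""
          n = len(y)
          rss = float(np.sum((y - y_hat) ** 2))
          log_lik = -0.5 * n * (np.log(2 * np.pi * rss / n) + 1.0)
          return k * np.log(n) - 2.0 * log_lik

      rng = np.random.default_rng(1)
      x = np.linspace(0.0, 1.0, 50)
      y = 2.0 * x + rng.normal(0.0, 0.1, 50)      # the true model is linear

      for degree in (1, 5):                       # parsimonious vs. overfitted
          fit = np.polyval(np.polyfit(x, y, degree), x)
          print("degree", degree, "BIC:", round(bic(y, fit, k=degree + 1), 1))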

  3. Combining Feature Selection and Integration—A Neural Model for MT Motion Selectivity

    PubMed Central

    Beck, Cornelia; Neumann, Heiko

    2011-01-01

    Background The computation of pattern motion in visual area MT based on motion input from area V1 has been investigated in many experiments and models attempting to replicate the main mechanisms. Two different core conceptual approaches were developed to explain the findings. In integrationist models the key mechanism to achieve pattern selectivity is the nonlinear integration of V1 motion activity. In contrast, selectionist models focus on the motion computation at positions with 2D features. Methodology/Principal Findings Recent experiments revealed that neither of the two concepts alone is sufficient to explain all experimental data and that most of the existing models cannot account for the complex behaviour found. MT pattern selectivity changes over time for stimuli like type II plaids from vector average to the direction computed with an intersection of constraint rule or by feature tracking. Also, the spatial arrangement of the stimulus within the receptive field of a MT cell plays a crucial role. We propose a recurrent neural model showing how feature integration and selection can be combined into one common architecture to explain these findings. The key features of the model are the computation of 1D and 2D motion in model area V1 subpopulations that are integrated in model MT cells using feedforward and feedback processing. Our results are also in line with findings concerning the solution of the aperture problem. Conclusions/Significance We propose a new neural model for MT pattern computation and motion disambiguation that is based on a combination of feature selection and integration. The model can explain a range of recent neurophysiological findings including temporally dynamic behaviour. PMID:21814543

  4. Computational Investigation of Fluidic Counterflow Thrust Vectoring

    NASA Technical Reports Server (NTRS)

    Hunter, Craig A.; Deere, Karen A.

    1999-01-01

    A computational study of fluidic counterflow thrust vectoring has been conducted. Two-dimensional numerical simulations were run using the computational fluid dynamics code PAB3D with two-equation turbulence closure and linear Reynolds stress modeling. For validation, computational results were compared to experimental data obtained at the NASA Langley Jet Exit Test Facility. In general, computational results were in good agreement with experimental performance data, indicating that efficient thrust vectoring can be obtained with low secondary flow requirements (less than 1% of the primary flow). An examination of the computational flowfield has revealed new details about the generation of a countercurrent shear layer, its relation to secondary suction, and its role in thrust vectoring. In addition to providing new information about the physics of counterflow thrust vectoring, this work appears to be the first documented attempt to simulate the counterflow thrust vectoring problem using computational fluid dynamics.

  5. Infection Threshold for an Epidemic Model in Site and Bond Percolation Worlds

    NASA Astrophysics Data System (ADS)

    Sakisaka, Yukio; Yoshimura, Jin; Takeuchi, Yasuhiro; Sugiura, Koji; Tainaka, Kei-ichi

    2010-02-01

    We investigate an epidemic model on a square lattice with two protection treatments: prevention and quarantine. To explore the effects of both treatments, we apply the site and bond percolations. Computer simulations reveal that the threshold between endemic and disease-free phases can be represented by a single scaling law. The mean-field theory qualitatively predicts such infection dynamics and the scaling law.
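
    A toy version of this setup is easy to state: infection spreads on a square lattice where prevention removes sites (protected individuals) and quarantine removes bonds (blocked contacts). A minimal Monte Carlo sketch under those assumptions (parameter values illustrative):

      import numpy as np

      rng = np.random.default_rng(2)

      def epidemic_size(L=50, p_site=0.9, p_bond=0.5):
          """Fraction infected when infection crosses open bonds between open sites."""
          occupied = rng.random((L, L)) < p_site      # site percolation: prevention
          infected = np.zeros((L, L), dtype=bool)
          infected[L // 2, L // 2] = occupied[L // 2, L // 2]
          frontier = [(L // 2, L // 2)] if infected.any() else []
          while frontier:
              x, y = frontier.pop()
              for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                  nx, ny = x + dx, y + dy
                  if (0 <= nx < L and 0 <= ny < L and occupied[nx, ny]
                          and not infected[nx, ny] and rng.random() < p_bond):
                      infected[nx, ny] = True         # bond percolation: quarantine
                      frontier.append((nx, ny))
          return infected.sum() / L**2

      for p_bond in (0.3, 0.5, 0.7):
          sizes = [epidemic_size(p_bond=p_bond) for _ in range(20)]
          print("p_bond", p_bond, "mean outbreak fraction:", round(np.mean(sizes), 3))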

  6. Transitional circuitry for studying the properties of DNA

    NASA Astrophysics Data System (ADS)

    Trubochkina, N.

    2018-01-01

    The article is devoted to a new view of the structure of DNA as an intellectual scheme possessing the properties of logic and memory. The theory of transitional circuitry, developed by the author for optimal computer circuits, revealed a striking structural similarity between mathematical models of transitional silicon elements (the logic and memory circuits of solid-state transitional circuitry) and atomic models of parts of DNA.

  7. Chess games: a model for RNA based computation.

    PubMed

    Cukras, A R; Faulhammer, D; Lipton, R J; Landweber, L F

    1999-10-01

    Here we develop the theory of RNA computing and a method for solving the 'knight problem' as an instance of a satisfiability (SAT) problem. Using only biological molecules and enzymes as tools, we developed an algorithm for solving the knight problem (3 x 3 chess board) using a 10-bit combinatorial pool and sequential RNase H digestions. The results of preliminary experiments presented here reveal that the protocol recovers far more correct solutions than expected at random, but the persistence of errors still presents the greatest challenge.
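
    The same 3 × 3 knight problem is small enough to enumerate exhaustively in silico, which makes the structure of the molecular algorithm easy to see: each bit pattern of the combinatorial pool encodes a board, and a board is a solution iff no two placed knights attack each other. A sketch:

      from itertools import product

      # knight moves between squares of a 3x3 board, squares indexed 0..8
      MOVES = {s: [] for s in range(9)}
      for s, t in product(range(9), repeat=2):
          dx, dy = abs(s % 3 - t % 3), abs(s // 3 - t // 3)
          if {dx, dy} == {1, 2}:
              MOVES[s].append(t)

      solutions = []
      for bits in product([0, 1], repeat=9):        # the combinatorial "pool"
          placed = [s for s in range(9) if bits[s]]
          if all(t not in placed for s in placed for t in MOVES[s]):
              solutions.append(placed)

      print(len(solutions), "non-attacking knight configurations")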

  8. Cellular Automata and the Humanities.

    ERIC Educational Resources Information Center

    Gallo, Ernest

    1994-01-01

    The use of cellular automata to analyze several pre-Socratic hypotheses about the evolution of the physical world is discussed. These hypotheses combine characteristics of both rigorous and metaphoric language. Since the computer demands explicit instructions for each step in the evolution of the automaton, such models can reveal conceptual…

  9. Optimizing Classroom Acoustics Using Computer Model Studies.

    ERIC Educational Resources Information Center

    Reich, Rebecca; Bradley, John

    1998-01-01

    Investigates conditions relating to the maximum useful-to-detrimental sound ratios present in classrooms and determining the optimum conditions for speech intelligibility. Reveals that speech intelligibility is more strongly influenced by ambient noise levels and that the optimal location for sound absorbing material is on a classroom's upper…

  10. Wheat forecast economics effect study. [value of improved information on crop inventories, production, imports and exports

    NASA Technical Reports Server (NTRS)

    Mehra, R. K.; Rouhani, R.; Jones, S.; Schick, I.

    1980-01-01

    A model to assess the value of improved information regarding the inventories, production, exports, and imports of crops on a worldwide basis is discussed. A previously proposed model is interpreted in a stochastic control setting and the underlying assumptions of the model are revealed. In solving the stochastic optimization problem, the Markov programming approach is much more powerful and exact than the dynamic programming-simulation approach of the original model. The convergence of a dual-variable Markov programming algorithm is shown to be fast and efficient. A computer program for the general multicountry, multiperiod model is developed. As an example, the case of one country and two periods is treated and the results are presented in detail. A comparison with the original model results reveals certain interesting aspects of the algorithms and the dependence of the value of information on the incremental cost function.

  11. A computational model of the respiratory network challenged and optimized by data from optogenetic manipulation of glycinergic neurons.

    PubMed

    Oku, Yoshitaka; Hülsmann, Swen

    2017-04-07

    The topology of the respiratory network in the brainstem has been addressed using different computational models, which help to understand the functional properties of the system. We tested a neural mass model by comparing the result of activation and inhibition of inhibitory neurons in silico with recently published results of optogenetic manipulation of glycinergic neurons [Sherman, et al. (2015) Nat Neurosci 18:408]. The comparison revealed that a five-cell type model consisting of three classes of inhibitory neurons [I-DEC, E-AUG, E-DEC (PI)] and two excitatory populations (pre-I/I) and (I-AUG) neurons can be applied to explain experimental observations made by stimulating or inhibiting inhibitory neurons by light sensitive ion channels. Copyright © 2017 IBRO. Published by Elsevier Ltd. All rights reserved.

  12. Simulating physiological interactions in a hybrid system of mathematical models.

    PubMed

    Kretschmer, Jörn; Haunsberger, Thomas; Drost, Erick; Koch, Edmund; Möller, Knut

    2014-12-01

    Mathematical models can be deployed to simulate physiological processes of the human organism. Exploiting these simulations, reactions of a patient to changes in the therapy regime can be predicted. Based on these predictions, medical decision support systems (MDSS) can help in optimizing medical therapy. An MDSS designed to support mechanical ventilation in critically ill patients should not only consider respiratory mechanics but also other systems of the human organism such as gas exchange or blood circulation. A specially designed framework allows combining three model families (respiratory mechanics, cardiovascular dynamics, and gas exchange) to predict the outcome of a therapy setting. Elements of the three model families are dynamically combined to form a complex model system with interacting submodels. Tests revealed that complex model combinations are not computationally feasible. In most patients, cardiovascular physiology could be simulated by simplified models, decreasing computational costs. Thus, a simplified cardiovascular model that is able to reproduce basic physiological behavior is introduced. This model consists purely of difference equations and does not require special algorithms to be solved numerically. The model is based on a beat-to-beat model which has been extended to react to the intrathoracic pressure levels present during mechanical ventilation. This reaction to intrathoracic pressure has been tuned to mimic the behavior of a complex 19-compartment model. Tests revealed that the model closely reproduces the general system behavior of the 19-compartment model. Blood pressures were calculated with a maximum deviation of 1.8 % in systolic pressure and 3.5 % in diastolic pressure, leading to a simulation error of 0.3 % in cardiac output. The gas exchange submodel, which reacts to changes in cardiac output, showed a resulting deviation of less than 0.1 %. The proposed model is therefore usable in combinations where the cardiovascular simulation does not have to be detailed. Computing costs were decreased dramatically, by a factor of 186, compared to a model combination employing the 19-compartment model.

  13. MESOSCOPIC MODELING OF STOCHASTIC REACTION-DIFFUSION KINETICS IN THE SUBDIFFUSIVE REGIME

    PubMed Central

    BLANC, EMILIE; ENGBLOM, STEFAN; HELLANDER, ANDREAS; LÖTSTEDT, PER

    2017-01-01

    Subdiffusion has been proposed as an explanation of various kinetic phenomena inside living cells. In order to facilitate large-scale computational studies of subdiffusive chemical processes, we extend a recently suggested mesoscopic model of subdiffusion into an accurate and consistent reaction-subdiffusion computational framework. Two different possible models of chemical reaction are revealed and some basic dynamic properties are derived. In certain cases those mesoscopic models have a direct interpretation at the macroscopic level as fractional partial differential equations in a bounded time interval. Through analysis and numerical experiments we estimate the macroscopic effects of reactions under subdiffusive mixing. The models display properties observed also in experiments: for a short time interval the behavior of the diffusion and the reaction is ordinary, in an intermediate interval the behavior is anomalous, and at long times the behavior is ordinary again. PMID:29046618

  14. Computational Models of Consumer Confidence from Large-Scale Online Attention Data: Crowd-Sourcing Econometrics

    PubMed Central

    2015-01-01

    Economies are instances of complex socio-technical systems that are shaped by the interactions of large numbers of individuals. The individual behavior and decision-making of consumer agents is determined by complex psychological dynamics that include their own assessment of present and future economic conditions as well as those of others, potentially leading to feedback loops that affect the macroscopic state of the economic system. We propose that the large-scale interactions of a nation's citizens with its online resources can reveal the complex dynamics of their collective psychology, including their assessment of future system states. Here we introduce a behavioral index of Chinese Consumer Confidence (C3I) that computationally relates large-scale online search behavior recorded by Google Trends data to the macroscopic variable of consumer confidence. Our results indicate that such computational indices may reveal the components and complex dynamics of consumer psychology as a collective socio-economic phenomenon, potentially leading to improved and more refined economic forecasting. PMID:25826692
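
    Operationally, an index of this kind reduces to normalizing the search-volume series for a basket of query terms, averaging them, and checking the aggregate against the official confidence series at a lead or lag. A minimal sketch of that step on synthetic series (the authors' actual term basket and weighting are not reproduced):

      import numpy as np

      rng = np.random.default_rng(5)
      months = 48
      confidence = 100 + np.cumsum(rng.normal(0, 1, months))    # official index
      # synthetic search-volume series noisily tracking confidence one month early
      volumes = np.stack([np.roll(confidence, -1) + rng.normal(0, 2, months)
                          for _ in range(5)])

      z = (volumes - volumes.mean(axis=1, keepdims=True)) \
          / volumes.std(axis=1, keepdims=True)
      index = z.mean(axis=0)                                    # behavioral index

      r = np.corrcoef(index[:-1], confidence[1:])[0, 1]
      print("correlation of index with next month's confidence:", round(r, 2))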

  15. Computational models of consumer confidence from large-scale online attention data: crowd-sourcing econometrics.

    PubMed

    Dong, Xianlei; Bollen, Johan

    2015-01-01

    Economies are instances of complex socio-technical systems that are shaped by the interactions of large numbers of individuals. The individual behavior and decision-making of consumer agents is determined by complex psychological dynamics that include their own assessment of present and future economic conditions as well as those of others, potentially leading to feedback loops that affect the macroscopic state of the economic system. We propose that the large-scale interactions of a nation's citizens with its online resources can reveal the complex dynamics of their collective psychology, including their assessment of future system states. Here we introduce a behavioral index of Chinese Consumer Confidence (C3I) that computationally relates large-scale online search behavior recorded by Google Trends data to the macroscopic variable of consumer confidence. Our results indicate that such computational indices may reveal the components and complex dynamics of consumer psychology as a collective socio-economic phenomenon, potentially leading to improved and more refined economic forecasting.

  16. Scenario-based modeling for multiple allocation hub location problem under disruption risk: multiple cuts Benders decomposition approach

    NASA Astrophysics Data System (ADS)

    Yahyaei, Mohsen; Bashiri, Mahdi

    2017-12-01

    The hub location problem arises in a variety of domains such as transportation and telecommunication systems. In many real-world situations, hub facilities are subject to disruption. This paper deals with the multiple allocation hub location problem in the presence of facility failures. To model the problem, a two-stage stochastic formulation is developed. In the proposed model, the number of scenarios grows exponentially with the number of facilities. To alleviate this issue, two approaches are applied simultaneously. The first is sample average approximation (SAA), which approximates the two-stage stochastic problem via sampling. Then, by applying the multiple-cut Benders decomposition approach, computational performance is enhanced. Numerical studies show the effective performance of the SAA in terms of optimality gap for small problem instances with numerous scenarios. Moreover, the performance of multi-cut Benders decomposition is assessed through comparison with the classic version, and the computational results reveal the superiority of the multi-cut approach regarding computational time and number of iterations.

  17. Toward Theory-Based Instruction in Scientific Problem Solving.

    ERIC Educational Resources Information Center

    Heller, Joan I.; And Others

    Several empirical and theoretical analyses related to scientific problem-solving are reviewed, including: detailed studies of individuals at different levels of expertise, and computer models simulating some aspects of human information processing during problem solving. Analysis of these studies has revealed many facets about the nature of the…

  18. Inferential Procedures for Correlation Coefficients Corrected for Attenuation.

    ERIC Educational Resources Information Center

    Hakstian, A. Ralph; And Others

    1988-01-01

    A model and computation procedure based on classical test score theory are presented for determining a correlation coefficient corrected for attenuation due to unreliability. Delta and Monte Carlo method applications are discussed. A power analysis revealed no serious loss in efficiency resulting from correction for attenuation. (TJH)
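
    The correction itself is a one-line formula from classical test theory: the observed correlation is divided by the geometric mean of the two reliabilities, r* = r_xy / sqrt(r_xx r_yy). A minimal sketch:

      import math

      def correct_for_attenuation(r_xy, r_xx, r_yy):
          """Classical disattenuation: observed correlation divided by the
          geometric mean of the two measures' reliabilities."""
          return r_xy / math.sqrt(r_xx * r_yy)

      # Example: observed r = .42 with reliabilities .80 and .70
      print(round(correct_for_attenuation(0.42, 0.80, 0.70), 3))   # -> 0.561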

  19. The Dynamics of Information Search Services.

    ERIC Educational Resources Information Center

    Lindquist, Mats G.

    Computer-based information search services (ISSs) of the type that provide online literature searches are analyzed from a systems viewpoint using a continuous simulation model. The methodology applied is "system dynamics," and the system language is DYNAMO. The analysis reveals that the observed growth and stagnation of a typical ISS can…

  20. The Mundrabilla Meteorite in Three-Dimensions

    NASA Technical Reports Server (NTRS)

    Gillies, D. C.; Carpenter, P. K.; Engel, H. P.

    2003-01-01

    Computed tomography (CT) using gamma radiation has revealed the interior structure of the anomalous iron meteorite, Mundrabilla. This meteorite is composed of 25 volume percent of iron sulfide with the remainder being iron-nickel. Both phases have been shown to be contiguous, and three dimensional models have been constructed using rapid prototyping techniques.

  1. Effect of section shape on frequencies of natural oscillations of tubular springs

    NASA Astrophysics Data System (ADS)

    Pirogov, S. P.; Chuba, A. Yu; Cherentsov, D. A.

    2018-05-01

    The necessity of determining the frequencies of natural oscillations of manometric tubular springs is substantiated. Based on the mathematical model and computer program, numerical experiments were performed that allowed us to reveal the effect of geometric parameters on the frequencies of free oscillations of manometric tubular springs.

  2. Octree-based Global Earthquake Simulations

    NASA Astrophysics Data System (ADS)

    Ramirez-Guzman, L.; Juarez, A.; Bielak, J.; Salazar Monroy, E. F.

    2017-12-01

    Seismological research has motivated recent efforts to construct more accurate three-dimensional (3D) velocity models of the Earth, to perform global simulations of wave propagation to validate models, and to study the interaction of seismic fields with 3D structures. However, traditional methods for seismogram computation at global scales are limited by computational resources, relying primarily on normal mode summation or two-dimensional numerical methods. We present an octree-based finite element mesh implementation to perform global earthquake simulations with 3D models, using topography and bathymetry with a staircase approximation, as modeled by the Carnegie Mellon Finite Element Toolchain Hercules (Tu et al., 2006). To verify the implementation, we compared the synthetic seismograms computed in a spherical earth against waveforms calculated using normal mode summation for the Preliminary Reference Earth Model (PREM) for a point source representation of the 2014 Mw 7.3 Papanoa, Mexico earthquake. We considered a 3 km-thick ocean layer for stations with predominantly oceanic paths. Eigenfrequencies and eigenfunctions were computed for toroidal, radial, and spheroidal oscillations in the first 20 branches. Simulations are valid at frequencies up to 0.05 Hz. The match between the waveforms computed by the two approaches, especially for long-period surface waves, is excellent. Additionally, we modeled the Mw 9.0 Tohoku-Oki earthquake using the USGS finite fault inversion. Topography and bathymetry from ETOPO1 are included in a mesh with more than 3 billion elements, constrained by the computational resources available. We compared estimated velocity and GPS synthetics against observations at regional and teleseismic stations of the Global Seismographic Network and discuss the differences among observations and synthetics, revealing that heterogeneity, particularly in the crust, needs to be considered.

  3. Testing alternative ground water models using cross-validation and other methods

    USGS Publications Warehouse

    Foglia, L.; Mehl, S.W.; Hill, M.C.; Perona, P.; Burlando, P.

    2007-01-01

    Many methods can be used to test alternative ground water models. Of concern in this work are methods able to (1) rank alternative models (also called model discrimination) and (2) identify observations important to parameter estimates and predictions (equivalent to the purpose served by some types of sensitivity analysis). Some of the measures investigated are computationally efficient; others are computationally demanding. The latter are generally needed to account for model nonlinearity. The efficient model discrimination methods investigated include the information criteria: the corrected Akaike information criterion, Bayesian information criterion, and generalized cross-validation. The efficient sensitivity analysis measures used are dimensionless scaled sensitivity (DSS), composite scaled sensitivity, and parameter correlation coefficient (PCC); the other statistics are DFBETAS, Cook's D, and observation-prediction statistic. Acronyms are explained in the introduction. Cross-validation (CV) is a computationally intensive nonlinear method that is used for both model discrimination and sensitivity analysis. The methods are tested using up to five alternative parsimoniously constructed models of the ground water system of the Maggia Valley in southern Switzerland. The alternative models differ in their representation of hydraulic conductivity. A new method for graphically representing CV and sensitivity analysis results for complex models is presented and used to evaluate the utility of the efficient statistics. The results indicate that for model selection, the information criteria produce similar results at much smaller computational cost than CV. For identifying important observations, the only obviously inferior linear measure is DSS; the poor performance was expected because DSS does not include the effects of parameter correlation and PCC reveals large parameter correlations. © 2007 National Ground Water Association.
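
    Cross-validation, the computationally intensive discrimination method used here, scores each alternative model by its prediction error on held-out observations rather than by a penalized fit statistic. A minimal k-fold sketch for competing polynomial alternatives (illustrative data, not the Maggia Valley models):

      import numpy as np

      rng = np.random.default_rng(3)
      x = rng.uniform(0.0, 1.0, 60)
      y = 1.5 * x + 0.5 * np.sin(6.0 * x) + rng.normal(0.0, 0.1, 60)

      def cv_error(degree, k=5):
          """Mean squared prediction error over k folds for a polynomial model."""
          idx = rng.permutation(len(x))
          errs = []
          for fold in np.array_split(idx, k):
              train = np.setdiff1d(idx, fold)
              coeffs = np.polyfit(x[train], y[train], degree)
              errs.append(np.mean((np.polyval(coeffs, x[fold]) - y[fold]) ** 2))
          return float(np.mean(errs))

      for degree in (1, 3, 9):
          print("degree", degree, "CV error:", round(cv_error(degree), 4))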

  4. Information technology infusion model for health sector in a developing country: Nigeria as a case.

    PubMed

    Idowu, Bayo; Adagunodo, Rotimi; Adedoyin, Rufus

    2006-01-01

    To date, information technology (IT) has not been widely adopted in the health sector in developing countries. Information technology may bring an improvement to health care delivery systems, and it is one of the prime movers of globalization. Information technology infusion is the degree to which different information technology tools are integrated into organizational activities. This study aimed to determine the degree and extent of incorporation of information technology in the Nigerian health sector, to derive IT infusion models for the popular IT indicators in use in Nigeria (personal computers, mobile phones, and the Internet), and subsequently to investigate their impacts on the health care delivery system in Nigerian teaching hospitals. In this study, data were collected through the use of questionnaires; oral interviews were also conducted and the data gathered were analyzed. The results of the analysis revealed that of the three IT indicators considered, mobile phones are spreading fastest. It also revealed that computers and mobile phones are in use in all the teaching hospitals. Finally, IT infusion models were developed for the health sector in Nigeria from the data gathered through the questionnaires and oral interviews.

  5. Modeling for IFOG Vibration Error Based on the Strain Distribution of Quadrupolar Fiber Coil

    PubMed Central

    Gao, Zhongxing; Zhang, Yonggang; Zhang, Yunhao

    2016-01-01

    Improving the performance of the interferometric fiber optic gyroscope (IFOG) in harsh environments, especially vibrational environments, is necessary for its practical applications. This paper presents a mathematical model for the IFOG to theoretically compute the short-term rate errors caused by mechanical vibration. The computational procedures are mainly based on the strain distribution of the quadrupolar fiber coil measured by a stress analyzer. The asymmetry of strain distribution (ASD) is defined in the paper to evaluate the winding quality of the coil. The established model reveals that high ASD and the variable fiber elastic modulus in large-strain situations are the two dominant reasons for the nonreciprocal phase shift in an IFOG under vibration. Furthermore, theoretical analysis and computational results indicate that the vibration errors of both open-loop and closed-loop IFOGs increase with vibrational amplitude, vibrational frequency, and ASD. Finally, vibration-induced IFOG errors in aircraft are estimated according to the proposed model. Our work is meaningful for designing IFOG coils with better anti-vibration performance. PMID:27455257

  6. Quantitative computational infrared imaging of buoyant diffusion flames

    NASA Astrophysics Data System (ADS)

    Newale, Ashish S.

    Studies of infrared radiation from turbulent buoyant diffusion flames impinging on structural elements have applications to the development of fire models. A numerical and experimental study of radiation from buoyant diffusion flames with and without impingement on a flat plate is reported. Quantitative images of the radiation intensity from the flames are acquired using a high-speed infrared camera. Large eddy simulations are performed using the Fire Dynamics Simulator (FDS version 6). The species concentrations and temperatures from the simulations are used in conjunction with a narrow-band radiation model (RADCAL) to solve the radiative transfer equation. The computed infrared radiation intensities are rendered in the form of images and compared with the measurements. The measured and computed radiation intensities reveal necking and bulging with a characteristic frequency of 7.1 Hz, which is in agreement with previous empirical correlations. The results demonstrate the effects of the stagnation-point boundary layer on the upstream buoyant shear layer. The coupling between these two shear layers presents a model problem for the sub-grid-scale modeling necessary for future large eddy simulations.

  7. Multifractal analysis of information processing in hippocampal neural ensembles during working memory under Δ9-tetrahydrocannabinol administration

    PubMed Central

    Fetterhoff, Dustin; Opris, Ioan; Simpson, Sean L.; Deadwyler, Sam A.; Hampson, Robert E.; Kraft, Robert A.

    2014-01-01

    Background Multifractal analysis quantifies the time-scale-invariant properties in data by describing the structure of variability over time. By applying this analysis to hippocampal interspike interval sequences recorded during performance of a working memory task, a measure of long-range temporal correlations and multifractal dynamics can reveal single neuron correlates of information processing. New method Wavelet leaders-based multifractal analysis (WLMA) was applied to hippocampal interspike intervals recorded during a working memory task. WLMA can be used to identify neurons likely to exhibit information processing relevant to operation of brain–computer interfaces and nonlinear neuronal models. Results Neurons involved in memory processing (“Functional Cell Types” or FCTs) showed a greater degree of multifractal firing properties than neurons without task-relevant firing characteristics. In addition, previously unidentified FCTs were revealed because multifractal analysis suggested further functional classification. The cannabinoid-type 1 receptor partial agonist, tetrahydrocannabinol (THC), selectively reduced multifractal dynamics in FCT neurons compared to non-FCT neurons. Comparison with existing methods WLMA is an objective tool for quantifying the memory-correlated complexity represented by FCTs that reveals additional information compared to classification of FCTs using traditional z-scores to identify neuronal correlates of behavioral events. Conclusion z-Score-based FCT classification provides limited information about the dynamical range of neuronal activity characterized by WLMA. Increased complexity, as measured with multifractal analysis, may be a marker of functional involvement in memory processing. The level of multifractal attributes can be used to differentially emphasize neural signals to improve computational models and algorithms underlying brain–computer interfaces. PMID:25086297

  8. Learning of spatial relationships between observed and imitated actions allows invariant inverse computation in the frontal mirror neuron system.

    PubMed

    Oh, Hyuk; Gentili, Rodolphe J; Reggia, James A; Contreras-Vidal, José L

    2011-01-01

    It has been suggested that the human mirror neuron system can facilitate learning by imitation through coupling of observation and action execution. During imitation of observed actions, the functional relationship between and within the inferior frontal cortex, the posterior parietal cortex, and the superior temporal sulcus can be modeled within the internal model framework. The proposed biologically plausible mirror neuron system model extends currently available models by explicitly modeling the intraparietal sulcus and the superior parietal lobule in implementing the function of a frame of reference transformation during imitation. Moreover, the model posits the ventral premotor cortex as performing an inverse computation. The simulations reveal that: i) the transformation system can learn and represent the changes in extrinsic to intrinsic coordinates when an imitator observes a demonstrator; ii) the inverse model of the imitator's frontal mirror neuron system can be trained to provide the motor plans for the imitated actions.
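
    The pairing the model posits, a frame-of-reference transformation followed by an inverse computation, has a compact classical analogue in two-link arm kinematics: map an observed end-point from extrinsic (x, y) coordinates to intrinsic joint angles. A sketch of that analogy only, not of the network model itself:

      import math

      def inverse_kinematics(x, y, l1=0.3, l2=0.25):
          """Extrinsic target (x, y) -> intrinsic joint angles (q1, q2)
          for a planar two-link arm: the 'inverse computation' in miniature."""
          d = (x**2 + y**2 - l1**2 - l2**2) / (2.0 * l1 * l2)
          if abs(d) > 1.0:
              raise ValueError("target out of reach")
          q2 = math.acos(d)                            # elbow angle
          q1 = math.atan2(y, x) - math.atan2(l2 * math.sin(q2),
                                             l1 + l2 * math.cos(q2))
          return q1, q2

      q1, q2 = inverse_kinematics(0.35, 0.20)
      print("shoulder %.1f deg, elbow %.1f deg"
            % (math.degrees(q1), math.degrees(q2)))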

  9. Ratio of produced gas to produced water from DOE's EDNA Delcambre No. 1 geopressured-geothermal aquifer gas well test

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rogers, L.A.; Randolph, P.L.

    1979-01-01

    A paper presented by the Institute of Gas Technology (IGT) at the Third Geopressured-Geothermal Energy Conference hypothesized that the high ratio of produced gas to produced water from the No. 1 sand in the Edna Delcambre No. 1 well was due to free gas trapped in pores by imbibition over geological time. This hypothesis was examined in relation to preliminary test data, which reported only average gas-to-water ratios over the roughly 2-day steps in flow rate. Subsequent public release of detailed test data revealed substantial departures from the previously reported computer simulation results. Also, data now in the public domain reveal the existence of a gas cap on the aquifer tested. This paper describes IGT's efforts to match the observed gas/water production with computer simulation. Two models for the occurrence and production of gas in excess of that dissolved in the brine have been used. One model considers the gas to be dispersed in pores by imbibition, and the other considers the gas as a nearby free gas cap above the aquifer. The studies revealed that the dispersed gas model characteristically gave the wrong shape to the gas-production curves on the gas/water ratio plots, such that no reasonable match to the flow data could be achieved. The free gas cap model gave a characteristically better shape to the production plots and could provide an approximate fit to the data if the edge of the free gas cap is only about 400 feet from the well. Because the geological structure maps indicate the free gas cap to be several thousand feet away, and the computer simulation results match the distance to the nearby Delcambre Nos. 4 and 4A wells, it appears that the source of the excess free gas in the test of the No. 1 sand may be these nearby wells. The gas source is probably a separate gas zone brought into contact with the No. 1 sand via a conduit around the No. 4 well.

  10. Trial-by-Trial Modulation of Associative Memory Formation by Reward Prediction Error and Reward Anticipation as Revealed by a Biologically Plausible Computational Model.

    PubMed

    Aberg, Kristoffer C; Müller, Julia; Schwartz, Sophie

    2017-01-01

    Anticipation and delivery of rewards improves memory formation, but little effort has been made to disentangle their respective contributions to memory enhancement. Moreover, it has been suggested that the effects of reward on memory are mediated by dopaminergic influences on hippocampal plasticity. Yet, evidence linking memory improvements to actual reward computations reflected in the activity of the dopaminergic system, i.e., prediction errors and expected values, is scarce and inconclusive. For example, different previous studies reported that the magnitude of prediction errors during a reinforcement learning task was a positive, negative, or non-significant predictor of successfully encoding simultaneously presented images. Individual sensitivities to reward and punishment have been found to influence the activation of the dopaminergic reward system and could therefore help explain these seemingly discrepant results. Here, we used a novel associative memory task combined with computational modeling and showed independent effects of reward-delivery and reward-anticipation on memory. Strikingly, the computational approach revealed positive influences from both reward delivery, as mediated by prediction error magnitude, and reward anticipation, as mediated by magnitude of expected value, even in the absence of behavioral effects when analyzed using standard methods, i.e., by collapsing memory performance across trials within conditions. We additionally measured trait estimates of reward and punishment sensitivity and found that individuals with increased reward (vs. punishment) sensitivity had better memory for associations encoded during positive (vs. negative) prediction errors when tested after 20 min, but a negative trend when tested after 24 h. In conclusion, modeling trial-by-trial fluctuations in the magnitude of reward, as we did here for prediction errors and expected value computations, provides a comprehensive and biologically plausible description of the dynamic interplay between reward, dopamine, and associative memory formation. Our results also underline the importance of considering individual traits when assessing reward-related influences on memory.
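
    The two reward computations the study tracks are the standard quantities of delta-rule reinforcement learning: an expected value V for the current cue and the prediction error δ = r − V that updates it. A minimal sketch of that trial-by-trial bookkeeping (learning rate illustrative):

      def run_trials(rewards, alpha=0.1):
          """Rescorla-Wagner / delta-rule updating: V <- V + alpha * (r - V).
          Returns the per-trial (expected value, prediction error) pairs that
          the study relates to memory encoding on each trial."""
          V, history = 0.0, []
          for r in rewards:
              delta = r - V                  # reward prediction error
              history.append((V, delta))
              V += alpha * delta
          return history

      for v, d in run_trials([1, 1, 0, 1, 1, 1, 0, 1])[:4]:
          print("expected value %.3f, prediction error %+.3f" % (v, d))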

  11. Building v/s Exploring Models: Comparing Learning of Evolutionary Processes through Agent-based Modeling

    NASA Astrophysics Data System (ADS)

    Wagh, Aditi

    Two strands of work motivate the three studies in this dissertation. First, evolutionary change can be viewed as a computational complex system in which a small set of rules operating at the individual level results in different population-level outcomes under different conditions, and extensive research has documented students' difficulties with learning about evolutionary change (Rosengren et al., 2012), particularly in terms of levels slippage (Wilensky & Resnick, 1999). Second, though building and using computational models is becoming increasingly common in K-12 science education, we know little about how these two modalities compare. This dissertation adopts agent-based modeling as a representational system to compare these modalities in the conceptual context of micro-evolutionary processes. Drawing on interviews, Study 1 examines middle-school students' productive ways of reasoning about micro-evolutionary processes to find that the specific framing of traits plays a key role in whether slippage explanations are cued. Study 2, which was conducted in 2 schools with about 150 students, forms the crux of the dissertation. It compares learning processes and outcomes when students build their own models or explore a pre-built model. Analysis of Camtasia videos of student pairs reveals that builders' and explorers' ways of accessing rules and sense-making of observed trends differ in character. Builders notice rules through available blocks-based primitives, often bypassing their enactment, while explorers attend to rules primarily through the enactment. Moreover, builders' sense-making of observed trends is more rule-driven while explorers' is more enactment-driven. Pre- and post-tests reveal that builders manifest a greater facility with accessing rules, providing explanations manifesting targeted assembly. Explorers use rules to construct explanations manifesting non-targeted assembly. Interviews reveal varying degrees of shifts away from slippage in both modalities, with students who built models not incorporating slippage explanations in responses. Study 3 compares these modalities with a control using traditional activities. Pre- and post-tests reveal that the two modalities manifested greater facility with accessing and assembling rules than the control. The dissertation offers implications for the design of learning environments for evolutionary change, design of the two modalities based on their strengths and weaknesses, and teacher training for the same.

  12. How should a speech recognizer work?

    PubMed

    Scharenborg, Odette; Norris, Dennis; Bosch, Louis; McQueen, James M

    2005-11-12

    Although researchers studying human speech recognition (HSR) and automatic speech recognition (ASR) share a common interest in how information processing systems (human or machine) recognize spoken language, there is little communication between the two disciplines. We suggest that this lack of communication follows largely from the fact that research in these related fields has focused on the mechanics of how speech can be recognized. In Marr's (1982) terms, emphasis has been on the algorithmic and implementational levels rather than on the computational level. In this article, we provide a computational-level analysis of the task of speech recognition, which reveals the close parallels between research concerned with HSR and ASR. We illustrate this relation by presenting a new computational model of human spoken-word recognition, built using techniques from the field of ASR that, unlike existing HSR models, recognizes words from real speech input. 2005 Lawrence Erlbaum Associates, Inc.

  13. Investigation of different modeling approaches for computational fluid dynamics simulation of high-pressure rocket combustors

    NASA Astrophysics Data System (ADS)

    Ivancic, B.; Riedmann, H.; Frey, M.; Knab, O.; Karl, S.; Hannemann, K.

    2016-07-01

    The paper summarizes technical results and first highlights of the cooperation between DLR and Airbus Defence and Space (DS) within the work package "CFD Modeling of Combustion Chamber Processes" conducted in the frame of the Propulsion 2020 Project. Within the addressed work package, DLR Göttingen and Airbus DS Ottobrunn have identified several test cases where adequate test data are available and which can be used for proper validation of computational fluid dynamics (CFD) tools. In this paper, the first test case, the Penn State chamber (RCM1), is discussed. Simulation results from three different tools show that the test case can be computed properly with steady-state Reynolds-averaged Navier-Stokes (RANS) approaches. The achieved simulation results reproduce the measured wall heat flux, an important validation parameter, very well, but they also reveal some inconsistencies in the test data, which are addressed in this paper.

  14. Computational Simulation of Thermal and Spattering Phenomena and Microstructure in Selective Laser Melting of Inconel 625

    NASA Astrophysics Data System (ADS)

    Özel, Tuğrul; Arısoy, Yiğit M.; Criales, Luis E.

    Computational modelling of Laser Powder Bed Fusion (L-PBF) processes such as Selective Laser Melting (SLM) can reveal information that is hard or impossible to obtain by in-situ experimental measurement. A 3D thermal field that is not visible to the thermal camera can be obtained by solving the 3D heat transfer problem. Furthermore, microstructural modelling can be used to predict the quality and mechanical properties of the product. In this paper, a nonlinear 3D Finite Element Method (FEM) based computational code is developed to simulate the SLM process with different process parameters such as laser power and scan velocity. The code is further improved by utilizing an in-situ thermal camera recording to predict spattering, which is in turn included as a stochastic heat loss. Thermal gradients extracted from the simulations are then applied to predict growth directions in the resulting microstructure.
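
    The flavor of such a thermal simulation can be sketched with a much simpler 2D explicit finite-difference analogue (the paper itself uses a nonlinear 3D FEM code); all material and process parameters below are placeholders rather than Inconel 625 data.

```python
# Illustrative 2D finite-difference analogue of a moving-laser thermal model:
# explicit heat conduction with a Gaussian source and a random heat-loss term
# standing in for the stochastic spattering model. Parameters are assumptions.
import numpy as np

nx, ny, dx = 200, 100, 1e-5                   # grid with 10 um spacing
alpha = 5e-6                                  # thermal diffusivity [m^2/s], assumed
dt = 0.2 * dx**2 / alpha                      # stable explicit time step
T = np.full((ny, nx), 300.0)                  # ambient temperature [K]
power, sigma, v_scan = 2e7, 3e-5, 1.0         # source strength [K/s], spot size, scan speed

rng = np.random.default_rng(1)
x = np.arange(nx) * dx
y = np.arange(ny) * dx
X, Y = np.meshgrid(x, y)

for step in range(500):
    x0 = v_scan * step * dt                   # laser moves along x
    src = power * np.exp(-((X - x0)**2 + (Y - y[ny // 2])**2) / (2 * sigma**2))
    if rng.random() < 0.05:                   # stochastic spatter event: local heat loss
        src *= 0.5
    lap = (np.roll(T, 1, 0) + np.roll(T, -1, 0) +
           np.roll(T, 1, 1) + np.roll(T, -1, 1) - 4 * T) / dx**2
    T += dt * (alpha * lap + src)

# Thermal gradients, as used to predict microstructure growth directions:
gy, gx = np.gradient(T, dx)
```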

  15. Intervertebral reaction force prediction using an enhanced assembly of OpenSim models.

    PubMed

    Senteler, Marco; Weisse, Bernhard; Rothenfluh, Dominique A; Snedeker, Jess G

    2016-01-01

    OpenSim offers a valuable approach to investigating otherwise difficult to assess yet important biomechanical parameters such as joint reaction forces. Although the range of available models in the public repository is continually increasing, there currently exists no OpenSim model for the computation of intervertebral joint reactions during flexion and lifting tasks. The current work combines and improves elements of existing models to develop an enhanced model of the upper body and lumbar spine. Models of the upper body with extremities, neck and head were combined with an improved version of a lumbar spine from the model repository. Translational motion was enabled for each lumbar vertebra with six controllable degrees of freedom. Motion segment stiffness was implemented at lumbar levels and mass properties were assigned throughout the model. Moreover, body coordinate frames of the spine were modified to allow straightforward variation of sagittal alignment and to simplify interpretation of results. Evaluation of model predictions for levels L1-L2, L3-L4 and L4-L5 in various postures of forward flexion and moderate lifting (8 kg) revealed agreement within 10% of experimental studies and model-based computational analyses. However, in an extended posture or during lifting of heavier loads (20 kg), computed joint reactions differed substantially from in vivo measures reported from instrumented implants. We conclude that agreement between the model and available experimental data was good in view of limitations of both the model and the validation datasets. The presented model is useful in that it permits computation of realistic lumbar spine joint reaction forces during flexion and moderate lifting tasks. The model and corresponding documentation are now available in the online OpenSim repository.

  16. Tertiary structure-based analysis of microRNA–target interactions

    PubMed Central

    Gan, Hin Hark; Gunsalus, Kristin C.

    2013-01-01

    Current computational analysis of microRNA interactions is based largely on primary and secondary structure analysis. Computationally efficient tertiary structure-based methods are needed to enable more realistic modeling of the molecular interactions underlying miRNA-mediated translational repression. We incorporate algorithms for predicting duplex RNA structures, ionic strength effects, duplex entropy and free energy, and docking of duplex–Argonaute protein complexes into a pipeline to model and predict miRNA–target duplex binding energies. To ensure modeling accuracy and computational efficiency, we use an all-atom description of RNA and a continuum description of ionic interactions using the Poisson–Boltzmann equation. Our method predicts the conformations of two constructs of Caenorhabditis elegans let-7 miRNA–target duplexes to an accuracy of ∼3.8 Å root mean square distance of their NMR structures. We also show that the computed duplex formation enthalpies, entropies, and free energies for eight miRNA–target duplexes agree with titration calorimetry data. Analysis of duplex–Argonaute docking shows that structural distortions arising from single-base-pair mismatches in the seed region influence the activity of the complex by destabilizing both duplex hybridization and its association with Argonaute. Collectively, these results demonstrate that tertiary structure-based modeling of miRNA interactions can reveal structural mechanisms not accessible with current secondary structure-based methods. PMID:23417009

  17. Inference of sigma factor controlled networks by using numerical modeling applied to microarray time series data of the germinating prokaryote.

    PubMed

    Strakova, Eva; Zikova, Alice; Vohradsky, Jiri

    2014-01-01

    A computational model of gene expression was applied to a novel test set of microarray time series measurements to reveal regulatory interactions between transcriptional regulators, represented by 45 sigma factors, and the genes expressed during germination of the prokaryote Streptomyces coelicolor. Using microarrays, the first 5.5 h of the process was recorded in 13 time points, which provided a database of gene expression time series on a genome-wide scale. The computational modeling of the kinetic relations between the sigma factors, individual genes and genes clustered according to the similarity of their expression kinetics identified kinetically plausible sigma factor-controlled networks. Using genome sequence annotations, functional groups of genes that were predominantly controlled by specific sigma factors were identified. Using external binding data to complement the modeling approach, specific genes involved in the control of the studied process were identified and their functions suggested.
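
    The kinetic screening idea can be illustrated with a hedged sketch: a target gene's expression rate is modeled as a saturating function of a sigma factor's profile, and parameters are fit so that the simulated target matches its measured time series. The functional form follows common gene-expression kinetic models of this type; the data and parameter values below are synthetic.

```python
# Hedged sketch of regulator-target kinetic screening: simulate a target whose
# production saturates in the regulator profile, then fit the rate constants.
# A low residual would suggest a kinetically plausible sigma factor -> gene link.
import numpy as np
from scipy.integrate import odeint
from scipy.optimize import least_squares

t = np.linspace(0, 5.5, 13)                   # 13 time points over 5.5 h
sigma_prof = np.exp(-((t - 2.0) ** 2))        # synthetic sigma factor profile

def regulator(ti):                            # interpolate regulator at any time
    return np.interp(ti, t, sigma_prof)

def model(z, ti, k1, w, b, k2):
    return k1 / (1 + np.exp(-(w * regulator(ti) + b))) - k2 * z

def simulate(p):
    return odeint(model, 0.0, t, args=tuple(p)).ravel()

target = simulate([2.0, 4.0, -2.0, 1.0])      # "measured" series from known params
fit = least_squares(lambda p: simulate(p) - target, x0=[1.0, 1.0, 0.0, 0.5])
print(fit.x, fit.cost)                        # recovered rates, residual cost
```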

  18. Network-based stochastic semisupervised learning.

    PubMed

    Silva, Thiago Christiano; Zhao, Liang

    2012-03-01

    Semisupervised learning is a machine learning approach that is able to employ both labeled and unlabeled samples in the training process. In this paper, we propose a semisupervised data classification model based on a combined random-preferential walk of particles in a network (graph) constructed from the input dataset. The particles of the same class cooperate among themselves, while the particles of different classes compete with each other to propagate class labels to the whole network. A rigorous model definition is provided via a nonlinear stochastic dynamical system and a mathematical analysis of its behavior is carried out. A numerical validation presented in this paper confirms the theoretical predictions. An interesting feature brought by the competitive-cooperative mechanism is that the proposed model can achieve good classification rates while exhibiting low computational complexity order in comparison to other network-based semisupervised algorithms. Computer simulations conducted on synthetic and real-world datasets reveal the effectiveness of the model.
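
    A minimal sketch of the particle competition idea follows (a simplification, not the authors' exact stochastic dynamical system): labeled particles walk a toy graph with a mix of random and preferential moves, reinforcing their class's domination level on visited nodes, and unlabeled nodes take the class that dominates them at the end.

```python
# Toy particle-competition label propagation, assuming a simplified update rule.
import numpy as np

rng = np.random.default_rng(0)
# Toy graph: two 5-node cliques joined by one edge; nodes 0 and 5 are labeled.
A = np.zeros((10, 10), int)
A[:5, :5] = 1; A[5:, 5:] = 1; np.fill_diagonal(A, 0)
A[4, 5] = A[5, 4] = 1
labels = {0: 0, 5: 1}                       # class of each labeled node

dominance = np.full((10, 2), 0.5)           # class domination level per node
particles = list(labels.items())            # one particle per labeled node
pos = [node for node, _ in particles]

for _ in range(2000):
    for i, (_, cls) in enumerate(particles):
        nbrs = np.flatnonzero(A[pos[i]])
        if rng.random() < 0.6:              # preferential move: toward own class
            w = dominance[nbrs, cls].astype(float)
            pos[i] = rng.choice(nbrs, p=w / w.sum())
        else:                               # random move
            pos[i] = rng.choice(nbrs)
        dominance[pos[i], cls] += 0.1       # cooperate: reinforce own class
        dominance[pos[i], 1 - cls] = max(dominance[pos[i], 1 - cls] - 0.1, 0.05)

pred = dominance.argmax(axis=1)             # predicted class for every node
```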

  19. Integrative Systems Models of Cardiac Excitation Contraction Coupling

    PubMed Central

    Greenstein, Joseph L.; Winslow, Raimond L.

    2010-01-01

    Excitation-contraction coupling in the cardiac myocyte is mediated by a number of highly integrated mechanisms of intracellular Ca2+ transport. The complexity and integrative nature of heart cell electrophysiology and Ca2+-cycling has led to an evolution of computational models that have played a crucial role in shaping our understanding of heart function. An important emerging theme in systems biology is that the detailed nature of local signaling events, such as those that occur in the cardiac dyad, have important consequences at higher biological scales. Multi-scale modeling techniques have revealed many mechanistic links between micro-scale events, such as Ca2+ binding to a channel protein, and macro-scale phenomena, such as excitation-contraction coupling gain. Here we review experimentally based multi-scale computational models of excitation-contraction coupling and the insights that have been gained through their application. PMID:21212390

  20. Fast spinning strings on η-deformed AdS5 × S5

    NASA Astrophysics Data System (ADS)

    Banerjee, Aritra; Bhattacharyya, Arpan; Roychowdhury, Dibakar

    2018-02-01

    In this paper, considering the correspondence between spin chains and string sigma models, we explore rotating string solutions over η-deformed AdS5 × S5 in the so-called fast spinning limit. In our analysis, we focus only on the bosonic part of the full superstring action and compute the relevant limits on both the (R × S3)η and (R × S5)η models. The resulting system reveals that in the fast spinning limit, the sigma model on η-deformed S5 can be thought of approximately as the continuum limit of an anisotropic SU(3) Heisenberg spin chain. We compute the energy for a certain class of spinning strings in deformed S5 and show that this energy can be mapped to that of a similar spinning string in the purely imaginary β-deformed background.

  1. Computational model of polarized actin cables and cytokinetic actin ring formation in budding yeast

    PubMed Central

    Tang, Haosu; Bidone, Tamara C.

    2015-01-01

    The budding yeast actin cables and contractile ring are important for polarized growth and division, revealing basic aspects of cytoskeletal function. To study these formin-nucleated structures, we built a 3D computational model with actin filaments represented as beads connected by springs. The model includes polymerization by formins at the bud tip and bud neck, crosslinking, severing, and myosin pulling. Parameter values were estimated from prior experiments. The model generates actin cable structures and dynamics similar to those of wild type and formin deletion mutant cells. Simulations with increased polymerization rate result in long, wavy cables. Simulated pulling by type V myosin stretches actin cables. Increasing the affinity of actin filaments for the bud neck together with reduced myosin V pulling promotes the formation of a bundle of antiparallel filaments at the bud neck, which we suggest as a model for the assembly of actin filaments into the contractile ring. PMID:26538307
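
    A schematic bead-spring update in the spirit of this representation is sketched below; the spring constant, drag, noise level, and the myosin-like pull are illustrative values, not the calibrated parameters of the paper.

```python
# Schematic overdamped bead-spring filament: beads connected by Hookean springs,
# anchored at one end (formin-like) and pulled at the other (myosin-like).
import numpy as np

n, k_spring, rest, drag, dt = 20, 10.0, 0.1, 1.0, 1e-3
pos = np.arange(n)[:, None] * np.array([rest, 0.0, 0.0])   # straight filament

def spring_forces(p):
    f = np.zeros_like(p)
    bond = p[1:] - p[:-1]
    length = np.linalg.norm(bond, axis=1, keepdims=True)
    fb = k_spring * (length - rest) * bond / length        # Hookean bond force
    f[:-1] += fb                                           # pull bead i toward i+1
    f[1:] -= fb                                            # and i+1 toward i
    return f

rng = np.random.default_rng(2)
pull = np.array([0.5, 0.0, 0.0])                           # myosin-like pull on free end
for _ in range(1000):
    f = spring_forces(pos)
    f[-1] += pull
    pos += dt * f / drag + np.sqrt(dt) * 0.01 * rng.normal(size=pos.shape)
    pos[0] = 0.0                                           # bead anchored at the formin
```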

  2. Potts-model critical manifolds revisited

    DOE PAGES

    Scullard, Christian R.; Jacobsen, Jesper Lykke

    2016-02-11

    We compute the critical polynomials for the q-state Potts model on all Archimedean lattices, using a parallel implementation of the algorithm of Ref. [1] that gives us access to larger sizes than previously possible. The exact polynomials are computed for bases of size 6 × 6 unit cells, and the root in the temperature variable v = e^K − 1 is determined numerically at q = 1 for bases of size 8 × 8. This leads to improved results for bond percolation thresholds, and for the Potts-model critical manifolds in the real (q, v) plane. In the two most favourable cases, we now find the kagome-lattice threshold to eleven digits and that of the (3, 12^2) lattice to thirteen. Our critical manifolds reveal many interesting features in the antiferromagnetic region of the Potts model, and determine accurately the extent of the Berker-Kadanoff phase for the lattices studied.

  3. Idea Paper: The Lifecycle of Software for Scientific Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dubey, Anshu; McInnes, Lois C.

    The software lifecycle is a well researched topic that has produced many models to meet the needs of different types of software projects. However, one class of projects, software development for scientific computing, has received relatively little attention from lifecycle researchers. In particular, software for end-to-end computations for obtaining scientific results has received few lifecycle proposals and no formalization of a development model. An examination of development approaches employed by teams implementing large multicomponent codes reveals a great deal of similarity in their strategies. This idea paper formalizes these related approaches into a lifecycle model for end-to-end scientific application software, featuring loose coupling between submodels for development of infrastructure and scientific capability. We also invite input from stakeholders to converge on a model that captures the complexity of this development process and provides needed lifecycle guidance to the scientific software community.

  4. Brain-Computer Interfaces: A Neuroscience Paradigm of Social Interaction? A Matter of Perspective

    PubMed Central

    Mattout, Jérémie

    2012-01-01

    A number of recent studies have put human subjects in true social interactions, with the aim of better identifying the psychophysiological processes underlying social cognition. Interestingly, this emerging Neuroscience of Social Interactions (NSI) field brings up challenges which resemble important ones in the field of Brain-Computer Interfaces (BCI). Importantly, these challenges go beyond common objectives such as the eventual use of BCI and NSI protocols in the clinical domain or common interests pertaining to the use of online neurophysiological techniques and algorithms. Common fundamental challenges are now apparent and one can argue that a crucial one is to develop computational models of brain processes relevant to human interactions with an adaptive agent, whether human or artificial. Coupled with neuroimaging data, such models have proved promising in revealing the neural basis and mental processes behind social interactions. Similar models could help BCI to move from well-performing but offline static machines to reliable online adaptive agents. This emphasizes a social perspective to BCI, which is not limited to a computational challenge but extends to all questions that arise when studying the brain in interaction with its environment. PMID:22675291

  5. Modelling and Simulation of Search Engine

    NASA Astrophysics Data System (ADS)

    Nasution, Mahyuddin K. M.

    2017-01-01

    The best tool currently used to access information is a search engine. Meanwhile, the information space has its own behaviour, so an information space needs to be described mathematically in order for us to easily identify the characteristics associated with it. This paper reveals some characteristics of search engines based on a model of document collections, and then estimates their impact on the feasibility of information. We derive characteristics of search engines in a lemma and a theorem about singletons and doubletons, and then compute statistical characteristics to simulate the behaviour of search engines, in this case Google and Yahoo. There are differences in the behaviour of the two search engines, although in theory both rest on the same concept of a document collection.

  6. Modeling the Cerebellar Microcircuit: New Strategies for a Long-Standing Issue.

    PubMed

    D'Angelo, Egidio; Antonietti, Alberto; Casali, Stefano; Casellato, Claudia; Garrido, Jesus A; Luque, Niceto Rafael; Mapelli, Lisa; Masoli, Stefano; Pedrocchi, Alessandra; Prestori, Francesca; Rizza, Martina Francesca; Ros, Eduardo

    2016-01-01

    The cerebellar microcircuit has been the workbench for theoretical and computational modeling since the beginning of neuroscientific research. The regular neural architecture of the cerebellum inspired different solutions to the long-standing issue of how its circuitry could control motor learning and coordination. Originally, the cerebellar network was modeled using a statistical-topological approach that was later extended by considering the geometrical organization of local microcircuits. However, with the advancement of anatomical and physiological investigations, new discoveries have revealed an unexpected richness of connections, neuronal dynamics and plasticity, calling for a change in modeling strategies, so as to include the multitude of elementary aspects of the network in an integrated and easily updatable computational framework. Recently, biophysically accurate "realistic" models using a bottom-up strategy accounted for both detailed connectivity and neuronal non-linear membrane dynamics. In this perspective review, we will consider the state of the art and discuss how these initial efforts could be further improved. Moreover, we will consider how embodied neurorobotic models including spiking cerebellar networks could help explain the role and interplay of distributed forms of plasticity. We envisage that realistic modeling, combined with closed-loop simulations, will help to capture the essence of cerebellar computations and could eventually be applied to neurological diseases and neurorobotic control systems.

  7. Modeling the Cerebellar Microcircuit: New Strategies for a Long-Standing Issue

    PubMed Central

    D’Angelo, Egidio; Antonietti, Alberto; Casali, Stefano; Casellato, Claudia; Garrido, Jesus A.; Luque, Niceto Rafael; Mapelli, Lisa; Masoli, Stefano; Pedrocchi, Alessandra; Prestori, Francesca; Rizza, Martina Francesca; Ros, Eduardo

    2016-01-01

    The cerebellar microcircuit has been the workbench for theoretical and computational modeling since the beginning of neuroscientific research. The regular neural architecture of the cerebellum inspired different solutions to the long-standing issue of how its circuitry could control motor learning and coordination. Originally, the cerebellar network was modeled using a statistical-topological approach that was later extended by considering the geometrical organization of local microcircuits. However, with the advancement of anatomical and physiological investigations, new discoveries have revealed an unexpected richness of connections, neuronal dynamics and plasticity, calling for a change in modeling strategies, so as to include the multitude of elementary aspects of the network in an integrated and easily updatable computational framework. Recently, biophysically accurate “realistic” models using a bottom-up strategy accounted for both detailed connectivity and neuronal non-linear membrane dynamics. In this perspective review, we will consider the state of the art and discuss how these initial efforts could be further improved. Moreover, we will consider how embodied neurorobotic models including spiking cerebellar networks could help explain the role and interplay of distributed forms of plasticity. We envisage that realistic modeling, combined with closed-loop simulations, will help to capture the essence of cerebellar computations and could eventually be applied to neurological diseases and neurorobotic control systems. PMID:27458345

  8. The possibility of coexistence and co-development in language competition: ecology-society computational model and simulation.

    PubMed

    Yun, Jian; Shang, Song-Chao; Wei, Xiao-Dan; Liu, Shuang; Li, Zhi-Jie

    2016-01-01

    Language is characterized by both ecological properties and social properties, and competition is the basic form of language evolution. The rise and decline of one language is a result of competition between languages. Moreover, this rise and decline directly influences the diversity of human culture. Mathematical and computational modeling of language competition has been a popular topic in the fields of linguistics, mathematics, computer science, ecology, and other disciplines. Currently, there are several problems in the research on language competition modeling. First, comprehensive mathematical analysis is absent in most studies of language competition models. Second, most language competition models are based on the assumption that one language in the model is stronger than the other; these studies tend to ignore cases where there is a balance of power in the competition. The competition between two well-matched languages is more practical, because it can facilitate the co-development of two languages. A third issue is that many models yield an evolution result in which the weaker language inevitably goes extinct. From the integrated point of view of ecology and sociology, this paper improves the Lotka-Volterra model and the basic reaction-diffusion model to propose an "ecology-society" computational model for describing language competition. Furthermore, a strict and comprehensive mathematical analysis was made of the stability of the equilibria. Two languages in competition may be either well-matched or greatly different in strength, which was reflected in the experimental design. The results revealed that language coexistence, and even co-development, are likely to occur during language competition.
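
    The classical core that the paper builds on can be sketched as follows: a two-species Lotka-Volterra competition system (the full model adds reaction-diffusion and social terms) in which weak competition coefficients produce stable coexistence rather than extinction of the weaker language. All rates below are illustrative.

```python
# Sketch of classical Lotka-Volterra competition between two languages.
# With a12, a21 < 1, two well-matched languages coexist instead of one dying out.
import numpy as np
from scipy.integrate import odeint

r1, r2, K1, K2, a12, a21 = 1.0, 0.9, 1.0, 1.0, 0.6, 0.7  # illustrative values

def dndt(n, t):
    n1, n2 = n
    return [r1 * n1 * (1 - (n1 + a12 * n2) / K1),
            r2 * n2 * (1 - (n2 + a21 * n1) / K2)]

t = np.linspace(0, 50, 500)
n = odeint(dndt, [0.1, 0.2], t)
# Both components of n[-1] stay positive: the classical coexistence equilibrium
# n1* = (K1 - a12*K2) / (1 - a12*a21), and analogously for n2*.
print(n[-1])
```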

  9. On testing an unspecified function through a linear mixed effects model with multiple variance components

    PubMed Central

    Wang, Yuanjia; Chen, Huaihou

    2012-01-01

    We examine a generalized F-test of a nonparametric function through penalized splines and a linear mixed effects model representation. With a mixed effects model representation of penalized splines, we imbed the test of an unspecified function into a test of some fixed effects and a variance component in a linear mixed effects model with nuisance variance components under the null. The procedure can be used to test a nonparametric function or varying-coefficient with clustered data, compare two spline functions, test the significance of an unspecified function in an additive model with multiple components, and test a row or a column effect in a two-way analysis of variance model. Through a spectral decomposition of the residual sum of squares, we provide a fast algorithm for computing the null distribution of the test, which significantly improves the computational efficiency over bootstrap. The spectral representation reveals a connection between the likelihood ratio test (LRT) in a multiple variance components model and a single component model. We examine our methods through simulations, where we show that the power of the generalized F-test may be higher than the LRT, depending on the hypothesis of interest and the true model under the alternative. We apply these methods to compute the genome-wide critical value and p-value of a genetic association test in a genome-wide association study (GWAS), where the usual bootstrap is computationally intensive (up to 10^8 simulations) and asymptotic approximation may be unreliable and conservative. PMID:23020801
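
    The computational shortcut can be illustrated with a hedged sketch: once the spectral decomposition is in hand, the null of an F-type statistic reduces to a ratio of weighted chi-square variables, so millions of null draws can be simulated from eigenvalues alone instead of refitting models as in the bootstrap. The eigenvalues below are made up for illustration.

```python
# Hedged sketch: simulate the null of an F-type statistic from eigenvalues only.
import numpy as np

rng = np.random.default_rng(3)
lam_num = np.array([2.5, 1.2, 0.6, 0.3])       # eigenvalues entering the numerator
df_den = 50                                    # residual directions with unit weight

def null_stat(size):
    num = (lam_num * rng.chisquare(1, (size, lam_num.size))).sum(axis=1)
    den = rng.chisquare(df_den, size)          # unit weights collapse to one chi-square
    return num / den

null = null_stat(1_000_000)                    # millions of draws are cheap here
crit = np.quantile(null, 0.95)                 # e.g., a genome-wide critical value
print(crit)
```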

  10. On testing an unspecified function through a linear mixed effects model with multiple variance components.

    PubMed

    Wang, Yuanjia; Chen, Huaihou

    2012-12-01

    We examine a generalized F-test of a nonparametric function through penalized splines and a linear mixed effects model representation. With a mixed effects model representation of penalized splines, we imbed the test of an unspecified function into a test of some fixed effects and a variance component in a linear mixed effects model with nuisance variance components under the null. The procedure can be used to test a nonparametric function or varying-coefficient with clustered data, compare two spline functions, test the significance of an unspecified function in an additive model with multiple components, and test a row or a column effect in a two-way analysis of variance model. Through a spectral decomposition of the residual sum of squares, we provide a fast algorithm for computing the null distribution of the test, which significantly improves the computational efficiency over bootstrap. The spectral representation reveals a connection between the likelihood ratio test (LRT) in a multiple variance components model and a single component model. We examine our methods through simulations, where we show that the power of the generalized F-test may be higher than the LRT, depending on the hypothesis of interest and the true model under the alternative. We apply these methods to compute the genome-wide critical value and p-value of a genetic association test in a genome-wide association study (GWAS), where the usual bootstrap is computationally intensive (up to 10^8 simulations) and asymptotic approximation may be unreliable and conservative. © 2012, The International Biometric Society.

  11. A new method for constructing networks from binary data

    NASA Astrophysics Data System (ADS)

    van Borkulo, Claudia D.; Borsboom, Denny; Epskamp, Sacha; Blanken, Tessa F.; Boschloo, Lynn; Schoevers, Robert A.; Waldorp, Lourens J.

    2014-08-01

    Network analysis is entering fields where network structures are unknown, such as psychology and the educational sciences. A crucial step in the application of network models lies in the assessment of network structure. Current methods either have serious drawbacks or are only suitable for Gaussian data. In the present paper, we present a method for assessing network structures from binary data. Although models for binary data are infamous for their computational intractability, we present a computationally efficient model for estimating network structures. The approach, which is based on Ising models as used in physics, combines logistic regression with model selection based on a Goodness-of-Fit measure to identify relevant relationships between variables that define connections in a network. A validation study shows that this method succeeds in revealing the most relevant features of a network for realistic sample sizes. We apply our proposed method to estimate the network of depression and anxiety symptoms from symptom scores of 1108 subjects. Possible extensions of the model are discussed.
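
    A rough sketch of the estimation idea follows (the published method additionally selects the penalty via an extended BIC; a fixed penalty is used here): each binary variable is regressed on all the others with l1-penalized logistic regression, and nonzero coefficients define network edges. The data are synthetic.

```python
# Nodewise l1-penalized logistic regression for an Ising-style network estimate.
# The penalty strength C and edge threshold below are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
X = rng.binomial(1, 0.5, (500, 8))             # 500 subjects, 8 binary symptoms
X[:, 1] = np.where(rng.random(500) < 0.8, X[:, 0], 1 - X[:, 0])  # plant one edge

p = X.shape[1]
W = np.zeros((p, p))
for j in range(p):
    others = np.delete(np.arange(p), j)
    clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
    clf.fit(X[:, others], X[:, j])             # regress node j on all others
    W[j, others] = clf.coef_.ravel()

edges = (np.abs(W) + np.abs(W.T)) / 2 > 0.1    # symmetrize and threshold
print(np.argwhere(edges))                      # should recover the 0-1 edge
```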

  12. Mechanism of amido-thiourea catalyzed enantioselective imine hydrocyanation: transition state stabilization via multiple non-covalent interactions.

    PubMed

    Zuend, Stephan J; Jacobsen, Eric N

    2009-10-28

    An experimental and computational investigation of amido-thiourea promoted imine hydrocyanation has revealed a new and unexpected mechanism of catalysis. Rather than direct activation of the imine by the thiourea, as had been proposed previously in related systems, the data are consistent with a mechanism involving catalyst-promoted proton transfer from hydrogen isocyanide to imine to generate diastereomeric iminium/cyanide ion pairs that are bound to catalyst through multiple noncovalent interactions; these ion pairs collapse to form the enantiomeric alpha-aminonitrile products. This mechanistic proposal is supported by the observation of a statistically significant correlation between experimental and calculated enantioselectivities induced by eight different catalysts (P < 0.01). The computed models reveal a basis for enantioselectivity that involves multiple stabilizing and destabilizing interactions between substrate and catalyst, including thiourea-cyanide and amide-iminium interactions.

  13. Analysis of HRCT-derived xylem network reveals reverse flow in some vessels

    USDA-ARS?s Scientific Manuscript database

    Flow in xylem vessels is modeled based on constructions of three dimensional xylem networks derived from High Resolution Computed Tomography (HRCT) images of grapevine (Vitis vinifera) stems. Flow in 6-14% of the vessels was found to be oriented in the opposite direction to the bulk flow under norma...

  14. Preparing Computing Students for Culturally Diverse E-Mediated IT Projects

    ERIC Educational Resources Information Center

    Conrad, Marc; French, Tim; Maple, Carsten; Zhang, Sijing

    2006-01-01

    In this paper we present an account of an undergraduate team-based assignment designed to facilitate, exhibit and record team-working skills in an e-mediated environment. By linking the student feedback received to Hofstede's classic model of cultural dimensions we aim to show the assignment's suitability in revealing the student's multi-cultural…

  15. Biomaterial science meets computational biology.

    PubMed

    Hutmacher, Dietmar W; Little, J Paige; Pettet, Graeme J; Loessner, Daniela

    2015-05-01

    There is a pressing need for a predictive tool capable of revealing a holistic understanding of fundamental elements in the normal and pathological cell physiology of organoids in order to decipher the mechanoresponse of cells. Therefore, the integration of a systems bioengineering approach into a validated mathematical model is necessary to develop a new simulation tool. This tool can only be innovative by combining biomaterials science with computational biology. Systems-level and multi-scale experimental data are incorporated into a single framework, thus representing both single cells and collective cell behaviour. Such a computational platform needs to be validated in order to discover key mechano-biological factors associated with cell-cell and cell-niche interactions.

  16. Modeling the C. elegans nematode and its environment using a particle system.

    PubMed

    Rönkkö, Mauno; Wong, Garry

    2008-07-21

    A particle system, as understood in computer science, is a novel technique for modeling living organisms in their environment. Such particle systems have traditionally been used for modeling the complex dynamics of fluids and gases. In the present study, a particle system was devised to model the movement and feeding behavior of the nematode Caenorhabditis elegans in three different virtual environments: gel, liquid, and soil. The results demonstrate that distinct movements of the nematode can be attributed to its mechanical interactions with the virtual environment. These results also revealed emergent properties associated with modeling organisms within environment-based systems.

  17. The Lagrangian Ensemble metamodel for simulating plankton ecosystems

    NASA Astrophysics Data System (ADS)

    Woods, J. D.

    2005-10-01

    This paper presents a detailed account of the Lagrangian Ensemble (LE) metamodel for simulating plankton ecosystems. It uses agent-based modelling to describe the life histories of many thousands of individual plankters. The demography of each plankton population is computed from those life histories. So too is bio-optical and biochemical feedback to the environment. The resulting “virtual ecosystem” is a comprehensive simulation of the plankton ecosystem. It is based on phenotypic equations for individual micro-organisms. LE modelling differs significantly from population-based modelling. The latter uses prognostic equations to compute demography and biofeedback directly. LE modelling diagnoses them from the properties of individual micro-organisms, whose behaviour is computed from prognostic equations. That indirect approach permits the ecosystem to adjust gracefully to changes in exogenous forcing. The paper starts with theory: it defines the Lagrangian Ensemble metamodel and explains how LE code performs a number of computations “behind the curtain”. They include budgeting chemicals and deriving biofeedback and demography from individuals. The next section describes the practice of LE modelling. It starts with designing a model that complies with the LE metamodel. Then it describes the scenario for exogenous properties that provide the computation with initial and boundary conditions. These procedures differ significantly from those used in population-based modelling. The next section shows how LE modelling is used in research, teaching and planning. The practice depends largely on hindcasting to overcome the limits to predictability of weather forecasting. The scientific method explains observable ecosystem phenomena in terms of finer-grained processes that cannot be observed, but which are controlled by the basic laws of physics, chemistry and biology. What-If? Prediction (WIP), used for planning, extends hindcasting by adding events that describe natural or man-made hazards and remedial actions. Verification is based on the Ecological Turing Test, which takes account of uncertainties in the observed and simulated versions of a target ecological phenomenon. The rest of the paper is devoted to a case study designed to show what LE modelling offers the biological oceanographer. The case study is presented in two parts. The first documents the WB model (Woods & Barkmann, 1994) and scenario used to simulate the ecosystem in a mesocosm moored in deep water off the Azores. The second part illustrates the emergent properties of that virtual ecosystem. The behaviour and development of an individual plankton lineage are revealed by an audit trail of the agent used in the computation. The fields of environmental properties reveal the impact of biofeedback. The fields of demographic properties show how changes in individuals cumulatively affect the birth and death rates of their population. This case study documents the virtual ecosystem used by Woods, Perilli and Barkmann (2005; hereafter WPB) to investigate the stability of simulations created by the Lagrangian Ensemble metamodel. The Azores virtual ecosystem was created and analysed on the Virtual Ecology Workbench (VEW), which is described briefly in the Appendix.
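
    The LE principle that demography is diagnosed from individual life histories, rather than integrated directly, can be caricatured in a few lines; the "biology" below is deliberately minimal and hypothetical.

```python
# Toy agent-based caricature of the LE idea: each agent carries its own state,
# and population birth/death rates emerge as counts over agent life histories.
import numpy as np

rng = np.random.default_rng(5)
energy = rng.uniform(0.2, 0.8, 1000)                  # one phytoplankton agent per entry
births = deaths = 0
for hour in range(240):                               # ten simulated days
    light = max(np.sin(2 * np.pi * hour / 24), 0.0)   # day/night cycle
    energy += 0.05 * light - 0.01                     # photosynthesis minus respiration
    divide = energy > 1.0
    births += divide.sum()
    energy = np.concatenate([energy[~divide], np.repeat(energy[divide] / 2, 2)])
    alive = (energy > 0.0) & (rng.random(energy.size) > 0.002)  # starvation + grazing
    deaths += (~alive).sum()
    energy = energy[alive]

# births/240 and deaths/240 are the *diagnosed* demographic rates, not inputs.
print(births / 240, deaths / 240, energy.size)
```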

  18. Fluid-Structure Interaction and Structural Analyses using a Comprehensive Mitral Valve Model with 3D Chordal Structure

    PubMed Central

    Toma, Milan; Einstein, Daniel R.; Bloodworth, Charles H.; Cochran, Richard P.; Yoganathan, Ajit P.; Kunzelman, Karyn S.

    2016-01-01

    Over the years, three-dimensional models of the mitral valve have generally been organized around a simplified anatomy. Leaflets have been typically modeled as membranes, tethered to discrete chordae typically modeled as one-dimensional, non-linear cables. Yet, recent, high-resolution medical images have revealed that there is no clear boundary between the chordae and the leaflets. In fact, the mitral valve has been revealed to be more of a webbed structure whose architecture is continuous with the chordae and their extensions into the leaflets. Such detailed images can serve as the basis of anatomically accurate, subject-specific models, wherein the entire valve is modeled with solid elements that more faithfully represent the chordae, the leaflets, and the transition between the two. These models have the potential to enhance our understanding of mitral valve mechanics, and to re-examine the role of the mitral valve chordae, which heretofore have been considered to be “invisible” to the fluid and to be of secondary importance to the leaflets. However, these new models also require a rethinking of modeling assumptions. In this study, we examine the conventional practice of loading the leaflets only and not the chordae in order to study the structural response of the mitral valve apparatus. Specifically, we demonstrate that fully resolved 3D models of the mitral valve require a fluid-structure interaction analysis to correctly load the valve even in the case of quasi-static mechanics. While a fluid-structure interaction mode is still more computationally expensive than a structural-only model, we also show that advances in GPU computing have made such models tractable. PMID:27342229

  19. Fluid-structure interaction and structural analyses using a comprehensive mitral valve model with 3D chordal structure.

    PubMed

    Toma, Milan; Einstein, Daniel R; Bloodworth, Charles H; Cochran, Richard P; Yoganathan, Ajit P; Kunzelman, Karyn S

    2017-04-01

    Over the years, three-dimensional models of the mitral valve have generally been organized around a simplified anatomy. Leaflets have been typically modeled as membranes, tethered to discrete chordae typically modeled as one-dimensional, non-linear cables. Yet, recent, high-resolution medical images have revealed that there is no clear boundary between the chordae and the leaflets. In fact, the mitral valve has been revealed to be more of a webbed structure whose architecture is continuous with the chordae and their extensions into the leaflets. Such detailed images can serve as the basis of anatomically accurate, subject-specific models, wherein the entire valve is modeled with solid elements that more faithfully represent the chordae, the leaflets, and the transition between the two. These models have the potential to enhance our understanding of mitral valve mechanics and to re-examine the role of the mitral valve chordae, which heretofore have been considered to be 'invisible' to the fluid and to be of secondary importance to the leaflets. However, these new models also require a rethinking of modeling assumptions. In this study, we examine the conventional practice of loading the leaflets only and not the chordae in order to study the structural response of the mitral valve apparatus. Specifically, we demonstrate that fully resolved 3D models of the mitral valve require a fluid-structure interaction analysis to correctly load the valve even in the case of quasi-static mechanics. While a fluid-structure interaction mode is still more computationally expensive than a structural-only model, we also show that advances in GPU computing have made such models tractable. Copyright © 2016 John Wiley & Sons, Ltd.

  20. Applying the technology acceptance model to explore public health nurses' intentions towards web-based learning: a cross-sectional questionnaire survey.

    PubMed

    Chen, I Ju; Yang, Kuei-Feng; Tang, Fu-In; Huang, Chun-Hsia; Yu, Shu

    2008-06-01

    In the era of the knowledge economy, public health nurses (PHNs) need to update their knowledge to ensure quality of care. In the pre-implementation stage, policy makers and educators should understand PHNs' behavioural intentions (BI) toward web-based learning, because BI is the most important determinant of actual behaviour. The aims were to understand PHNs' BI toward web-based learning and to identify the factors influencing BI, based on the technology acceptance model (TAM), in the pre-implementation stage. A nationwide cross-sectional research design was used, covering 369 health centres in Taiwan. A randomly selected sample of 202 PHNs participated in this study. Data were collected by mailed questionnaire. The majority of PHNs (91.6%, n=185) showed an affirmative BI toward web-based learning. PHNs rated moderate values of perceived usefulness (U), perceived ease of use (EOU) and attitude toward web-based learning (A). Multiple regression analyses indicated that only U revealed a significant direct influence on BI. U and EOU had significant direct relationships with A; however, no significant relationship existed between A and BI. Additionally, EOU and an individual's computer competence revealed significant relationships with U, and Internet access at the workplace revealed a significant relationship with EOU. In the pre-implementation stage, PHNs perceived a high likelihood of adopting web-based learning as their way of continuing education, and perceived usefulness, rather than attitude, was the most important factor for BI. Perceived EOU, an individual's computer competence, and Internet access at the workplace revealed indirect effects on BI. Therefore, increasing U, EOU, computer competence, and Internet access at the workplace will be helpful in increasing PHNs' BI. Moreover, we suggest that future studies focus on clarifying problems at different stages of implementation to build a more complete understanding of implementing web-based learning.

  1. Employing static excitation control and tie line reactance to stabilize wind turbine generators

    NASA Technical Reports Server (NTRS)

    Hwang, H. H.; Mozeico, H. V.; Guo, T.

    1978-01-01

    An analytical representation of a wind turbine generator is presented which employs blade pitch angle feedback control. A mathematical model was formulated. With the functioning MOD-0 wind turbine serving as a practical case study, results of computer simulations of the model as applied to the problem of dynamic stability at rated load are also presented. The effect of the tower shadow was included in the input to the system. Different configurations of the drive train, and optimal values of the tie line reactance were used in the simulations. Computer results revealed that a static excitation control system coupled with optimal values of the tie line reactance would effectively reduce oscillations of the power output, without the use of a slip clutch.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    This journal issue contains 7 articles pertaining to astrophysics. The first article is an overview of the other 6 and also a tribute to Jim Wilson and his work in the fields of general relativity and numerical astrophysics. The six articles are on the following subjects: (1) computer simulations of black hole accretion; (2) calculations on the collapse of the iron core of a massive star; (3) stellar-collapse models which reveal a possible site for nucleosynthesis of elements heavier than iron; (4) modeling sources of gravitational radiation; (5) the development of a computer program for finite-difference mesh calculations and its applications to astrophysics; and (6) the use of neutrinos with nonzero rest mass to explain the universe. Abstracts of each of the articles were prepared separately. (SC)

  3. Spent nuclear fuel assembly inspection using neutron computed tomography

    NASA Astrophysics Data System (ADS)

    Pope, Chad Lee

    The research presented here focuses on spent nuclear fuel assembly inspection using neutron computed tomography. Experimental measurements involving neutron beam transmission through a spent nuclear fuel assembly serve as benchmark measurements for an MCNP simulation model. Comparison of measured results to simulation results shows good agreement. Generation of tomography images from MCNP tally results was accomplished using adapted versions of built-in MATLAB algorithms. Multiple fuel assembly models were examined to provide a broad set of conclusions. Tomography images revealing assembly geometric information, including the fuel element lattice structure and missing elements, can be obtained using high-energy neutrons. A projection difference technique was developed which reveals the substitution of unirradiated fuel elements for irradiated fuel elements using high-energy neutrons. More subtle material differences, such as altering the burnup of individual elements, can be identified with lower-energy neutrons provided the scattered neutron contribution to the image is limited. The research results show that neutron computed tomography can be used to inspect spent nuclear fuel assemblies for the purpose of identifying anomalies such as missing or substituted elements. The ability to identify anomalies in spent fuel assemblies can be used to deter diversion of material by increasing the risk of early detection, as well as to improve reprocessing facility operations by confirming that the spent fuel configuration is as expected or allowing segregation if anomalies are detected.
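
    The reconstruction step can be sketched with filtered back projection (the study adapted built-in MATLAB routines; scikit-image's radon/iradon pair is used here as an analogue), with a synthetic lattice phantom standing in for MCNP tally data.

```python
# Filtered back projection sketch: a crude "fuel assembly" phantom with one
# missing element is scanned and reconstructed. Phantom and geometry are made up.
import numpy as np
from skimage.transform import radon, iradon

phantom = np.zeros((128, 128))
for i in range(40, 90, 16):                     # crude fuel-element lattice
    for j in range(40, 90, 16):
        phantom[i:i + 6, j:j + 6] = 1.0
phantom[56:62, 56:62] = 0.0                     # a "missing element" anomaly

theta = np.linspace(0.0, 180.0, 180, endpoint=False)
sinogram = radon(phantom, theta=theta)          # simulated transmission scan
recon = iradon(sinogram, theta=theta, filter_name="ramp")
# Differencing recon against a reference reconstruction highlights anomalies,
# analogous in spirit to the projection difference technique described above.
```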

  4. Mathematical modeling and computational prediction of cancer drug resistance.

    PubMed

    Sun, Xiaoqiang; Hu, Bin

    2017-06-23

    Diverse forms of resistance to anticancer drugs can lead to the failure of chemotherapy. Drug resistance is one of the most intractable issues for successfully treating cancer in current clinical practice. Effective clinical approaches that could counter drug resistance by restoring the sensitivity of tumors to the targeted agents are urgently needed. As numerous experimental results on resistance mechanisms have been obtained and a mass of high-throughput data has been accumulated, mathematical modeling and computational predictions using systematic and quantitative approaches have become increasingly important, as they can potentially provide deeper insights into resistance mechanisms, generate novel hypotheses or suggest promising treatment strategies for future testing. In this review, we first briefly summarize the current progress of experimentally revealed resistance mechanisms of targeted therapy, including genetic mechanisms, epigenetic mechanisms, posttranslational mechanisms, cellular mechanisms, microenvironmental mechanisms and pharmacokinetic mechanisms. Subsequently, we list several currently available databases and Web-based tools related to drug sensitivity and resistance. Then, we focus primarily on introducing some state-of-the-art computational methods used in drug resistance studies, including mechanism-based mathematical modeling approaches (e.g. molecular dynamics simulation, kinetic model of molecular networks, ordinary differential equation model of cellular dynamics, stochastic model, partial differential equation model, agent-based model, pharmacokinetic-pharmacodynamic model, etc.) and data-driven prediction methods (e.g. omics data-based conventional screening approach for node biomarkers, static network approach for edge biomarkers and module biomarkers, dynamic network approach for dynamic network biomarkers and dynamic module network biomarkers, etc.). Finally, we discuss several further questions and future directions for the use of computational methods for studying drug resistance, including inferring drug-induced signaling networks, multiscale modeling, drug combinations and precision medicine. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
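
    As a concrete instance of the "ODE model of cellular dynamics" class listed above, a minimal sensitive/resistant subclone model is sketched below; all rates are illustrative only.

```python
# Minimal two-compartment resistance ODE: a sensitive and a pre-existing
# resistant subclone under constant therapy. Rates are illustrative assumptions.
import numpy as np
from scipy.integrate import odeint

g_s, g_r, kill = 0.05, 0.03, 0.12       # growth rates; drug kill rate on sensitive cells

def tumor(y, t, dose):
    S, R = y
    return [g_s * S - kill * dose * S,  # sensitive cells die under drug
            g_r * R]                    # resistant cells ignore the drug

t = np.linspace(0, 200, 400)
S, R = odeint(tumor, [1e9, 1e4], t, args=(1.0,)).T
# Total burden S+R first shrinks, then relapses once R dominates --
# the classic kinetic signature of selected resistance.
print((S + R)[[0, 100, -1]])
```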

  5. 3D Printing of Plant Golgi Stacks from Their Electron Tomographic Models.

    PubMed

    Mai, Keith Ka Ki; Kang, Madison J; Kang, Byung-Ho

    2017-01-01

    Three-dimensional (3D) printing is an effective tool for preparing tangible 3D models from computer visualizations to assist in scientific research and education. With the recent popularization of 3D printing processes, it is now possible for individual laboratories to convert their scientific data into a physical form suitable for presentation or teaching purposes. Electron tomography is an electron microscopy method by which 3D structures of subcellular organelles or macromolecular complexes are determined at nanometer-level resolutions. Electron tomography analyses have revealed the convoluted membrane architectures of Golgi stacks, chloroplasts, and mitochondria. But the intricacy of their 3D organizations is difficult to grasp from tomographic models illustrated on computer screens. Despite the rapid development of 3D printing technologies, production of organelle models based on experimental data with 3D printing has rarely been documented. In this chapter, we present a simple guide to creating 3D prints of electron tomographic models of plant Golgi stacks using the two most accessible 3D printing technologies.

  6. Computational Analysis of Stresses Acting on Intermodular Junctions in Thoracic Aortic Endografts

    PubMed Central

    Prasad, Anamika; To, Lillian K.; Gorrepati, Madhu L.; Zarins, Christopher K.; Figueroa, C. Alberto

    2011-01-01

    Purpose: To evaluate the biomechanical and hemodynamic forces acting on the intermodular junctions of a multi-component thoracic endograft and elucidate their influence on the development of type III endoleak due to disconnection of stent-graft segments. Methods: Three-dimensional computer models of the thoracic aorta and a 4-component thoracic endograft were constructed using postoperative (baseline) and follow-up computed tomography (CT) data from a 69-year-old patient who developed type III endoleak 4 years after stent-graft placement. Computational fluid dynamics (CFD) techniques were used to quantitate the displacement forces acting on the device. The contact stresses between the different modules of the graft were then quantified using computational solid mechanics (CSM) techniques. Lastly, the intermodular junction frictional stability was evaluated using a Coulomb model. Results: The CFD analysis revealed that curvature and length are key determinants of the displacement forces experienced by each endograft and that the first 2 modules were exposed to displacement forces acting in opposite directions in both the lateral and longitudinal axes. The CSM analysis revealed that the highest concentration of stresses occurred at the junction between the first and second modules of the device. Furthermore, the frictional analysis demonstrated that most of the surface area (53%) of this junction had unstable contact. The predicted critical zone of intermodular stress concentration and frictional instability matched the location of the type III endoleak observed in the 4-year follow-up CT image. Conclusion: The region of larger intermodular stresses and highest frictional instability correlated with the zone where a type III endoleak developed 4 years after thoracic stent-graft placement. Computational techniques can be helpful in evaluating the risk of endograft migration and potential for modular disconnection and may be useful in improving device placement strategies and endograft design. PMID:21861748

  7. Predictive simulation of gait at low gravity reveals skipping as the preferred locomotion strategy

    PubMed Central

    Ackermann, Marko; van den Bogert, Antonie J.

    2012-01-01

    The investigation of gait strategies at low gravity environments gained momentum recently as manned missions to the Moon and to Mars are reconsidered. Although reports by astronauts of the Apollo missions indicate alternative gait strategies might be favored on the Moon, computational simulations and experimental investigations have been almost exclusively limited to the study of either walking or running, the locomotion modes preferred under Earth's gravity. In order to investigate the gait strategies likely to be favored at low gravity a series of predictive, computational simulations of gait are performed using a physiological model of the musculoskeletal system, without assuming any particular type of gait. A computationally efficient optimization strategy is utilized allowing for multiple simulations. The results reveal skipping as more efficient and less fatiguing than walking or running and suggest the existence of a walk-skip rather than a walk-run transition at low gravity. The results are expected to serve as a background to the design of experimental investigations of gait under simulated low gravity. PMID:22365845

  8. Predictive simulation of gait at low gravity reveals skipping as the preferred locomotion strategy.

    PubMed

    Ackermann, Marko; van den Bogert, Antonie J

    2012-04-30

    The investigation of gait strategies at low gravity environments gained momentum recently as manned missions to the Moon and to Mars are reconsidered. Although reports by astronauts of the Apollo missions indicate alternative gait strategies might be favored on the Moon, computational simulations and experimental investigations have been almost exclusively limited to the study of either walking or running, the locomotion modes preferred under Earth's gravity. In order to investigate the gait strategies likely to be favored at low gravity a series of predictive, computational simulations of gait are performed using a physiological model of the musculoskeletal system, without assuming any particular type of gait. A computationally efficient optimization strategy is utilized allowing for multiple simulations. The results reveal skipping as more efficient and less fatiguing than walking or running and suggest the existence of a walk-skip rather than a walk-run transition at low gravity. The results are expected to serve as a background to the design of experimental investigations of gait under simulated low gravity. Copyright © 2012 Elsevier Ltd. All rights reserved.

  9. Mining data from CFD simulation for aneurysm and carotid bifurcation models.

    PubMed

    Miloš, Radović; Dejan, Petrović; Nenad, Filipović

    2011-01-01

    Arterial geometry variability is present both within and across individuals. To analyze the influence of geometric parameters, blood density, dynamic viscosity, and blood velocity on wall shear stress (WSS) distribution in the human carotid artery bifurcation and aneurysm, computer simulations were run to generate data pertaining to this phenomenon. In this work we evaluate two prediction models for capturing these relationships: a neural network model and a k-nearest-neighbor model. The results revealed that both models have high prediction ability for this task. The achieved results represent progress toward real-time assessment of stroke risk for given patient data.
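
    As a rough illustration of the second learner named in the abstract, the sketch below fits a k-nearest-neighbor regressor from a few geometric and flow parameters to a synthetic wall-shear-stress target. The feature set and data are hypothetical placeholders, not the paper's simulation outputs:

    ```python
    import numpy as np
    from sklearn.neighbors import KNeighborsRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(1)
    # columns: bifurcation angle (deg), blood density (kg/m^3),
    #          dynamic viscosity (Pa*s), inlet velocity (m/s)
    X = rng.uniform([30, 1040, 0.003, 0.2], [70, 1070, 0.004, 0.6], size=(500, 4))
    wss = 2.0 * X[:, 3] / X[:, 0] + rng.normal(0, 0.002, 500)  # synthetic WSS target

    X_tr, X_te, y_tr, y_te = train_test_split(X, wss, random_state=0)
    model = KNeighborsRegressor(n_neighbors=5, weights="distance").fit(X_tr, y_tr)
    print("R^2 on held-out simulations:", model.score(X_te, y_te))
    ```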

  10. Systems cell biology

    PubMed Central

    Mast, Fred D.; Ratushny, Alexander V.

    2014-01-01

    Systems cell biology melds high-throughput experimentation with quantitative analysis and modeling to understand many critical processes that contribute to cellular organization and dynamics. Recently, there have been several advances in technology and in the application of modeling approaches that enable the exploration of the dynamic properties of cells. Merging technology and computation offers an opportunity to objectively address unsolved cellular mechanisms, and it has revealed emergent properties and helped researchers gain a more comprehensive and fundamental understanding of cell biology. PMID:25225336

  11. Computational models for the berry phase in semiconductor quantum dots

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prabhakar, S., E-mail: rmelnik@wlu.ca; Melnik, R. V. N., E-mail: rmelnik@wlu.ca; Sebetci, A.

    2014-10-06

    By developing a new model and its finite element implementation, we analyze the Berry phase in low-dimensional semiconductor nanostructures, focusing on quantum dots (QDs). In particular, we solve the Schrödinger equation and investigate the evolution of the spin dynamics during the adiabatic transport of the QDs in the 2D plane along a circular trajectory. Based on this study, we reveal that the Berry phase is highly sensitive to the Rashba and Dresselhaus spin-orbit lengths.
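
    The Berry phase itself is straightforward to evaluate numerically once the states along the closed path are available. The sketch below applies the standard discretized loop-product formula to the textbook case of a spin-1/2 ground state carried around a circular path of field directions; this is far simpler than the paper's finite element QD model and is only meant to show the recipe:

    ```python
    import numpy as np

    # Pauli matrices
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sy = np.array([[0, -1j], [1j, 0]])
    sz = np.array([[1, 0], [0, -1]], dtype=complex)

    def ground_state(theta, phi):
        """Ground state of H = -B.sigma for a unit field along (theta, phi)."""
        H = -(np.sin(theta) * np.cos(phi) * sx
              + np.sin(theta) * np.sin(phi) * sy
              + np.cos(theta) * sz)
        w, v = np.linalg.eigh(H)
        return v[:, 0]

    theta = np.pi / 3                       # cone opening angle of the circular path
    phis = np.linspace(0, 2 * np.pi, 201)   # closed loop: first and last states match
    states = [ground_state(theta, p) for p in phis]

    # Discretized Berry phase: gamma = -Im log prod_k <psi_k | psi_{k+1}>
    overlaps = [np.vdot(states[k], states[k + 1]) for k in range(len(states) - 1)]
    gamma = -np.angle(np.prod(overlaps))
    # should approximate the analytic value -pi * (1 - cos(theta))
    print(gamma, -np.pi * (1 - np.cos(theta)))
    ```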

  12. Hierarchical modularization of biochemical pathways using fuzzy-c means clustering.

    PubMed

    de Luis Balaguer, Maria A; Williams, Cranos M

    2014-08-01

    Biological systems that are representative of regulatory, metabolic, or signaling pathways can be highly complex. Mathematical models that describe such systems inherit this complexity. As a result, these models can often fail to provide a path toward the intuitive comprehension of these systems. Coarser information that allows perceptive insight into the system is sometimes needed in combination with the model to understand control hierarchies or lower-level functional relationships. In this paper, we present a method to identify relationships between components of dynamic models of biochemical pathways that reside in different functional groups. We find primary relationships and secondary relationships. The secondary relationships reveal connections that are present in the system, which current techniques that only identify primary relationships are unable to show. We also identify how relationships between components dynamically change over time. This results in a method that provides the hierarchy of the relationships among components, which can help us understand the low-level functional structure of the system and elucidate potential hierarchical control. As a proof of concept, we apply the algorithm to the epidermal growth factor signal transduction pathway and to the C3 photosynthesis pathway. We identify primary relationships among components that are in agreement with previous computational decomposition studies, and identify secondary relationships that uncover connections among components that current computational approaches were unable to reveal.
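
    The clustering step named in the title, fuzzy c-means, can be written as a short fixed-point iteration. The sketch below is a generic implementation on synthetic two-dimensional points standing in for pathway-component feature vectors; it is not the paper's hierarchical algorithm, only its clustering building block:

    ```python
    import numpy as np

    def fuzzy_c_means(X, c=3, m=2.0, n_iter=100, seed=0):
        """Plain fuzzy c-means: returns cluster centers and the membership matrix U."""
        rng = np.random.default_rng(seed)
        n = X.shape[0]
        U = rng.dirichlet(np.ones(c), size=n)          # soft memberships, rows sum to 1
        for _ in range(n_iter):
            W = U ** m
            centers = (W.T @ X) / W.sum(axis=0)[:, None]
            d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
            inv = d ** (-2.0 / (m - 1.0))
            U = inv / inv.sum(axis=1, keepdims=True)   # standard FCM membership update
        return centers, U

    # Example: soft grouping of synthetic (hypothetical) component feature vectors
    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(mu, 0.3, size=(30, 2)) for mu in (0.0, 2.0, 4.0)])
    centers, U = fuzzy_c_means(X, c=3)
    print(centers.round(2))
    ```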

  13. Best-next-view algorithm for three-dimensional scene reconstruction using range images

    NASA Astrophysics Data System (ADS)

    Banta, J. E.; Zhien, Yu; Wang, X. Z.; Zhang, G.; Smith, M. T.; Abidi, Mongi A.

    1995-10-01

    The primary focus of the research detailed in this paper is to develop an intelligent sensing module capable of automatically determining the optimal next sensor position and orientation during scene reconstruction. To facilitate a solution to this problem, we have assembled a system for reconstructing a 3D model of an object or scene from a sequence of range images. Candidates for the best-next-view position are determined by detecting and measuring occlusions to the range camera's view in an image. Ultimately, the candidate which will reveal the greatest amount of unknown scene information is selected as the best-next-view position. Our algorithm uses ray tracing to determine how much new information a given sensor perspective will reveal. We have tested our algorithm successfully on several synthetic range data streams and found the system's results to be consistent with an intuitive human search. The models recovered by our system from range data compared well with the ideal models. Essentially, we have demonstrated that range information of physical objects can be employed to automatically reconstruct a satisfactory dynamic 3D computer model at minimal computational expense. This has obvious implications in the contexts of robot navigation, manufacturing, and hazardous materials handling. The algorithm we developed requires no a priori information to find the best-next-view position.
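
    The selection criterion can be sketched as a ray-count score over a voxel occupancy grid: for each candidate pose, march view rays and count the unknown voxels encountered before the ray is blocked by known occupancy. The grid, candidate poses, and ray sample below are illustrative stand-ins for the paper's range-camera setup:

    ```python
    import numpy as np

    UNKNOWN, EMPTY, OCCUPIED = 0, 1, 2
    grid = np.full((32, 32, 32), UNKNOWN, dtype=np.uint8)
    grid[10:22, 10:22, 10:22] = OCCUPIED          # a known object in the middle
    grid[:, :, :8] = EMPTY                        # space already observed

    def revealed_voxels(origin, directions, grid, step=0.5, max_t=60.0):
        """March each ray; count unknown voxels seen before hitting occupancy."""
        seen = set()
        for d in directions:
            d = d / np.linalg.norm(d)
            for t in np.arange(0.0, max_t, step):
                p = np.floor(origin + t * d).astype(int)
                if np.any(p < 0) or np.any(p >= grid.shape):
                    break
                if grid[tuple(p)] == OCCUPIED:
                    break                          # ray is blocked; stop counting
                if grid[tuple(p)] == UNKNOWN:
                    seen.add(tuple(p))
        return len(seen)

    rng = np.random.default_rng(2)
    rays = rng.normal(size=(200, 3))               # crude sample of view directions
    candidates = [np.array([0.0, 16.0, 16.0]), np.array([16.0, 31.0, 16.0])]
    scores = [revealed_voxels(c, rays, grid) for c in candidates]
    print("best-next-view candidate:", int(np.argmax(scores)))
    ```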

  14. Big Data Processing for a Central Texas Groundwater Case Study

    NASA Astrophysics Data System (ADS)

    Cantu, A.; Rivera, O.; Martínez, A.; Lewis, D. H.; Gentle, J. N., Jr.; Fuentes, G.; Pierce, S. A.

    2016-12-01

    As computational methods improve, scientists are able to expand the level and scale of experimental simulation and testing that can be completed for case studies. This study presents a comparative analysis of multiple models for the Barton Springs segment of the Edwards aquifer. Several numerical simulations using state-mandated MODFLOW models, run on Stampede, a high performance computing system housed at the Texas Advanced Computing Center, were performed for multiple-scenario testing. One goal of this multidisciplinary project is to visualize and compare the output data of the groundwater model using the statistical programming language R to find revealing data patterns produced by different pumping scenarios. Presenting the data in an accessible post-processed format is also covered in this paper. Visualization of the data and creation of workflows applicable to the management of the data are tasks performed after data extraction. The resulting analyses provide an example of how supercomputing can be used to accelerate the evaluation of scientific uncertainty and geological knowledge in relation to policy and management decisions. Understanding aquifer behavior helps policy makers avoid negative impacts on endangered species and environmental services, and aids in maximizing the aquifer yield.

  15. Fundamental analysis of the failure of polymer-based fiber reinforced composites

    NASA Technical Reports Server (NTRS)

    Kanninen, M. F.; Rybicki, E. F.; Griffith, W. I.; Broek, D.

    1976-01-01

    A mathematical model is described which permits predictions of the strength of fiber reinforced composites containing known flaws to be made from the basic properties of their constituents. The approach was to embed a local heterogeneous region (LHR) surrounding the crack tip into an anisotropic elastic continuum. The model should (1) permit an explicit analysis of the micromechanical processes involved in the fracture process, and (2) remain simple enough to be useful in practical computations. Computations for arbitrary flaw size and orientation under arbitrary applied load combinations were performed for unidirectional composites with linear elastic-brittle constituent behavior. The mechanical properties were nominally those of graphite epoxy. With the rupture properties arbitrarily varied to test the capability of the model to reflect real fracture modes in fiber composites, it was shown that fiber breakage, matrix crazing, crack bridging, matrix-fiber debonding, and axial splitting can all occur during a period of (gradually) increasing load prior to catastrophic fracture. The computations reveal qualitatively the sequential nature of the stable crack process that precedes fracture.

  16. Quantification of the transferability of a designed protein specificity switch reveals extensive epistasis in molecular recognition

    DOE PAGES

    Melero, Cristina; Ollikainen, Noah; Harwood, Ian; ...

    2014-10-13

    Re-engineering protein–protein recognition is an important route to dissecting and controlling complex interaction networks. Experimental approaches have used the strategy of “second-site suppressors,” where a functional interaction is inferred between two proteins if a mutation in one protein can be compensated by a mutation in the second. Mimicking this strategy, computational design has been applied successfully to change protein recognition specificity by predicting such sets of compensatory mutations in protein–protein interfaces. To extend this approach, it would be advantageous to be able to “transplant” existing engineered and experimentally validated specificity changes to other homologous protein–protein complexes. Here, we test this strategy by designing a pair of mutations that modulates peptide recognition specificity in the Syntrophin PDZ domain, confirming the designed interaction biochemically and structurally, and then transplanting the mutations into the context of five related PDZ domain–peptide complexes. We find a wide range of energetic effects of identical mutations in structurally similar positions, revealing a dramatic context dependence (epistasis) of designed mutations in homologous protein–protein interactions. To better understand the structural basis of this context dependence, we apply a structure-based computational model that recapitulates these energetic effects, and we use this model to make and validate forward predictions. While the context dependence of these mutations is captured by computational predictions, our results both highlight the considerable difficulties in designing protein–protein interactions and provide challenging benchmark cases for the development of improved protein modeling and design methods that accurately account for context.

  17. Quantification of the transferability of a designed protein specificity switch reveals extensive epistasis in molecular recognition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Melero, Cristina; Ollikainen, Noah; Harwood, Ian

    Re-engineering protein–protein recognition is an important route to dissecting and controlling complex interaction networks. Experimental approaches have used the strategy of “second-site suppressors,” where a functional interaction is inferred between two proteins if a mutation in one protein can be compensated by a mutation in the second. Mimicking this strategy, computational design has been applied successfully to change protein recognition specificity by predicting such sets of compensatory mutations in protein–protein interfaces. To extend this approach, it would be advantageous to be able to “transplant” existing engineered and experimentally validated specificity changes to other homologous protein–protein complexes. Here, we test this strategy by designing a pair of mutations that modulates peptide recognition specificity in the Syntrophin PDZ domain, confirming the designed interaction biochemically and structurally, and then transplanting the mutations into the context of five related PDZ domain–peptide complexes. We find a wide range of energetic effects of identical mutations in structurally similar positions, revealing a dramatic context dependence (epistasis) of designed mutations in homologous protein–protein interactions. To better understand the structural basis of this context dependence, we apply a structure-based computational model that recapitulates these energetic effects, and we use this model to make and validate forward predictions. While the context dependence of these mutations is captured by computational predictions, our results both highlight the considerable difficulties in designing protein–protein interactions and provide challenging benchmark cases for the development of improved protein modeling and design methods that accurately account for context.

  18. REVIEW: Widespread access to predictive models in the motor system: a short review

    NASA Astrophysics Data System (ADS)

    Davidson, Paul R.; Wolpert, Daniel M.

    2005-09-01

    Recent behavioural and computational studies suggest that access to internal predictive models of arm and object dynamics is widespread in the sensorimotor system. Several systems, including those responsible for oculomotor and skeletomotor control, perceptual processing, postural control and mental imagery, are able to access predictions of the motion of the arm. A capacity to make and use predictions of object dynamics is similarly widespread. Here, we review recent studies of the predictive capacity of the central nervous system, which reveal pervasive access to forward models of the environment.

  19. Mapping snow depth return levels: smooth spatial modeling versus station interpolation

    NASA Astrophysics Data System (ADS)

    Blanchet, J.; Lehning, M.

    2010-12-01

    For adequate risk management in mountainous countries, hazard maps for extreme snow events are needed. This requires the computation of spatial estimates of return levels. In this article we use recent developments in extreme value theory and compare two main approaches for mapping snow depth return levels from in situ measurements. The first one is based on the spatial interpolation of pointwise extremal distributions (the so-called Generalized Extreme Value distribution, GEV henceforth) computed at station locations. The second one is new and based on the direct estimation of a spatially smooth GEV distribution with the joint use of all stations. We compare and validate the different approaches for modeling annual maximum snow depth measured at 100 sites in Switzerland during winters 1965-1966 to 2007-2008. The results show a better performance of the smooth GEV distribution fitting, in particular where the station network is sparser. Smooth return level maps can be computed from the fitted model without any further interpolation. Their regional variability can be revealed by removing the altitudinal dependent covariates in the model. We show how return levels and their regional variability are linked to the main climatological patterns of Switzerland.
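
    The pointwise building block of the first approach, fitting a GEV to a station's annual maxima and reading off a T-year return level, is compact to express. The sketch below uses synthetic maxima rather than the Swiss station data:

    ```python
    import numpy as np
    from scipy.stats import genextreme

    rng = np.random.default_rng(3)
    # synthetic annual maximum snow depths (cm) for 43 winters at one station
    annual_max = genextreme.rvs(c=-0.1, loc=120, scale=30, size=43, random_state=rng)

    # fit the GEV and compute the T-year return level (the 1 - 1/T quantile)
    shape, loc, scale = genextreme.fit(annual_max)
    T = 50
    return_level = genextreme.ppf(1.0 - 1.0 / T, shape, loc=loc, scale=scale)
    print(f"estimated {T}-year snow depth return level: {return_level:.0f} cm")
    ```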

  20. A framework for analyzing the cognitive complexity of computer-assisted clinical ordering.

    PubMed

    Horsky, Jan; Kaufman, David R; Oppenheim, Michael I; Patel, Vimla L

    2003-01-01

    Computer-assisted provider order entry is a technology that is designed to expedite medical ordering and to reduce the frequency of preventable errors. This paper presents a multifaceted cognitive methodology for the characterization of cognitive demands of a medical information system. Our investigation was informed by the distributed resources (DR) model, a novel approach designed to describe the dimensions of user interfaces that introduce unnecessary cognitive complexity. This method evaluates the relative distribution of external (system) and internal (user) representations embodied in system interaction. We conducted an expert walkthrough evaluation of a commercial order entry system, followed by a simulated clinical ordering task performed by seven clinicians. The DR model was employed to explain variation in user performance and to characterize the relationship of resource distribution and ordering errors. The analysis revealed that the configuration of resources in this ordering application placed unnecessarily heavy cognitive demands on the user, especially on those who lacked a robust conceptual model of the system. The resources model also provided some insight into clinicians' interactive strategies and patterns of associated errors. Implications for user training and interface design based on the principles of human-computer interaction in the medical domain are discussed.

  1. ASIS v1.0: an adaptive solver for the simulation of atmospheric chemistry

    NASA Astrophysics Data System (ADS)

    Cariolle, Daniel; Moinat, Philippe; Teyssèdre, Hubert; Giraud, Luc; Josse, Béatrice; Lefèvre, Franck

    2017-04-01

    This article reports on the development and tests of the adaptive semi-implicit scheme (ASIS) solver for the simulation of atmospheric chemistry. To solve the ordinary differential equation systems associated with the time evolution of species concentrations, ASIS adopts a one-step linearized implicit scheme with specific treatments of the Jacobian of the chemical fluxes. It conserves mass and has a time-stepping module to control the accuracy of the numerical solution. In idealized box-model simulations, ASIS gives results similar to the higher-order implicit schemes derived from the Rosenbrock and Gear methods and requires less computation and run time at the moderate precision required for atmospheric applications. When implemented in the MOCAGE chemical transport model and the Laboratoire de Météorologie Dynamique Mars general circulation model, the ASIS solver performs well and reveals weaknesses and limitations of the original semi-implicit solvers used by these two models. ASIS can be easily adapted to various chemical schemes, and further developments are foreseen to increase its computational efficiency and to include the computation of aqueous-phase species concentrations in addition to gas-phase chemistry.
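
    The core of a one-step linearized implicit scheme of the kind ASIS adopts can be sketched in a few lines: rather than iterating a full Newton solve, a single linear solve with the chemical Jacobian advances the concentrations. The two-species mechanism below is a toy stiff system, not an atmospheric scheme; mass is conserved here because the columns of the toy Jacobian sum to zero:

    ```python
    import numpy as np

    def f(y, k1=1e3, k2=1.0):
        """Stiff two-species toy chemistry: fast A->B, slow B->A."""
        return np.array([-k1 * y[0] + k2 * y[1],
                          k1 * y[0] - k2 * y[1]])

    def jacobian(y, k1=1e3, k2=1.0):
        return np.array([[-k1,  k2],
                         [ k1, -k2]])

    def linearized_implicit_step(y, dt):
        """Solve (I - dt*J) dy = dt * f(y) once, instead of a Newton iteration."""
        J = jacobian(y)
        dy = np.linalg.solve(np.eye(len(y)) - dt * J, dt * f(y))
        return y + dy

    y, dt = np.array([1.0, 0.0]), 0.1      # dt far beyond the explicit limit ~1e-3
    for _ in range(50):
        y = linearized_implicit_step(y, dt)
    print(y, "mass conserved:", np.isclose(y.sum(), 1.0))
    ```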

  2. Mechanism and computational model for Lyman-α-radiation generation by high-intensity-laser four-wave mixing in Kr-Ar gas

    NASA Astrophysics Data System (ADS)

    Louchev, Oleg A.; Bakule, Pavel; Saito, Norihito; Wada, Satoshi; Yokoyama, Koji; Ishida, Katsuhiko; Iwasaki, Masahiko

    2011-09-01

    We present a theoretical model combined with a computational study of a laser four-wave mixing process under optical discharge, in which the non-steady-state four-wave amplitude equations are integrated with the kinetic equations of the initial optical discharge and electron avalanche ionization in Kr-Ar gas. The model is validated by earlier experimental data showing strong inhibition of the generation of pulsed, tunable Lyman-α (Ly-α) radiation when using sum-difference frequency mixing of 212.6 nm and tunable infrared radiation (820-850 nm). The rigorous computational approach to the problem reveals the possibility and mechanism of strong auto-oscillations in sum-difference resonant Ly-α generation due to the combined effect of (i) 212.6-nm (2+1)-photon ionization producing initial electrons, followed by (ii) the electron avalanche dominated by 843-nm radiation, and (iii) the final breakdown of the phase-matching condition. The model shows that the final efficiency of Ly-α radiation generation can achieve a value of ~5×10⁻⁴, which is restricted by the total combined absorption of the fundamental and generated radiation.

  3. Adaptive Wavelet Modeling of Geophysical Data

    NASA Astrophysics Data System (ADS)

    Plattner, A.; Maurer, H.; Dahmen, W.; Vorloeper, J.

    2009-12-01

    Despite the ever-increasing power of modern computers, realistic modeling of complex three-dimensional Earth models is still a challenging task and requires substantial computing resources. The overwhelming majority of current geophysical modeling approaches includes either finite difference or non-adaptive finite element algorithms, and variants thereof. These numerical methods usually require the subsurface to be discretized with a fine mesh to accurately capture the behavior of the physical fields. However, this may result in excessive memory consumption and computing times. A common feature of most of these algorithms is that the modeled data discretizations are independent of the model complexity, which may be wasteful when there are only minor to moderate spatial variations in the subsurface parameters. Recent developments in the theory of adaptive numerical solvers have the potential to overcome this problem. Here, we consider an adaptive wavelet based approach that is applicable to a large scope of problems, also including nonlinear problems. To the best of our knowledge such algorithms have not yet been applied in geophysics. Adaptive wavelet algorithms offer several attractive features: (i) for a given subsurface model, they allow the forward modeling domain to be discretized with a quasi minimal number of degrees of freedom, (ii) sparsity of the associated system matrices is guaranteed, which makes the algorithm memory efficient, and (iii) the modeling accuracy scales linearly with computing time. We have implemented the adaptive wavelet algorithm for solving three-dimensional geoelectric problems. To test its performance, numerical experiments were conducted with a series of conductivity models exhibiting varying degrees of structural complexity. Results were compared with a non-adaptive finite element algorithm, which incorporates an unstructured mesh to best fit subsurface boundaries. Such algorithms represent the current state-of-the-art in geoelectrical modeling. An analysis of the numerical accuracy as a function of the number of degrees of freedom revealed that the adaptive wavelet algorithm outperforms the finite element solver for simple and moderately complex models, whereas the results become comparable for models with spatially highly variable electrical conductivities. The linear dependency of the modeling error and the computing time proved to be model-independent. This feature will allow very efficient computations using large-scale models as soon as our experimental code is optimized in terms of its implementation.

  4. Surface Electrochemistry of Metals

    DTIC Science & Technology

    1993-04-30

    maxima along the 12 directions of open channels (which are also the interatomic directions). Elastic scattering angular distributions always contain... scatterer geometric relationships for such samples. Distributions from ordered atomic bilayers reveal that the Auger signal from the underlayer is attenuated... are developing a theoretical model and computational code which include both elastic scattering and inhomogeneous inelastic scattering. We seek

  5. Users' Perceptions of the Web As Revealed by Transaction Log Analysis.

    ERIC Educational Resources Information Center

    Moukdad, Haidar; Large, Andrew

    2001-01-01

    Describes the results of a transaction log analysis of a Web search engine, WebCrawler, to analyze users' queries for information retrieval. Results suggest that most users do not employ advanced search features, and that the linguistic structure of queries often resembles a human-human communication model that is not always successful in human-computer communication.…

  6. A Contrastive Study on Metadiscourse Elements Used in Humanities vs. Non Humanities across Persian and English

    ERIC Educational Resources Information Center

    Zarei, Gholam Reza; Mansoori, Sara

    2011-01-01

    The present study contrastively examined the use of metadiscourse in two disciplines (applied linguistics vs. computer engineering) across two languages (Persian and English). The selected corpus was analyzed using the model suggested by Hyland and Tse (2004). The results revealed that metadiscursive resources are used differently both within and…

  7. Comparative study of transient hydraulic tomography with varying parameterizations and zonations: Laboratory sandbox investigation

    NASA Astrophysics Data System (ADS)

    Luo, Ning; Zhao, Zhanfeng; Illman, Walter A.; Berg, Steven J.

    2017-11-01

    Transient hydraulic tomography (THT) is a robust method of aquifer characterization used to estimate the spatial distributions (or tomograms) of both hydraulic conductivity (K) and specific storage (Ss). However, the highly parameterized nature of the geostatistical inversion approach renders it computationally intensive for large-scale investigations. In addition, geostatistics-based THT may produce overly smooth tomograms when the head data used to constrain the inversion are limited. Therefore, alternative model conceptualizations for THT need to be examined. To investigate this, we simultaneously calibrated different groundwater models with varying parameterizations and zonations using two cases of different pumping and monitoring data densities from a laboratory sandbox. Specifically, one effective parameter model, four geology-based zonation models with varying accuracy and resolution, and five geostatistical models with different prior information were calibrated. Model performance is quantitatively assessed by examining the calibration and validation results. Our study reveals that highly parameterized geostatistical models perform the best among the models compared, while the zonation model with excellent knowledge of stratigraphy also yields comparable results. When few pumping tests with sparse monitoring intervals are available, the incorporation of accurate or simplified geological information into geostatistical models reveals more details in heterogeneity and yields more robust validation results. However, results deteriorate when inaccurate geological information is incorporated. Finally, our study reveals that transient inversions are necessary to obtain reliable K and Ss estimates for making accurate predictions of transient drawdown events.

  8. Mechanical unfolding reveals stable 3-helix intermediates in talin and α-catenin

    PubMed Central

    2018-01-01

    Mechanical stability is a key feature in the regulation of structural scaffolding proteins and their functions. Despite the abundance of α-helical structures among the human proteome and their undisputed importance in health and disease, the fundamental principles of their behavior under mechanical load are poorly understood. Talin and α-catenin are two key molecules in focal adhesions and adherens junctions, respectively. In this study, we used a combination of atomistic steered molecular dynamics (SMD) simulations, polyprotein engineering, and single-molecule atomic force microscopy (smAFM) to investigate unfolding of these proteins. SMD simulations revealed that talin rod α-helix bundles as well as α-catenin α-helix domains unfold through stable 3-helix intermediates. While the 5-helix bundles were found to be mechanically stable, a second stable conformation corresponding to the 3-helix state was revealed. Mechanically weaker 4-helix bundles easily unfolded into a stable 3-helix conformation. The results of smAFM experiments were in agreement with the findings of the computational simulations. The disulfide clamp mutants, designed to protect the stable state, support the 3-helix intermediate model in both experimental and computational setups. As a result, multiple discrete unfolding intermediate states in the talin and α-catenin unfolding pathway were discovered. Better understanding of the mechanical unfolding mechanism of α-helix proteins is a key step towards comprehensive models describing the mechanoregulation of proteins. PMID:29698481

  9. Mechanism of Amido-Thiourea Catalyzed Enantioselective Imine Hydrocyanation: Transition State Stabilization via Multiple Non-Covalent Interactions

    PubMed Central

    Zuend, Stephan J.

    2009-01-01

    An experimental and computational investigation of amido-thiourea promoted imine hydrocyanation has revealed a new and unexpected mechanism of catalysis. Rather than direct activation of the imine by the thiourea, as had been proposed previously in related systems, the data are consistent with a mechanism involving catalyst-promoted proton transfer from hydrogen isocyanide to imine to generate diastereomeric iminium/cyanide ion pairs that are bound to catalyst through multiple non-covalent interactions; these ion pairs collapse to form the enantiomeric α-aminonitrile products. This mechanistic proposal is supported by the observation of a statistically significant correlation between experimental and calculated enantioselectivities induced by eight different catalysts (P ≪ 0.01). The computed models reveal a basis for enantioselectivity that involves multiple stabilizing and destabilizing interactions between substrate and catalyst, including thiourea-cyanide and amide-iminium interactions. PMID:19778044

  10. Computational analysis of human and mouse CREB3L4 Protein

    PubMed Central

    Velpula, Kiran Kumar; Rehman, Azeem Abdul; Chigurupati, Soumya; Sanam, Ramadevi; Inampudi, Krishna Kishore; Akila, Chandra Sekhar

    2012-01-01

    CREB3L4 is a member of the CREB/ATF transcription factor family, characterized by regulation of gene expression through the cAMP-responsive element. Previous studies identified this protein in mice and humans. Whereas CREB3L4 in mice (referred to as Tisp40) is found in the testes and functions in spermatogenesis, human CREB3L4 is primarily detected in the prostate and has been implicated in cancer. We conducted computational analyses to compare the structural homology between murine Tisp40α and human CREB3L4. Our results reveal that the primary and secondary structures of the two proteins share high similarity. Additionally, the predicted transmembrane helical structure reveals that the proteins likely have similar structure and function. This study offers preliminary findings that support the translation of mouse Tisp40α findings into human models, based on structural homology. PMID:22829733

  11. Use of computer models to assess exposure to agricultural chemicals via drinking water.

    PubMed

    Gustafson, D I

    1995-10-27

    Surveys of drinking water quality throughout the agricultural regions of the world have revealed the tendency of certain crop protection chemicals to enter water supplies. Fortunately, the trace concentrations that have been detected are generally well below the levels thought to have any negative impact on human health or the environment. However, the public expects drinking water to be pristine and seems willing to bear the costs involved in further regulating agricultural chemical use in such a way as to eliminate the potential for such materials to occur at any detectable level. Of all the tools available to assess exposure to agricultural chemicals via drinking water, computer models are among the most cost-effective. Although not sufficiently predictive to be used in the absence of any field data, such computer programs can be used with some degree of certainty to perform quantitative extrapolations and thereby quantify regional exposure from field-scale monitoring information. Specific models and modeling techniques are discussed for performing such exposure analyses. Improvements in computer technology have recently made it practical to use Monte Carlo and other probabilistic techniques as routine tools for estimating human exposure. Such methods make it possible, at least in principle, to prepare exposure estimates with known confidence intervals and sufficient statistical validity to be used in the regulatory management of agricultural chemicals.
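
    A minimal Monte Carlo exposure propagation of the kind the abstract alludes to is shown below; the concentration, intake, and body-weight distributions are hypothetical placeholders rather than survey-derived values:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    n = 100_000
    conc = rng.lognormal(mean=np.log(0.1), sigma=1.0, size=n)    # ug/L in drinking water
    intake = rng.normal(2.0, 0.5, size=n).clip(min=0.1)          # L/day
    body_weight = rng.normal(70.0, 12.0, size=n).clip(min=30.0)  # kg

    dose = conc * intake / body_weight                           # ug/kg/day
    lo, med, hi = np.percentile(dose, [5, 50, 95])
    print(f"dose (ug/kg/day): median {med:.4f}, 90% interval [{lo:.4f}, {hi:.4f}]")
    ```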

  12. Human Environmental Disease Network: A computational model to assess toxicology of contaminants.

    PubMed

    Taboureau, Olivier; Audouze, Karine

    2017-01-01

    During the past decades, many epidemiological, toxicological and biological studies have been performed to assess the role of environmental chemicals as potential toxicants associated with diverse human disorders. However, the relationships between diseases based on chemical exposure rarely have been studied by computational biology. We developed a human environmental disease network (EDN) to explore and suggest novel disease-disease and chemical-disease relationships. The presented scored EDN model is built upon the integration of systems biology and chemical toxicology using information on chemical contaminants and their disease relationships reported in the TDDB database. The resulting human EDN takes into consideration the level of evidence of the toxicant-disease relationships, allowing inclusion of some degrees of significance in the disease-disease associations. Such a network can be used to identify uncharacterized connections between diseases. Examples are discussed for type 2 diabetes (T2D). Additionally, this computational model allows confirmation of already known links between chemicals and diseases (e.g., between bisphenol A and behavioral disorders) and also reveals unexpected associations between chemicals and diseases (e.g., between chlordane and olfactory alteration), thus predicting which chemicals may be risk factors to human health. The proposed human EDN model allows exploration of common biological mechanisms of diseases associated with chemical exposure, helping us to gain insight into disease etiology and comorbidity. This computational approach is an alternative to animal testing supporting the 3R concept.
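
    One simple way to realize a disease-disease layer from chemical-disease links is a weighted bipartite projection, sketched below with an invented toy edge list standing in for the database records; the construction and weights are illustrative and do not reproduce the paper's evidence-scored method:

    ```python
    import networkx as nx
    from networkx.algorithms import bipartite

    # toy chemical-disease links (hypothetical stand-ins for database records)
    chem_disease = [
        ("bisphenol A", "behavioral disorder"),
        ("bisphenol A", "type 2 diabetes"),
        ("chlordane", "olfactory alteration"),
        ("chlordane", "type 2 diabetes"),
        ("arsenic", "type 2 diabetes"),
    ]
    chemicals = {c for c, _ in chem_disease}
    diseases = {d for _, d in chem_disease}

    B = nx.Graph()
    B.add_nodes_from(chemicals, bipartite=0)
    B.add_nodes_from(diseases, bipartite=1)
    B.add_edges_from(chem_disease)

    # Two diseases are linked if they share a chemical; the weight counts shared
    # chemicals (a crude stand-in for the paper's levels of evidence).
    EDN = bipartite.weighted_projected_graph(B, diseases)
    for u, v, w in EDN.edges(data="weight"):
        print(u, "--", v, "shared chemicals:", w)
    ```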

  13. Criteria for Modeling in LES of Multicomponent Fuel Flow

    NASA Technical Reports Server (NTRS)

    Bellan, Josette; Selle, Laurent

    2009-01-01

    A report presents a study addressing the question of which large-eddy simulation (LES) equations are appropriate for modeling the flow of evaporating drops of a multicomponent liquid in a gas (e.g., a spray of kerosene or diesel fuel in air). The LES equations are obtained from the direct numerical simulation (DNS) equations in which the solution is computed at all flow length scales, by applying a spatial low-pass filter. Thus, in LES the small scales are removed and replaced by terms that cannot be computed from the LES solution and instead must be modeled to retain the effect of the small scales into the equations. The mathematical form of these models is a subject of contemporary research. For a single-component liquid, there is only one LES formulation, but this study revealed that for a multicomponent liquid, there are two non-equivalent LES formulations for the conservation equations describing the composition of the vapor. Criteria were proposed for selecting the multicomponent LES formulation that gives the best accuracy and increased computational efficiency. These criteria were applied in examination of filtered DNS databases to compute the terms in the LES equations. The DNS databases are from mixing layers of diesel and kerosene fuels. The comparisons resulted in the selection of one of the multicomponent LES formulations as the most promising with respect to all criteria.

  14. Three-dimensional geoelectric modelling with optimal work/accuracy rate using an adaptive wavelet algorithm

    NASA Astrophysics Data System (ADS)

    Plattner, A.; Maurer, H. R.; Vorloeper, J.; Dahmen, W.

    2010-08-01

    Despite the ever-increasing power of modern computers, realistic modelling of complex 3-D earth models is still a challenging task and requires substantial computing resources. The overwhelming majority of current geophysical modelling approaches includes either finite difference or non-adaptive finite element algorithms and variants thereof. These numerical methods usually require the subsurface to be discretized with a fine mesh to accurately capture the behaviour of the physical fields. However, this may result in excessive memory consumption and computing times. A common feature of most of these algorithms is that the modelled data discretizations are independent of the model complexity, which may be wasteful when there are only minor to moderate spatial variations in the subsurface parameters. Recent developments in the theory of adaptive numerical solvers have the potential to overcome this problem. Here, we consider an adaptive wavelet-based approach that is applicable to a large range of problems, also including nonlinear problems. In comparison with earlier applications of adaptive solvers to geophysical problems, we employ here a new adaptive scheme whose core ingredients arose from a rigorous analysis of the overall asymptotically optimal computational complexity, including, in particular, an optimal work/accuracy rate. Our adaptive wavelet algorithm offers several attractive features: (i) for a given subsurface model, it allows the forward modelling domain to be discretized with a quasi minimal number of degrees of freedom, (ii) sparsity of the associated system matrices is guaranteed, which makes the algorithm memory efficient and (iii) the modelling accuracy scales linearly with computing time. We have implemented the adaptive wavelet algorithm for solving 3-D geoelectric problems. To test its performance, numerical experiments were conducted with a series of conductivity models exhibiting varying degrees of structural complexity. Results were compared with a non-adaptive finite element algorithm, which incorporates an unstructured mesh to best fit subsurface boundaries. Such algorithms represent the current state-of-the-art in geoelectric modelling. An analysis of the numerical accuracy as a function of the number of degrees of freedom revealed that the adaptive wavelet algorithm outperforms the finite element solver for simple and moderately complex models, whereas the results become comparable for models with high spatial variability of electrical conductivities. The linear dependence of the modelling error and the computing time proved to be model-independent. This feature will allow very efficient computations using large-scale models as soon as our experimental code is optimized in terms of its implementation.

  15. Modeling Effects of Annealing on Coal Char Reactivity to O2 and CO2, Based on Preparation Conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holland, Troy; Bhat, Sham; Marcy, Peter

    Oxy-fired coal combustion is a promising potential carbon capture technology. Predictive computational fluid dynamics (CFD) simulations are valuable tools in evaluating and deploying oxyfuel and other carbon capture technologies, either as retrofit technologies or for new construction. However, accurate predictive combustor simulations require physically realistic submodels with low computational requirements. A recent sensitivity analysis of a detailed char conversion model (Char Conversion Kinetics (CCK)) found thermal annealing to be an extremely sensitive submodel. In the present work, further analysis of the previous annealing model revealed significant disagreement with numerous datasets from experiments performed after that annealing model was developed. The annealing model was accordingly extended to reflect the experimentally observed reactivity loss due to the thermal annealing of a variety of coals under diverse char preparation conditions. The model extension was informed by a Bayesian calibration analysis. In addition, since oxyfuel conditions include extraordinarily high levels of CO2, the development of a first-ever model of CO2 reactivity loss due to annealing is presented.

  16. Modeling Effects of Annealing on Coal Char Reactivity to O2 and CO2, Based on Preparation Conditions

    DOE PAGES

    Holland, Troy; Bhat, Sham; Marcy, Peter; ...

    2017-08-25

    Oxy-fired coal combustion is a promising potential carbon capture technology. Predictive computational fluid dynamics (CFD) simulations are valuable tools in evaluating and deploying oxyfuel and other carbon capture technologies, either as retrofit technologies or for new construction. However, accurate predictive combustor simulations require physically realistic submodels with low computational requirements. A recent sensitivity analysis of a detailed char conversion model (Char Conversion Kinetics (CCK)) found thermal annealing to be an extremely sensitive submodel. In the present work, further analysis of the previous annealing model revealed significant disagreement with numerous datasets from experiments performed after that annealing model was developed. The annealing model was accordingly extended to reflect the experimentally observed reactivity loss due to the thermal annealing of a variety of coals under diverse char preparation conditions. The model extension was informed by a Bayesian calibration analysis. In addition, since oxyfuel conditions include extraordinarily high levels of CO2, the development of a first-ever model of CO2 reactivity loss due to annealing is presented.

  17. Brain CT image similarity retrieval method based on uncertain location graph.

    PubMed

    Pan, Haiwei; Li, Pengyuan; Li, Qing; Han, Qilong; Feng, Xiaoning; Gao, Linlin

    2014-03-01

    Many brain computed tomography (CT) images stored in hospitals contain valuable information that should be shared to support computer-aided diagnosis systems. Finding similar brain CT images in a brain CT image database can effectively help doctors diagnose based on earlier cases. However, similarity retrieval for brain CT images requires much higher accuracy than for general images. In this paper, a new model of uncertain location graph (ULG) is presented for brain CT image modeling and similarity retrieval. According to the characteristics of brain CT images, we propose a novel method to model a brain CT image as a ULG based on image texture. Then, a scheme for ULG similarity retrieval is introduced. Furthermore, an effective index structure is applied to reduce the searching time. Experimental results reveal that our method performs well on brain CT image similarity retrieval, with higher accuracy and efficiency.

  18. Spreading dynamics on complex networks: a general stochastic approach.

    PubMed

    Noël, Pierre-André; Allard, Antoine; Hébert-Dufresne, Laurent; Marceau, Vincent; Dubé, Louis J

    2014-12-01

    Dynamics on networks is considered from the perspective of Markov stochastic processes. We partially describe the state of the system through network motifs and infer any missing data using the available information. This versatile approach is especially well adapted for modelling spreading processes and/or population dynamics. In particular, the generality of our framework and the fact that its assumptions are explicitly stated suggest that it could be used as a common ground for comparing existing epidemic models too complex for direct comparison, such as agent-based computer simulations. We provide many examples for the special cases of susceptible-infectious-susceptible and susceptible-infectious-removed dynamics (e.g., epidemic propagation) and we observe multiple situations where accurate results may be obtained at low computational cost. Our perspective reveals a subtle balance between the complex requirements of a realistic model and its basic assumptions.
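
    For concreteness, the sketch below runs the kind of agent-level susceptible-infectious-susceptible process on a network that the motif-based framework is designed to approximate analytically; the graph and rates are illustrative choices:

    ```python
    import numpy as np
    import networkx as nx

    rng = np.random.default_rng(5)
    G = nx.erdos_renyi_graph(500, 0.02, seed=5)
    beta, gamma = 0.08, 0.05                 # per-edge infection / recovery probabilities

    infected = np.zeros(G.number_of_nodes(), dtype=bool)
    infected[rng.choice(G.number_of_nodes(), size=5, replace=False)] = True

    for t in range(200):                     # discrete-time stochastic SIS dynamics
        new = infected.copy()
        for node in G:
            if infected[node]:
                if rng.random() < gamma:
                    new[node] = False        # recovery back to susceptible
            else:
                k = sum(infected[nb] for nb in G[node])
                if rng.random() < 1 - (1 - beta) ** k:
                    new[node] = True         # infection from infected neighbors
        infected = new
    print("endemic prevalence ~", infected.mean())
    ```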

  19. Computational approach for deriving cancer progression roadmaps from static sample data

    PubMed Central

    Yao, Jin; Yang, Le; Chen, Runpu; Nowak, Norma J.

    2017-01-01

    As with any biological process, cancer development is inherently dynamic. While major efforts continue to catalog the genomic events associated with human cancer, it remains difficult to interpret and extrapolate the accumulating data to provide insights into the dynamic aspects of the disease. Here, we present a computational strategy that enables the construction of a cancer progression model using static tumor sample data. The developed approach overcame many technical limitations of existing methods. Application of the approach to breast cancer data revealed a linear, branching model with two distinct trajectories for malignant progression. The validity of the constructed model was demonstrated in 27 independent breast cancer data sets, and through visualization of the data in the context of disease progression we were able to identify a number of potentially key molecular events in the advance of breast cancer to malignancy. PMID:28108658

  20. Development of Environmental Load Estimation Model for Road Drainage Systems in the Early Design Phase

    NASA Astrophysics Data System (ADS)

    Park, Jin-Young; Lee, Dong-Eun; Kim, Byung-Soo

    2017-10-01

    Due to increasing concern about climate change, efforts to reduce environmental load are continuously being made in the construction industry, and LCA (life cycle assessment) is presented as an effective method to assess environmental load. However, since LCA requires information on construction quantities for environmental load estimation, it is not utilized in environmental reviews during the early design phase, where such information is difficult to obtain. In this study, a computation system for construction quantities based on the standard cross-sections of road drainage facilities was developed to compute the quantities required for LCA using only information available in the early design phase, and a model that performs environmental load estimation was developed and its effectiveness verified. The results showed that it is an effective model for use in the early design phase, with a mean absolute error rate of 13.39%.

  1. Visualized modeling platform for virtual plant growth and monitoring on the internet

    NASA Astrophysics Data System (ADS)

    Zhou, De-fu; Tian, Feng-qui; Ren, Ping

    2009-07-01

    Virtual plant growth is a key research topic in agricultural information technology and computer graphics. It has been applied in botany, agronomy, environmental sciences, computer sciences, and applied mathematics. Modeling leaf color dynamics in plants is of significant importance for realizing virtual plant growth. Using systematic analysis methods and dynamic modeling technology, a SPAD-based leaf color dynamic model was developed to simulate the time-course change characteristics of leaf SPAD on the plant. In addition, the process of plant growth can be computer-simulated using the Virtual Reality Modeling Language (VRML) to establish a vivid and visible model, including shooting, rooting, and blooming, as well as growth of the stems and leaves. Under stress conditions, e.g., lack of water, air, or nutrients, high salinity or alkalinity, freezing injury, high temperature, or damage from diseases and insect pests, the changes from the level of the whole plant down to organs, tissues, and cells can be computer-simulated, and the accompanying physiological and biochemical changes can also be described. When a series of indexes is input by the user, intuitive views of these microcosmic changes can be shown. Thus, the model performs well in predicting the growth condition of the plant, laying a foundation for further construction of a virtual plant growth system. The results revealed that realistic physiological and pathological processes of 3D virtual plants can be demonstrated by proper design and effectively realized on the internet.

  2. Material-Model-Based Determination of the Shock-Hugoniot Relations in Nanosegregated Polyurea

    NASA Astrophysics Data System (ADS)

    Grujicic, Mica; Snipes, J. S.; Galgalikar, R.; Ramaswami, S.

    2014-02-01

    Previous experimental investigations reported in the open literature have indicated that applying polyurea external coatings and/or internal linings can substantially improve ballistic penetration resistance and blast survivability of buildings, vehicles, and laboratory/field test-plates, as well as the blast-mitigation capacity of combat helmets. The protective role of polyurea coatings/linings has been linked to polyurea microstructure, which consists of discrete hard-domains distributed randomly within a compliant/soft matrix. When this protective role is investigated computationally, the availability of reliable, high-fidelity constitutive models for polyurea is vitally important. In the present work, a comprehensive overview and a critical assessment of a polyurea material constitutive model, recently proposed by Shim and Mohr (Int J Plast 27:868-886, 2011), are carried out. The review revealed that this model can accurately account for the experimentally measured uniaxial-stress versus strain data obtained under monotonic and multistep compressive loading/unloading conditions, as well as under stress relaxation conditions. On the other hand, by combining analytical and finite-element procedures with the material model in order to define the basic shock-Hugoniot relations for this material, it was found that the computed shock-Hugoniot relations differ significantly from their experimental counterparts. Potential reasons for the disagreement between the computed and experimental shock-Hugoniot relations are identified.

  3. Computationally Guided Design of Polymer Electrolytes for Battery Applications

    NASA Astrophysics Data System (ADS)

    Wang, Zhen-Gang; Webb, Michael; Savoie, Brett; Miller, Thomas

    We develop an efficient computational framework for guiding the design of polymer electrolytes for Li battery applications. Short-time molecular dynamics (MD) simulations are employed to identify key structural and dynamic features in the solvation and motion of Li ions, such as the structure of the solvation shells, the spatial distribution of solvation sites, and the polymer segmental mobility. Comparative studies on six polyester-based polymers and polyethylene oxide (PEO) yield good agreement with experimental data on the ion conductivities, and reveal significant differences in the ion diffusion mechanism between PEO and the polyesters. The molecular insights from the MD simulations are used to build a chemically specific coarse-grained model in the spirit of the dynamic bond percolation model of Druger, Ratner and Nitzan. We apply this coarse-grained model to characterize Li ion diffusion in several existing and yet-to-be synthesized polyethers that differ by oxygen content and backbone stiffness. Good agreement is obtained between the predictions of the coarse-grained model and long-timescale atomistic MD simulations, thus providing validation of the model. Our study predicts higher Li ion diffusivity in poly(trimethylene oxide-alt-ethylene oxide) than in PEO. These results demonstrate the potential of this computational framework for rapid screening of new polymer electrolytes based on ion diffusivity.

  4. Computational design of an endo-1,4-β-xylanase ligand binding site

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morin, Andrew; Kaufmann, Kristian W.; Fortenberry, Carie

    2012-09-05

    The field of computational protein design has experienced important recent success. However, the de novo computational design of high-affinity protein-ligand interfaces is still largely an open challenge. Using the Rosetta program, we attempted the in silico design of a high-affinity protein interface to a small peptide ligand. We chose the thermophilic endo-1,4-β-xylanase from Nonomuraea flexuosa as the protein scaffold on which to perform our designs. Over the course of the study, 12 proteins derived from this scaffold were produced and assayed for binding to the target ligand. Unfortunately, none of the designed proteins displayed evidence of high-affinity binding. Structural characterization of four designed proteins revealed that although the predicted structure of the protein model was highly accurate, this structural accuracy did not translate into accurate prediction of binding affinity. Crystallographic analyses indicate that the lack of binding affinity is possibly due to unaccounted-for protein dynamics in the 'thumb' region of our design scaffold intrinsic to the family 11 β-xylanase fold. Further computational analysis revealed two specific, single amino acid substitutions responsible for an observed change in backbone conformation, and decreased dynamic stability of the catalytic cleft. These findings offer new insight into the dynamic and structural determinants of the β-xylanase proteins.

  5. Computational design of environmental sensors for the potent opioid fentanyl

    DOE PAGES

    Bick, Matthew J.; Greisen, Per J.; Morey, Kevin J.; ...

    2017-09-19

    Here, we describe the computational design of proteins that bind the potent analgesic fentanyl. Our approach employs a fast docking algorithm to find shape complementary ligand placement in protein scaffolds, followed by design of the surrounding residues to optimize binding affinity. Co-crystal structures of the highest affinity binder reveal a highly preorganized binding site, and an overall architecture and ligand placement in close agreement with the design model. We also use the designs to generate plant sensors for fentanyl by coupling ligand binding to design stability. The method should be generally useful for detecting toxic hydrophobic compounds in the environment.

  6. Computational design of environmental sensors for the potent opioid fentanyl

    PubMed Central

    Morey, Kevin J; Antunes, Mauricio S; La, David; Sankaran, Banumathi; Reymond, Luc; Johnsson, Kai; Medford, June I

    2017-01-01

    We describe the computational design of proteins that bind the potent analgesic fentanyl. Our approach employs a fast docking algorithm to find shape complementary ligand placement in protein scaffolds, followed by design of the surrounding residues to optimize binding affinity. Co-crystal structures of the highest affinity binder reveal a highly preorganized binding site, and an overall architecture and ligand placement in close agreement with the design model. We use the designs to generate plant sensors for fentanyl by coupling ligand binding to design stability. The method should be generally useful for detecting toxic hydrophobic compounds in the environment. PMID:28925919

  7. Computational design of environmental sensors for the potent opioid fentanyl

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bick, Matthew J.; Greisen, Per J.; Morey, Kevin J.

    Here, we describe the computational design of proteins that bind the potent analgesic fentanyl. Our approach employs a fast docking algorithm to find shape complementary ligand placement in protein scaffolds, followed by design of the surrounding residues to optimize binding affinity. Co-crystal structures of the highest affinity binder reveal a highly preorganized binding site, and an overall architecture and ligand placement in close agreement with the design model. We also use the designs to generate plant sensors for fentanyl by coupling ligand binding to design stability. The method should be generally useful for detecting toxic hydrophobic compounds in the environment.

  8. Annual Rainfall Forecasting by Using Mamdani Fuzzy Inference System

    NASA Astrophysics Data System (ADS)

    Fallah-Ghalhary, G.-A.; Habibi Nokhandan, M.; Mousavi Baygi, M.

    2009-04-01

    Long-term rainfall prediction is very important to countries thriving on agro-based economies. In general, climate and rainfall are highly non-linear natural phenomena, giving rise to what is known as the "butterfly effect". The parameters required to predict rainfall are enormous, even for a short period. Soft computing is an innovative approach to constructing computationally intelligent systems that are supposed to possess humanlike expertise within a specific domain, adapt themselves and learn to do better in changing environments, and explain how they make decisions. Unlike conventional artificial intelligence techniques, the guiding principle of soft computing is to exploit tolerance for imprecision, uncertainty, and partial truth to achieve tractability, robustness, and better rapport with reality. In this paper, 33 years of rainfall data from Khorasan province, the northeastern part of Iran situated at latitude-longitude pairs (31°-38°N, 74°-80°E), were analyzed. This research attempted to train Fuzzy Inference System (FIS) based prediction models with the 33 years of rainfall data. For performance evaluation, the model-predicted outputs were compared with the actual rainfall data. Simulation results reveal that soft computing techniques are promising and efficient. The test results using the FIS model showed an RMSE of 52 millimeters.
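
    A Mamdani inference cycle (fuzzify, apply min-implication rules, aggregate by max, defuzzify by centroid) is easy to sketch. The variables, membership functions, and rules below are invented for illustration and are not the paper's calibrated system:

    ```python
    import numpy as np

    def tri(x, a, b, c):
        """Triangular membership function rising from a, peaking at b, falling to c."""
        return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

    rain = np.linspace(0.0, 400.0, 401)      # output universe: annual rainfall (mm)

    def forecast(humidity, temperature):
        # antecedent memberships (hypothetical fuzzy sets)
        hum_low = tri(humidity, 0.0, 20.0, 50.0)
        hum_high = tri(humidity, 40.0, 80.0, 100.0)
        temp_hot = tri(temperature, 20.0, 35.0, 45.0)
        # rule 1: IF humidity is high THEN rainfall is high (min implication)
        r1 = np.minimum(hum_high, tri(rain, 150.0, 300.0, 400.0))
        # rule 2: IF humidity is low AND temperature is hot THEN rainfall is low
        r2 = np.minimum(min(hum_low, temp_hot), tri(rain, 0.0, 50.0, 150.0))
        agg = np.maximum(r1, r2)             # max aggregation over the rule set
        return np.sum(rain * agg) / (np.sum(agg) + 1e-12)  # centroid defuzzification

    print(f"forecast annual rainfall: {forecast(humidity=70.0, temperature=25.0):.0f} mm")
    ```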

  9. A Structured-Inquiry Approach to Teaching Neurophysiology Using Computer Simulation

    PubMed Central

    Crisp, Kevin M.

    2012-01-01

    Computer simulation is a valuable tool for teaching the fundamentals of neurophysiology in undergraduate laboratories where time and equipment limitations restrict the amount of course content that can be delivered through hands-on interaction. However, students often find such exercises to be tedious and unstimulating. In an effort to engage students in the use of computational modeling while developing a deeper understanding of neurophysiology, an attempt was made to use an educational neurosimulation environment as the basis for a novel, inquiry-based research project. During the semester, students in the class wrote a research proposal, used the Neurodynamix II simulator to generate a large data set, analyzed their modeling results statistically, and presented their findings at the Midbrains Neuroscience Consortium undergraduate poster session. Learning was assessed in the form of a series of short term papers and two 10-min in-class writing responses to the open-ended question, “How do ion channels influence neuronal firing?”, which they completed on weeks 6 and 15 of the semester. Students’ answers to this question showed a deeper understanding of neuronal excitability after the project; their term papers revealed evidence of critical thinking about computational modeling and neuronal excitability. Suggestions for the adaptation of this structured-inquiry approach into shorter term lab experiences are discussed. PMID:23494064

  10. An optical flow-based state-space model of the vocal folds.

    PubMed

    Granados, Alba; Brunskog, Jonas

    2017-06-01

    High-speed movies of vocal fold vibration are valuable data for revealing vocal fold features for voice pathology diagnosis. This work presents a suitable Bayesian model and a purely theoretical discussion for further development of a framework for the estimation of continuum biomechanical features. A linear and Gaussian nonstationary state-space model is proposed and thoroughly discussed. The evolution model is based on a self-sustained three-dimensional finite element model of the vocal folds, and the observation model involves a dense optical flow algorithm. The results show that the method is able to capture different deformation patterns between the computed optical flow and the finite element deformation, controlled by the choice of the model tissue parameters.
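
    Because the proposed state-space model is linear and Gaussian, its state estimate is given by the standard Kalman recursion. Below is a minimal NumPy sketch of that predict/update loop; the matrices F, H, Q, R are generic placeholders standing in for the finite element evolution model and the optical-flow observation operator, not the paper's implementation.

      # Generic Kalman filter for x' = F x + w, y = H x + v (sketch).
      import numpy as np

      def kalman_filter(ys, F, H, Q, R, x0, P0):
          x, P = x0, P0
          estimates = []
          for y in ys:
              x = F @ x                          # predict state
              P = F @ P @ F.T + Q                # predict covariance
              S = H @ P @ H.T + R                # innovation covariance
              K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
              x = x + K @ (y - H @ x)            # update with measurement
              P = (np.eye(len(x)) - K @ H) @ P
              estimates.append(x.copy())
          return np.array(estimates)

      # 1-D random-walk demo with noisy observations:
      rng = np.random.default_rng(0)
      truth = np.cumsum(rng.standard_normal(100))
      ys = (truth + 0.5 * rng.standard_normal(100)).reshape(-1, 1)
      F = H = Q = np.eye(1)
      xs = kalman_filter(ys, F, H, Q, 0.25 * np.eye(1), np.zeros(1), np.eye(1))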

  11. Structure-preserving and rank-revealing QR-factorizations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bischof, C.H.; Hansen, P.C.

    1991-11-01

    The rank-revealing QR-factorization (RRQR-factorization) is a special QR-factorization that is guaranteed to reveal the numerical rank of the matrix under consideration. This makes the RRQR-factorization a useful tool in the numerical treatment of many rank-deficient problems in numerical linear algebra. In this paper, a framework is presented for the efficient implementation of RRQR algorithms, in particular, for sparse matrices. A sparse RRQR-algorithm should seek to preserve the structure and sparsity of the matrix as much as possible while retaining the ability to capture safely the numerical rank. To this end, the paper proposes to compute an initial QR-factorization using a restricted pivoting strategy guarded by incremental condition estimation (ICE), and then to apply the algorithm suggested by Chan and Foster to this QR-factorization. The column exchange strategy used in the initial QR-factorization exploits the fact that certain column exchanges do not change the sparsity structure, and computes a sparse QR-factorization that is a good approximation of the sought-after RRQR-factorization. Due to quantities produced by ICE, the Chan/Foster RRQR algorithm can be implemented very cheaply, thus verifying that the sought-after RRQR-factorization has indeed been computed. Experimental results on a model problem show that the initial QR-factorization is indeed very likely to produce an RRQR-factorization.
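
    For readers who want to experiment, a dense column-pivoted QR already behaves as a practical rank-revealing QR: the decay along the diagonal of R exposes the numerical rank. The SciPy sketch below illustrates that; it is not the structure-preserving sparse algorithm with ICE described above.

      # Pivoted QR as a practical RRQR on a dense, numerically rank-8 matrix.
      import numpy as np
      from scipy.linalg import qr

      rng = np.random.default_rng(0)
      A = rng.standard_normal((50, 8)) @ rng.standard_normal((8, 20))

      Q, R, piv = qr(A, mode='economic', pivoting=True)
      diag = np.abs(np.diag(R))                  # non-increasing by pivoting
      tol = diag[0] * max(A.shape) * np.finfo(float).eps
      print(int(np.sum(diag > tol)))             # numerical rank, expected 8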

  12. Predicting Cost/Performance Trade-Offs for Whitney: A Commodity Computing Cluster

    NASA Technical Reports Server (NTRS)

    Becker, Jeffrey C.; Nitzberg, Bill; VanderWijngaart, Rob F.; Kutler, Paul (Technical Monitor)

    1997-01-01

    Recent advances in low-end processor and network technology have made it possible to build a "supercomputer" out of commodity components. We develop simple models of the NAS Parallel Benchmarks version 2 (NPB 2) to explore the cost/performance trade-offs involved in building a balanced parallel computer supporting a scientific workload. We develop closed form expressions detailing the number and size of messages sent by each benchmark. Coupling these with measured single processor performance, network latency, and network bandwidth, our models predict benchmark performance to within 30%. A comparison based on total system cost reveals that current commodity technology (200 MHz Pentium Pros with 100baseT Ethernet) is well balanced for the NPBs up to a total system cost of around $1,000,000.
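
    A cost model of the kind described reduces to a one-line function: per-benchmark runtime is single-processor compute time plus a latency term per message plus a bandwidth term per byte. The sketch below uses placeholder message counts rather than the paper's closed-form NPB expressions.

      # Latency/bandwidth performance model (sketch; inputs are placeholders).
      def predicted_time(t_comp, n_msgs, total_bytes,
                         latency=1e-4, bandwidth=100e6 / 8):
          """latency in s; bandwidth in bytes/s (100baseT ~ 12.5 MB/s)."""
          return t_comp + n_msgs * latency + total_bytes / bandwidth

      # e.g. 60 s of compute, 10,000 messages totalling 200 MB:
      print(predicted_time(60.0, 10_000, 200e6))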

  13. Wormlike Chain Theory and Bending of Short DNA

    NASA Astrophysics Data System (ADS)

    Mazur, Alexey K.

    2007-05-01

    The probability distributions for bending angles in double helical DNA obtained in all-atom molecular dynamics simulations are compared with theoretical predictions. The computed distributions remarkably agree with the wormlike chain theory and qualitatively differ from predictions of the subelastic chain model. The computed data exhibit only small anomalies in the apparent flexibility of short DNA and cannot account for the recently reported AFM data. It is possible that the current atomistic DNA models miss some essential mechanisms of DNA bending on intermediate length scales. Analysis of bent DNA structures reveals, however, that the bending motion is structurally heterogeneous and directionally anisotropic on the length scales where the experimental anomalies were detected. These effects are essential for the interpretation of the experimental data and may also be responsible for the apparent discrepancy.
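
    For reference, the wormlike chain in the weak-bending regime predicts a bending-angle density P(θ) ∝ sin θ · exp(−ℓp θ²/2ℓ) for a segment of contour length ℓ and persistence length ℓp. The sketch below evaluates this with typical B-DNA numbers; it illustrates the theory being tested, not the simulation protocol.

      # Harmonic (WLC) bending-angle distribution for a short DNA segment.
      import numpy as np

      LP = 50.0                                  # persistence length, nm
      L = 5.0                                    # segment contour length, nm
      theta = np.linspace(1e-4, np.pi, 1000)
      p = np.sin(theta) * np.exp(-LP * theta**2 / (2 * L))
      p /= np.trapz(p, theta)                    # normalize the density
      print(np.degrees(np.trapz(theta * p, theta)))  # mean bending angle, deg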

  14. Systems cell biology.

    PubMed

    Mast, Fred D; Ratushny, Alexander V; Aitchison, John D

    2014-09-15

    Systems cell biology melds high-throughput experimentation with quantitative analysis and modeling to understand many critical processes that contribute to cellular organization and dynamics. Recently, there have been several advances in technology and in the application of modeling approaches that enable the exploration of the dynamic properties of cells. Merging technology and computation offers an opportunity to objectively address unsolved cellular mechanisms, and has revealed emergent properties and helped to gain a more comprehensive and fundamental understanding of cell biology. © 2014 Mast et al.

  15. Spatial-temporal filter effect in a computer model study of ventricular fibrillation.

    PubMed

    Nowak, Claudia N; Fischer, Gerald; Wieser, Leonhard; Tilg, Bernhard; Neurauter, Andreas; Strohmenger, Hans U

    2008-08-01

    Prediction of countershock success from ventricular fibrillation (VF) ECG is a major challenge in critical care medicine. Recent findings indicate that stable, high frequency mother rotors are one possible mechanism maintaining VF. A computer model study was performed to investigate how epicardiac sources are reflected in the ECG. In the cardiac tissues of two computer models - a model with cubic geometry and a simplified torso model with a left ventricle - a mother rotor was induced by increasing the potassium rectifier current. On the epicardium, the dominant frequency (DF) map revealed a constant DF of 23 Hz (cubic model) and 24.4 Hz (torso model) in the region of the mother rotor, respectively. A sharp drop of frequency (3-18 Hz in the cubic model and 12.4-18 Hz in the torso model) occurred in the surrounding epicardial tissue of chaotic fibrillatory conduction. While no organized pattern was observable on the body surface of the cubic model, the mother rotor frequency can be identified in the anterior surface of the torso model because of the chosen position of the mother rotor in the ventricle (shortest distance to the body surface). Nevertheless, the DFs were damped on the body surfaces of both models (4.6-8.5 Hz in the cubic model and 14.4-16.4 Hz in the torso model). Thus, it was shown in this computer model study that wave propagation transforms the spatial low pass filtering of the thorax into a temporal low pass. In contrast to the resistive-capacitive low pass filter formed by the tissue, this spatial-temporal low pass filter becomes effective at low frequencies (tens of Hertz). This effect damps the high frequency components arising from the heart and it hampers a direct observation of rapid, organized sources of VF in the ECGs, when in an emergency case an artifact-free recording is not possible.
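
    A dominant-frequency map of the kind discussed is straightforward to compute from per-site time series: remove the mean, take the power spectrum, and pick the spectral peak in a physiological band. A NumPy sketch with assumed array shapes and sampling rate:

      # DF map from an (n_sites, n_samples) array of potentials (sketch).
      import numpy as np

      def dominant_frequency_map(signals, fs, fmin=1.0, fmax=40.0):
          n = signals.shape[1]
          freqs = np.fft.rfftfreq(n, d=1.0 / fs)
          spec = np.abs(np.fft.rfft(signals - signals.mean(axis=1, keepdims=True),
                                    axis=1)) ** 2
          band = (freqs >= fmin) & (freqs <= fmax)
          return freqs[band][np.argmax(spec[:, band], axis=1)]

      # demo: 2 s of a 23 Hz source sampled at 500 Hz -> DF ~ 23 Hz
      fs = 500.0
      t = np.arange(0, 2, 1 / fs)
      print(dominant_frequency_map(np.sin(2 * np.pi * 23.0 * t)[None, :], fs))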

  16. Distributed collaborative probabilistic design for turbine blade-tip radial running clearance using support vector machine of regression

    NASA Astrophysics Data System (ADS)

    Fei, Cheng-Wei; Bai, Guang-Chen

    2014-12-01

    To improve the computational precision and efficiency of probabilistic design for mechanical dynamic assemblies like the blade-tip radial running clearance (BTRRC) of a gas turbine, a distributed collaborative probabilistic design method based on support vector machine regression (called DCSRM) is proposed by integrating the distributed collaborative response surface method with a support vector machine regression model. The mathematical model of DCSRM is established and the probabilistic design idea of DCSRM is introduced. The dynamic assembly probabilistic design of an aeroengine high-pressure turbine (HPT) BTRRC is accomplished to verify the proposed DCSRM. The analysis results reveal the optimal static blade-tip clearance of the HPT for designing the BTRRC and improving the performance and reliability of the aeroengine. The comparison of methods shows that the DCSRM has high computational accuracy and high computational efficiency in BTRRC probabilistic analysis. The present research offers an effective way for the reliability design of mechanical dynamic assemblies and enriches mechanical reliability theory and methods.
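
    The surrogate idea generalizes readily: train a support vector regression model on a modest number of expensive model evaluations, then run the probabilistic analysis as cheap Monte Carlo on the surrogate. The sketch below uses scikit-learn's SVR with a stand-in "expensive model", not the BTRRC assembly model; the threshold defining failure is likewise illustrative.

      # SVR surrogate + Monte Carlo reliability analysis (sketch).
      import numpy as np
      from sklearn.svm import SVR

      def expensive_model(x):                    # placeholder physics model
          return 1.2 + 0.3 * x[:, 0] - 0.1 * x[:, 1] ** 2

      rng = np.random.default_rng(1)
      X_train = rng.normal(size=(100, 2))        # design of experiments
      surrogate = SVR(kernel='rbf', C=10.0).fit(X_train, expensive_model(X_train))

      X_mc = rng.normal(size=(20_000, 2))        # cheap samples on surrogate
      y_mc = surrogate.predict(X_mc)
      print(y_mc.mean(), np.mean(y_mc < 0.5))    # mean response, failure prob.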

  17. Standardizing clinical trials workflow representation in UML for international site comparison.

    PubMed

    de Carvalho, Elias Cesar Araujo; Jayanti, Madhav Kishore; Batilana, Adelia Portero; Kozan, Andreia M O; Rodrigues, Maria J; Shah, Jatin; Loures, Marco R; Patil, Sunita; Payne, Philip; Pietrobon, Ricardo

    2010-11-09

    With the globalization of clinical trials, a growing emphasis has been placed on the standardization of the workflow in order to ensure the reproducibility and reliability of the overall trial. Despite the importance of workflow evaluation, to our knowledge no previous studies have attempted to adapt existing modeling languages to standardize the representation of clinical trials. Unified Modeling Language (UML) is a computational language that can be used to model operational workflow, and a UML profile can be developed to standardize UML models within a given domain. This paper's objective is to develop a UML profile to extend the UML Activity Diagram schema into the clinical trials domain, defining a standard representation for clinical trial workflow diagrams in UML. Two Brazilian clinical trial sites in rheumatology and oncology were examined to model their workflow and collect time-motion data. UML modeling was conducted in Eclipse, and a UML profile was developed to incorporate information used in discrete event simulation software. Ethnographic observation revealed bottlenecks in workflow: these included tasks requiring full commitment of CRCs, transferring notes from paper to computers, deviations from standard operating procedures, and conflicts between different IT systems. Time-motion analysis revealed that nurses' activities took up the most time in the workflow and contained a high frequency of shorter duration activities. Administrative assistants performed more activities near the beginning and end of the workflow. Overall, clinical trial tasks had a greater frequency than clinic routines or other general activities. This paper describes a method for modeling clinical trial workflow in UML and standardizing these workflow diagrams through a UML profile. In the increasingly global environment of clinical trials, the standardization of workflow modeling is a necessary precursor to conducting a comparative analysis of international clinical trials workflows.

  18. Standardizing Clinical Trials Workflow Representation in UML for International Site Comparison

    PubMed Central

    de Carvalho, Elias Cesar Araujo; Jayanti, Madhav Kishore; Batilana, Adelia Portero; Kozan, Andreia M. O.; Rodrigues, Maria J.; Shah, Jatin; Loures, Marco R.; Patil, Sunita; Payne, Philip; Pietrobon, Ricardo

    2010-01-01

    Background With the globalization of clinical trials, a growing emphasis has been placed on the standardization of the workflow in order to ensure the reproducibility and reliability of the overall trial. Despite the importance of workflow evaluation, to our knowledge no previous studies have attempted to adapt existing modeling languages to standardize the representation of clinical trials. Unified Modeling Language (UML) is a computational language that can be used to model operational workflow, and a UML profile can be developed to standardize UML models within a given domain. This paper's objective is to develop a UML profile to extend the UML Activity Diagram schema into the clinical trials domain, defining a standard representation for clinical trial workflow diagrams in UML. Methods Two Brazilian clinical trial sites in rheumatology and oncology were examined to model their workflow and collect time-motion data. UML modeling was conducted in Eclipse, and a UML profile was developed to incorporate information used in discrete event simulation software. Results Ethnographic observation revealed bottlenecks in workflow: these included tasks requiring full commitment of CRCs, transferring notes from paper to computers, deviations from standard operating procedures, and conflicts between different IT systems. Time-motion analysis revealed that nurses' activities took up the most time in the workflow and contained a high frequency of shorter duration activities. Administrative assistants performed more activities near the beginning and end of the workflow. Overall, clinical trial tasks had a greater frequency than clinic routines or other general activities. Conclusions This paper describes a method for modeling clinical trial workflow in UML and standardizing these workflow diagrams through a UML profile. In the increasingly global environment of clinical trials, the standardization of workflow modeling is a necessary precursor to conducting a comparative analysis of international clinical trials workflows. PMID:21085484

  19. Confirmation of a realistic reactor model for BNCT dosimetry at the TRIGA Mainz

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ziegner, Markus, E-mail: Markus.Ziegner.fl@ait.ac.at; Schmitz, Tobias; Hampel, Gabriele

    2014-11-01

    Purpose: In order to build up a reliable dose monitoring system for boron neutron capture therapy (BNCT) applications at the TRIGA reactor in Mainz, a computer model for the entire reactor was established, simulating the radiation field by means of the Monte Carlo method. The impact of different source definition techniques was compared and the model was validated by experimental fluence and dose determinations. Methods: The depletion calculation code ORIGEN2 was used to compute the burn-up and relevant material composition of each burned fuel element from the day of first reactor operation to its current core. The material composition of the current core was used in a MCNP5 model of the initial core developed earlier. To perform calculations for the region outside the reactor core, the model was expanded to include the thermal column and compared with the previously established ATTILA model. Subsequently, the computational model is simplified in order to reduce the calculation time. Both simulation models are validated by experiments with different setups using alanine dosimetry and gold activation measurements with two different types of phantoms. Results: The MCNP5 simulated neutron spectrum and source strength are found to be in good agreement with the previous ATTILA model whereas the photon production is much lower. Both MCNP5 simulation models predict all experimental dose values with an accuracy of about 5%. The simulations reveal that a Teflon environment favorably reduces the gamma dose component as compared to a polymethyl methacrylate phantom. Conclusions: A computer model for BNCT dosimetry was established, allowing the prediction of dosimetric quantities without further calibration and within a reasonable computation time for clinical applications. The good agreement between the MCNP5 simulations and experiments demonstrates that the ATTILA model overestimates the gamma dose contribution. The detailed model can be used for the planning of structural modifications in the thermal column irradiation channel or the use of different irradiation sites than the thermal column, e.g., the beam tubes.

  20. Explicit validation of a surface shortwave radiation balance model over snow-covered complex terrain

    NASA Astrophysics Data System (ADS)

    Helbig, N.; Löwe, H.; Mayer, B.; Lehning, M.

    2010-09-01

    A model that computes the surface radiation balance for all sky conditions in complex terrain is presented. The spatial distribution of direct and diffuse sky radiation is determined from observations of incident global radiation, air temperature, and relative humidity at a single measurement location. Incident radiation under cloudless sky is spatially derived from a parameterization of the atmospheric transmittance. Direct and diffuse sky radiation for all sky conditions are obtained by decomposing the measured global radiation value. Spatial incident radiation values under all atmospheric conditions are computed by adjusting the spatial radiation values obtained from the parametric model with the radiation components obtained from the decomposition model at the measurement site. Topographic influences such as shading are accounted for. The radiosity approach is used to compute anisotropic terrain reflected radiation. Validations of the shortwave radiation balance model are presented in detail for a day with cloudless sky. For a day with overcast sky a first validation is presented. Validation of a section of the horizon line as well as of individual radiation components is performed with high-quality measurements. A new measurement setup was designed to determine terrain reflected radiation. There is good agreement between the measurements and the modeled terrain reflected radiation values as well as with incident radiation values. A comparison of the model with a fully three-dimensional radiative transfer Monte Carlo model is presented. That validation reveals a good agreement between modeled radiation values.
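
    The decomposition step, splitting measured global radiation into direct and diffuse parts, is commonly driven by the clearness index. As a hedged illustration, the sketch below uses the standard Erbs correlation; it is a stand-in, not necessarily the decomposition model of the paper.

      # Clearness-index decomposition of global horizontal irradiance (sketch).
      import numpy as np

      def diffuse_fraction(kt):
          """Erbs et al. correlation: diffuse fraction vs clearness index."""
          kt = np.asarray(kt, dtype=float)
          return np.where(kt <= 0.22, 1.0 - 0.09 * kt,
                 np.where(kt <= 0.80,
                          0.9511 - 0.1604 * kt + 4.388 * kt**2
                          - 16.638 * kt**3 + 12.336 * kt**4,
                          0.165))

      G, G_clear = 650.0, 1000.0                 # measured global, clear-sky ref
      kt = G / G_clear                           # clearness index
      G_dif = float(diffuse_fraction(kt)) * G
      print(G - G_dif, G_dif)                    # direct and diffuse parts, W/m2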

  1. Simulation of turbulent separated flows using a novel, evolution-based, eddy-viscosity formulation

    NASA Astrophysics Data System (ADS)

    Castellucci, Paul

    Currently, there exists a lack of confidence in the computational simulation of turbulent separated flows at large Reynolds numbers. The most accurate methods available are too computationally costly to use in engineering applications. Thus, inexpensive models, developed using the Reynolds-averaged Navier-Stokes (RANS) equations, are often extended beyond their applicability. Although these methods will often reproduce integrated quantities within engineering tolerances, such metrics are often insensitive to details within a separated wake, and therefore, poor indicators of simulation fidelity. Using concepts borrowed from large-eddy simulation (LES), a two-equation RANS model is modified to simulate the turbulent wake behind a circular cylinder. This modification involves the computation of one additional scalar field, adding very little to the overall computational cost. When properly inserted into the baseline RANS model, this modification mimics LES in the separated wake, yet reverts to the unmodified form at the cylinder surface. In this manner, superior predictive capability may be achieved without the additional cost of fine spatial resolution associated with LES near solid boundaries. Simulations using modified and baseline RANS models are benchmarked against both LES and experimental data for a circular cylinder wake at Reynolds number 3900. In addition, the computational tool used in this investigation is subject to verification via the Method of Manufactured Solutions. Post-processing of the resultant flow fields includes both mean value and triple-decomposition analysis. These results reveal substantial improvements using the modified system and appear to drive the baseline wake solution toward that of LES, as intended.

  2. Data Assimilation and Propagation of Uncertainty in Multiscale Cardiovascular Simulation

    NASA Astrophysics Data System (ADS)

    Schiavazzi, Daniele; Marsden, Alison

    2015-11-01

    Cardiovascular modeling is the application of computational tools to predict hemodynamics. State-of-the-art techniques couple a 3D incompressible Navier-Stokes solver with a boundary circulation model and can predict local and peripheral hemodynamics, analyze the post-operative performance of surgical designs and complement clinical data collection while minimizing invasive and risky measurement practices. The ability of these tools to make useful predictions is directly related to their accuracy in representing measured physiologies. Tuning of model parameters is therefore a topic of paramount importance and should include clinical data uncertainty, revealing how this uncertainty will affect the predictions. We propose a fully Bayesian, multi-level approach to data assimilation of uncertain clinical data in multiscale circulation models. To reduce the computational cost, we use a stable, condensed approximation of the 3D model built by linear sparse regression of the pressure/flow rate relationship at the outlets. Finally, we consider the problem of non-intrusively propagating the uncertainty in model parameters to the resulting hemodynamics and compare Monte Carlo simulation with Stochastic Collocation approaches based on Polynomial or Multi-resolution Chaos expansions.

  3. Computational model for chromosomal instability

    NASA Astrophysics Data System (ADS)

    Zapperi, Stefano; Bertalan, Zsolt; Budrikis, Zoe; La Porta, Caterina

    2015-03-01

    Faithful segregation of genetic material during cell division requires alignment of the chromosomes between the spindle poles and attachment of their kinetochores to each of the poles. Failure of these complex dynamical processes leads to chromosomal instability (CIN), a characteristic feature of several diseases including cancer. While a multitude of biological factors regulating chromosome congression and bi-orientation have been identified, it is still unclear how they are integrated into a coherent picture. Here we address this issue by a three dimensional computational model of motor-driven chromosome congression and bi-orientation. Our model reveals that successful cell division requires control of the total number of microtubules: if this number is too small bi-orientation fails, while if it is too large not all the chromosomes are able to congress. The optimal number of microtubules predicted by our model compares well with early observations in mammalian cell spindles. Our results shed new light on the origin of several pathological conditions related to chromosomal instability.

  4. Fibrillatory conduction in branching atrial tissue--Insight from volumetric and monolayer computer models.

    PubMed

    Wieser, L; Fischer, G; Nowak, C N; Tilg, B

    2007-05-01

    Increased local load in branching atrial tissue (muscle fibers and bundle insertions) influences wave propagation during atrial fibrillation (AF). This computer model study reveals two principal phenomena: if the branching is distant from the driving rotor (>19 mm), the load causes local slowing of conduction or wavebreaks. If the driving rotor is close to the branching, the increased load causes first a slow drift of the rotor towards the branching. Finally, the rotor anchors, and a stable, repeatable pattern of activation can be observed. Variation of the bundle geometry from a cylindrical, volumetric structure to a flat strip of a comparable load in a monolayer model changed the local activation sequence in the proximity of the bundle. However, the global behavior and the basic effects are similar in all models. Wavebreaks in branching tissue contribute to the chaotic nature of AF (fibrillatory conduction). The stabilization (anchoring) of driving rotors by branching tissue might contribute to maintain sustained AF.

  5. Effect of carbon black on temperature field and weld profile during laser transmission welding of polymers: A FEM study

    NASA Astrophysics Data System (ADS)

    Acherjee, Bappa; Kuar, Arunanshu S.; Mitra, Souren; Misra, Dipten

    2012-04-01

    The influence of carbon black on temperature distribution and weld profile during laser transmission welding of polymers is investigated in the present research work. A transient numerical model, based on conduction-mode heat transfer, is developed to analyze the process. The heat input to the model is considered to be a volumetric Gaussian heat source. The computation of the temperature field during welding is carried out for polycarbonates having different proportions of carbon black in the polymer matrix. The temperature-dependent material properties of polycarbonate are taken into account for modeling. The finite element code ANSYS® is employed to obtain the numerical results. The numerically computed weld pool dimensions are compared with the experimental results. The comparison shows a fair agreement between them, which gives confidence to use the developed model for the intended investigation with acceptable accuracy. The results obtained reveal that carbon black has a considerable influence on the temperature field distribution and the formation of the weld pool geometry.
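
    The volumetric Gaussian heat source mentioned above is typically Gaussian in the plane of the beam with an exponential (Beer-Lambert) decay into the absorbing part. A sketch of that source term follows; the normalization and the idea that the decay coefficient grows with carbon black content are stated assumptions, not the paper's exact formulation.

      # Volumetric Gaussian heat source q(x, y, z) in W/m^3 (sketch).
      import numpy as np

      def heat_source(x, y, z, power, r0, beta, absorptivity=1.0):
          """power in W, r0 = beam radius in m, beta = absorption coeff in 1/m;
          beta would increase with carbon black content (assumption)."""
          q0 = 2.0 * absorptivity * power * beta / (np.pi * r0**2)
          return q0 * np.exp(-2.0 * (x**2 + y**2) / r0**2) * np.exp(-beta * z)

      # integrates to absorptivity * power over the half-space z >= 0
      print(heat_source(0.0, 0.0, 0.0, power=30.0, r0=1e-3, beta=3e3))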

  6. Computed versus measured ion velocity distribution functions in a Hall effect thruster

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garrigues, L.; CNRS, LAPLACE, F-31062 Toulouse; Mazouffre, S.

    2012-06-01

    We compare time-averaged and time-varying measured and computed ion velocity distribution functions in a Hall effect thruster for typical operating conditions. The ion properties are measured by means of laser induced fluorescence spectroscopy. Simulations of the plasma properties are performed with a two-dimensional hybrid model. In the electron fluid description of the hybrid model, the anomalous transport responsible for the electron diffusion across the magnetic field barrier is deduced from the experimental profile of the time-averaged electric field. The use of a steady state anomalous mobility profile allows the hybrid model to capture some properties like the time-averaged ion mean velocity. Yet, the model fails to reproduce the time evolution of the ion velocity. This fact reveals a complex underlying physics that necessitates accounting for the electron dynamics over a short time-scale. This study also shows the necessity for electron temperature measurements. Moreover, the strength of the self-magnetic field due to the rotating Hall current is found to be negligible.

  7. Large-scale functional models of visual cortex for remote sensing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brumby, Steven P; Kenyon, Garrett; Rasmussen, Craig E

    Neuroscience has revealed many properties of neurons and of the functional organization of visual cortex that are believed to be essential to human vision, but are missing in standard artificial neural networks. Equally important may be the sheer scale of visual cortex, requiring ~1 petaflop of computation. In a year, the retina delivers ~1 petapixel to the brain, leading to massively large opportunities for learning at many levels of the cortical system. We describe work at Los Alamos National Laboratory (LANL) to develop large-scale functional models of visual cortex on LANL's Roadrunner petaflop supercomputer. An initial run of a simple region V1 code achieved 1.144 petaflops during trials at the IBM facility in Poughkeepsie, NY (June 2008). Here, we present criteria for assessing when a set of learned local representations is 'complete', along with general criteria for assessing computer vision models based on their projected scaling behavior. Finally, we extend one class of biologically-inspired learning models to problems of remote sensing imagery.

  8. Directional harmonic theory: a computational Gestalt model to account for illusory contour and vertex formation.

    PubMed

    Lehar, Steven

    2003-01-01

    Visual illusions and perceptual grouping phenomena offer an invaluable tool for probing the computational mechanism of low-level visual processing. Some illusions, like the Kanizsa figure, reveal illusory contours that form edges collinear with the inducing stimulus. This kind of illusory contour has been modeled by neural network models by way of cells equipped with elongated spatial receptive fields designed to detect and complete the collinear alignment. There are, however, other illusory groupings which are not so easy to account for in neural network terms. The Ehrenstein illusion exhibits an illusory contour that forms a contour orthogonal to the stimulus instead of collinear with it. Other perceptual grouping effects reveal illusory contours that exhibit a sharp corner or vertex, and still others take the form of vertices defined by the intersection of three, four, or more illusory contours that meet at a point. A direct extension of the collinear completion models to account for these phenomena tends towards a combinatorial explosion, because it would suggest cells with specialized receptive fields configured to perform each of those completion types, each of which would have to be replicated at every location and every orientation across the visual field. These phenomena therefore challenge the adequacy of the neural network approach to account for these diverse perceptual phenomena. I have proposed elsewhere an alternative paradigm of neurocomputation in the harmonic resonance theory (Lehar 1999, see website), whereby pattern recognition and completion are performed by spatial standing waves across the neural substrate. The standing waves perform a computational function analogous to that of the spatial receptive fields of the neural network approach, except that, unlike that paradigm, a single resonance mechanism performs a function equivalent to a whole array of spatial receptive fields of different spatial configurations and of different orientations, and thereby avoids the combinatorial explosion inherent in the older paradigm. The present paper presents the directional harmonic model, a more specific development of the harmonic resonance theory, designed to account for specific perceptual grouping phenomena. Computer simulations of the directional harmonic model show that it can account for collinear contours as observed in the Kanizsa figure, orthogonal contours as seen in the Ehrenstein illusion, and a number of illusory vertex percepts composed of two, three, or more illusory contours that meet in a variety of configurations.

  9. Final Report: PAGE: Policy Analytics Generation Engine

    DTIC Science & Technology

    2016-08-12

    develop a parallel framework for it. We also developed policies and methods by which a group of defensive resources (e.g., checkpoints) could be... Associated publications: Sarit Kraus, Learning to Reveal Information in Repeated Human-Computer Negotiation, Human-Agent Interaction Design and Models Workshop, 2012; Joseph Keshet, Sarit Kraus, Predicting Human Strategic Decisions Using Facial Expressions, International Joint Conference on Artificial Intelligence.

  10. A study of deoxyribonucleotide metabolism and its relation to DNA synthesis. Supercomputer simulation and model-system analysis.

    PubMed

    Heinmets, F; Leary, R H

    1991-06-01

    A model system (1) was established to analyze purine and pyrimidine metabolism. This system has been expanded to include macrosimulation of DNA synthesis and the study of its regulation by terminal deoxynucleoside triphosphates (dNTPs) via a complex set of interactions. Computer experiments reveal that our model exhibits adequate and reasonable sensitivity in terms of dNTP pool levels and rates of DNA synthesis when inputs to the system are varied. These simulation experiments reveal that in order to achieve maximum DNA synthesis (in terms of purine metabolism), a proper balance is required in guanine and adenine input into this metabolic system. Excessive inputs will become inhibitory to DNA synthesis. In addition, studies are carried out on rates of DNA synthesis when various parameters are changed quantitatively. The current system is formulated by 110 differential equations.
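
    Models of this kind are systems of coupled ODEs for metabolite pools. As a toy illustration of the structure only (a feedback-inhibited supply feeding a saturating consumer), vastly smaller than the 110-equation system above:

      # Two-pool caricature of dNTP supply and DNA synthesis (sketch).
      from scipy.integrate import solve_ivp

      def pools(t, y, k_in, k_use, k_fb):
          dntp, dna = y
          synth = k_use * dntp / (1.0 + dntp)          # saturating consumption
          d_dntp = k_in / (1.0 + k_fb * dntp) - synth  # feedback-inhibited supply
          return [d_dntp, synth]

      sol = solve_ivp(pools, (0, 50), [0.1, 0.0], args=(1.0, 0.8, 2.0))
      print(sol.y[:, -1])                        # final pool, cumulative DNA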

  11. Rapid prototyping to design a customized locking plate for pancarpal arthrodesis in a giant breed dog.

    PubMed

    Petazzoni, M; Nicetto, T

    2014-01-01

    This report describes the treatment of traumatic carpal hyperextension in a giant breed dog by pancarpal arthrodesis using a custom-made Fixin locking plate, created with the aid of a three-dimensional plastic model of the bones of the antebrachium produced by rapid prototyping technology. A three-year-old 104 kg male Mastiff dog was admitted for treatment of carpal hyperextension injury. After diagnosis of carpal instability, surgery was recommended. Computed tomography images were used to create a life-size three-dimensional plastic model of the forelimb. The model was used as the basis for constructing a customized 12-hole Fixin locking plate. The plate was used to attain successful pancarpal arthrodesis in the animal. Radiographic examination after 74 and 140 days revealed signs of osseous union of the arthrodesis. Further clinical and radiographic follow-up examination three years later did not reveal any changes in implant position or complications.

  12. Rapid prototyping in aortic surgery.

    PubMed

    Bangeas, Petros; Voulalas, Grigorios; Ktenidis, Kiriakos

    2016-04-01

    3D printing provides the sequential addition of material layers and, thus, the opportunity to print parts and components made of different materials with variable mechanical and physical properties. It helps us create 3D anatomical models for the better planning of surgical procedures when needed, since it can reveal any complex anatomical feature. Images of abdominal aortic aneurysms received by computed tomographic angiography were converted into 3D images using the Google SketchUp free software and saved in stereolithography format. Using a 3D printer (MakerBot), a model made of polylactic acid material (thermoplastic filament) was printed. A 3D model of an abdominal aortic aneurysm was created in 138 min, and the model was a precise copy of the aorta visualized in the computed tomographic images. The total cost (including the initial cost of the printer) reached 1303.00 euros. 3D imaging and modelling using different materials can be very useful in cases where anatomical difficulties are recognized in the computed tomographic images and a tactile approach is demanded preoperatively. In this way, major complications during abdominal aortic aneurysm management can be predicted and prevented. Furthermore, the model can be used as a mould; the development of new, more biocompatible, less antigenic and individualized grafts can become a challenge in the future. © The Author 2016. Published by Oxford University Press on behalf of the European Association for Cardio-Thoracic Surgery. All rights reserved.

  13. Motion Planning in a Society of Intelligent Mobile Agents

    NASA Technical Reports Server (NTRS)

    Esterline, Albert C.; Shafto, Michael (Technical Monitor)

    2002-01-01

    The majority of the work on this grant involved formal modeling of human-computer integration. We conceptualize computer resources as a multiagent system so that these resources and human collaborators may be modeled uniformly. In previous work we had used modal logic for this uniform modeling, and we had developed a process-algebraic agent abstraction. In this work, we applied this abstraction (using CSP) in uniformly modeling agents and users, which allowed us to use tools for investigating CSP models. This work revealed the power of process-algebraic handshakes in modeling face-to-face conversation. We also investigated specifications of human-computer systems in the style of algebraic specification. This involved specifying the common knowledge required for coordination and process-algebraic patterns of communication actions intended to establish the common knowledge. We investigated the conditions for agents endowed with perception to gain common knowledge and implemented a prototype neural-network system that allows agents to detect when such conditions hold. The literature on multiagent systems conceptualizes communication actions as speech acts. We implemented a prototype system that infers the deontic effects (obligations, permissions, prohibitions) of speech acts and detects violations of these effects. A prototype distributed system was developed that allows users to collaborate in moving proxy agents; it was designed to exploit handshakes and common knowledge. Finally, in work carried over from a previous NASA ARC grant, about fifteen undergraduates developed and presented projects on multiagent motion planning.

  14. Revealing the distribution of transmembrane currents along the dendritic tree of a neuron from extracellular recordings

    PubMed Central

    Cserpán, Dorottya; Meszéna, Domokos; Wittner, Lucia; Tóth, Kinga; Ulbert, István; Somogyvári, Zoltán

    2017-01-01

    Revealing the current source distribution along the neuronal membrane is a key step on the way to understanding neural computations; however, the experimental and theoretical tools to achieve sufficient spatiotemporal resolution for the estimation remain to be established. Here, we address this problem using extracellularly recorded potentials with arbitrarily distributed electrodes for a neuron of known morphology. We use simulations of models with varying complexity to validate the proposed method and to give recommendations for experimental applications. The method is applied to in vitro data from rat hippocampus. PMID:29148974

  15. Modeling Hubble Space Telescope flight data by Q-Markov cover identification

    NASA Technical Reports Server (NTRS)

    Liu, K.; Skelton, R. E.; Sharkey, J. P.

    1992-01-01

    A state space model for the Hubble Space Telescope under the influence of unknown disturbances in orbit is presented. This model was obtained from flight data by applying the Q-Markov covariance equivalent realization identification algorithm. This state space model guarantees the match of the first Q-Markov parameters and covariance parameters of the Hubble system. The flight data were partitioned into high- and low-frequency components for more efficient Q-Markov cover modeling, to reduce some computational difficulties of the Q-Markov cover algorithm. This identification revealed more than 20 lightly damped modes within the bandwidth of the attitude control system. Comparisons with the analytical (TREETOPS) model are also included.

  16. Linking big models to big data: efficient ecosystem model calibration through Bayesian model emulation

    NASA Astrophysics Data System (ADS)

    Fer, I.; Kelly, R.; Andrews, T.; Dietze, M.; Richardson, A. D.

    2016-12-01

    Our ability to forecast ecosystems is limited by how well we parameterize ecosystem models. Direct measurements for all model parameters are not always possible and inverse estimation of these parameters through Bayesian methods is computationally costly. A solution to the computational challenges of Bayesian calibration is to approximate the posterior probability surface using a Gaussian process that emulates the complex process-based model. Here we report the integration of this method within an ecoinformatics toolbox, the Predictive Ecosystem Analyzer (PEcAn), and its application with two ecosystem models: SIPNET and ED2.1. SIPNET is a simple model, allowing application of MCMC methods both to the model itself and to its emulator. We used both approaches to assimilate flux (CO2 and latent heat), soil respiration, and soil carbon data from Bartlett Experimental Forest. This comparison showed that the emulator is reliable in terms of convergence to the posterior distribution. A 10,000-iteration MCMC analysis with SIPNET itself required more than two orders of magnitude more computation time than an MCMC run of the same length with its emulator. This difference would be greater for a more computationally demanding model. Validation of the emulator-calibrated SIPNET against both the assimilated data and out-of-sample data showed improved fit and reduced uncertainty around model predictions. We next applied the validated emulator method to ED2, whose complexity precludes standard Bayesian data assimilation. We used the ED2 emulator to assimilate demographic data from a network of inventory plots. For validation of the calibrated ED2, we compared the model to results from Empirical Succession Mapping (ESM), a novel synthesis of successional patterns in Forest Inventory and Analysis data. Our results revealed that while the pre-assimilation ED2 formulation cannot capture the emergent demographic patterns from the ESM analysis, constraining the model parameters controlling demographic processes increased their agreement considerably.
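
    The emulator workflow can be condensed to a few lines: evaluate the expensive model at a small design of parameter points, fit a Gaussian process to the resulting log-likelihood surface, then run MCMC against the emulator instead of the model. The sketch below mimics this with scikit-learn and a Metropolis sampler; the "slow model", data, and prior are stand-ins for SIPNET/ED2 and the flux observations.

      # GP emulation of a log-likelihood surface + Metropolis MCMC (sketch).
      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor

      def slow_model(theta):                     # placeholder process model
          return np.sin(3 * theta) + theta

      rng = np.random.default_rng(2)
      obs, sigma = 0.9, 0.2                      # "data" and its uncertainty

      design = np.linspace(-1, 1, 15)[:, None]   # 15 expensive training runs
      loglik = -0.5 * ((slow_model(design[:, 0]) - obs) / sigma) ** 2
      emu = GaussianProcessRegressor().fit(design, loglik)

      theta, chain = 0.0, []
      for _ in range(5000):                      # Metropolis on the emulator
          prop = theta + 0.3 * rng.standard_normal()
          if abs(prop) <= 1.0:                   # flat prior on [-1, 1]
              ll_prop, ll_cur = emu.predict(np.array([[prop], [theta]]))
              if np.log(rng.uniform()) < ll_prop - ll_cur:
                  theta = prop
          chain.append(theta)
      print(np.mean(chain[1000:]))               # posterior mean estimate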

  17. Computing volume potentials for noninvasive imaging of cardiac excitation.

    PubMed

    van der Graaf, A W Maurits; Bhagirath, Pranav; van Driel, Vincent J H M; Ramanna, Hemanth; de Hooge, Jacques; de Groot, Natasja M S; Götte, Marco J W

    2015-03-01

    In noninvasive imaging of cardiac excitation, the use of body surface potentials (BSP) rather than body volume potentials (BVP) has been favored due to enhanced computational efficiency and reduced modeling effort. Nowadays, increased computational power and the availability of open source software enable the calculation of BVP for clinical purposes. In order to illustrate the possible advantages of this approach, the explanatory power of BVP is investigated using a rectangular tank filled with an electrolytic conductor and a patient-specific three-dimensional model. MRI images of the tank and of a patient were obtained in three orthogonal directions using a turbo spin echo MRI sequence. MRI images were segmented in three dimensions using custom-written software. Gmsh software was used for mesh generation. BVP were computed using a transfer matrix and FEniCS software. The solution for 240,000 nodes, corresponding to a resolution of 5 mm throughout the thorax volume, was computed in 3 minutes. The tank experiment revealed that an increased electrode surface renders the position of the 4 V equipotential plane insensitive to mesh cell size and reduces simulated deviations. In the patient-specific model, the impact of assigning a different conductivity to lung tissue on the distribution of volume potentials could be visualized. Generation of high-quality volume meshes and computation of BVP with a resolution of 5 mm is feasible using generally available software and hardware. Estimation of BVP may lead to an improved understanding of the genesis of BSP and sources of local inaccuracies. © 2014 Wiley Periodicals, Inc.

  18. Assessment of flat rolling theories for the use in a model-based controller for high-precision rolling applications

    NASA Astrophysics Data System (ADS)

    Stockert, Sven; Wehr, Matthias; Lohmar, Johannes; Abel, Dirk; Hirt, Gerhard

    2017-10-01

    In the electrical and medical industries, the trend towards further miniaturization of devices is accompanied by the demand for smaller manufacturing tolerances. Such industries use a multitude of small, narrow, cold-rolled metal strips with high thickness accuracy. Conventional rolling mills can hardly achieve further improvement of these tolerances. However, a model-based controller in combination with an additional piezoelectric actuator for highly dynamic roll adjustment is expected to enable the production of the required metal strips with a thickness tolerance of +/-1 µm. The model-based controller has to be based on a rolling theory which can describe the rolling process very accurately. Additionally, the required computing time has to be low in order to predict the rolling process in real time. In this work, four rolling theories from the literature with different levels of complexity are tested for their suitability for the predictive controller. The rolling theories of von Kármán, Siebel, Bland & Ford and Alexander are implemented in Matlab and afterwards transferred to the real-time computer used for the controller. The prediction accuracy of these theories is validated using rolling trials with different thickness reductions and a comparison to the calculated results. Furthermore, the required computing time on the real-time computer is measured. Adequate prediction accuracy can be achieved with the rolling theories developed by Bland & Ford and Alexander. A comparison of the computing times of these two theories reveals that Alexander's theory exceeds the computation budget set by the 1 kHz sample rate of the real-time computer.

  19. Man not a machine: Models, minds, and mental labor, c.1980.

    PubMed

    Stadler, Max

    2017-01-01

    This essay is concerned with the fate of the so-called "computer metaphor" of the mind in the age of mass computing. As such, it is concerned with the ways the mighty metaphor of the rational, rule-based, and serial "information processor," which dominated neurological and psychological theorizing in the early post-WW2 era, came apart during the 1970s and 1980s; and how it was, step by step, replaced by a set of model entities more closely in tune with the significance that was now discerned in certain kinds of "everyday practical action" as the ultimate manifestation of the human mind. By taking a closer look at the ailments and promises of the so-called postindustrial age and more specifically, at the "hazards" associated with the introduction of computers into the workplace, it is shown how models and visions of the mind responded to this new state of affairs. It was in this context-the transformations of mental labor, c.1980-my argument goes, that the minds of men and women revealed themselves to be not so much like computing machines, as the "classic" computer metaphor of the mind, which had birthed the "cognitive revolution" of the 1950s and 1960s, once had it; they were positively unlike them. Instead of "rules" or "symbol manipulation," the minds of computer-equipped brainworkers thus evoked a different set of metaphors: at stake in postindustrial cognition, as this essay argues, was something "parallel," "tacit," and "embodied and embedded." © 2017 Elsevier B.V. All rights reserved.

  20. Integrating Laptop Computers into Classroom: Attitudes, Needs, and Professional Development of Science Teachers—A Case Study

    NASA Astrophysics Data System (ADS)

    Klieger, Aviva; Ben-Hur, Yehuda; Bar-Yossef, Nurit

    2010-04-01

    The study examines the professional development of junior-high-school teachers participating in the Israeli "Katom" (Computer for Every Class, Student and Teacher) Program, begun in 2004. A three-circle support and training model was developed for teachers' professional development. The first circle applies to all teachers in the program; the second, to all teachers at individual schools; the third to teachers of specific disciplines. The study reveals and describes the attitudes of science teachers to the integration of laptop computers and to the accompanying professional development model. Semi-structured interviews were conducted with eight science teachers from the four schools participating in the program. The interviews were analyzed according to the internal relational framework taken from the information that arose from the interviews. Two factors influenced science teachers' professional development: (1) Introduction of laptops to the teachers and students. (2) The support and training system. Interview analysis shows that the disciplinary training is most relevant to teachers and they are very interested in belonging to the professional science teachers' community. They also prefer face-to-face meetings in their school. Among the difficulties they noted were the new learning environment, including control of student computers, computer integration in laboratory work and technical problems. Laptop computers contributed significantly to teachers' professional and personal development and to a shift from teacher-centered to student-centered teaching. One-to-One laptops also changed the schools' digital culture. The findings are important for designing concepts and models for professional development when introducing technological innovation into the educational system.

  1. Computational Aerodynamic Simulations of a 1215 ft/sec Tip Speed Transonic Fan System Model for Acoustic Methods Assessment and Development

    NASA Technical Reports Server (NTRS)

    Tweedt, Daniel L.

    2014-01-01

    Computational Aerodynamic simulations of a 1215 ft/sec tip speed transonic fan system were performed at five different operating points on the fan operating line, in order to provide detailed internal flow field information for use with fan acoustic prediction methods presently being developed, assessed and validated. The fan system is a sub-scale, low-noise research fan/nacelle model that has undergone extensive experimental testing in the 9- by 15-foot Low Speed Wind Tunnel at the NASA Glenn Research Center. Details of the fan geometry, the computational fluid dynamics methods, the computational grids, and various computational parameters relevant to the numerical simulations are discussed. Flow field results for three of the five operating points simulated are presented in order to provide a representative look at the computed solutions. Each of the five fan aerodynamic simulations involved the entire fan system, which for this model did not include a split flow path with core and bypass ducts. As a result, it was only necessary to adjust fan rotational speed in order to set the fan operating point, leading to operating points that lie on a fan operating line and making mass flow rate a fully dependent parameter. The resulting mass flow rates are in good agreement with measurement values. Computed blade row flow fields at all fan operating points are, in general, aerodynamically healthy. Rotor blade and fan exit guide vane flow characteristics are good, including incidence and deviation angles, chordwise static pressure distributions, blade surface boundary layers, secondary flow structures, and blade wakes. Examination of the flow fields at all operating conditions reveals no excessive boundary layer separations or related secondary-flow problems.

  2. Ribbon synapses compute temporal contrast and encode luminance in retinal rod bipolar cells

    PubMed Central

    Oesch, Nicholas W.; Diamond, Jeffrey S.

    2011-01-01

    Contrast is computed throughout the nervous system to encode changing inputs efficiently. The retina encodes luminance and contrast over a wide range of visual conditions and so must adapt its responses to maintain sensitivity and avoid saturation. Here we show how one type of adaptation allows individual synapses to compute contrast and encode luminance in biphasic responses to step changes in light levels. Light-evoked depletion of the readily releasable vesicle pool (RRP) at rod bipolar cell (RBC) ribbon synapses in rat retina limits the dynamic range available to encode transient but not sustained responses, thereby allowing the transient and sustained components of release to compute temporal contrast and encode mean light levels, respectively. A release/replenishment model shows that a single, homogeneous pool of synaptic vesicles is sufficient to generate this behavior and reveals that the dominant mechanism shaping the biphasic contrast/luminance response is the partial depletion of the RRP. PMID:22019730
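
    The single-pool behavior described, a transient that tracks contrast riding on a sustained level that tracks luminance, falls out of a two-parameter release/replenishment model. A minimal forward-Euler sketch with illustrative rate constants, not values fitted to RBC ribbon data:

      # Release/replenishment pool model with a light step (sketch).
      import numpy as np

      dt = 1e-3                                  # s
      t = np.arange(0.0, 2.0, dt)
      stim = (t > 0.5).astype(float)             # light step at 0.5 s
      k_rel, k_fill, pool = 40.0, 5.0, 1.0       # /s, /s, normalized RRP
      release = np.zeros_like(t)
      for i in range(len(t)):
          r = k_rel * stim[i] * pool             # release scales with occupancy
          pool += dt * (k_fill * (1.0 - pool) - r)
          release[i] = r
      # peak at the step (transient ~ contrast), lower plateau after
      # depletion (sustained ~ luminance):
      print(release.max(), release[-1])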

  3. The Implementation of Blended Learning Using Android-Based Tutorial Video in Computer Programming Course II

    NASA Astrophysics Data System (ADS)

    Huda, C.; Hudha, M. N.; Ain, N.; Nandiyanto, A. B. D.; Abdullah, A. G.; Widiaty, I.

    2018-01-01

    The computer programming course is theoretical. Sufficient practice is necessary to facilitate conceptual understanding and encourage creativity in designing computer programs/animations. The development of a tutorial video for Android-based blended learning is needed to guide students. Using Android-based instructional material, students can learn independently anywhere and anytime. The tutorial video can facilitate students' understanding of the concepts, materials, and procedures of programming/animation making in detail. This study employed a Research and Development method adapting Thiagarajan's 4D model. The developed Android-based instructional material and tutorial video were validated by experts in instructional media and experts in physics education. The expert validation results showed that the Android-based material was comprehensive and very feasible. The tutorial video was deemed feasible as it received an average score of 92.9%. It was also revealed that students' conceptual understanding, skills, and creativity in designing computer programs/animations improved significantly.

  4. An esthetics rehabilitation with computer-aided design/ computer-aided manufacturing technology.

    PubMed

    Mazaro, Josá Vitor Quinelli; de Mello, Caroline Cantieri; Zavanelli, Adriana Cristina; Santiago, Joel Ferreira; Amoroso, Andressa Paschoal; Pellizzer, Eduardo Piza

    2014-07-01

    This paper describes a case of rehabilitation involving a Computer-Aided Design/Computer-Aided Manufacturing (CAD/CAM) system in implant-supported and tooth-supported prostheses using zirconia as the framework. The CAD/CAM technology has developed considerably over the last few years, becoming a reality in dental practice. Among the widely used systems are those based on zirconia, which demonstrate important physical and mechanical properties, including high strength, adequate fracture toughness, biocompatibility and esthetics, and are indicated for unitary prosthetic restorations and posterior and anterior frameworks. All the modeling was performed using the CAD/CAM system, and the prostheses were cemented using the resin cement best suited for each situation. The rehabilitation of the maxillary arch using zirconia frameworks demonstrated satisfactory esthetic and functional results after a 12-month follow-up and revealed no biological or technical complications. This article shows the importance of using CAD/CAM technology in the manufacture of dental and implant-supported prostheses.

  5. A novel quantum scheme for secure two-party distance computation

    NASA Astrophysics Data System (ADS)

    Peng, Zhen-wan; Shi, Run-hua; Zhong, Hong; Cui, Jie; Zhang, Shun

    2017-12-01

    Secure multiparty computational geometry is an essential field of secure multiparty computation, which solves computational geometry problems without revealing any private information of each party. Secure two-party distance computation is a primitive of secure multiparty computational geometry, which computes the distance between two points without revealing either point's location information (i.e., coordinates). Secure two-party distance computation has potential applications with high security requirements in military, business, engineering and so on. In this paper, we present a quantum solution to secure two-party distance computation by subtly using quantum private query. Compared to related classical protocols, our quantum protocol can ensure higher security and better privacy protection because of the physical principles of quantum mechanics.
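
    As classical context for the problem statement, the squared distance factors as |a|² + |b|² − 2a·b, so a secure distance protocol reduces to locally computed norms plus a secure dot product. In the toy sketch below the dot product is computed in the clear as a placeholder; a real protocol, such as the quantum-private-query construction above, replaces that step with a primitive that hides the coordinates.

      # Distance from norms + a (placeholder) secure dot product (sketch).
      import math

      def secure_dot(a, b):
          # stands in for an oblivious/quantum dot-product subprotocol;
          # computed in the clear here purely to show the algebra
          return sum(x * y for x, y in zip(a, b))

      def two_party_distance(a, b):
          na = sum(x * x for x in a)             # Alice computes locally
          nb = sum(y * y for y in b)             # Bob computes locally
          return math.sqrt(na + nb - 2.0 * secure_dot(a, b))

      print(two_party_distance((1.0, 2.0), (4.0, 6.0)))   # 5.0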

  6. Untangling the complexity of blood coagulation network: use of computational modelling in pharmacology and diagnostics.

    PubMed

    Shibeko, Alexey M; Panteleev, Mikhail A

    2016-05-01

    Blood coagulation is a complex biochemical network that plays critical roles in haemostasis (a physiological process that stops bleeding on injury) and thrombosis (pathological vessel occlusion). Both up- and down-regulation of coagulation remain a major challenge for modern medicine, with the ultimate goal of correcting haemostasis without causing thrombosis and vice versa. Mathematical/computational modelling is potentially an important tool for understanding blood coagulation disorders and their treatment. It can save a huge amount of time and resources, and it provides a valuable alternative or supplement when clinical studies are limited, unethical or technically impossible. This article reviews the contemporary state of the art in the modelling of blood coagulation for practical purposes: to reveal the molecular basis of a disease, to understand mechanisms of drug action, to predict pharmacodynamics and drug-drug interactions, to suggest potential drug targets, or to improve the quality of diagnostics. The different model types and designs used for this are discussed. Functional mechanisms of procoagulant bypassing agents and investigations of coagulation inhibitors were two particularly popular applications of computational modelling that gave non-trivial results. Yet, like any other tool, modelling has its limitations, mainly determined by insufficient knowledge of the system and by the uncertainty and unreliability of complex models. We show how this can be overcome to some extent and discuss what can be expected from the mathematical modelling of coagulation in the not-so-distant future. © The Author 2015. Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  7. A Model-Based Approach to Trial-By-Trial P300 Amplitude Fluctuations

    PubMed Central

    Kolossa, Antonio; Fingscheidt, Tim; Wessel, Karl; Kopp, Bruno

    2013-01-01

    It has long been recognized that the amplitude of the P300 component of event-related brain potentials is sensitive to the degree to which eliciting stimuli are surprising to the observers (Donchin, 1981). While Squires et al. (1976) showed and modeled dependencies of P300 amplitudes on observed stimuli over various time scales, Mars et al. (2008) proposed a computational model keeping track of stimulus probabilities on a long-term time scale. We suggest here a computational model which integrates prior information with short-term, long-term, and alternation-based experiential influences on P300 amplitude fluctuations. To evaluate the new model, we measured trial-by-trial P300 amplitude fluctuations in a simple two-choice response time task, and tested the computational models of trial-by-trial P300 amplitudes using Bayesian model evaluation. The results reveal that the new digital filtering (DIF) model provides a superior account of the trial-by-trial P300 amplitudes when compared to both Squires et al.'s (1976) model and Mars et al.'s (2008) model. We show that the P300-generating system can be described as two parallel first-order infinite impulse response (IIR) low-pass filters and an additional fourth-order finite impulse response (FIR) high-pass filter. Implications of the acquired data are discussed with regard to the neurobiological distinction between short-term, long-term, and working memory, as well as from the point of view of predictive coding models and Bayesian learning theories of cortical function. PMID:23404628
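
    The filter structure reported above lends itself to a compact sketch: a first-order IIR low-pass filter applied to the binary stimulus sequence yields a running expectancy, and the surprise of each event can then modulate a predicted amplitude. The filter coefficients below are hypothetical, not the fitted DIF parameters:

    ```python
    # Minimal sketch of tracking stimulus expectancy with first-order IIR
    # low-pass filters, in the spirit of the DIF model described above.
    # The two time constants (alpha values) are hypothetical, not fitted.
    import numpy as np
    from scipy.signal import lfilter

    rng = np.random.default_rng(0)
    x = (rng.random(500) < 0.3).astype(float)   # binary stimulus sequence, p = 0.3

    def iir_lowpass(x, alpha):
        # y[n] = (1 - alpha) * x[n] + alpha * y[n-1]; larger alpha = longer memory
        return lfilter([1.0 - alpha], [1.0, -alpha], x)

    short_term = iir_lowpass(x, alpha=0.6)    # fast-decaying expectancy
    long_term = iir_lowpass(x, alpha=0.98)    # slowly adapting probability estimate

    # Surprise of each event under the combined expectancy: larger when a rare
    # stimulus arrives; a weighted sum of such terms would model P300 amplitude.
    expectancy = 0.5 * short_term + 0.5 * long_term
    surprise = -np.log(np.clip(np.where(x == 1, expectancy, 1 - expectancy),
                               1e-6, None))
    ```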

  8. Model of Cortical Organization Embodying a Basis for a Theory of Information Processing and Memory Recall

    NASA Astrophysics Data System (ADS)

    Shaw, Gordon L.; Silverman, Dennis J.; Pearson, John C.

    1985-04-01

    Motivated by V. B. Mountcastle's organizational principle for neocortical function, and by M. E. Fisher's model of physical spin systems, we introduce a cooperative model of the cortical column incorporating an idealized substructure, the trion, which represents a localized group of neurons. Computer studies reveal that typical networks composed of a small number of trions (with symmetric interactions) exhibit striking behavior--e.g., hundreds to thousands of quasi-stable, periodic firing patterns, any of which can be selected out and enhanced with only small changes in interaction strengths by using a Hebb-type algorithm.

  9. Preverbal and verbal counting and computation.

    PubMed

    Gallistel, C R; Gelman, R

    1992-08-01

    We describe the preverbal system of counting and arithmetic reasoning revealed by experiments on numerical representations in animals. In this system, numerosities are represented by magnitudes, which are rapidly but inaccurately generated by the Meck and Church (1983) preverbal counting mechanism. We suggest the following. (1) The preverbal counting mechanism is the source of the implicit principles that guide the acquisition of verbal counting. (2) The preverbal system of arithmetic computation provides the framework for the assimilation of the verbal system. (3) Learning to count involves, in part, learning a mapping from the preverbal numerical magnitudes to the verbal and written number symbols and the inverse mappings from these symbols to the preverbal magnitudes. (4) Subitizing is the use of the preverbal counting process and the mapping from the resulting magnitudes to number words in order to generate rapidly the number words for small numerosities. (5) The retrieval of the number facts, which plays a central role in verbal computation, is mediated via the inverse mappings from verbal and written numbers to the preverbal magnitudes and the use of these magnitudes to find the appropriate cells in tabular arrangements of the answers. (6) This model of the fact retrieval process accounts for the salient features of the reaction time differences and error patterns revealed by experiments on mental arithmetic. (7) The application of verbal and written computational algorithms goes on in parallel with, and is to some extent guided by, preverbal computations, both in the child and in the adult.
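
    The magnitude representation assumed above can be illustrated with a toy read-out exhibiting scalar variability; the Weber fraction used is illustrative, not an estimate from the animal literature:

    ```python
    # Hedged sketch of a Meck-and-Church-style magnitude representation with
    # scalar variability: the read-out magnitude for numerosity n carries noise
    # proportional to n itself, so the coefficient of variation stays roughly
    # constant across numerosities.
    import numpy as np

    rng = np.random.default_rng(1)

    def magnitude(n, weber=0.15, trials=10000):
        # Accumulator read-out: mean n, standard deviation weber * n.
        return n * (1.0 + weber * rng.standard_normal(trials))

    for n in (2, 4, 8, 16):
        m = magnitude(n)
        print(n, round(m.mean(), 2), round(m.std() / m.mean(), 3))
    # Discriminating 8 from 16 is about as easy as 2 from 4 (same ratio), which
    # is the behavioural signature this preverbal magnitude code predicts.
    ```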

  10. Anti-diarrheal activity of (-)-epicatechin from Chiranthodendron pentadactylon Larreat: experimental and computational studies.

    PubMed

    Velázquez, Claudia; Correa-Basurto, José; Garcia-Hernandez, Normand; Barbosa, Elizabeth; Tesoro-Cruz, Emiliano; Calzada, Samuel; Calzada, Fernando

    2012-09-28

    Chiranthodendron pentadactylon Larreat is frequently used in Mexican as well as Guatemalan traditional medicine for several purposes, including the control of diarrhea. This work was undertaken to obtain additional information that supports this traditional use on a pharmacological basis, using the major antisecretory compound isolated from the plant in computational, in vitro and in vivo experiments. (-)-Epicatechin was isolated from the ethyl acetate fraction of the plant crude extract. In vivo toxin (Vibrio cholerae or Escherichia coli)-induced intestinal secretion in rat jejunal loop models and sodium dodecyl sulphate-polyacrylamide gel electrophoresis (SDS-PAGE) analysis of Vibrio cholerae toxin were used in the experimental studies, while molecular docking was used for the computational study. The antisecretory activity of epicatechin was tested against Vibrio cholerae and Escherichia coli toxins at an oral dose of 10 mg/kg in the rat model. It exhibited the most potent activity against Vibrio cholerae toxin (56.9% inhibition); its effect on Escherichia coli toxin was moderate (24.1% inhibition). SDS-PAGE analysis revealed that both (-)-epicatechin and the Chiranthodendron pentadactylon extract interacted with Vibrio cholerae toxin at concentrations from 80 μg/mL and 300 μg/mL, respectively. Computational molecular docking showed that epicatechin interacted with four amino acid residues (Asn 103, Phe 31, Phe 223 and Thr 78) in the catalytic site of Vibrio cholerae toxin, revealing its potential binding mode at the molecular level. The results derived from the computational, in vitro and in vivo experiments on Vibrio cholerae and Escherichia coli toxins confirm the potential of epicatechin as a new antisecretory compound and give additional scientific support to the traditional use of Chiranthodendron pentadactylon Larreat in Mexican medicine to treat gastrointestinal disorders such as diarrhea. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  11. Exploratory graphical models of functional and structural connectivity patterns for Alzheimer's Disease diagnosis.

    PubMed

    Ortiz, Andrés; Munilla, Jorge; Álvarez-Illán, Ignacio; Górriz, Juan M; Ramírez, Javier

    2015-01-01

    Alzheimer's Disease (AD) is the most common neurodegenerative disease in elderly people. Its development has been shown to be closely related to changes in the brain connectivity network and in brain activation patterns, along with structural changes caused by the neurodegenerative process. Methods to infer dependence between brain regions are usually derived from the analysis of covariance between activation levels in the different areas. However, these covariance-based methods are not able to estimate conditional independence between variables so as to factor out the influence of other regions. Conversely, models based on the inverse covariance, or precision matrix, such as Sparse Gaussian Graphical Models, reveal conditional independence between regions by estimating the covariance between two variables with the rest held constant. This paper uses Sparse Inverse Covariance Estimation (SICE) methods to learn undirected graphs in order to derive functional and structural connectivity patterns from Fludeoxyglucose (18F-FDG) Positron Emission Tomography (PET) data and segmented Magnetic Resonance images (MRI), drawn from the ADNI database, for Control, MCI (Mild Cognitive Impairment), and AD subjects. Sparse computation fits well here, as brain regions usually interact with only a few other areas. The models clearly show different metabolic covariation patterns between subject groups, revealing the loss of strong connections in AD and MCI subjects when compared to Controls. Similarly, the variance between GM (Gray Matter) densities of different regions reveals different structural covariation patterns between the groups. Thus, the different connectivity patterns for Controls and AD are used in this paper to select regions of interest in PET and GM images with discriminative power for early AD diagnosis. Finally, functional and structural models are combined to leverage classification accuracy. The results obtained in this work show the usefulness of Sparse Gaussian Graphical Models in revealing functional and structural connectivity patterns. The information provided by the sparse inverse covariance matrices is not only used in an exploratory way; we also propose a method to use it in a discriminative way: regression coefficients are used to compute reconstruction errors for the different classes, which are then fed to an SVM for classification. Classification experiments performed using 68 Control, 70 AD, and 111 MCI images and assessed by cross-validation show the effectiveness of the proposed method.
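
    The SICE step has a standard off-the-shelf counterpart, the graphical lasso; a minimal sketch on synthetic region-level data (stand-ins for the FDG-PET or GM features used in the study) might look like this:

    ```python
    # Minimal sketch of sparse inverse covariance estimation (SICE) on
    # region-level signals, as used above to expose conditional (in)dependence
    # between brain regions. Data here are synthetic; in the study the columns
    # would be regional FDG-PET uptake or GM density values per subject.
    import numpy as np
    from sklearn.covariance import GraphicalLassoCV

    rng = np.random.default_rng(0)
    n_subjects, n_regions = 120, 20
    X = rng.standard_normal((n_subjects, n_regions))
    X[:, 1] += 0.8 * X[:, 0]            # induce one direct dependency

    model = GraphicalLassoCV().fit(X)
    precision = model.precision_        # sparse inverse covariance matrix

    # A non-zero off-diagonal entry means the two regions are conditionally
    # dependent given all the others; zeros encode conditional independence.
    edges = np.abs(precision) > 1e-6
    np.fill_diagonal(edges, False)
    print("recovered edges:", np.argwhere(edges))
    ```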

  12. Investigating the impact of spatial priors on the performance of model-based IVUS elastography

    PubMed Central

    Richards, M S; Doyley, M M

    2012-01-01

    This paper describes methods that provide pre-requisite information for computing circumferential stress in modulus elastograms recovered from vascular tissue—information that could help cardiologists detect life-threatening plaques and predict their propensity to rupture. The modulus recovery process is an ill-posed problem; therefore additional information is needed to provide useful elastograms. In this work, prior geometrical information was used to impose hard or soft constraints on the reconstruction process. We conducted simulation and phantom studies to evaluate and compare modulus elastograms computed with soft and hard constraints versus those computed without any prior information. The results revealed that (1) the contrast-to-noise ratio of modulus elastograms achieved using the soft prior and hard prior reconstruction methods exceeded those computed without any prior information; (2) the soft prior and hard prior reconstruction methods could tolerate up to 8% measurement noise; and (3) the performance of soft and hard prior modulus elastograms degraded when incomplete spatial priors were employed. This work demonstrates that including spatial priors in the reconstruction process should improve the performance of model-based elastography, and the soft prior approach should enhance the robustness of the reconstruction process to errors in the geometrical information. PMID:22037648
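
    As a rough illustration of the soft-prior idea, a linearized toy reconstruction can penalize modulus differences only between unknowns that the segmentation assigns to the same region; the forward matrix, noise level and two-region phantom below are all made up, and the authors' full nonlinear elastography solver is not reproduced:

    ```python
    # Generic sketch of "soft prior" regularization: the segmented geometry
    # guides, but does not dictate, the reconstruction.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 12                                   # number of modulus unknowns
    A = rng.standard_normal((40, n))         # toy forward/sensitivity matrix
    mu_true = np.where(np.arange(n) < 6, 1.0, 5.0)   # two-region phantom
    b = A @ mu_true + 0.05 * rng.standard_normal(40)

    # Soft prior: L takes differences only between unknowns belonging to the
    # same segmented region (here, region = first/second half of the vector).
    rows = []
    for i in range(n - 1):
        if (i < 6) == (i + 1 < 6):           # same region: encourage smoothness
            r = np.zeros(n)
            r[i], r[i + 1] = 1.0, -1.0
            rows.append(r)
    L = np.array(rows)

    lam = 1.0                                # regularization weight (tunable)
    mu_hat = np.linalg.solve(A.T @ A + lam * L.T @ L, A.T @ b)
    print(np.round(mu_hat, 2))               # close to the two-region phantom
    ```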

  13. A Combination of Hand-held Models and Computer Imaging Programs Helps Students Answer Oral Questions about Molecular Structure and Function: A Controlled Investigation of Student Learning

    PubMed Central

    Peck, Ronald F.; Colton, Shannon; Morris, Jennifer; Chaibub Neto, Elias; Kallio, Julie

    2009-01-01

    We conducted a controlled investigation to examine whether a combination of computer imagery and tactile tools helps introductory cell biology laboratory undergraduate students better learn about protein structure/function relationships as compared with computer imagery alone. In all five laboratory sections, students used the molecular imaging program, Protein Explorer (PE). In the three experimental sections, three-dimensional physical models were made available to the students, in addition to PE. Student learning was assessed via oral and written research summaries and videotaped interviews. Differences between the experimental and control group students were not found in our typical course assessments such as research papers, but rather were revealed during one-on-one interviews with students at the end of the semester. A subset of students in the experimental group produced superior answers to some higher-order interview questions as compared with students in the control group. During the interview, students in both groups preferred to use either the hand-held models alone or in combination with the PE imaging program. Students typically did not use any tools when answering knowledge (lower-level thinking) questions, but when challenged with higher-level thinking questions, students in both the control and experimental groups elected to use the models. PMID:19255134

  14. Time dependent neural network models for detecting changes of state in complex processes: applications in earth sciences and astronomy.

    PubMed

    Valdés, Julio J; Bonham-Carter, Graeme

    2006-03-01

    A computational intelligence approach is used to explore the problem of detecting internal state changes in time-dependent processes described by heterogeneous, multivariate time series with imprecise data and missing values. Such processes are approximated by collections of time-dependent non-linear autoregressive models represented by a special kind of neuro-fuzzy neural network. Grid and high-throughput computing model-mining procedures based on neuro-fuzzy networks and genetic algorithms generate (i) collections of models composed of sets of time-lag terms from the time series, and (ii) prediction functions represented by neuro-fuzzy networks. The composition of the models and their prediction capabilities allow the identification of changes in the internal structure of the process. These changes are associated with the alternation of steady and transient states, zones with abnormal behavior, instability, and other situations. The approach is general, and its sensitivity for detecting subtle changes of state is revealed by simulation experiments. Its potential in the study of complex processes in earth sciences and astrophysics is illustrated with applications using paleoclimate and solar data.
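
    The core idea, stripped of the neuro-fuzzy machinery and genetic lag selection, can be sketched with plain least-squares autoregressive fits over sliding windows: a jump in residual variance flags a change of internal state. The signal and lag choices below are illustrative only:

    ```python
    # Toy state-change detection: fit an AR model with fixed time-lag terms in
    # each window and monitor the prediction residual. A regime change in the
    # underlying process shows up as a jump in residual spread.
    import numpy as np

    rng = np.random.default_rng(0)
    # Synthetic process that changes regime halfway through.
    x = np.concatenate([np.sin(0.2 * np.arange(500)),
                        np.sin(0.7 * np.arange(500))])
    x += 0.05 * rng.standard_normal(1000)

    lags = (1, 2, 3)

    def ar_residual_std(window):
        # Build the lagged design matrix and fit AR coefficients by least squares.
        T, m = len(window), max(lags)
        X = np.column_stack([window[m - l:T - l] for l in lags])
        y = window[m:]
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        return np.std(y - X @ coef)

    for start in range(0, 900, 100):
        print(start, round(ar_residual_std(x[start:start + 100]), 4))
    ```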

  15. Computational Models Reveal a Passive Mechanism for Cell Migration in the Crypt

    PubMed Central

    Dunn, Sara-Jane; Näthke, Inke S.; Osborne, James M.

    2013-01-01

    Cell migration in the intestinal crypt is essential for the regular renewal of the epithelium, and the continued upward movement of cells is a key characteristic of healthy crypt dynamics. However, the driving force behind this migration is unknown. Possibilities include mitotic pressure, active movement driven by motility cues, or negative pressure arising from cell loss at the crypt collar. It is possible that a combination of factors together coordinate migration. Here, three different computational models are used to provide insight into the mechanisms that underpin cell movement in the crypt, by examining the consequence of eliminating cell division on cell movement. Computational simulations agree with existing experimental results, confirming that migration can continue in the absence of mitosis. Importantly, however, simulations allow us to infer mechanisms that are sufficient to generate cell movement, which is not possible through experimental observation alone. The results produced by the three models agree and suggest that cell loss due to apoptosis and extrusion at the crypt collar relieves cell compression below, allowing cells to expand and move upwards. This finding suggests that future experiments should focus on the role of apoptosis and cell extrusion in controlling cell migration in the crypt. PMID:24260407

  16. Deformation of Soft Tissue and Force Feedback Using the Smoothed Particle Hydrodynamics

    PubMed Central

    Liu, Xuemei; Wang, Ruiyi; Li, Yunhua; Song, Dongdong

    2015-01-01

    We study the deformation and haptic feedback of soft tissue in virtual surgery based on a liver model, using the PHANTOM OMNI force-feedback device developed by SensAble (USA). Although significant research effort has been dedicated to simulating the behavior of soft tissue and implementing force feedback, it remains a challenging problem. This paper introduces a meshfree method for the deformation simulation of soft tissue and for force computation, based on a viscoelastic mechanical model and smoothed particle hydrodynamics (SPH). First, the viscoelastic model captures the mechanical characteristics of soft tissue, which greatly promotes realism. Second, SPH is meshless and self-adapting, providing higher precision than mesh-based methods for force-feedback computation. Finally, an SPH method based on a dynamic interaction area is proposed to improve the real-time performance of the simulation. The results reveal that the SPH methodology is suitable for simulating soft tissue deformation and calculating force feedback, and that SPH with a dynamic local interaction area has significantly higher computational efficiency than standard SPH. Our algorithm has a bright prospect in the area of virtual surgery. PMID:26417380
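
    The SPH density summation at the heart of such a simulator is compact; a minimal sketch with the standard 3D cubic-spline kernel (particle layout, mass and smoothing length are placeholders) follows:

    ```python
    # Minimal SPH building block: density at particle i is the kernel-weighted
    # sum of neighbour masses, rho_i = sum_j m_j W(|x_i - x_j|, h).
    import numpy as np

    def cubic_spline_kernel(r, h):
        # Standard 3D cubic spline with compact support of radius 2h.
        sigma = 1.0 / (np.pi * h**3)
        q = r / h
        w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
             np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
        return sigma * w

    def sph_density(positions, mass, h):
        # O(N^2) all-pairs sum for clarity; real codes restrict the sum to a
        # (dynamic) local interaction area, which is the speed-up used above.
        diff = positions[:, None, :] - positions[None, :, :]
        r = np.linalg.norm(diff, axis=-1)
        return (mass * cubic_spline_kernel(r, h)).sum(axis=1)

    # Regular grid of particles as a stand-in for a small tissue block.
    g = np.linspace(0.0, 1.0, 6)
    pts = np.array(np.meshgrid(g, g, g)).reshape(3, -1).T
    print(sph_density(pts, mass=0.01, h=0.3)[:5])
    ```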

  17. Development of a New Methodology for Computing Surface Sensible Heat Fluxes using Thermal Imagery

    NASA Astrophysics Data System (ADS)

    Morrison, T. J.; Calaf, M.; Fernando, H. J.; Price, T. A.; Pardyjak, E.

    2017-12-01

    Current numerical weather prediction models utilize similarity theory to characterize momentum, moisture, and heat fluxes. Such formulations are only valid under the ideal assumptions of spatial homogeneity, statistical stationarity, and zero subsidence. However, recent surface temperature measurements from the Mountain Terrain Atmospheric Modeling and Observations (MATERHORN) Program on the Salt Flats of Utah's West Desert show that even under the most a priori ideal conditions, heterogeneity of the aforementioned variables exists. We present a new method to extract spatially distributed measurements of surface sensible heat flux from thermal imagery. The approach consists of using a surface energy budget, where the ground heat flux is computed from limited measurements using a force-restore-type methodology, the latent heat fluxes are neglected, and the energy storage is computed using a lumped-capacitance model. Preliminary validation of the method is presented using experimental data acquired from a nearby sonic anemometer during the MATERHORN campaign; additional evaluation is required to confirm the method's validity. Further decomposition analysis of on-site instrumentation (thermal camera, cold- and hot-wire probes, and sonic anemometers) using Proper Orthogonal Decomposition (POD) and wavelet analysis reveals time-scale similarity between the flow and surface temperature fluctuations.
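
    A hedged sketch of the budget residual described above, with latent heat neglected and the storage term supplied by a lumped-capacitance model driven by the imaged surface temperature; all material parameters and forcing values are placeholders:

    ```python
    # Surface energy budget residual: H = R_net - G - dS/dt, latent heat
    # neglected. Storage S follows a lumped-capacitance model of a thin surface
    # layer. Layer thickness and heat capacity below are illustrative.
    import numpy as np

    rho_c = 1.4e6    # volumetric heat capacity of the surface layer [J m^-3 K^-1]
    depth = 0.02     # effective layer thickness for lumped storage [m]
    dt = 60.0        # time between consecutive thermal images [s]

    def sensible_heat_flux(T_surface, R_net, G):
        # T_surface: (n_times, ny, nx) surface temperatures from thermal imagery
        # R_net, G:  (n_times - 1, ny, nx) net radiation and ground heat flux
        dS_dt = rho_c * depth * np.diff(T_surface, axis=0) / dt  # storage term
        return R_net - G - dS_dt        # budget residual = sensible heat flux

    T = 290.0 + np.random.default_rng(0).normal(0, 0.3, (10, 4, 4)).cumsum(axis=0)
    H = sensible_heat_flux(T, R_net=np.full((9, 4, 4), 350.0),
                           G=np.full((9, 4, 4), 80.0))
    print(H.mean())   # spatially distributed flux field, one map per image pair
    ```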

  18. Computational modelling of large deformations in layered-silicate/PET nanocomposites near the glass transition

    NASA Astrophysics Data System (ADS)

    Figiel, Łukasz; Dunne, Fionn P. E.; Buckley, C. Paul

    2010-01-01

    Layered-silicate nanoparticles offer a cost-effective reinforcement for thermoplastics. Computational modelling has been employed to study large deformations in layered-silicate/poly(ethylene terephthalate) (PET) nanocomposites near the glass transition, as would be experienced during industrial forming processes such as thermoforming or injection stretch blow moulding. Non-linear numerical modelling was applied to predict the macroscopic large deformation behaviour, with morphology evolution and deformation occurring at the microscopic level, using the representative volume element (RVE) approach. A physically based elasto-viscoplastic constitutive model, describing the behaviour of the PET matrix within the RVE, was numerically implemented into a finite element solver (ABAQUS) using a UMAT subroutine. The implementation was designed to be robust, accommodating large rotations and stretches of the matrix local to, and between, the nanoparticles. The nanocomposite morphology was reconstructed at the RVE level using a Monte-Carlo-based algorithm that placed straight, high-aspect-ratio particles according to the specified orientation and volume fraction, with the assumption of periodicity. Computational experiments using this methodology enabled prediction of the strain-stiffening behaviour of the nanocomposite, observed experimentally, as a function of strain, strain rate, temperature and particle volume fraction. These results revealed the probable origins of the enhanced strain stiffening observed: (a) evolution of the morphology (through particle re-orientation) and (b) early onset of stress-induced pre-crystallization (and hence lock-up of viscous flow), triggered by the presence of particles. The computational model enabled prediction of the effects of process parameters (strain rate, temperature) on the evolution of the morphology, and hence on the end-use properties.
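
    The Monte-Carlo reconstruction step can be sketched in two dimensions: straight, high-aspect-ratio particles are dropped into a periodic unit cell with a prescribed orientation distribution until a target volume (here, area) fraction is reached. Overlap checks and meshing, required for the actual finite element model, are omitted, and all geometric parameters are invented:

    ```python
    # Hedged 2D sketch of periodic RVE reconstruction by random placement.
    import numpy as np

    rng = np.random.default_rng(0)

    def place_particles(target_fraction=0.05, length=0.2, thickness=0.004,
                        mean_angle=0.0, angle_spread=0.2):
        particles, fraction = [], 0.0
        area = length * thickness                    # area of one particle
        while fraction < target_fraction:
            centre = rng.random(2)                   # uniform in the unit cell
            angle = mean_angle + angle_spread * rng.standard_normal()
            particles.append((centre % 1.0, angle))  # periodic wrap of centre
            fraction += area                         # unit cell has area 1
        return particles

    rve = place_particles()
    print(len(rve), "particles placed for ~5% area fraction")
    ```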

  19. Template construction grammar: from visual scene description to language comprehension and agrammatism.

    PubMed

    Barrès, Victor; Lee, Jinyong

    2014-01-01

    How does the language system coordinate with our visual system to yield flexible integration of linguistic, perceptual, and world-knowledge information when we communicate about the world we perceive? Schema theory is a computational framework that allows the simulation of perceptuo-motor coordination programs on the basis of known brain operating principles such as cooperative computation and distributed processing. We first present its application to a model of language production, SemRep/TCG, which combines a semantic representation of visual scenes (SemRep) with Template Construction Grammar (TCG) as a means to generate verbal descriptions of a scene from its associated SemRep graph. SemRep/TCG combines the neurocomputational framework of schema theory with the representational format of construction grammar in a model linking eye-tracking data to visual scene descriptions. We then offer a conceptual extension of TCG to include language comprehension and address data on the role of both world knowledge and grammatical semantics in the comprehension performance of agrammatic aphasic patients. This extension introduces a distinction between heavy and light semantics. The TCG model of language comprehension offers a computational framework to quantitatively analyze the distributed dynamics of language processes, focusing on the interactions between grammatical, world-knowledge, and visual information. In particular, it reveals interesting implications for the understanding of the various patterns of comprehension performance of agrammatic aphasics measured using sentence-picture matching tasks. This new step in the life cycle of the model serves as a basis for exploring the specific challenges that neurolinguistic computational modeling poses to the neuroinformatics community.

  20. A CFD-informed quasi-steady model of flapping wing aerodynamics.

    PubMed

    Nakata, Toshiyuki; Liu, Hao; Bomphrey, Richard J

    2015-11-01

    Aerodynamic performance and agility during flapping flight are determined by the combination of wing shape and kinematics. The degree of morphological and kinematic optimisation is unknown and depends upon a large parameter space. Aimed at providing an accurate and computationally inexpensive modelling tool for flapping-wing aerodynamics, we propose a novel CFD (computational fluid dynamics)-informed quasi-steady model (CIQSM), which assumes that the aerodynamic forces on a flapping wing can be decomposed into the quasi-steady forces and parameterised based on CFD results. Using least-squares fitting, we determine a set of proportional coefficients for the quasi-steady model relating wing kinematics to instantaneous aerodynamic force and torque; we calculate power with the product of quasi-steady torques and angular velocity. With the quasi-steady model fully and independently parameterised on the basis of high-fidelity CFD modelling, it is capable of predicting flapping-wing aerodynamic forces and power more accurately than the conventional blade element model (BEM) does. The improvement can be attributed to, for instance, taking into account the effects of the induced downwash and the wing tip vortex on the force generation and power consumption. Our model is validated by comparing the aerodynamics of a CFD model and the present quasi-steady model using the example case of a hovering hawkmoth. It demonstrates that the CIQSM outperforms the conventional BEM while remaining computationally cheap, and hence can be an effective tool for revealing the mechanisms of optimization and control of kinematics and morphology in flapping-wing flight for both bio-flyers and unmanned air systems.
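
    The parameterisation step reduces to a linear least-squares problem once candidate quasi-steady terms are evaluated along the wing kinematics; the basis terms below (translational, added-mass-like, rotational) are illustrative rather than the paper's exact decomposition:

    ```python
    # Hedged sketch of fitting quasi-steady coefficients to CFD force data:
    # F(t) ~ sum_k c_k * phi_k(kinematics(t)), solved by least squares.
    import numpy as np

    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 1.0, 200)
    phi = np.sin(2 * np.pi * t)                  # stroke angle
    phidot = 2 * np.pi * np.cos(2 * np.pi * t)   # angular velocity
    phiddot = -(2 * np.pi)**2 * np.sin(2 * np.pi * t)

    # Candidate quasi-steady basis terms evaluated along the kinematics.
    B = np.column_stack([phidot * np.abs(phidot),   # translational, ~U^2
                         phiddot,                   # added-mass-like
                         phidot])                   # rotational/damping

    # Stand-in for a CFD force history (here synthesized from known weights).
    F_cfd = B @ np.array([1.2, 0.3, 0.05]) + 0.02 * rng.standard_normal(len(t))

    coef, *_ = np.linalg.lstsq(B, F_cfd, rcond=None)
    print(np.round(coef, 3))    # recovered quasi-steady coefficients
    # Power then follows as the product of fitted torque and angular velocity.
    ```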

  2. Waveform distortion by 2-step modeling ground vibration from trains

    NASA Astrophysics Data System (ADS)

    Wang, F.; Chen, W.; Zhang, J.; Li, F.; Liu, H.; Chen, X.; Pan, Y.; Li, G.; Xiao, F.

    2017-10-01

    The 2-step procedure is widely used in numerical research on ground vibrations from trains. The ground is inconsistently represented by a simplified model in the first step and by a refined model in the second step, which may lead to distortions in the simulation results. In order to reveal this modeling error, time histories of ground-borne vibrations were computed based on the 2-step procedure and then compared with the results from a benchmark procedure of the whole system. All parameters involved were intentionally set as equal for the 2 methods, which ensures that differences in the results originated from the inconsistencies of the ground model. Excited by wheel loads of low speeds such as 60 km/h and low frequencies less than 8 Hz, the computed responses of the subgrade were quite close to the benchmarks. However, notable distortions were found in all loading cases at higher frequencies. Moreover, significant underestimation of intensity occurred when load frequencies equaled 16 Hz. This occurred not only at the subgrade but also at the points 10 m and 20 m away from the track. When the load speed was increased to 350 km/h, all computed waveforms were distorted, including the responses to the loads at very low frequencies. The modeling error found herein suggests that the ground models in the 2 steps should be calibrated in terms of frequency bands to be investigated, and the speed of train should be taken into account at the same time.

  3. Cellular intelligence: Microphenomenology and the realities of being.

    PubMed

    Ford, Brian J

    2017-12-01

    Traditions of Eastern thought conceptualised life in a holistic sense, emphasising the processes of maintaining health and conquering sickness as manifestations of an essentially spiritual principle that was of overriding importance in the conduct of living. Western science, which drove the overriding and partial eclipse of Eastern traditions, became founded on a reductionist quest for ultimate realities which, in the modern scientific world, has embraced the notion that every living process can be successfully modelled by a digital computer system. It is argued here that the essential processes of cognition, response and decision-making inherent in living cells transcend conventional modelling, and microscopic studies of organisms like the shell-building amoebae and the rhodophyte alga Antithamnion reveal a level of cellular intelligence that is unrecognized by science and is not amenable to computer analysis. Copyright © 2017. Published by Elsevier Ltd.

  4. Low-order modeling of internal heat transfer in biomass particle pyrolysis

    DOE PAGES

    Wiggins, Gavin M.; Daw, C. Stuart; Ciesielski, Peter N.

    2016-05-11

    We present a computationally efficient, one-dimensional simulation methodology for biomass particle heating under conditions typical of fast pyrolysis. Our methodology is based on identifying the rate limiting geometric and structural factors for conductive heat transport in biomass particle models with realistic morphology to develop low-order approximations that behave appropriately. Comparisons of transient temperature trends predicted by our one-dimensional method with three-dimensional simulations of woody biomass particles reveal good agreement, if the appropriate equivalent spherical diameter and bulk thermal properties are used. Here, we conclude that, for particle sizes and heating regimes typical of fast pyrolysis, it is possible to simulate biomass particle heating with reasonable accuracy and minimal computational overhead, even when variable size, aspherical shape, anisotropic conductivity, and complex, species-specific internal pore geometry are incorporated.
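
    As a hedged illustration of such a low-order model, transient conduction in an equivalent sphere with bulk thermal properties can be integrated with a few lines of explicit finite differences; the property values are generic placeholders for woody biomass, not the paper's calibrated parameters:

    ```python
    # 1D transient conduction in an equivalent sphere, explicit finite
    # differences: dT/dt = alpha * (T_rr + (2/r) T_r) in spherical symmetry.
    import numpy as np

    k, rho, cp = 0.2, 500.0, 2300.0      # W/m/K, kg/m^3, J/kg/K (bulk values)
    alpha = k / (rho * cp)
    R, n = 0.5e-3, 50                    # equivalent spherical radius [m], nodes
    r = np.linspace(0.0, R, n)
    dr = r[1] - r[0]
    dt = 0.2 * dr**2 / alpha             # stable explicit time step

    T = np.full(n, 300.0)                # initial particle temperature [K]
    T_reactor = 773.0                    # fast-pyrolysis reactor temperature [K]

    for _ in range(2000):
        Tn = T.copy()
        T[1:-1] = Tn[1:-1] + alpha * dt * (
            (Tn[2:] - 2 * Tn[1:-1] + Tn[:-2]) / dr**2
            + (2.0 / r[1:-1]) * (Tn[2:] - Tn[:-2]) / (2 * dr))
        T[0] = T[1]                      # symmetry condition at the centre
        T[-1] = T_reactor                # surface held at reactor temperature

    print(round(T[n // 2], 1))           # mid-radius temperature [K]
    ```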

  6. Computational Investigation of Amine–Oxygen Exciplex Formation

    PubMed Central

    Haupert, Levi M.; Simpson, Garth J.; Slipchenko, Lyudmila V.

    2012-01-01

    It has been suggested that fluorescence from amine-containing dendrimer compounds could be the result of a charge transfer between amine groups and molecular oxygen [Chu, C.-C.; Imae, T. Macromol. Rapid Commun. 2009, 30, 89.]. In this paper we employ equation-of-motion coupled cluster computational methods to study the electronic structure of an ammonia–oxygen model complex to examine this possibility. The results reveal several bound electronic states with charge transfer character with emission energies generally consistent with previous observations. However, further work involving confinement, solvent, and amine structure effects will be necessary for more rigorous examination of the charge transfer fluorescence hypothesis. PMID:21812447

  7. Some special solutions to the Hyperbolic NLS equation

    NASA Astrophysics Data System (ADS)

    Vuillon, Laurent; Dutykh, Denys; Fedele, Francesco

    2018-04-01

    The hyperbolic nonlinear Schrödinger (HypNLS) equation arises as a model for the dynamics of three-dimensional narrow-band deep-water gravity waves. In this study, the symmetries and conservation laws of this equation are computed. The Petviashvili method is then exploited to numerically compute bi-periodic time-harmonic solutions of the HypNLS equation. In physical space they represent non-localized standing waves. Non-trivial spatial patterns are revealed, and an attempt is made to describe them using symbolic dynamics and the language of substitutions. Finally, the dynamics of a slightly perturbed standing wave is numerically investigated by means of a highly accurate Fourier solver.
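
    For readers unfamiliar with the Petviashvili method, a minimal one-dimensional sketch (the paper treats the two-dimensional hyperbolic operator, but the stabilizing-factor idea is identical) computes a time-harmonic cubic-NLS profile spectrally:

    ```python
    # Petviashvili iteration for u_xx - mu*u + u^3 = 0 on a periodic domain,
    # solved with a Fourier spectral discretization. The stabilizing factor
    # gamma = <L u, u> / <N(u), u> is raised to p/(p-1) = 3/2 for a cubic
    # nonlinearity, which makes the fixed-point iteration convergent.
    import numpy as np

    N, L, mu = 256, 40.0, 1.0
    x = (np.arange(N) - N // 2) * (L / N)
    k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
    Lsym = k**2 + mu                    # symbol of the linear operator (positive)

    u = np.exp(-x**2)                   # arbitrary localized initial guess
    for _ in range(200):
        u_hat = np.fft.fft(u)
        Nu_hat = np.fft.fft(u**3)
        gamma = np.sum(Lsym * np.abs(u_hat)**2) / np.sum(np.conj(u_hat) * Nu_hat)
        u = np.real(np.fft.ifft((gamma.real**1.5) * Nu_hat / Lsym))

    # Converges to the soliton u(x) = sqrt(2*mu) * sech(sqrt(mu) * x):
    print(np.max(u), np.sqrt(2 * mu))   # both ~1.414
    ```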

  8. Oxidation mechanism of formic acid on the bismuth adatom-modified Pt(111) surface.

    PubMed

    Perales-Rondón, Juan Victor; Ferre-Vilaplana, Adolfo; Feliu, Juan M; Herrero, Enrique

    2014-09-24

    In order to improve catalytic processes, elucidation of reaction mechanisms is essential. Here, supported by a combination of experimental and computational results, the oxidation mechanism of formic acid on Pt(111) electrodes modified by the incorporation of bismuth adatoms is revealed. In the proposed model, formic acid is first physisorbed on bismuth and then deprotonated and chemisorbed in formate form, also on bismuth; from this configuration the C-H bond is cleaved on a neighboring Pt site, yielding CO2. It was found computationally that the activation energy for the C-H bond cleavage step is negligible, which was also verified experimentally.

  9. Adapting to the surface: A comparison of handwriting measures when writing on a tablet computer and on paper.

    PubMed

    Gerth, Sabrina; Dolk, Thomas; Klassert, Annegret; Fliesser, Michael; Fischer, Martin H; Nottbusch, Guido; Festman, Julia

    2016-08-01

    Our study addresses the following research questions: Are there differences between handwriting movements on paper and on a tablet computer? Can experienced writers, such as most adults, adapt their graphomotor execution to a rather unfamiliar writing surface such as a tablet computer? We examined the handwriting performance of adults in three tasks of differing complexity: (a) graphomotor abilities, (b) visuomotor abilities, and (c) handwriting. Each participant performed each task twice, once on paper and once on a tablet computer with a pen. We tested 25 participants, measuring their writing duration, in-air time, number of pen lifts, writing velocity, and number of inversions in velocity. The data were analyzed using linear mixed-effects modeling with repeated measures. Our results reveal differences between writing on paper and on a tablet computer which were partly task-dependent. Our findings also show that participants were able to adapt their graphomotor execution to the smoother surface of the tablet computer during the tasks. Copyright © 2016 Elsevier B.V. All rights reserved.
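
    The repeated-measures analysis can be sketched with a random-intercept mixed model; the column names, effect sizes, and the example response (writing velocity) below are hypothetical:

    ```python
    # Hedged sketch of a linear mixed-effects model with a random intercept per
    # participant, on synthetic data shaped like the design described above.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    participants = np.repeat(np.arange(25), 6)     # 25 subjects x 6 conditions
    surface = np.tile(["paper", "tablet"], 75)
    task = np.tile(np.repeat(["graphomotor", "visuomotor", "handwriting"], 2), 25)
    velocity = (2.0 + 0.3 * (surface == "tablet")              # surface effect
                + 0.2 * rng.standard_normal(150)               # residual noise
                + np.repeat(0.5 * rng.standard_normal(25), 6)) # per-subject shift

    df = pd.DataFrame(dict(participant=participants, surface=surface,
                           task=task, velocity=velocity))

    result = smf.mixedlm("velocity ~ surface * task", df,
                         groups=df["participant"]).fit()
    print(result.summary())
    ```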

  10. Dst Index in the 2008 GEM Modeling Challenge - Model Performance for Moderate and Strong Magnetic Storms

    NASA Technical Reports Server (NTRS)

    Rastaetter, Lutz; Kuznetsova, Maria; Hesse, Michael; Chulaki, Anna; Pulkkinen, Antti; Ridley, Aaron J.; Gombosi, Tamas; Vapirev, Alexander; Raeder, Joachim; Wiltberger, Michael James

    2010-01-01

    The GEM 2008 modeling challenge efforts are expanding beyond comparing in-situ measurements in the magnetosphere and ionosphere to include the computation of indices to be compared. The Dst index measures the largest deviations of the horizontal magnetic field at 4 equatorial magnetometers from the quiet-time background field and is commonly used to track the strength of the magnetic disturbance of the magnetosphere during storms. Models can calculate a proxy Dst index in various ways, including using the Dessler-Parker-Sckopke relation and the energy of the ring current, or Biot-Savart integration of electric currents in the magnetosphere. The GEM modeling challenge investigates 4 space weather events, and we compare models available at CCMC against each other and against the observed values of Dst. Models used include SWMF/BATSRUS, OpenGGCM, LFM, GUMICS (3D magnetosphere MHD models), Fok-RC, CRCM, RAM-SCB (kinetic drift models of the ring current), WINDMI (magnetosphere-ionosphere electric circuit model), and predictions based on an impulse response function (IRF) model and analytic coupling functions with inputs of solar wind data. In addition to the analysis of model-observation comparisons, we look at the way Dst is computed in global magnetosphere models. The default value of Dst computed by the SWMF model is Bz at the Earth's center. In addition to this, we present results obtained at different locations on the Earth's surface. We choose equatorial locations at local noon, dusk (18:00 hours), midnight and dawn (6:00 hours). The different virtual observatory locations reveal the variation around the Earth-centered Dst value resulting from the distribution of electric currents in the magnetosphere during different phases of a storm.
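
    The Dessler-Parker-Sckopke route to a proxy Dst can be written down in a few lines; with standard dipole constants it reproduces the familiar rule of thumb that roughly 4×10^13 J of ring current energy corresponds to 1 nT of depression:

    ```python
    # Back-of-envelope Dessler-Parker-Sckopke estimate:
    #   Delta B / B0 = -2 E_rc / (3 E_m),
    # where E_m is the energy of the dipole field above the Earth's surface.
    import numpy as np

    mu0 = 4e-7 * np.pi   # vacuum permeability [H/m]
    B0 = 3.1e-5          # equatorial surface dipole field [T]
    Re = 6.371e6         # Earth radius [m]

    E_m = (4 * np.pi / 3) * B0**2 * Re**3 / mu0   # ~8e17 J

    def dst_from_ring_current(E_rc_joules):
        # Predicted magnetic depression in nT (negative during storms).
        return -(2.0 / 3.0) * (E_rc_joules / E_m) * B0 * 1e9

    print(dst_from_ring_current(4e15))   # ~ -100 nT, a strong storm
    ```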

  11. A fractional calculus perspective of distributed propeller design

    NASA Astrophysics Data System (ADS)

    Tenreiro Machado, J.; Galhano, Alexandra M.

    2018-02-01

    A new generation of aircraft with distributed propellers leads to operational performances superior to those exhibited by standard designs. Computational simulations and experimental tests show a reduction of fuel consumption and noise. This paper proposes an analogy between aerodynamics and electrical circuits. The model reveals properties similar to those of fractional-order systems and gives a deeper insight into the dynamics of multi-propeller coupling.

  12. Site-specific strong ground motion prediction using 2.5-D modelling

    NASA Astrophysics Data System (ADS)

    Narayan, J. P.

    2001-08-01

    An algorithm was developed using the 2.5-D elastodynamic wave equation, based on the displacement-stress relation. One of the most significant advantages of the 2.5-D simulation is that the 3-D radiation pattern can be generated using double-couple point shear-dislocation sources in the 2-D numerical grid. A parsimonious staggered grid scheme was adopted instead of the standard staggered grid scheme, since this is the only scheme suitable for computing the dislocation. This new 2.5-D numerical modelling avoids the extensive computational cost of 3-D modelling. The significance of this exercise is that it makes it possible to simulate the strong ground motion (SGM), taking into account the energy released, 3-D radiation pattern, path effects and local site conditions at any location around the epicentre. The slowness vector (py) was used in the supersonic region for each layer, so that all the components of the inertia coefficient are positive. The double-couple point shear-dislocation source was implemented in the numerical grid using the moment tensor components as the body-force couples. The moment per unit volume was used in both the 3-D and 2.5-D modelling. A good agreement in the 3-D and 2.5-D responses for different grid sizes was obtained when the moment per unit volume was further reduced by a factor equal to the finite-difference grid size in the case of the 2.5-D modelling. The components of the radiation pattern were computed in the xz-plane using 3-D and 2.5-D algorithms for various focal mechanisms, and the results were in good agreement. A comparative study of the amplitude behaviour of the 3-D and 2.5-D wavefronts in a layered medium reveals the spatial and temporal damped nature of the 2.5-D elastodynamic wave equation. 3-D and 2.5-D simulated responses at a site using a different strike direction reveal that strong ground motion (SGM) can be predicted just by rotating the strike of the fault counter-clockwise by the same amount as the azimuth of the site with respect to the epicentre. This adjustment is necessary since the response is computed keeping the epicentre, focus and the desired site in the same xz-plane, with the x-axis pointing in the north direction.

  13. Evaluating the hydraulic and transport properties of peat soil using pore network modeling and X-ray micro computed tomography

    NASA Astrophysics Data System (ADS)

    Gharedaghloo, Behrad; Price, Jonathan S.; Rezanezhad, Fereidoun; Quinton, William L.

    2018-06-01

    Micro-scale properties of peat pore space and their influence on hydraulic and transport properties of peat soils have been given little attention so far. Characterizing the variation of these properties in a peat profile can increase our knowledge on the processes controlling contaminant transport through peatlands. As opposed to the common macro-scale (or bulk) representation of groundwater flow and transport processes, a pore network model (PNM) simulates flow and transport processes within individual pores. Here, a pore network modeling code capable of simulating advective and diffusive transport processes through a 3D unstructured pore network was developed; its predictive performance was evaluated by comparing its results to empirical values and to the results of computational fluid dynamics (CFD) simulations. This is the first time that peat pore networks have been extracted from X-ray micro-computed tomography (μCT) images of peat deposits and peat pore characteristics evaluated in a 3D approach. Water flow and solute transport were modeled in the unstructured pore networks mapped directly from μCT images. The modeling results were processed to determine the bulk properties of peat deposits. Results portray the commonly observed decrease in hydraulic conductivity with depth, which was attributed to the reduction of pore radius and increase in pore tortuosity. The increase in pore tortuosity with depth was associated with more decomposed peat soil and decreasing pore coordination number with depth, which extended the flow path of fluid particles. Results also revealed that hydraulic conductivity is isotropic locally, but becomes anisotropic after upscaling to core-scale; this suggests the anisotropy of peat hydraulic conductivity observed in core-scale and field-scale is due to the strong heterogeneity in the vertical dimension that is imposed by the layered structure of peat soils. Transport simulations revealed that for a given solute, the effective diffusion coefficient decreases with depth due to the corresponding increase of diffusional tortuosity. Longitudinal dispersivity of peat also was computed by analyzing advective-dominant transport simulations that showed peat dispersivity is similar to the empirical values reported in the same peat soil; it is not sensitive to soil depth and does not vary much along the soil profile.

  14. Pulling smarties out of a bag: a Headed Records analysis of children's recall of their own past beliefs.

    PubMed

    Barreau, S; Morton, J

    1999-11-09

    The work reported provides an information processing account of young children's performance on the Smarties task (Perner, J., Leekam, S.R., & Wimmer, H. 1987, Three-year-olds' difficulty with false belief: the case for a conceptual deficit. British Journal of Developmental Psychology, 5, 125-137). In this task, a 3-year-old is shown a Smarties tube and asked about the supposed contents. The true contents, pencils, is then revealed, and the majority of 3-year-olds cannot recall their initial belief that the tube contained Smarties. The theoretical analysis, based on the Headed Records framework (Morton, J., Hammersley, R.J., & Bekerian, D.A. 1985, Headed records: a model for memory and its failures, Cognition, 20, 1-23), focuses on the computational conditions that are required to resolve the Smarties task; on the possible limitations in the developing memory system that may lead to a computational breakdown; and on ways of bypassing such limitations to ensure correct resolution. The design, motivated by this analysis, is a variation on Perner's Smarties task. Instead of revealing the tube's contents immediately after establishing the child's beliefs about it, these contents were then transferred to a bag and a (false) belief about the bag's contents established. Only then were the true contents of the bag revealed. The same procedure (different contents) was carried out a week later. As predicted children's performance was better (a) in the 'tube' condition; and (b) on the second test. Consistent with the proposed analysis, the data show that when the computational demands imposed by the original task are reduced, young children can and do remember what they had thought about the contents of the tube even after its true contents are revealed.

  15. A computational analysis of the long-term regulation of arterial pressure.

    PubMed

    Beard, Daniel A; Pettersen, Klas H; Carlson, Brian E; Omholt, Stig W; Bugenhagen, Scott M

    2013-01-01

    The asserted dominant role of the kidneys in the chronic regulation of blood pressure and in the etiology of hypertension has been debated since the 1970s. At the center of the theory is the observation that the acute relationships between arterial pressure and urine production—the acute pressure-diuresis and pressure-natriuresis curves—physiologically adapt to perturbations in pressure and/or changes in the rate of salt and volume intake. These adaptations, modulated by various interacting neurohumoral mechanisms, result in chronic relationships between water and salt excretion and pressure that are much steeper than the acute relationships. While the view that renal function is the dominant controller of arterial pressure has been supported by computer models of the cardiovascular system known as the "Guyton-Coleman model", no unambiguous description of a computer model capturing chronic adaptation of acute renal function in blood pressure control has been presented. Here, such a model is developed with the goals of: (1) representing the relevant mechanisms in an identifiable mathematical model; (2) identifying model parameters using appropriate data; (3) validating model predictions in comparison to data; and (4) probing hypotheses regarding the long-term control of arterial pressure and the etiology of primary hypertension. The developed model reveals that the long-term control of arterial blood pressure is primarily through the baroreflex arc and the renin-angiotensin system, and that arterial stiffening provides a sufficient explanation for the etiology of primary hypertension associated with ageing. Furthermore, the model provides the first consistent explanation of the physiological response to chronic stimulation of the baroreflex.

  16. Domain-averaged snow depth over complex terrain from flat field measurements

    NASA Astrophysics Data System (ADS)

    Helbig, Nora; van Herwijnen, Alec

    2017-04-01

    Snow depth is an important parameter for a variety of coarse-scale models and applications, such as hydrological forecasting. Since high-resolution snow cover models are computationally expensive, simplified snow models are often used. Ground-measured snow depth at individual stations offers an opportunity for snow depth data assimilation to improve coarse-scale model forecasts. Snow depth is, however, commonly recorded at so-called flat fields, often in large measurement networks. While these ground measurement networks provide a wealth of information, various studies have questioned the representativeness of such flat-field snow depth measurements for the surrounding topography. We developed two parameterizations to compute domain-averaged snow depth for coarse model grid cells over complex topography using easily derived topographic parameters. To derive the two parameterizations, we performed a scale-dependent analysis for domain sizes ranging from 50 m to 3 km using highly resolved snow depth maps at the peak of winter from two distinct climatic regions, in Switzerland and in the Spanish Pyrenees. The first, simpler parameterization uses a commonly applied linear lapse rate. For the second parameterization, we first removed the obvious elevation gradient in mean snow depth, which revealed an additional correlation with the subgrid sky view factor. We evaluated the domain-averaged snow depths derived with both parameterizations from nearby flat-field measurements against the domain averages of the highly resolved snow depth maps. This revealed an overall better performance for the parameterization combining a power-law elevation trend scaled with the subgrid parameterized sky view factor. We therefore suggest that this parameterization could be used to assimilate flat-field snow depth into coarse-scale snow model frameworks in order to improve coarse-scale snow depth estimates over complex topography.
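
    The first, lapse-rate-based parameterization is simple enough to sketch directly; the lapse-rate value and the DEM below are placeholders, not the fitted parameters of the study:

    ```python
    # Hedged sketch: extrapolate a flat-field snow depth measurement over a
    # coarse grid cell with a linear elevation lapse rate, then average.
    import numpy as np

    def domain_mean_snow_depth(hs_station, z_station, dem, lapse_rate=0.0005):
        # hs_station: measured snow depth [m] at the flat-field station
        # z_station:  station elevation [m]; dem: elevations in the cell [m]
        # lapse_rate: snow depth increase per metre of elevation [m/m]
        hs = hs_station + lapse_rate * (dem - z_station)
        return np.clip(hs, 0.0, None).mean()   # no negative snow depths

    dem = np.random.default_rng(0).uniform(1500.0, 2500.0, size=(50, 50))
    print(domain_mean_snow_depth(hs_station=1.2, z_station=1800.0, dem=dem))
    ```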

  17. Quality Saving Mechanisms of Mitochondria during Aging in a Fully Time-Dependent Computational Biophysical Model

    PubMed Central

    Mellem, Daniel; Fischer, Frank; Jaspers, Sören; Wenck, Horst; Rübhausen, Michael

    2016-01-01

    Mitochondria are essential for the energy production of eukaryotic cells. During aging, mitochondria run through various processes which change their quality in terms of activity, health and metabolic supply. In recent years, many of these processes, such as fission and fusion of mitochondria, mitophagy, mitochondrial biogenesis and energy consumption, have been the subject of research. Based on numerous experimental insights, it was possible to characterize mitochondrial behaviour in computational simulations. Here, we present a new biophysical model based on the approach of Figge et al. in 2012. We introduce exponential decay and growth laws for each mitochondrial process to derive its time-dependent probability during the aging of cells. All mitochondrial processes of the original model are mathematically and biophysically redefined and additional processes are implemented: mitochondrial fission and fusion is separated into a metabolic outer-membrane part and a protein-related inner-membrane part, a quality-dependent threshold for mitophagy and mitochondrial biogenesis is introduced, and processes for activity-dependent internal oxidative stress as well as mitochondrial repair mechanisms are newly included. Our findings reveal a decrease of mitochondrial quality and a fragmentation of the mitochondrial network during aging. Additionally, the model discloses a quality-increasing mechanism due to the interplay of the mitophagy and biogenesis cycle and the fission and fusion cycle of mitochondria. It is revealed that decreased mitochondrial repair can be a quality-saving process in aged cells. Furthermore, the model finds strategies to sustain the quality of the mitochondrial network in cells with high production rates of reactive oxygen species due to large energy demands. Hence, the model adds new insights to the biophysical mechanisms of mitochondrial aging and provides novel understanding of the interdependency of mitochondrial processes. PMID:26771181

  18. Translating natural genetic variation to gene expression in a computational model of the Drosophila gap gene regulatory network

    PubMed Central

    Kozlov, Konstantin N.; Kulakovskiy, Ivan V.; Zubair, Asif; Marjoram, Paul; Lawrie, David S.; Nuzhdin, Sergey V.; Samsonova, Maria G.

    2017-01-01

    Annotating the genotype-phenotype relationship, and developing a proper quantitative description of the relationship, requires understanding the impact of natural genomic variation on gene expression. We apply a sequence-level model of gap gene expression in the early development of Drosophila to analyze single nucleotide polymorphisms (SNPs) in a panel of natural sequenced D. melanogaster lines. Using a thermodynamic modeling framework, we provide both analytical and computational descriptions of how single-nucleotide variants affect gene expression. The analysis reveals that the sequence variants increase (decrease) gene expression if located within binding sites of repressors (activators). We show that the sign of SNP influence (activation or repression) may change in time and space and elucidate the origin of this change in specific examples. The thermodynamic modeling approach predicts non-local and non-linear effects arising from SNPs, and combinations of SNPs, in individual fly genotypes. Simulation of individual fly genotypes using our model reveals that this non-linearity reduces to almost additive inputs from multiple SNPs. Further, we see signatures of the action of purifying selection in the gap gene regulatory regions. To infer the specific targets of purifying selection, we analyze the patterns of polymorphism in the data at two phenotypic levels: the strengths of binding and expression. We find that combinations of SNPs show evidence of being under selective pressure, while individual SNPs do not. The model predicts that SNPs appear to accumulate in the genotypes of the natural population in a way biased towards small increases in activating action on the expression pattern. Taken together, these results provide a systems-level view of how genetic variation translates to the level of gene regulatory networks via combinatorial SNP effects. PMID:28898266
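
    The sign rule reported above follows directly from two-state thermodynamic occupancy: a SNP that adds a binding free-energy penalty raises the site's effective dissociation constant and lowers occupancy, which lowers expression for an activator site and raises it for a repressor site. A toy sketch with illustrative numbers:

    ```python
    # Two-state fractional occupancy and the effect of a SNP-induced binding
    # free-energy penalty. All values are illustrative, not fitted gap-gene
    # parameters from the paper.
    import numpy as np

    def occupancy(tf_concentration, kd):
        # Simple bound/unbound fractional occupancy of a single site.
        return tf_concentration / (tf_concentration + kd)

    kT = 0.593            # kcal/mol at ~298 K
    ddG = 1.0             # binding free-energy penalty introduced by the SNP
    kd_ref, tf = 1.0, 0.5

    occ_ref = occupancy(tf, kd_ref)
    occ_snp = occupancy(tf, kd_ref * np.exp(ddG / kT))   # weakened site

    print(round(occ_ref, 3), round(occ_snp, 3))
    # For an activator site, expression drops with occupancy; for a repressor
    # site the same SNP de-represses the target, matching the sign rule above.
    ```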

  19. Assessment of uncertainties of the models used in thermal-hydraulic computer codes

    NASA Astrophysics Data System (ADS)

    Gricay, A. S.; Migrov, Yu. A.

    2015-09-01

    The article addresses the problem of determining the statistical characteristics of variable parameters (their variation range and distribution law) when analyzing the uncertainty and sensitivity of calculation results to uncertainty in input data. A comparative analysis of modern approaches to treating uncertainty in input data is presented. The need to develop an alternative method for estimating the uncertainty of model parameters used in thermal-hydraulic computer codes, in particular in the closing correlations of the loop thermal-hydraulics block, is shown. Such a method should involve a minimal degree of subjectivity and must be based on objective quantitative assessment criteria. The method includes three sequential stages: selecting experimental data satisfying the specified criteria, identifying the key closing correlation using a sensitivity analysis, and carrying out case calculations followed by statistical processing of the results. By using the method, one can estimate the uncertainty range of a variable parameter and establish its distribution law within that range, provided that the experimental information is sufficiently representative. Practical application of the method is demonstrated by the example of estimating the uncertainty of a parameter appearing in the model describing the transition to post-burnout heat transfer that is used in the thermal-hydraulic computer code KORSAR. The study revealed the need to narrow the previously established uncertainty range of this parameter and to replace the uniform distribution law in that range with a Gaussian distribution law. The proposed method can be applied to different thermal-hydraulic computer codes. In some cases, its application can make it possible to achieve a smaller degree of conservatism in the expert estimates of uncertainties pertinent to the model parameters used in computer codes.

  20. Computational Fluid Dynamics Modeling of Symptomatic Intracranial Atherosclerosis May Predict Risk of Stroke Recurrence

    PubMed Central

    Leng, Xinyi; Scalzo, Fabien; Ip, Hing Lung; Johnson, Mark; Fong, Albert K.; Fan, Florence S. Y.; Chen, Xiangyan; Soo, Yannie O. Y.; Miao, Zhongrong; Liu, Liping; Feldmann, Edward; Leung, Thomas W. H.; Liebeskind, David S.; Wong, Ka Sing

    2014-01-01

    Background Patients with symptomatic intracranial atherosclerosis (ICAS) of ≥70% luminal stenosis are at high risk of stroke recurrence. We aimed to evaluate the relationships between hemodynamics of ICAS revealed by computational fluid dynamics (CFD) models and risk of stroke recurrence in this patient subset. Methods Patients with a symptomatic ICAS lesion of 70–99% luminal stenosis were screened and enrolled in this study. CFD models were reconstructed based on baseline computed tomographic angiography (CTA) source images, to reveal hemodynamics of the qualifying symptomatic ICAS lesions. Change of pressures across a lesion was represented by the ratio of post- and pre-stenotic pressures. Change of shear strain rates (SSR) across a lesion was represented by the ratio of SSRs at the stenotic throat and the proximal normal vessel segment, and similarly for the change of flow velocities. Patients were followed up for 1 year. Results Overall, 32 patients (median age 65; 59.4% males) were recruited. The median pressure, SSR and velocity ratios for the ICAS lesions were 0.40 (−2.46–0.79), 4.5 (2.2–20.6), and 7.4 (5.2–12.5), respectively. SSR ratio (hazard ratio [HR] 1.027; 95% confidence interval [CI], 1.004–1.051; P = 0.023) and velocity ratio (HR 1.029; 95% CI, 1.002–1.056; P = 0.035) were significantly related to recurrent territorial ischemic stroke within 1 year by univariate Cox regression, respectively with c-statistics of 0.776 (95% CI, 0.594–0.903; P = 0.014) and 0.776 (95% CI, 0.594–0.903; P = 0.002) in receiver operating characteristic analysis. Conclusions Hemodynamics of ICAS on CFD models reconstructed from routinely obtained CTA images may predict subsequent stroke recurrence in patients with a symptomatic ICAS lesion of 70–99% luminal stenosis. PMID:24818753
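
    The statistical design, with each hemodynamic ratio entered one at a time into a univariate Cox model of time to recurrence, can be sketched as follows with synthetic numbers in place of the patient data (the lifelines package is one common choice; all column names and values are invented).

    ```python
    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter

    rng = np.random.default_rng(0)
    n = 32  # cohort size matching the study; the data below are synthetic

    # Hypothetical translesional ratios and 1-year follow-up outcomes
    df = pd.DataFrame({
        "ssr_ratio": rng.lognormal(mean=1.5, sigma=0.8, size=n),
        "velocity_ratio": rng.lognormal(mean=2.0, sigma=0.4, size=n),
        "months": rng.uniform(1, 12, size=n),      # time to event or censoring
        "recurrence": rng.integers(0, 2, size=n),  # 1 = recurrent stroke
    })

    # Univariate Cox model for each hemodynamic ratio, as in the study design
    for var in ("ssr_ratio", "velocity_ratio"):
        cph = CoxPHFitter()
        cph.fit(df[[var, "months", "recurrence"]],
                duration_col="months", event_col="recurrence")
        print(f"{var}: HR = {np.exp(cph.params_[var]):.3f}")
    ```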

  1. Insight into model mechanisms through automatic parameter fitting: a new methodological framework for model development

    PubMed Central

    2014-01-01

    Background Striking a balance between the degree of model complexity and parameter identifiability, while still producing biologically feasible simulations using modelling is a major challenge in computational biology. While these two elements of model development are closely coupled, parameter fitting from measured data and analysis of model mechanisms have traditionally been performed separately and sequentially. This process produces potential mismatches between model and data complexities that can compromise the ability of computational frameworks to reveal mechanistic insights or predict new behaviour. In this study we address this issue by presenting a generic framework for combined model parameterisation, comparison of model alternatives and analysis of model mechanisms. Results The presented methodology is based on a combination of multivariate metamodelling (statistical approximation of the input–output relationships of deterministic models) and a systematic zooming into biologically feasible regions of the parameter space by iterative generation of new experimental designs and look-up of simulations in the proximity of the measured data. The parameter fitting pipeline includes an implicit sensitivity analysis and analysis of parameter identifiability, making it suitable for testing hypotheses for model reduction. Using this approach, under-constrained model parameters, as well as the coupling between parameters within the model are identified. The methodology is demonstrated by refitting the parameters of a published model of cardiac cellular mechanics using a combination of measured data and synthetic data from an alternative model of the same system. Using this approach, reduced models with simplified expressions for the tropomyosin/crossbridge kinetics were found by identification of model components that can be omitted without affecting the fit to the parameterising data. Our analysis revealed that model parameters could be constrained to a standard deviation of on average 15% of the mean values over the succeeding parameter sets. Conclusions Our results indicate that the presented approach is effective for comparing model alternatives and reducing models to the minimum complexity replicating measured data. We therefore believe that this approach has significant potential for reparameterising existing frameworks, for identification of redundant model components of large biophysical models and to increase their predictive capacity. PMID:24886522

  2. Computational Methods for Nonlinear Dynamic Problems in Solid and Structural Mechanics: Progress in the Theory and Modeling of Friction and in the Control of Dynamical Systems with Frictional Forces

    DTIC Science & Technology

    1989-03-31

    present several numerical studies designed to reveal the effect that some of the governing parameters have on the behavior of the system and, whenever... FINAL TECHNICAL REPORT, March 31, 1989, AFOSR Contract F49620...

  3. Validation of the Transient Structural Response of a Threaded Assembly: Phase I

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Doebling, Scott W.; Hemez, Francois M.; Robertson, Amy N.

    2004-04-01

    This report explores the application of model validation techniques in structural dynamics. The problem of interest is the propagation of an explosive-driven mechanical shock through a complex threaded joint. The study serves the purpose of assessing whether validating a large-size computational model is feasible, which unit experiments are required, and where the main sources of uncertainty reside. The results documented here are preliminary, and the analyses are exploratory in nature. The results obtained to date reveal several deficiencies of the analysis, to be rectified in future work.

  4. Quadratic integrand double-hybrid made spin-component-scaled

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brémond, Éric, E-mail: eric.bremond@iit.it; Savarese, Marika; Sancho-García, Juan C.

    2016-03-28

    We propose two analytical expressions aiming to rationalize the spin-component-scaled (SCS) and spin-opposite-scaled (SOS) schemes for double-hybrid exchange-correlation density-functionals. Their performances are extensively tested within the framework of the nonempirical quadratic integrand double-hybrid (QIDH) model on energetic properties included into the very large GMTKN30 benchmark database, and on structural properties of semirigid medium-sized organic compounds. The SOS variant is revealed as a less computationally demanding alternative to reach the accuracy of the original QIDH model without losing any theoretical background.

  5. Machine Learning Prediction of the Energy Gap of Graphene Nanoflakes Using Topological Autocorrelation Vectors.

    PubMed

    Fernandez, Michael; Abreu, Jose I; Shi, Hongqing; Barnard, Amanda S

    2016-11-14

    The possibility of band gap engineering in graphene opens countless new opportunities for application in nanoelectronics. In this work, the energy gaps of 622 computationally optimized graphene nanoflakes were mapped to topological autocorrelation vectors using machine learning techniques. Machine learning modeling revealed that the most relevant correlations appear at topological distances in the range of 1 to 42 with prediction accuracy higher than 80%. The data-driven model can statistically discriminate between graphene nanoflakes with different energy gaps on the basis of their molecular topology.
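
    Topological autocorrelation vectors are easy to state concretely: for each topological distance d, sum the products of an atomic property over all atom pairs separated by d bonds. A small sketch is given below, with a toy ring graph standing in for a nanoflake and a unit property per atom; fixed-length vectors of this kind are what a regression model of the energy gap would take as input.

    ```python
    import numpy as np
    from collections import deque

    def shortest_path_lengths(adj):
        """All-pairs topological distances via BFS on an adjacency list."""
        n = len(adj)
        dist = np.full((n, n), -1, dtype=int)
        for s in range(n):
            dist[s, s] = 0
            q = deque([s])
            while q:
                u = q.popleft()
                for v in adj[u]:
                    if dist[s, v] < 0:
                        dist[s, v] = dist[s, u] + 1
                        q.append(v)
        return dist

    def autocorrelation_vector(adj, prop, dmax):
        """AC_d = sum over atom pairs at topological distance d of p_i * p_j."""
        dist = shortest_path_lengths(adj)
        prop = np.asarray(prop, dtype=float)
        return np.array([
            sum(prop[i] * prop[j]
                for i in range(len(adj)) for j in range(i + 1, len(adj))
                if dist[i, j] == d)
            for d in range(1, dmax + 1)
        ])

    # Toy 6-ring "flake" with a unit property per atom (illustrative only)
    ring = [[1, 5], [0, 2], [1, 3], [2, 4], [3, 5], [4, 0]]
    print(autocorrelation_vector(ring, np.ones(6), dmax=3))  # -> [6. 6. 3.]
    ```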

  6. Modelling short channel mosfets for use in VLSI

    NASA Technical Reports Server (NTRS)

    Klafter, Alex; Pilorz, Stuart; Polosa, Rosa Loguercio; Ruddock, Guy; Smith, Andrew

    1986-01-01

    In an investigation of metal oxide semiconductor field effect transistor (MOSFET) devices, a one-dimensional mathematical model of device dynamics was prepared, from which an accurate and computationally efficient drain current expression could be derived for subsequent parameter extraction. While a critical review revealed weaknesses in existing 1-D models (Pao-Sah, Pierret-Shields, Brews, and Van de Wiele), the new model, in contrast, was found to allow all the charge distributions to be continuous, to retain the inversion layer structure, and to include the contribution of current from the pinched-off part of the device. The model allows the source and drain to operate in different regimes. Numerical algorithms used for the evaluation of surface potentials in the various models are presented.

  7. Principles of proteome allocation are revealed using proteomic data and genome-scale models

    PubMed Central

    Yang, Laurence; Yurkovich, James T.; Lloyd, Colton J.; Ebrahim, Ali; Saunders, Michael A.; Palsson, Bernhard O.

    2016-01-01

    Integrating omics data to refine or make context-specific models is an active field of constraint-based modeling. Proteomics now cover over 95% of the Escherichia coli proteome by mass. Genome-scale models of Metabolism and macromolecular Expression (ME) compute proteome allocation linked to metabolism and fitness. Using proteomics data, we formulated allocation constraints for key proteome sectors in the ME model. The resulting calibrated model effectively computed the “generalist” (wild-type) E. coli proteome and phenotype across diverse growth environments. Across 15 growth conditions, prediction errors for growth rate and metabolic fluxes were 69% and 14% lower, respectively. The sector-constrained ME model thus represents a generalist ME model reflecting both growth rate maximization and “hedging” against uncertain environments and stresses, as indicated by significant enrichment of these sectors for the general stress response sigma factor σS. Finally, the sector constraints represent a general formalism for integrating omics data from any experimental condition into constraint-based ME models. The constraints can be fine-grained (individual proteins) or coarse-grained (functionally-related protein groups) as demonstrated here. This flexible formalism provides an accessible approach for narrowing the gap between the complexity captured by omics data and governing principles of proteome allocation described by systems-level models. PMID:27857205

  8. Principles of proteome allocation are revealed using proteomic data and genome-scale models

    DOE PAGES

    Yang, Laurence; Yurkovich, James T.; Lloyd, Colton J.; ...

    2016-11-18

    Integrating omics data to refine or make context-specific models is an active field of constraint-based modeling. Proteomics now cover over 95% of the Escherichia coli proteome by mass. Genome-scale models of Metabolism and macromolecular Expression (ME) compute proteome allocation linked to metabolism and fitness. Using proteomics data, we formulated allocation constraints for key proteome sectors in the ME model. The resulting calibrated model effectively computed the “generalist” (wild-type) E. coli proteome and phenotype across diverse growth environments. Across 15 growth conditions, prediction errors for growth rate and metabolic fluxes were 69% and 14% lower, respectively. The sector-constrained ME model thus represents a generalist ME model reflecting both growth rate maximization and “hedging” against uncertain environments and stresses, as indicated by significant enrichment of these sectors for the general stress response sigma factor σS. Finally, the sector constraints represent a general formalism for integrating omics data from any experimental condition into constraint-based ME models. The constraints can be fine-grained (individual proteins) or coarse-grained (functionally-related protein groups) as demonstrated here. Furthermore, this flexible formalism provides an accessible approach for narrowing the gap between the complexity captured by omics data and governing principles of proteome allocation described by systems-level models.

  9. Numerical simulation of magmatic hydrothermal systems

    USGS Publications Warehouse

    Ingebritsen, S.E.; Geiger, S.; Hurwitz, S.; Driesner, T.

    2010-01-01

    The dynamic behavior of magmatic hydrothermal systems entails coupled and nonlinear multiphase flow, heat and solute transport, and deformation in highly heterogeneous media. Thus, quantitative analysis of these systems depends mainly on numerical solution of coupled partial differential equations and complementary equations of state (EOS). The past 2 decades have seen steady growth of computational power and the development of numerical models that have eliminated or minimized the need for various simplifying assumptions. Considerable heuristic insight has been gained from process-oriented numerical modeling. Recent modeling efforts employing relatively complete EOS and accurate transport calculations have revealed dynamic behavior that was damped by linearized, less accurate models, including fluid property control of hydrothermal plume temperatures and three-dimensional geometries. Other recent modeling results have further elucidated the controlling role of permeability structure and revealed the potential for significant hydrothermally driven deformation. Key areas for future research include incorporation of accurate EOS for the complete H2O-NaCl-CO2 system, more realistic treatment of material heterogeneity in space and time, realistic description of large-scale relative permeability behavior, and intercode benchmarking comparisons. Copyright 2010 by the American Geophysical Union.

  10. Numerical evidences of universal trap-like aging dynamics

    NASA Astrophysics Data System (ADS)

    Cammarota, Chiara; Marinari, Enzo

    2018-04-01

    Trap models have been initially proposed as toy models for dynamical relaxation in extremely simplified rough potential energy landscapes. Their importance has recently grown considerably thanks to the discovery that the trap-like aging mechanism directly controls the out-of-equilibrium relaxation processes of more sophisticated spin models, which are considered the solvable counterpart of real disordered systems. Further establishing the connection between these spin models' out-of-equilibrium behavior and the trap-like aging mechanism could shed new light on the still largely mysterious properties of the activated out-of-equilibrium dynamics of disordered systems. In this work we discuss numerical evidence, based on computations of permanence times, of an emergent trap-like aging behavior in a variety of very simple disordered models developed from the trap model paradigm. Our numerical results are backed by analytic derivations and heuristic discussions. This exploration reveals some of the tricks needed to expose the trap behavior in spite of the occurrence of secondary processes, the existence of dynamical correlations, and strong finite-size effects.
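
    The core trap-model mechanism is compact enough to state in code. In the sketch below (the standard Bouchaud trap model, not the specific models studied in the paper), exponentially distributed trap depths give power-law trapping times; below the glass temperature the sampled permanence times become dominated by the deepest trap visited, which is the signature of trap-like aging.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Bouchaud trap model sketch: trap depths E are exponential, so mean escape
    # times tau = exp(E / T) are power-law distributed with exponent x = T / Tg.
    def permanence_times(n_traps, T, Tg=1.0, n_steps=10_000):
        E = rng.exponential(scale=Tg, size=n_traps)       # random trap depths
        tau = np.exp(E / T)                                # mean escape times
        visited = rng.integers(0, n_traps, size=n_steps)   # uniform re-trapping
        return rng.exponential(tau[visited])               # sampled permanence times

    # Below the glass transition (T < Tg) the mean permanence time diverges and
    # the largest sampled time dominates the total, i.e. the dynamics ages.
    for T in (2.0, 0.5):
        t = permanence_times(1000, T)
        print(f"T={T}: max/sum of sampled permanence times = {t.max() / t.sum():.3f}")
    ```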

  11. Risk Assessment of Alzheimer's Disease using the Information Diffusion Model from Structural Magnetic Resonance Imaging.

    PubMed

    Beheshti, Iman; Olya, Hossain G T; Demirel, Hasan

    2016-04-05

    Recently, automatic risk assessment methods have been a target for the detection of Alzheimer's disease (AD) risk. This study aims to develop an automatic computer-aided AD diagnosis technique for risk assessment of AD using information diffusion theory. Information diffusion is a set-valued fuzzy-mathematics method used for risk assessment of natural phenomena, and it explicitly accommodates the fuzziness (uncertainty) and incompleteness of the data. Data were obtained from voxel-based morphometry analysis of structural magnetic resonance imaging. The information diffusion model results revealed that the risk of AD increases with a reduction of the normalized gray matter ratio (p > 0.5, normalized gray matter ratio <40%). The information diffusion model results were evaluated by calculating the correlation with two traditional risk assessments of AD, the Mini-Mental State Examination and the Clinical Dementia Rating. The correlation results revealed that the information diffusion model findings were in line with Mini-Mental State Examination and Clinical Dementia Rating results. Application of the information diffusion model contributes to the computerization of risk assessment of AD, which has a practical implication for the early detection of AD.

  12. Single-Photon Emission Computed Tomography/Computed Tomography Imaging in a Rabbit Model of Emphysema Reveals Ongoing Apoptosis In Vivo

    PubMed Central

    Goldklang, Monica P.; Tekabe, Yared; Zelonina, Tina; Trischler, Jordis; Xiao, Rui; Stearns, Kyle; Romanov, Alexander; Muzio, Valeria; Shiomi, Takayuki; Johnson, Lynne L.

    2016-01-01

    Evaluation of lung disease is limited by the inability to visualize ongoing pathological processes. Molecular imaging that targets cellular processes related to disease pathogenesis has the potential to assess disease activity over time to allow intervention before lung destruction. Because apoptosis is a critical component of lung damage in emphysema, a functional imaging approach was taken to determine if targeting apoptosis in a smoke exposure model would allow the quantification of early lung damage in vivo. Rabbits were exposed to cigarette smoke for 4 or 16 weeks and underwent single-photon emission computed tomography/computed tomography scanning using technetium-99m–rhAnnexin V-128. Imaging results were correlated with ex vivo tissue analysis to validate the presence of lung destruction and apoptosis. Lung computed tomography scans of long-term smoke–exposed rabbits exhibit anatomical similarities to human emphysema, with increased lung volumes compared with controls. Morphometry on lung tissue confirmed increased mean linear intercept and destructive index at 16 weeks of smoke exposure and compliance measurements documented physiological changes of emphysema. Tissue and lavage analysis displayed the hallmarks of smoke exposure, including increased tissue cellularity and protease activity. Technetium-99m–rhAnnexin V-128 single-photon emission computed tomography signal was increased after smoke exposure at 4 and 16 weeks, with confirmation of increased apoptosis through terminal deoxynucleotidyl transferase dUTP nick end labeling staining and increased tissue neutral sphingomyelinase activity in the tissue. These studies not only describe a novel emphysema model for use with future therapeutic applications, but, most importantly, also characterize a promising imaging modality that identifies ongoing destructive cellular processes within the lung. PMID:27483341

  13. Exploring similarities among many species distributions

    USGS Publications Warehouse

    Simmerman, Scott; Wang, Jingyuan; Osborne, James; Shook, Kimberly; Huang, Jian; Godsoe, William; Simons, Theodore R.

    2012-01-01

    Collecting species presence data and then building models to predict species distribution has been long practiced in the field of ecology for the purpose of improving our understanding of species relationships with each other and with the environment. Due to limitations of computing power as well as limited means of using modeling software on HPC facilities, past species distribution studies have been unable to fully explore diverse data sets. We build a system that can, for the first time to our knowledge, leverage HPC to support effective exploration of species similarities in distribution as well as their dependencies on common environmental conditions. Our system can also compute and reveal uncertainties in the modeling results enabling domain experts to make informed judgments about the data. Our work was motivated by and centered around data collection efforts within the Great Smoky Mountains National Park that date back to the 1940s. Our findings present new research opportunities in ecology and produce actionable field-work items for biodiversity management personnel to include in their planning of daily management activities.

  14. Advanced Modeling in Excel: from Water Jets to Big Bang

    NASA Astrophysics Data System (ADS)

    Ignatova, Olga; Chyzhyk, D.; Willis, C.; Kazachkov, A.

    2006-12-01

    An international students' project is presented, focused on the application of Open Office and Excel spreadsheets to the modeling of projectile-motion-type dynamical systems. Variation of the parameters of plotted and animated families of jets flowing at different angles out of holes in the wall of a water-filled reservoir [1,2] revealed unexpected peculiarities of the envelopes, vertices, intersections and landing points of the virtual trajectories. Comparisons with real-life systems and rigorous calculations were performed to confirm the predictions of the computer experiments. Using the same technique, the kinematics of fireworks was analyzed. On this basis a two-dimensional 'firework' computer model of the Big Bang was designed and studied, and its relevance and limitations checked. 1. R. Ehrlich, Turning the World Inside Out (Princeton University Press, Princeton, NJ, 1990), pp. 98-100. 2. A. Kazachkov, Yu. Bogdan, N. Makarovsky, N. Nedbailo, A Bucketful of Physics, in R. Pinto, S. Surinach (eds), International Conference Physics Teacher Education Beyond 2000. Selected Contributions (Elsevier Editions, Paris, 2001), pp. 563-564. Sponsored by Courtney Willis.
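
    The underlying kinematics is ordinary projectile motion, so the spreadsheet families of jets can be reproduced in a few lines. The sketch below (an illustrative efflux speed; no claim about the project's actual spreadsheets) generates the trajectory family for several launch angles and the classical envelope, the "safety parabola", that bounds all of them.

    ```python
    import numpy as np

    g, v = 9.81, 3.0   # gravity (m/s^2) and a common efflux speed (illustrative)
    x = np.linspace(0, 1.0, 200)

    # Family of jets launched at different angles, as plotted in the spreadsheets
    trajectories = {}
    for deg in range(15, 90, 15):
        a = np.radians(deg)
        trajectories[deg] = x * np.tan(a) - g * x**2 / (2 * v**2 * np.cos(a)**2)

    # Envelope of all fixed-speed trajectories: y = v^2/(2g) - g x^2 / (2 v^2)
    envelope = v**2 / (2 * g) - g * x**2 / (2 * v**2)
    assert all((y <= envelope + 1e-9).all() for y in trajectories.values())
    print("envelope apex height:", round(v**2 / (2 * g), 3), "m")
    ```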

  15. Promoter-enhancer interactions identified from Hi-C data using probabilistic models and hierarchical topological domains.

    PubMed

    Ron, Gil; Globerson, Yuval; Moran, Dror; Kaplan, Tommy

    2017-12-21

    Proximity-ligation methods such as Hi-C allow us to map physical DNA-DNA interactions along the genome, and reveal its organization into topologically associating domains (TADs). As Hi-C data accumulate, computational methods have been developed for identifying domain borders in multiple cell types and organisms. Here, we present PSYCHIC, a computational approach for analyzing Hi-C data and identifying promoter-enhancer interactions. We use a unified probabilistic model to segment the genome into domains, which we then merge hierarchically and fit using a local background model, allowing us to identify over-represented DNA-DNA interactions across the genome. By analyzing the published Hi-C data sets in human and mouse, we identify hundreds of thousands of putative enhancers and their target genes, and compile an extensive genome-wide catalog of gene regulation in human and mouse. As we show, our predictions are highly enriched for ChIP-seq and DNA accessibility data, evolutionary conservation, eQTLs and other DNA-DNA interaction data.
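
    The local-background idea can be illustrated with the simplest possible version: estimate the expected contact frequency at each genomic distance and flag pairs whose observed counts exceed it. The sketch below uses a synthetic contact map with one planted promoter-enhancer contact; the published method additionally fits domain-aware probabilistic backgrounds within the hierarchical TADs, which is not attempted here.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def expected_by_distance(hic):
        """Mean contact frequency at each genomic distance (background decay)."""
        n = hic.shape[0]
        return np.array([np.diagonal(hic, k).mean() for k in range(n)])

    def enrichment(hic):
        """Observed/expected ratio; over-represented pairs suggest interactions."""
        bg = expected_by_distance(hic)
        n = hic.shape[0]
        dist = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
        return hic / np.maximum(bg[dist], 1e-9)

    # Synthetic 50-bin contact map with distance decay and one planted
    # "promoter-enhancer" contact between bins 10 and 40
    n = 50
    dist = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
    hic = rng.poisson(100.0 / (1.0 + dist)).astype(float)
    hic[10, 40] += 60.0
    hic[40, 10] += 60.0
    print("planted pair enrichment:", enrichment(hic)[10, 40].round(2))
    ```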

  16. Determining the accuracy of maximum likelihood parameter estimates with colored residuals

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.; Klein, Vladislav

    1994-01-01

    An important part of building high fidelity mathematical models based on measured data is calculating the accuracy associated with statistical estimates of the model parameters. Indeed, without some idea of the accuracy of parameter estimates, the estimates themselves have limited value. In this work, an expression based on theoretical analysis was developed to properly compute parameter accuracy measures for maximum likelihood estimates with colored residuals. This result is important because experience from the analysis of measured data reveals that the residuals from maximum likelihood estimation are almost always colored. The calculations involved can be appended to conventional maximum likelihood estimation algorithms. Simulated data runs were used to show that the parameter accuracy measures computed with this technique accurately reflect the quality of the parameter estimates from maximum likelihood estimation without the need for analysis of the output residuals in the frequency domain or heuristically determined multiplication factors. The result is general, although the application studied here is maximum likelihood estimation of aerodynamic model parameters from flight test data.
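
    The essence of the correction is that the parameter covariance must use the estimated autocorrelation of the residuals rather than assume white noise. The sketch below demonstrates the idea on an ordinary least-squares problem with AR(1) residuals, using a sandwich-type estimate with a Bartlett-tapered empirical autocovariance; the paper derives the analogous expression for maximum likelihood estimates, which is not reproduced here.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Linear model y = X theta + e with colored AR(1) residuals (synthetic data)
    n = 500
    X = np.column_stack([np.ones(n), np.linspace(0, 10, n)])
    e = np.zeros(n)
    for k in range(1, n):
        e[k] = 0.9 * e[k - 1] + rng.normal(scale=0.3)
    y = X @ np.array([1.0, 2.0]) + e

    theta = np.linalg.solve(X.T @ X, X.T @ y)
    r = y - X @ theta
    XtX_inv = np.linalg.inv(X.T @ X)

    # Naive covariance assumes white residuals and understates the uncertainty
    naive = XtX_inv * (r @ r / (n - 2))

    # Colored-residual covariance: build R from the empirical autocovariance,
    # truncated and tapered (Bartlett weights keep the estimate positive)
    L = 50
    acov = np.zeros(n)
    for lag in range(L + 1):
        acov[lag] = (1 - lag / (L + 1)) * (r[:n - lag] @ r[lag:]) / n
    R = acov[np.abs(np.subtract.outer(np.arange(n), np.arange(n)))]
    corrected = XtX_inv @ X.T @ R @ X @ XtX_inv

    print("naive std errors:    ", np.sqrt(np.diag(naive)))
    print("corrected std errors:", np.sqrt(np.diag(corrected)))
    ```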

  17. A mass weighted chemical elastic network model elucidates closed form domain motions in proteins

    PubMed Central

    Kim, Min Hyeok; Seo, Sangjae; Jeong, Jay Il; Kim, Bum Joon; Liu, Wing Kam; Lim, Byeong Soo; Choi, Jae Boong; Kim, Moon Ki

    2013-01-01

    An elastic network model (ENM), usually a Cα coarse-grained one, has been widely used to study protein dynamics as an alternative to classical molecular dynamics simulation. This simple approach dramatically reduces the computational cost, but sometimes fails to describe a feasible conformational change due to unrealistically excessive spring connections. To overcome this limitation, we propose a mass-weighted chemical elastic network model (MWCENM) in which the total mass of each residue is assumed to be concentrated on the representative alpha carbon atom and various stiffness values are precisely assigned according to the types of chemical interactions. We test MWCENM on several well-known proteins of which both closed and open conformations are available, as well as on three α-helix-rich proteins. Their normal mode analysis reveals that MWCENM not only generates more plausible conformational changes, especially for closed forms of proteins, but also preserves protein secondary structures, thus distinguishing MWCENM from traditional ENMs. In addition, MWCENM also reduces the computational burden by using a sparser stiffness matrix. PMID:23456820
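
    For readers unfamiliar with ENM mechanics, the computation reduces to assembling a Hessian of pairwise springs, mass-weighting it, and diagonalizing. The sketch below is a generic anisotropic network model with residue masses lumped on the Cα atoms; MWCENM would additionally assign the spring stiffnesses by chemical interaction type, which is not attempted in this toy version.

    ```python
    import numpy as np

    def anm_modes(coords, masses, cutoff=8.0, k=1.0):
        """Mass-weighted normal modes of a Calpha elastic network (ANM-style).

        A uniform spring constant k is used here; MWCENM instead assigns
        stiffnesses by chemical interaction type."""
        n = len(coords)
        H = np.zeros((3 * n, 3 * n))
        for i in range(n):
            for j in range(i + 1, n):
                d = coords[j] - coords[i]
                r2 = d @ d
                if r2 > cutoff ** 2:
                    continue
                block = -k * np.outer(d, d) / r2
                H[3*i:3*i+3, 3*j:3*j+3] += block
                H[3*j:3*j+3, 3*i:3*i+3] += block
                H[3*i:3*i+3, 3*i:3*i+3] -= block
                H[3*j:3*j+3, 3*j:3*j+3] -= block
        m = np.repeat(masses, 3)
        Hmw = H / np.sqrt(np.outer(m, m))   # mass-weighted Hessian
        w2, v = np.linalg.eigh(Hmw)
        return w2, v                        # 6 near-zero rigid-body modes first

    # Toy 5-residue "protein" with residue masses lumped on the Calpha atoms
    coords = np.array([[0, 0, 0], [3.8, 0, 0], [7.6, 1, 0],
                       [11.4, 1, 1], [15.2, 0, 1]], dtype=float)
    masses = np.array([71.0, 99.0, 114.0, 57.0, 128.0])  # residue masses (Da)
    w2, _ = anm_modes(coords, masses)
    print("lowest non-rigid mode frequency^2:", w2[6])
    ```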

  18. Large eddy simulation of fine water sprays: comparative analysis of two models and computer codes

    NASA Astrophysics Data System (ADS)

    Tsoy, A. S.; Snegirev, A. Yu.

    2015-09-01

    The FDS model and computer code, albeit widely used in engineering practice to predict fire development, are not sufficiently validated for fire suppression by fine water sprays. In this work, the effect of the numerical resolution of the large-scale turbulent pulsations on the accuracy of predicted time-averaged spray parameters is evaluated. Comparison of the simulation results obtained with the two versions of the model and code, as well as of the predicted and measured radial distributions of the liquid flow rate, revealed the need to apply monotonic and yet sufficiently accurate discrete approximations of the convective terms. Failure to do so delays jet break-up, otherwise induced by large turbulent eddies, and thereby excessively focuses the predicted flow around its axis. The effect of the pressure drop in the spray nozzle is also examined; its increase was shown to cause only a weak increase of the evaporated fraction and vapor concentration despite the significant increase of flow velocity.

  19. A stochastic reaction-diffusion model for protein aggregation on DNA

    NASA Astrophysics Data System (ADS)

    Voulgarakis, Nikolaos K.

    Vital functions of DNA, such as transcription and packaging, depend on the proper clustering of proteins on the double strand. The present study investigates how the interplay between DNA allostery and electrostatic interactions affects protein clustering. The statistical analysis of a simple but transparent computational model reveals two major consequences of this interplay. First, depending on the protein and salt concentration, protein filaments exhibit a bimodal DNA stiffening and softening behavior. Second, within a certain domain of the control parameters, electrostatic interactions can cause energetic frustration that forces proteins to assemble in rigid spiral configurations. Such spiral filaments might trigger both positive and negative supercoiling, which can ultimately promote gene compaction and regulate the promoter. It has been experimentally shown that bacterial histone-like proteins assemble in similar spiral patterns and/or exhibit the same bimodal behavior. The proposed model can, thus, provide computational insights into the physical mechanisms used by proteins to control the mechanical properties of the DNA.

  20. A modeling study of the time-averaged electric currents in the vicinity of isolated thunderstorms

    NASA Technical Reports Server (NTRS)

    Driscoll, Kevin T.; Blakeslee, Richard J.; Baginski, Michael E.

    1992-01-01

    A thorough examination of the results of a time-dependent computer model of a dipole thunderstorm revealed that there are numerous similarities between the time-averaged electrical properties and the steady-state properties of an active thunderstorm. Thus, the electrical behavior of the atmosphere in the vicinity of a thunderstorm can be determined with a formulation similar to what was first described by Holzer and Saxon (1952). From the Maxwell continuity equation of electric current, a simple analytical equation was derived that expresses a thunderstorm's average current contribution to the global electric circuit in terms of the generator current within the thundercloud, the intracloud lightning current, the cloud-to-ground lightning current, the altitudes of the charge centers, and the conductivity profile of the atmosphere. This equation was found to be nearly as accurate as the more computationally expensive numerical model, even when it is applied to a thunderstorm with a reduced conductivity thundercloud, a time-varying generator current, a varying flash rate, and a changing lightning mix.

  1. The Structure and Properties of Silica Glass Nanostructures using Novel Computational Systems

    NASA Astrophysics Data System (ADS)

    Doblack, Benjamin N.

    The structure and properties of silica glass nanostructures are examined using computational methods in this work. Standard synthesis methods of silica and its associated material properties are first discussed in brief. A review of prior experiments on this amorphous material is also presented. Background and methodology for the simulation of mechanical tests on amorphous bulk silica and nanostructures are later presented. A new computational system for the accurate and fast simulation of silica glass is also presented, using an appropriate interatomic potential for this material within the open-source molecular dynamics computer program LAMMPS. This alternative computational method uses modern graphics processors, Nvidia CUDA technology and specialized scientific codes to overcome processing speed barriers common to traditional computing methods. In conjunction with a virtual reality system used to model select materials, this enhancement allows the addition of accelerated molecular dynamics simulation capability. The motivation is to provide a novel research environment which simultaneously allows visualization, simulation, modeling and analysis. The research goal of this project is to investigate the structure and size-dependent mechanical properties of silica glass nanohelical structures under tensile MD conditions using the innovative computational system. Specifically, silica nanoribbons and nanosprings are evaluated, revealing unique size-dependent elastic moduli when compared to the bulk material. For the nanoribbons, the tensile behavior differed widely between the models simulated, with distinct characteristic extended elastic regions. In the case of the nanosprings simulated, clearer trends are observed. In particular, larger nanospring wire cross-sectional radii (r) lead to larger Young's moduli, while larger helical diameters (2R) resulted in smaller Young's moduli. Structural transformations and theoretical models are also analyzed to identify possible factors which might affect the mechanical response of silica nanostructures under tension. The work presented outlines an innovative simulation methodology, and discusses how results can be validated against prior experimental and simulation findings. The ultimate goal is to develop new computational methods for the study of nanostructures which will make the field of materials science more accessible, cost effective and efficient.

  2. Lognormal Approximations of Fault Tree Uncertainty Distributions.

    PubMed

    El-Shanawany, Ashraf Ben; Ardron, Keith H; Walker, Simon P

    2018-01-26

    Fault trees are used in reliability modeling to create logical models of fault combinations that can lead to undesirable events. The output of a fault tree analysis (the top event probability) is expressed in terms of the failure probabilities of basic events that are input to the model. Typically, the basic event probabilities are not known exactly, but are modeled as probability distributions: therefore, the top event probability is also represented as an uncertainty distribution. Monte Carlo methods are generally used for evaluating the uncertainty distribution, but such calculations are computationally intensive and do not readily reveal the dominant contributors to the uncertainty. In this article, a closed-form approximation for the fault tree top event uncertainty distribution is developed, which is applicable when the uncertainties in the basic events of the model are lognormally distributed. The results of the approximate method are compared with results from two sampling-based methods: namely, the Monte Carlo method and the Wilks method based on order statistics. It is shown that the closed-form expression can provide a reasonable approximation to results obtained by Monte Carlo sampling, without incurring the computational expense. The Wilks method is found to be a useful means of providing an upper bound for the percentiles of the uncertainty distribution while being computationally inexpensive compared with full Monte Carlo sampling. The lognormal approximation method and Wilks's method appear attractive, practical alternatives for the evaluation of uncertainty in the output of fault trees and similar multilinear models. © 2018 Society for Risk Analysis.
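
    The three routes compared in the article can be mimicked with standard ingredients (the paper's own closed-form expression is not reproduced; a textbook moment-matching lognormal stands in for it). For a rare-event OR gate the top-event probability is approximately the sum of the lognormal basic-event probabilities, which the sketch approximates in closed form, checks by Monte Carlo, and bounds with the Wilks order-statistics rule (59 runs suffice for a one-sided 95/95 statement).

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Three basic events with lognormal uncertainties (median, error factor EF)
    medians = np.array([1e-3, 5e-4, 2e-3])
    EF = np.array([3.0, 5.0, 3.0])
    sigma = np.log(EF) / 1.645          # EF = 95th-to-50th percentile ratio
    mu = np.log(medians)

    # Rare-event OR gate: top event ~ sum of basic event probabilities.
    # Closed-form route: moment-match a lognormal to the sum (Fenton-Wilkinson).
    m1 = np.sum(np.exp(mu + sigma**2 / 2))                          # mean
    var = np.sum((np.exp(sigma**2) - 1) * np.exp(2*mu + sigma**2))  # variance
    s2 = np.log(1 + var / m1**2)
    m = np.log(m1) - s2 / 2
    p95_closed = np.exp(m + 1.645 * np.sqrt(s2))

    # Monte Carlo reference
    samples = np.exp(mu + sigma * rng.normal(size=(100_000, 3))).sum(axis=1)
    p95_mc = np.percentile(samples, 95)

    # Wilks one-sided 95/95 bound: with 59 runs, the sample maximum is a
    # 95%-confidence upper bound on the 95th percentile.
    p95_wilks = np.exp(mu + sigma * rng.normal(size=(59, 3))).sum(axis=1).max()

    print(f"closed-form 95th:  {p95_closed:.2e}")
    print(f"Monte Carlo 95th:  {p95_mc:.2e}")
    print(f"Wilks 95/95 bound: {p95_wilks:.2e}")
    ```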

  3. A computational analysis of the long-term regulation of arterial pressure

    PubMed Central

    Beard, Daniel A.

    2013-01-01

    The asserted dominant role of the kidneys in the chronic regulation of blood pressure and in the etiology of hypertension has been debated since the 1970s. At the center of the theory is the observation that the acute relationships between arterial pressure and urine production—the acute pressure-diuresis and pressure-natriuresis curves—physiologically adapt to perturbations in pressure and/or changes in the rate of salt and volume intake. These adaptations, modulated by various interacting neurohumoral mechanisms, result in chronic relationships between water and salt excretion and pressure that are much steeper than the acute relationships. While the view that renal function is the dominant controller of arterial pressure has been supported by computer models of the cardiovascular system known as the “Guyton-Coleman model”, no unambiguous description of a computer model capturing chronic adaptation of acute renal function in blood pressure control has been presented. Here, such a model is developed with the goals of: 1. representing the relevant mechanisms in an identifiable mathematical model; 2. identifying model parameters using appropriate data; 3. validating model predictions in comparison to data; and 4. probing hypotheses regarding the long-term control of arterial pressure and the etiology of primary hypertension. The developed model reveals: long-term control of arterial blood pressure is primarily through the baroreflex arc and the renin-angiotensin system; and arterial stiffening provides a sufficient explanation for the etiology of primary hypertension associated with ageing. Furthermore, the model provides the first consistent explanation of the physiological response to chronic stimulation of the baroreflex. PMID:24555102

  4. Investigation of the relative effects of vascular branching structure and gravity on pulmonary arterial blood flow heterogeneity via an image-based computational model.

    PubMed

    Burrowes, Kelly S; Hunter, Peter J; Tawhai, Merryn H

    2005-11-01

    A computational model of blood flow through the human pulmonary arterial tree has been developed to investigate the relative influence of branching structure and gravity on blood flow distribution in the human lung. Geometric models of the largest arterial vessels and lobar boundaries were first derived using multidetector row x-ray computed tomography (MDCT) scans. Further accompanying arterial vessels were generated from the MDCT vessel endpoints into the lobar volumes using a volume-filling branching algorithm. Equations governing the conservation of mass and momentum were solved within the geometric model to calculate pressure, velocity, and vessel radius. Blood flow results in the anatomically based model, with and without gravity, and in a symmetric geometric model were compared to investigate their relative contributions to blood flow heterogeneity. Results showed a persistent blood flow gradient and flow heterogeneity in the absence of gravitational forces in the anatomically based model. Comparison with flow results in the symmetric model revealed that the asymmetric vascular branching structure was largely responsible for producing this heterogeneity. Analysis of average results in varying slice thicknesses illustrated a clear flow gradient because of gravity in "lower resolution" data (thicker slices), but on examination of higher resolution data, a trend was less obvious. Results suggest that although gravity does influence flow distribution, the influence of the tree branching structure is also a dominant factor. These results are consistent with high-resolution experimental studies that have demonstrated gravity to be only a minor determinant of blood flow distribution.

  5. Hydropower Optimization Using Artificial Neural Network Surrogate Models of a High-Fidelity Hydrodynamics and Water Quality Model

    NASA Astrophysics Data System (ADS)

    Shaw, Amelia R.; Smith Sawyer, Heather; LeBoeuf, Eugene J.; McDonald, Mark P.; Hadjerioua, Boualem

    2017-11-01

    Hydropower operations optimization subject to environmental constraints is limited by challenges associated with dimensionality and spatial and temporal resolution. The need for high-fidelity hydrodynamic and water quality models within optimization schemes is driven by improved computational capabilities, increased requirements to meet specific points of compliance with greater resolution, and the need to optimize operations of not just single reservoirs but systems of reservoirs. This study describes an important advancement for computing hourly power generation schemes for a hydropower reservoir using high-fidelity models, surrogate modeling techniques, and optimization methods. The predictive power of the high-fidelity hydrodynamic and water quality model CE-QUAL-W2 is successfully emulated by an artificial neural network, then integrated into a genetic algorithm optimization approach to maximize hydropower generation subject to constraints on dam operations and water quality. This methodology is applied to a multipurpose reservoir near Nashville, Tennessee, USA. The model successfully reproduced high-fidelity reservoir information while enabling 6.8% and 6.6% increases in hydropower production value relative to actual operations for dissolved oxygen (DO) limits of 5 and 6 mg/L, respectively, while witnessing an expected decrease in power generation at more restrictive DO constraints. Exploration of simultaneous temperature and DO constraints revealed capability to address multiple water quality constraints at specified locations. The reduced computational requirements of the new modeling approach demonstrated an ability to provide decision support for reservoir operations scheduling while maintaining high-fidelity hydrodynamic and water quality information as part of the optimization decision support routines.
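
    The surrogate-in-the-loop pattern is straightforward to sketch. Below, a toy one-variable stand-in replaces CE-QUAL-W2, a small MLP is fit to its outputs, and a simplified mutation-only genetic search maximizes release subject to a DO limit evaluated on the surrogate; all functions, limits, and settings are illustrative rather than the study's.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)

    # Stand-in for the high-fidelity model: DO (mg/L) as an unknown function of
    # hourly release fraction x (the real surrogate is trained on CE-QUAL-W2 runs)
    def high_fidelity_do(x):
        return 8.0 - 4.0 * x + 0.5 * np.sin(6 * x)

    X = rng.uniform(0, 1, size=(400, 1))
    surrogate = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000,
                             random_state=0).fit(X, high_fidelity_do(X).ravel())

    # Mutation-only genetic search: maximize generation (proportional to release
    # x) subject to a DO constraint evaluated on the cheap surrogate
    def fitness(x, do_limit=5.0):
        do = surrogate.predict(x.reshape(-1, 1))
        return np.where(do >= do_limit, x, x - 10.0)   # penalize violations

    pop = rng.uniform(0, 1, size=60)
    for _ in range(40):
        parents = pop[np.argsort(fitness(pop))[-30:]]          # selection
        children = parents[rng.integers(0, 30, 60)] \
            + rng.normal(scale=0.05, size=60)                  # mutation
        pop = np.clip(children, 0, 1)
    print("best release fraction:", pop[np.argmax(fitness(pop))].round(3))
    ```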

  6. Hydropower Optimization Using Artificial Neural Network Surrogate Models of a High-Fidelity Hydrodynamics and Water Quality Model

    DOE PAGES

    Shaw, Amelia R.; Sawyer, Heather Smith; LeBoeuf, Eugene J.; ...

    2017-10-24

    Hydropower operations optimization subject to environmental constraints is limited by challenges associated with dimensionality and spatial and temporal resolution. The need for high-fidelity hydrodynamic and water quality models within optimization schemes is driven by improved computational capabilities, increased requirements to meet specific points of compliance with greater resolution, and the need to optimize operations of not just single reservoirs but systems of reservoirs. This study describes an important advancement for computing hourly power generation schemes for a hydropower reservoir using high-fidelity models, surrogate modeling techniques, and optimization methods. The predictive power of the high-fidelity hydrodynamic and water quality model CE-QUAL-W2 is successfully emulated by an artificial neural network, then integrated into a genetic algorithm optimization approach to maximize hydropower generation subject to constraints on dam operations and water quality. This methodology is applied to a multipurpose reservoir near Nashville, Tennessee, USA. The model successfully reproduced high-fidelity reservoir information while enabling 6.8% and 6.6% increases in hydropower production value relative to actual operations for dissolved oxygen (DO) limits of 5 and 6 mg/L, respectively, while witnessing an expected decrease in power generation at more restrictive DO constraints. Exploration of simultaneous temperature and DO constraints revealed capability to address multiple water quality constraints at specified locations. Here, the reduced computational requirements of the new modeling approach demonstrated an ability to provide decision support for reservoir operations scheduling while maintaining high-fidelity hydrodynamic and water quality information as part of the optimization decision support routines.

  7. Hydropower Optimization Using Artificial Neural Network Surrogate Models of a High-Fidelity Hydrodynamics and Water Quality Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shaw, Amelia R.; Sawyer, Heather Smith; LeBoeuf, Eugene J.

    Hydropower operations optimization subject to environmental constraints is limited by challenges associated with dimensionality and spatial and temporal resolution. The need for high-fidelity hydrodynamic and water quality models within optimization schemes is driven by improved computational capabilities, increased requirements to meet specific points of compliance with greater resolution, and the need to optimize operations of not just single reservoirs but systems of reservoirs. This study describes an important advancement for computing hourly power generation schemes for a hydropower reservoir using high-fidelity models, surrogate modeling techniques, and optimization methods. The predictive power of the high-fidelity hydrodynamic and water quality model CE-QUAL-W2 is successfully emulated by an artificial neural network, then integrated into a genetic algorithm optimization approach to maximize hydropower generation subject to constraints on dam operations and water quality. This methodology is applied to a multipurpose reservoir near Nashville, Tennessee, USA. The model successfully reproduced high-fidelity reservoir information while enabling 6.8% and 6.6% increases in hydropower production value relative to actual operations for dissolved oxygen (DO) limits of 5 and 6 mg/L, respectively, while witnessing an expected decrease in power generation at more restrictive DO constraints. Exploration of simultaneous temperature and DO constraints revealed capability to address multiple water quality constraints at specified locations. Here, the reduced computational requirements of the new modeling approach demonstrated an ability to provide decision support for reservoir operations scheduling while maintaining high-fidelity hydrodynamic and water quality information as part of the optimization decision support routines.

  8. Topics in Modeling of Cochlear Dynamics: Computation, Response and Stability Analysis

    NASA Astrophysics Data System (ADS)

    Filo, Maurice G.

    This thesis touches upon several topics in cochlear modeling. Throughout the literature, mathematical models of the cochlea vary according to the degree of biological realism to be incorporated. This thesis casts the cochlear model as a continuous space-time dynamical system using operator language. This framework encompasses a wider class of cochlear models and makes the dynamics more transparent and easier to analyze before applying any numerical method to discretize space. In fact, several numerical methods are investigated to study the computational efficiency of the finite-dimensional realizations in space. Furthermore, we study the effects of active gain perturbations on the stability of the linearized dynamics. The stability analysis is used to explain possible mechanisms underlying spontaneous otoacoustic emissions and tinnitus. Dynamic Mode Decomposition (DMD) is introduced as a useful tool to analyze the response of nonlinear cochlear models. Cochlear response features are illustrated using DMD, which has the advantage of explicitly revealing the spatial modes of vibration occurring in the Basilar Membrane (BM). Finally, we address the dynamic estimation problem of BM vibrations using Extended Kalman Filters (EKF). Given the limitations of noninvasive sensing schemes, such algorithms are indispensable for estimating the dynamic behavior of a living cochlea.
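
    Of the tools named above, DMD is the most self-contained to demonstrate. The sketch below implements exact DMD on synthetic traveling-wave "basilar membrane" snapshots (illustrative data; the thesis applies it to cochlear model responses) and recovers the wave's temporal frequency from the DMD eigenvalues.

    ```python
    import numpy as np

    def dmd(X, Xp, r):
        """Exact DMD: fit Xp ~ A X and return eigenpairs of the rank-r A."""
        U, s, Vh = np.linalg.svd(X, full_matrices=False)
        U, s, Vh = U[:, :r], s[:r], Vh[:r]
        Atilde = U.conj().T @ Xp @ Vh.conj().T / s
        eigvals, W = np.linalg.eig(Atilde)
        modes = (Xp @ Vh.conj().T / s) @ W / eigvals
        return eigvals, modes

    # Synthetic snapshots: a traveling wave whose envelope peaks at a
    # characteristic place along the membrane coordinate x (rank-2 data)
    x = np.linspace(0, 1, 200)[:, None]
    t = np.arange(100)[None, :]
    data = np.exp(-40 * (x - 0.6) ** 2) * np.cos(12 * x - 0.3 * t)
    eigvals, modes = dmd(data[:, :-1], data[:, 1:], r=2)
    print("recovered frequency (rad/step):", np.abs(np.angle(eigvals)).round(3))
    ```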

  9. Space coding for sensorimotor transformations can emerge through unsupervised learning.

    PubMed

    De Filippo De Grazia, Michele; Cutini, Simone; Lisi, Matteo; Zorzi, Marco

    2012-08-01

    The posterior parietal cortex (PPC) is fundamental for sensorimotor transformations because it combines multiple sensory inputs and posture signals into different spatial reference frames that drive motor programming. Here, we present a computational model mimicking the sensorimotor transformations occurring in the PPC. A recurrent neural network with one layer of hidden neurons (restricted Boltzmann machine) learned a stochastic generative model of the sensory data without supervision. After the unsupervised learning phase, the activity of the hidden neurons was used to compute a motor program (a population code on a bidimensional map) through a simple linear projection and delta rule learning. The average motor error, calculated as the difference between the expected and the computed output, was less than 3°. Importantly, analyses of the hidden neurons revealed gain-modulated visual receptive fields, thereby showing that space coding for sensorimotor transformations similar to that observed in the PPC can emerge through unsupervised learning. These results suggest that gain modulation is an efficient coding strategy to integrate visual and postural information toward the generation of motor commands.
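
    A stripped-down version of the two training phases looks as follows, with binary random inputs in place of the retinal and eye-position signals, biases omitted, and a plain linear readout instead of the paper's map-like motor output; all sizes and rates are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    n_vis, n_hid, n_out = 20, 40, 2
    W = 0.01 * rng.normal(size=(n_vis, n_hid))
    data = (rng.uniform(size=(500, n_vis)) < 0.2).astype(float)

    # Unsupervised phase: restricted Boltzmann machine trained with one-step
    # contrastive divergence (CD-1); biases omitted for brevity
    for epoch in range(200):
        h0 = sigmoid(data @ W)
        h_samp = (rng.uniform(size=h0.shape) < h0).astype(float)
        v1 = sigmoid(h_samp @ W.T)
        h1 = sigmoid(v1 @ W)
        W += 0.05 * (data.T @ h0 - v1.T @ h1) / len(data)

    # Supervised phase: linear readout of hidden activities via the delta rule
    targets = data[:, :n_out]          # stand-in motor targets (illustrative)
    R = 0.01 * rng.normal(size=(n_hid, n_out))
    H = sigmoid(data @ W)
    for epoch in range(500):
        err = targets - H @ R
        R += 0.01 * H.T @ err / len(data)
    print("mean absolute readout error:", np.abs(targets - H @ R).mean().round(3))
    ```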

  10. Probabilistic Inference: Task Dependency and Individual Differences of Probability Weighting Revealed by Hierarchical Bayesian Modeling

    PubMed Central

    Boos, Moritz; Seer, Caroline; Lange, Florian; Kopp, Bruno

    2016-01-01

    Cognitive determinants of probabilistic inference were examined using hierarchical Bayesian modeling techniques. A classic urn-ball paradigm served as experimental strategy, involving a factorial two (prior probabilities) by two (likelihoods) design. Five computational models of cognitive processes were compared with the observed behavior. Parameter-free Bayesian posterior probabilities and parameter-free base rate neglect provided inadequate models of probabilistic inference. The introduction of distorted subjective probabilities yielded more robust and generalizable results. A general class of (inverted) S-shaped probability weighting functions had been proposed; however, the possibility of large differences in probability distortions not only across experimental conditions, but also across individuals, seems critical for the model's success. It also seems advantageous to consider individual differences in parameters of probability weighting as being sampled from weakly informative prior distributions of individual parameter values. Thus, the results from hierarchical Bayesian modeling converge with previous results in revealing that probability weighting parameters show considerable task dependency and individual differences. Methodologically, this work exemplifies the usefulness of hierarchical Bayesian modeling techniques for cognitive psychology. Theoretically, human probabilistic inference might be best described as the application of individualized strategic policies for Bayesian belief revision. PMID:27303323
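
    The probability weighting at the heart of these models is one line of math. The sketch below uses the Prelec form, one common member of the (inverted) S-shaped class, and shows how a single distortion parameter changes the posterior in a two-urn problem; in the hierarchical Bayesian treatment this parameter would vary by participant and condition under a weakly informative prior. Applying the weighting to both prior and likelihood before renormalizing is one simple modeling choice, not necessarily the paper's.

    ```python
    import numpy as np

    def prelec(p, gamma=0.6, delta=1.0):
        """(Inverted) S-shaped weighting: w(p) = exp(-delta * (-ln p)**gamma).
        gamma < 1 overweights small and underweights large probabilities."""
        p = np.clip(p, 1e-12, 1.0)
        return np.exp(-delta * (-np.log(p)) ** gamma)

    # Urn-ball inference with distorted probabilities: two urns, prior 0.7/0.3,
    # likelihood 0.8 of the observed ball color under urn 1 and 0.2 under urn 2
    prior, lik = np.array([0.7, 0.3]), np.array([0.8, 0.2])
    for gamma in (1.0, 0.6):            # gamma = 1 recovers undistorted Bayes
        post = prelec(prior, gamma) * prelec(lik, gamma)
        post /= post.sum()
        print(f"gamma={gamma}: posterior for urn 1 = {post[0]:.3f}")
    ```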

  11. Model-based hierarchical reinforcement learning and human action control

    PubMed Central

    Botvinick, Matthew; Weinstein, Ari

    2014-01-01

    Recent work has reawakened interest in goal-directed or ‘model-based’ choice, where decisions are based on prospective evaluation of potential action outcomes. Concurrently, there has been growing attention to the role of hierarchy in decision-making and action control. We focus here on the intersection between these two areas of interest, considering the topic of hierarchical model-based control. To characterize this form of action control, we draw on the computational framework of hierarchical reinforcement learning, using this to interpret recent empirical findings. The resulting picture reveals how hierarchical model-based mechanisms might play a special and pivotal role in human decision-making, dramatically extending the scope and complexity of human behaviour. PMID:25267822

  12. New generation of elastic network models.

    PubMed

    López-Blanco, José Ramón; Chacón, Pablo

    2016-04-01

    The intrinsic flexibility of proteins and nucleic acids can be grasped from remarkably simple mechanical models of particles connected by springs. In recent decades, Elastic Network Models (ENMs) combined with Normal Mode Analysis widely confirmed their ability to predict biologically relevant motions of biomolecules and soon became a popular methodology to reveal large-scale dynamics in multiple structural biology scenarios. The simplicity, robustness, low computational cost, and relatively high accuracy are the reasons behind the success of ENMs. This review focuses on recent advances in the development and application of ENMs, paying particular attention to combinations with experimental data. Successful application scenarios include large macromolecular machines, structural refinement, docking, and evolutionary conservation. Copyright © 2015 Elsevier Ltd. All rights reserved.

  13. Constructing Neuronal Network Models in Massively Parallel Environments.

    PubMed

    Ippen, Tammo; Eppler, Jochen M; Plesser, Hans E; Diesmann, Markus

    2017-01-01

    Recent advances in the development of data structures to represent spiking neuron network models enable us to exploit the complete memory of petascale computers for a single brain-scale network simulation. In this work, we investigate how well we can exploit the computing power of such supercomputers for the creation of neuronal networks. Using an established benchmark, we divide the runtime of simulation code into the phase of network construction and the phase during which the dynamical state is advanced in time. We find that on multi-core compute nodes network creation scales well with process-parallel code but exhibits a prohibitively large memory consumption. Thread-parallel network creation, in contrast, exhibits speedup only up to a small number of threads but has little overhead in terms of memory. We further observe that the algorithms creating instances of model neurons and their connections scale well for networks of ten thousand neurons, but do not show the same speedup for networks of millions of neurons. Our work uncovers that the lack of scaling of thread-parallel network creation is due to inadequate memory allocation strategies and demonstrates that thread-optimized memory allocators recover excellent scaling. An analysis of the loop order used for network construction reveals that more complex tests on the locality of operations significantly improve scaling and reduce runtime by allowing construction algorithms to step through large networks more efficiently than in existing code. The combination of these techniques increases performance by an order of magnitude and harnesses the increasingly parallel compute power of the compute nodes in high-performance clusters and supercomputers.

  14. Constructing Neuronal Network Models in Massively Parallel Environments

    PubMed Central

    Ippen, Tammo; Eppler, Jochen M.; Plesser, Hans E.; Diesmann, Markus

    2017-01-01

    Recent advances in the development of data structures to represent spiking neuron network models enable us to exploit the complete memory of petascale computers for a single brain-scale network simulation. In this work, we investigate how well we can exploit the computing power of such supercomputers for the creation of neuronal networks. Using an established benchmark, we divide the runtime of simulation code into the phase of network construction and the phase during which the dynamical state is advanced in time. We find that on multi-core compute nodes network creation scales well with process-parallel code but exhibits a prohibitively large memory consumption. Thread-parallel network creation, in contrast, exhibits speedup only up to a small number of threads but has little overhead in terms of memory. We further observe that the algorithms creating instances of model neurons and their connections scale well for networks of ten thousand neurons, but do not show the same speedup for networks of millions of neurons. Our work uncovers that the lack of scaling of thread-parallel network creation is due to inadequate memory allocation strategies and demonstrates that thread-optimized memory allocators recover excellent scaling. An analysis of the loop order used for network construction reveals that more complex tests on the locality of operations significantly improve scaling and reduce runtime by allowing construction algorithms to step through large networks more efficiently than in existing code. The combination of these techniques increases performance by an order of magnitude and harnesses the increasingly parallel compute power of the compute nodes in high-performance clusters and supercomputers. PMID:28559808

  15. Computational modeling and simulation of spall fracture in polycrystalline solids by an atomistic-based interfacial zone model

    PubMed Central

    Lin, Liqiang; Zeng, Xiaowei

    2015-01-01

    The focus of this work is to investigate spall fracture in polycrystalline materials under high-speed impact loading by using an atomistic-based interfacial zone model. We illustrate that for polycrystalline materials, increases in the potential energy ratio between grain boundaries and grains could cause a fracture transition from intergranular to transgranular mode. We also found out that the spall strength increases when there is a fracture transition from intergranular to transgranular. In addition, analysis of grain size, crystal lattice orientation and impact speed reveals that the spall strength increases as grain size or impact speed increases. PMID:26435546

  16. Multiscaling Edge Effects in an Agent-based Money Emergence Model

    NASA Astrophysics Data System (ADS)

    Oświęcimka, P.; Drożdż, S.; Gębarowski, R.; Górski, A. Z.; Kwapień, J.

    An agent-based computational toy model of economics for the emergence of money from initial barter trading, inspired by Menger's postulate that money can spontaneously emerge in a commodity exchange economy, is extensively studied. The model considered, while manageable, is nevertheless significantly complex. It is already able to reveal phenomena that can be interpreted as the emergence and collapse of money, as well as the related competition effects. In particular, it is shown that, as an extra emergent effect, the money lifetimes near the critical threshold value develop multiscaling, which allows one to draw parallels to critical phenomena and, thus, to the real financial markets.

  17. Integration of tools for the Design and Assessment of High-Performance, Highly Reliable Computing Systems (DAHPHRS), phase 1

    NASA Technical Reports Server (NTRS)

    Scheper, C.; Baker, R.; Frank, G.; Yalamanchili, S.; Gray, G.

    1992-01-01

    Systems for Space Defense Initiative (SDI) space applications typically require both high performance and very high reliability. These requirements present the systems engineer evaluating such systems with the extremely difficult problem of conducting performance and reliability trade-offs over large design spaces. A controlled development process supported by appropriate automated tools must be used to assure that the system will meet design objectives. This report describes an investigation of methods, tools, and techniques necessary to support performance and reliability modeling for SDI systems development. Models of the JPL Hypercubes, the Encore Multimax, and the C.S. Draper Lab Fault-Tolerant Parallel Processor (FTPP) parallel-computing architectures using candidate SDI weapons-to-target assignment algorithms as workloads were built and analyzed as a means of identifying the necessary system models, how the models interact, and what experiments and analyses should be performed. As a result of this effort, weaknesses in the existing methods and tools were revealed and capabilities that will be required for both individual tools and an integrated toolset were identified.

  18. Biologically driven neural platform invoking parallel electrophoretic separation and urinary metabolite screening.

    PubMed

    Page, Tessa; Nguyen, Huong Thi Huynh; Hilts, Lindsey; Ramos, Lorena; Hanrahan, Grady

    2012-06-01

    This work reveals a computational framework for parallel electrophoretic separation of complex biological macromolecules and model urinary metabolites. More specifically, the implementation of a particle swarm optimization (PSO) algorithm on a neural network platform for multiparameter optimization of multiplexed 24-capillary electrophoresis technology with UV detection is highlighted. Two experimental systems were examined: (1) separation of purified rabbit metallothioneins and (2) separation of model toluene urinary metabolites and selected organic acids. Results proved superior to the use of neural networks employing standard back propagation when examining training error, fitting response, and predictive abilities. Simulation runs were obtained as a result of metaheuristic examination of the global search space with experimental responses in good agreement with predicted values. Full separation of selected analytes was realized after employing optimal model conditions. This framework provides guidance for the application of metaheuristic computational tools to aid in future studies involving parallel chemical separation and screening. Adaptable pseudo-code is provided to enable users of varied software packages and modeling framework to implement the PSO algorithm for their desired use.
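
    The velocity-and-position update at the heart of PSO is compact enough to sketch. The following minimal Python implementation is illustrative only: the inertia and acceleration coefficients, the bounds, and the sphere test objective are our assumptions, not the settings used in the paper.

    import random

    def pso(objective, dim, n_particles=20, iters=100,
            w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
        """Minimize `objective` with a basic particle swarm (illustrative sketch)."""
        lo, hi = bounds
        pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
        vel = [[0.0] * dim for _ in range(n_particles)]
        pbest = [p[:] for p in pos]                      # personal best positions
        pbest_val = [objective(p) for p in pos]
        g = min(range(n_particles), key=lambda i: pbest_val[i])
        gbest, gbest_val = pbest[g][:], pbest_val[g]     # global best
        for _ in range(iters):
            for i in range(n_particles):
                for d in range(dim):
                    r1, r2 = random.random(), random.random()
                    # inertia + cognitive pull toward pbest + social pull toward gbest
                    vel[i][d] = (w * vel[i][d]
                                 + c1 * r1 * (pbest[i][d] - pos[i][d])
                                 + c2 * r2 * (gbest[d] - pos[i][d]))
                    pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
                val = objective(pos[i])
                if val < pbest_val[i]:
                    pbest[i], pbest_val[i] = pos[i][:], val
                    if val < gbest_val:
                        gbest, gbest_val = pos[i][:], val
        return gbest, gbest_val

    # toy usage: minimize the sphere function in four dimensions
    best, best_val = pso(lambda x: sum(v * v for v in x), dim=4)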

  19. Optimization of the moving-bed biofilm sequencing batch reactor (MBSBR) to control aeration time by kinetic computational modeling: Simulated sugar-industry wastewater treatment.

    PubMed

    Faridnasr, Maryam; Ghanbari, Bastam; Sassani, Ardavan

    2016-05-01

    A novel approach was applied for optimization of a moving-bed biofilm sequencing batch reactor (MBSBR) to treat sugar-industry wastewater (BOD5=500-2500 and COD=750-3750 mg/L) at 2-4 h of cycle time (CT). Although the experimental data showed that the MBSBR reached high BOD5 and COD removal performance, it failed to achieve the standard limits at the mentioned CTs. Thus, the reactor was optimized by kinetic computational modeling, using the normalized root mean square error (NRMSE) as a statistical error indicator. The NRMSE results revealed that the Stover-Kincannon (error=6.40%) and Grau (error=6.15%) models provide better fits to the experimental data and may be used for CT optimization in the reactor. The models predicted required CTs of 4.5, 6.5, 7 and 7.5 h for effluent standardization of 500, 1000, 1500 and 2500 mg/L influent BOD5 concentrations, respectively. A similar pattern in the experimental data confirmed these findings. Copyright © 2016 Elsevier Ltd. All rights reserved.
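
    The NRMSE criterion used above for ranking the kinetic models is a standard error measure; a minimal sketch follows, assuming normalization by the mean of the observations (one of several common conventions, not necessarily the authors' choice).

    import math

    def nrmse(observed, predicted):
        """Root mean square error normalized by the mean of the observations."""
        n = len(observed)
        rmse = math.sqrt(sum((o - p) ** 2 for o, p in zip(observed, predicted)) / n)
        return rmse / (sum(observed) / n)

    # toy usage: hypothetical measured vs. model-predicted effluent COD (mg/L)
    cod_obs = [750.0, 1500.0, 2250.0, 3750.0]
    cod_model = [800.0, 1450.0, 2300.0, 3600.0]
    print(f"NRMSE = {100 * nrmse(cod_obs, cod_model):.2f}%")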

  20. Operational evaluation of high-throughput community-based mass prophylaxis using Just-in-time training.

    PubMed

    Spitzer, James D; Hupert, Nathaniel; Duckart, Jonathan; Xiong, Wei

    2007-01-01

    Community-based mass prophylaxis is a core public health operational competency, but staffing needs may overwhelm the local trained health workforce. Just-in-time (JIT) training of emergency staff and computer modeling of workforce requirements represent two complementary approaches to address this logistical problem. Multnomah County, Oregon, conducted a high-throughput point of dispensing (POD) exercise to test JIT training and computer modeling to validate POD staffing estimates. The POD had 84% non-health-care worker staff and processed 500 patients per hour. Post-exercise modeling replicated observed staff utilization levels and queue formation, including development and amelioration of a large medical evaluation queue caused by lengthy processing times and understaffing in the first half-hour of the exercise. The exercise confirmed the feasibility of using JIT training for high-throughput antibiotic dispensing clinics staffed largely by nonmedical professionals. Patient processing times varied over the course of the exercise, with important implications for both staff reallocation and future POD modeling efforts. Overall underutilization of staff revealed the opportunity for greater efficiencies and even higher future throughputs.
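
    The staffing logic such POD models formalize can be approximated with a back-of-the-envelope offered-load calculation (Little's law). The sketch below is ours; the 3-minute evaluation time and 85% target utilization are hypothetical, not figures from the exercise.

    import math

    def min_staff(arrivals_per_hour, mean_service_min, utilization=0.85):
        """Deterministic staffing lower bound: offered load / target utilization."""
        offered_load = arrivals_per_hour * mean_service_min / 60.0  # staff-busy equivalents
        return math.ceil(offered_load / utilization)

    # e.g. 500 patients/hour through a hypothetical 3-minute medical evaluation step
    print(min_staff(500, 3.0))  # -> 30 staff needed at that station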

  1. How Do Students Misunderstand Number Representations?

    ERIC Educational Resources Information Center

    Herman, Geoffrey L.; Zilles, Craig; Loui, Michael C.

    2011-01-01

    We used both student interviews and diagnostic testing to reveal students' misconceptions about number representations in computing systems. This article reveals that students who have passed an undergraduate level computer organization course still possess surprising misconceptions about positional notations, two's complement representation, and…

  2. Infrasound Signals from Ground-Motion Sources

    DTIC Science & Technology

    2008-09-01

    signals as a basis for discriminants between underground nuclear tests (UGT) and earthquakes (EQ). In an earlier program, infrasound signals from... UGTs and EQs were collected at ranges of a few hundred kilometers, in the far-field. Analysis of these data revealed two parameters that had potential...well. To study the near-field signals, we are using computational techniques based on modeled ground motions from UGTs and EQs. One is the closed

  3. Computational dissection of human episodic memory reveals mental process-specific genetic profiles

    PubMed Central

    Luksys, Gediminas; Fastenrath, Matthias; Coynel, David; Freytag, Virginie; Gschwind, Leo; Heck, Angela; Jessen, Frank; Maier, Wolfgang; Milnik, Annette; Riedel-Heller, Steffi G.; Scherer, Martin; Spalek, Klara; Vogler, Christian; Wagner, Michael; Wolfsgruber, Steffen; Papassotiropoulos, Andreas; de Quervain, Dominique J.-F.

    2015-01-01

    Episodic memory performance is the result of distinct mental processes, such as learning, memory maintenance, and emotional modulation of memory strength. Such processes can be effectively dissociated using computational models. Here we performed gene set enrichment analyses of model parameters estimated from the episodic memory performance of 1,765 healthy young adults. We report robust and replicated associations of the amine compound SLC (solute-carrier) transporters gene set with the learning rate, of the collagen formation and transmembrane receptor protein tyrosine kinase activity gene sets with the modulation of memory strength by negative emotional arousal, and of the L1 cell adhesion molecule (L1CAM) interactions gene set with the repetition-based memory improvement. Furthermore, in a large functional MRI sample of 795 subjects we found that the association between L1CAM interactions and memory maintenance revealed large clusters of differences in brain activity in frontal cortical areas. Our findings provide converging evidence that distinct genetic profiles underlie specific mental processes of human episodic memory. They also provide empirical support to previous theoretical and neurobiological studies linking specific neuromodulators to the learning rate and linking neural cell adhesion molecules to memory maintenance. Furthermore, our study suggests additional memory-related genetic pathways, which may contribute to a better understanding of the neurobiology of human memory. PMID:26261317

  4. Computational dissection of human episodic memory reveals mental process-specific genetic profiles.

    PubMed

    Luksys, Gediminas; Fastenrath, Matthias; Coynel, David; Freytag, Virginie; Gschwind, Leo; Heck, Angela; Jessen, Frank; Maier, Wolfgang; Milnik, Annette; Riedel-Heller, Steffi G; Scherer, Martin; Spalek, Klara; Vogler, Christian; Wagner, Michael; Wolfsgruber, Steffen; Papassotiropoulos, Andreas; de Quervain, Dominique J-F

    2015-09-01

    Episodic memory performance is the result of distinct mental processes, such as learning, memory maintenance, and emotional modulation of memory strength. Such processes can be effectively dissociated using computational models. Here we performed gene set enrichment analyses of model parameters estimated from the episodic memory performance of 1,765 healthy young adults. We report robust and replicated associations of the amine compound SLC (solute-carrier) transporters gene set with the learning rate, of the collagen formation and transmembrane receptor protein tyrosine kinase activity gene sets with the modulation of memory strength by negative emotional arousal, and of the L1 cell adhesion molecule (L1CAM) interactions gene set with the repetition-based memory improvement. Furthermore, in a large functional MRI sample of 795 subjects we found that the association between L1CAM interactions and memory maintenance revealed large clusters of differences in brain activity in frontal cortical areas. Our findings provide converging evidence that distinct genetic profiles underlie specific mental processes of human episodic memory. They also provide empirical support to previous theoretical and neurobiological studies linking specific neuromodulators to the learning rate and linking neural cell adhesion molecules to memory maintenance. Furthermore, our study suggests additional memory-related genetic pathways, which may contribute to a better understanding of the neurobiology of human memory.

  5. Theories of Spoken Word Recognition Deficits in Aphasia: Evidence from Eye-Tracking and Computational Modeling

    PubMed Central

    Mirman, Daniel; Yee, Eiling; Blumstein, Sheila E.; Magnuson, James S.

    2011-01-01

    We used eye tracking to investigate lexical processing in aphasic participants by examining the fixation time course for rhyme (e.g., carrot – parrot) and cohort (e.g., beaker – beetle) competitors. Broca’s aphasic participants exhibited larger rhyme competition effects than age-matched controls. A reanalysis of previously reported data (Yee, Blumstein, & Sedivy, 2008) confirmed that Wernicke’s aphasic participants exhibited larger cohort competition effects. Individual-level analyses revealed a negative correlation between rhyme and cohort competition effect size across both groups of aphasic participants. Computational model simulations were performed to examine which of several accounts of lexical processing deficits in aphasia might account for the observed effects. Simulation results revealed that slower deactivation of lexical competitors could account for increased cohort competition in Wernicke’s aphasic participants; auditory perceptual impairment could account for increased rhyme competition in Broca's aphasic participants; and a perturbation of a parameter controlling selection among competing alternatives could account for both patterns, as well as the correlation between the effects. In light of these simulation results, we discuss theoretical accounts that have the potential to explain the dynamics of spoken word recognition in aphasia and the possible roles of anterior and posterior brain regions in lexical processing and cognitive control. PMID:21371743

  6. Thermal-hydraulics modeling for prototype testing of the W7-X high heat flux scraper element

    DOE PAGES

    Clark, Emily; Lumsdaine, Arnold; Boscary, Jean; ...

    2017-07-28

    The long-pulse operation of the Wendelstein 7-X (W7-X) stellarator experiment is scheduled to begin in 2020. This operational phase will be equipped with water-cooled plasma facing components to allow for longer pulse durations. Certain simulated plasma scenarios have been shown to produce heat fluxes that surpass the technological limits on the edges of the divertor target elements during steady-state operation. In order to reduce the heat load on the target elements, the addition of a “scraper element” (SE) is under investigation. The SE is composed of 24 water-cooled carbon fiber reinforced carbon composite monoblock units. Multiple full-scale prototypes have been tested in the GLADIS high heat flux test facility. Previous computational studies revealed discrepancies between the simulations and experimental measurements. In this work, single-phase thermal-hydraulics modeling was performed in ANSYS CFX to identify potential causes for such discrepancies. Possible explanations investigated were the effects of a non-uniform thermal contact resistance and a potential misalignment of the monoblock fibers. While the difference between the experimental and computational results was not resolved by a non-uniform thermal contact resistance, the computational results provided insight into the potential performance of a W7-X monoblock unit. Circumferential temperature distributions highlighted the expected boiling regions of such a unit. Finally, simulations revealed that modest angles of fiber misalignment in the monoblocks result in asymmetries at the unit edges and provide temperature differences similar to the experimental results.

  7. Thermal-hydraulics modeling for prototype testing of the W7-X high heat flux scraper element

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clark, Emily; Lumsdaine, Arnold; Boscary, Jean

    The long-pulse operation of the Wendelstein 7-X (W7-X) stellarator experiment is scheduled to begin in 2020. This operational phase will be equipped with water-cooled plasma facing components to allow for longer pulse durations. Certain simulated plasma scenarios have been shown to produce heat fluxes that surpass the technological limits on the edges of the divertor target elements during steady-state operation. In order to reduce the heat load on the target elements, the addition of a “scraper element” (SE) is under investigation. The SE is composed of 24 water-cooled carbon fiber reinforced carbon composite monoblock units. Multiple full-scale prototypes have been tested in the GLADIS high heat flux test facility. Previous computational studies revealed discrepancies between the simulations and experimental measurements. In this work, single-phase thermal-hydraulics modeling was performed in ANSYS CFX to identify potential causes for such discrepancies. Possible explanations investigated were the effects of a non-uniform thermal contact resistance and a potential misalignment of the monoblock fibers. While the difference between the experimental and computational results was not resolved by a non-uniform thermal contact resistance, the computational results provided insight into the potential performance of a W7-X monoblock unit. Circumferential temperature distributions highlighted the expected boiling regions of such a unit. Finally, simulations revealed that modest angles of fiber misalignment in the monoblocks result in asymmetries at the unit edges and provide temperature differences similar to the experimental results.

  8. Fine-Tuning Tomato Agronomic Properties by Computational Genome Redesign

    PubMed Central

    Carrera, Javier; Fernández del Carmen, Asun; Fernández-Muñoz, Rafael; Rambla, Jose Luis; Pons, Clara; Jaramillo, Alfonso; Elena, Santiago F.; Granell, Antonio

    2012-01-01

    Considering cells as biofactories, we aimed to optimize their internal processes by using the same engineering principles that large industries are implementing nowadays: lean manufacturing. We have applied reverse-engineering computational methods to transcriptomic, metabolomic and phenomic data obtained from a collection of tomato recombinant inbred lines to formulate a kinetic and constraint-based model that efficiently describes the cellular metabolism from expression of a minimal core of genes. Based on predicted metabolic profiles, a close association with agronomic and organoleptic properties of the ripe fruit was revealed with high statistical confidence. Inspired by a synthetic biology approach, the model was used for exploring the landscape of all possible local transcriptional changes with the aim of engineering tomato fruits with fine-tuned biotechnological properties. The method was validated by the ability of the proposed genomes, engineered for modified desired agronomic traits, to recapitulate experimental correlations between associated metabolites. PMID:22685389

  9. Activation pathway of Src kinase reveals intermediate states as novel targets for drug design

    PubMed Central

    Shukla, Diwakar; Meng, Yilin; Roux, Benoît; Pande, Vijay S.

    2014-01-01

    Unregulated activation of Src kinases leads to aberrant signaling, uncontrolled growth, and differentiation of cancerous cells. Reaching a complete mechanistic understanding of large scale conformational transformations underlying the activation of kinases could greatly help in the development of therapeutic drugs for the treatment of these pathologies. In principle, the nature of conformational transition could be modeled in silico via atomistic molecular dynamics simulations, although this is very challenging due to the long activation timescales. Here, we employ a computational paradigm that couples transition pathway techniques and Markov state model-based massively distributed simulations for mapping the conformational landscape of c-src tyrosine kinase. The computations provide the thermodynamics and kinetics of kinase activation for the first time, and help identify key structural intermediates. Furthermore, the presence of a novel allosteric site in an intermediate state of c-src that could be potentially utilized for drug design is predicted. PMID:24584478
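
    Markov state models of the kind used here extract kinetics from a lag-time transition matrix; the implied timescales t_i = -tau / ln(lambda_i) come from its eigenvalues. A minimal sketch (the discretized trajectory and lag below are illustrative, not simulation data from the paper):

    import numpy as np

    def implied_timescales(dtraj, n_states, lag):
        """Estimate a row-stochastic transition matrix at lag `lag` from a
        discrete state trajectory and return the implied timescales."""
        counts = np.zeros((n_states, n_states))
        for a, b in zip(dtraj[:-lag], dtraj[lag:]):
            counts[a, b] += 1
        T = counts / counts.sum(axis=1, keepdims=True)
        eigvals = np.sort(np.linalg.eigvals(T).real)[::-1]
        return -lag / np.log(eigvals[1:])   # skip the stationary eigenvalue 1

    # toy usage: a two-state trajectory with rare switching
    dtraj = [0] * 50 + [1] * 50 + [0] * 50
    print(implied_timescales(dtraj, n_states=2, lag=1))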

  10. Computational Research on Mobile Pastoralism Using Agent-Based Modeling and Satellite Imagery.

    PubMed

    Sakamoto, Takuto

    2016-01-01

    Dryland pastoralism has long attracted considerable attention from researchers in diverse fields. However, rigorous formal study is made difficult by the high level of mobility of pastoralists as well as by the sizable spatio-temporal variability of their environment. This article presents a new computational approach for studying mobile pastoralism that overcomes these issues. Combining multi-temporal satellite images and agent-based modeling allows a comprehensive examination of pastoral resource access over a realistic dryland landscape with unpredictable ecological dynamics. The article demonstrates the analytical potential of this approach through its application to mobile pastoralism in northeast Nigeria. Employing more than 100 satellite images of the area, extensive simulations are conducted under a wide array of circumstances, including different land-use constraints. The simulation results reveal complex dependencies of pastoral resource access on these circumstances along with persistent patterns of seasonal land use observed at the macro level.

  11. Computational Research on Mobile Pastoralism Using Agent-Based Modeling and Satellite Imagery

    PubMed Central

    Sakamoto, Takuto

    2016-01-01

    Dryland pastoralism has long attracted considerable attention from researchers in diverse fields. However, rigorous formal study is made difficult by the high level of mobility of pastoralists as well as by the sizable spatio-temporal variability of their environment. This article presents a new computational approach for studying mobile pastoralism that overcomes these issues. Combining multi-temporal satellite images and agent-based modeling allows a comprehensive examination of pastoral resource access over a realistic dryland landscape with unpredictable ecological dynamics. The article demonstrates the analytical potential of this approach through its application to mobile pastoralism in northeast Nigeria. Employing more than 100 satellite images of the area, extensive simulations are conducted under a wide array of circumstances, including different land-use constraints. The simulation results reveal complex dependencies of pastoral resource access on these circumstances along with persistent patterns of seasonal land use observed at the macro level. PMID:26963526

  12. Computational Substrates of Social Norm Enforcement by Unaffected Third Parties

    PubMed Central

    Zhong, Songfa; Chark, Robin; Hsu, Ming; Chew, Soo Hong

    2016-01-01

    Enforcement of social norms by impartial bystanders in the human species reveals a possibly unique capacity to sense and to enforce norms from a third-party perspective. Such behavior, however, cannot be accounted for by current computational models based on an egocentric notion of norms. Here, using a combination of model-based fMRI and third-party punishment games, we show that brain regions previously implicated in egocentric norm enforcement critically extend to the important case of norm enforcement by unaffected third parties. Specifically, we found that responses in the ACC and insula cortex were positively associated with detection of distributional inequity, while those in the anterior DLPFC were associated with the assessment of the intentionality attributed to the violator. Moreover, during sanction decisions, the subjective value of sanctions modulated activity in both vmPFC and rTPJ. These results shed light on the neurocomputational underpinnings of third-party punishment and the evolutionary origin of human norm enforcement. PMID:26825438

  13. A very efficient approach to compute the first-passage probability density function in a time-changed Brownian model: Applications in finance

    NASA Astrophysics Data System (ADS)

    Ballestra, Luca Vincenzo; Pacelli, Graziella; Radi, Davide

    2016-12-01

    We propose a numerical method to compute the first-passage probability density function in a time-changed Brownian model. In particular, we derive an integral representation of such a density function in which the integrand functions must be obtained solving a system of Volterra equations of the first kind. In addition, we develop an ad-hoc numerical procedure to regularize and solve this system of integral equations. The proposed method is tested on three application problems of interest in mathematical finance, namely the calculation of the survival probability of an indebted firm, the pricing of a single-knock-out put option and the pricing of a double-knock-out put option. The results obtained reveal that the novel approach is extremely accurate and fast, and performs significantly better than the finite difference method.
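
    The system of Volterra equations of the first kind at the core of the method has the generic form int_0^t K(t,s) f(s) ds = g(t); a minimal midpoint-collocation sketch for a single such equation is below. The kernel and right-hand side are illustrative, and the paper's ad-hoc regularization procedure is not reproduced.

    import numpy as np

    def solve_volterra_first_kind(K, g, T, n):
        """Midpoint collocation for int_0^t K(t,s) f(s) ds = g(t) on [0, T]."""
        h = T / n
        s = (np.arange(n) + 0.5) * h          # midpoints where f is recovered
        f = np.zeros(n)
        for i in range(1, n + 1):
            t = i * h
            acc = sum(K(t, s[j]) * f[j] * h for j in range(i - 1))
            f[i - 1] = (g(t) - acc) / (K(t, s[i - 1]) * h)
        return s, f

    # toy usage: K(t,s) = 1 and g(t) = t^2/2 have the exact solution f(s) = s
    s, f = solve_volterra_first_kind(lambda t, s: 1.0, lambda t: t * t / 2, T=1.0, n=100)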

  14. Combined multifrequency EPR and DFT study of dangling bonds in a-Si:H

    NASA Astrophysics Data System (ADS)

    Fehr, M.; Schnegg, A.; Rech, B.; Lips, K.; Astakhov, O.; Finger, F.; Pfanner, G.; Freysoldt, C.; Neugebauer, J.; Bittl, R.; Teutloff, C.

    2011-12-01

    Multifrequency pulsed electron paramagnetic resonance (EPR) spectroscopy using S-, X-, Q-, and W-band frequencies (3.6, 9.7, 34, and 94 GHz, respectively) was employed to study paramagnetic coordination defects in undoped hydrogenated amorphous silicon (a-Si:H). The improved spectral resolution at high magnetic field reveals a rhombic splitting of the g tensor with the following principal values: gx=2.0079, gy=2.0061, and gz=2.0034, and shows pronounced g strain, i.e., the principal values are widely distributed. The multifrequency approach furthermore yields precise 29Si hyperfine data. Density functional theory (DFT) calculations on 26 computer-generated a-Si:H dangling-bond models yielded g values close to the experimental data but deviating hyperfine interaction values. We show that paramagnetic coordination defects in a-Si:H are more delocalized than computer-generated dangling-bond defects and discuss models to explain this discrepancy.

  15. Computational design of active, self-reinforcing gels.

    PubMed

    Yashin, Victor V; Kuksenok, Olga; Balazs, Anna C

    2010-05-20

    Many living organisms have evolved a protective mechanism that allows them to reversibly alter their stiffness in response to mechanical contact. Using theoretical modeling, we design a mechanoresponsive polymer gel that exhibits a similar self-reinforcing behavior. We focus on cross-linked gels that contain Ru(terpy)(2) units, where both terpyridine ligands are grafted to the chains. The Ru(terpy)(2) complex forms additional, chemoresponsive cross-links that break and re-form in response to a repeated oxidation and reduction of the Ru. In our model, the periodic redox variations of the anchored metal ion are generated by the Belousov-Zhabotinsky (BZ) reaction. Our computer simulations reveal that compression of the BZ gel leads to a stiffening of the sample due to an increase in the cross-link density. These findings provide guidelines for designing biomimetic, active coatings that send out a signal when the system is impacted and use this signaling process to initiate the self-protecting behavior.

  16. Improved Spectral Calculations for Discrete Schrödinger Operators

    NASA Astrophysics Data System (ADS)

    Puelz, Charles

    This work details an O(n²) algorithm for computing spectra of discrete Schrödinger operators with periodic potentials. Spectra of these objects enhance our understanding of fundamental aperiodic physical systems and contain rich theoretical structure of interest to the mathematical community. Previous work on the Harper model led to an O(n²) algorithm relying on properties not satisfied by other aperiodic operators. Physicists working with the Fibonacci Hamiltonian, a popular quasicrystal model, have instead used a problematic dynamical map approach or a sluggish O(n³) procedure for their calculations. The algorithm presented in this work, a blend of well-established eigenvalue/vector algorithms, provides researchers with a more robust computational tool of general utility. Application to the Fibonacci Hamiltonian in the sparsely studied intermediate coupling regime reveals structure in canonical coverings of the spectrum that will prove useful in motivating conjectures regarding band combinatorics and fractal dimensions.
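
    For a discrete Schrödinger operator with a period-q potential, the spectrum can be characterized with transfer matrices: an energy E lies in a band iff |tr T_q(E)| <= 2, where T_q is the product of one period of 2x2 transfer matrices. A minimal sketch of that textbook criterion (not the paper's O(n²) algorithm; the period-5 Fibonacci-like potential is illustrative):

    import numpy as np

    def in_spectrum(E, V):
        """Band condition: E is in the spectrum of the periodic discrete
        Schroedinger operator iff |tr T(E)| <= 2 over one period of V."""
        T = np.eye(2)
        for v in V:
            T = np.array([[E - v, -1.0], [1.0, 0.0]]) @ T
        return abs(np.trace(T)) <= 2.0

    # toy usage: scan energies for a period-5 approximant of a Fibonacci potential
    V = [1.0, 0.0, 1.0, 1.0, 0.0]          # illustrative coupling and sequence
    energies = np.linspace(-3.0, 4.0, 2001)
    bands = [E for E in energies if in_spectrum(E, V)]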

  17. Stochastic competitive learning in complex networks.

    PubMed

    Silva, Thiago Christiano; Zhao, Liang

    2012-03-01

    Competitive learning is an important machine learning approach which is widely employed in artificial neural networks. In this paper, we present a rigorous definition of a new type of competitive learning scheme realized on large-scale networks. The model consists of several particles walking within the network and competing with each other to occupy as many nodes as possible, while attempting to reject intruder particles. The particle's walking rule is composed of a stochastic combination of random and preferential movements. The model has been applied to solve community detection and data clustering problems. Computer simulations reveal that the proposed technique presents high precision of community and cluster detections, as well as low computational complexity. Moreover, we have developed an efficient method for estimating the most likely number of clusters by using an evaluator index that monitors the information generated by the competition process itself.
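
    The walking rule described above, a stochastic mix of random and preferential movement, can be sketched directly. In the sketch below, the mixing probability `lam` and the weighting of neighbors by a particle's domination counts follow the paper's verbal description; the function and variable names are ours.

    import random

    def next_node(graph, current, domination, particle, lam=0.6):
        """One step of a particle: preferential move with probability `lam`
        (biased toward nodes this particle already dominates), else random."""
        neighbors = list(graph[current])
        if random.random() < lam:
            weights = [1 + domination[n].get(particle, 0) for n in neighbors]
            return random.choices(neighbors, weights=weights)[0]
        return random.choice(neighbors)    # purely random movement

    # toy usage: a 4-node cycle; particle "p0" has already dominated node 1 twice
    graph = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
    domination = {n: {} for n in graph}
    domination[1]["p0"] = 2
    print(next_node(graph, 0, domination, "p0"))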

  18. Fundamental analysis of the failure of polymer-based fiber reinforced composites

    NASA Technical Reports Server (NTRS)

    Kanninen, M. F.; Rybicki, E. F.; Griffith, W. I.; Broek, D.

    1975-01-01

    A mathematical model predicting the strength of unidirectional fiber reinforced composites containing known flaws and with linear elastic-brittle material behavior was developed. The approach was to imbed a local heterogeneous region surrounding the crack tip into an anisotropic elastic continuum. This (1) permits an explicit analysis of the micromechanical processes involved in the fracture, and (2) remains simple enough to be useful in practical computations. Computations for arbitrary flaw size and orientation under arbitrary applied loads were performed. The mechanical properties were those of graphite epoxy. With the rupture properties arbitrarily varied to test the capabilities of the model to reflect real fracture modes, it was shown that fiber breakage, matrix crazing, crack bridging, matrix-fiber debonding, and axial splitting can all occur during a period of (gradually) increasing load prior to catastrophic failure. The calculations also reveal the sequential nature of the stable crack growth process preceding fracture.

  19. Computational analysis of cell-to-cell heterogeneity in single-cell RNA-sequencing data reveals hidden subpopulations of cells.

    PubMed

    Buettner, Florian; Natarajan, Kedar N; Casale, F Paolo; Proserpio, Valentina; Scialdone, Antonio; Theis, Fabian J; Teichmann, Sarah A; Marioni, John C; Stegle, Oliver

    2015-02-01

    Recent technical developments have enabled the transcriptomes of hundreds of cells to be assayed in an unbiased manner, opening up the possibility that new subpopulations of cells can be found. However, the effects of potential confounding factors, such as the cell cycle, on the heterogeneity of gene expression and therefore on the ability to robustly identify subpopulations remain unclear. We present and validate a computational approach that uses latent variable models to account for such hidden factors. We show that our single-cell latent variable model (scLVM) allows the identification of otherwise undetectable subpopulations of cells that correspond to different stages during the differentiation of naive T cells into T helper 2 cells. Our approach can be used not only to identify cellular subpopulations but also to tease apart different sources of gene expression heterogeneity in single-cell transcriptomes.

  20. A combined computational and structural model of the full-length human prolactin receptor

    PubMed Central

    Bugge, Katrine; Papaleo, Elena; Haxholm, Gitte W.; Hopper, Jonathan T. S.; Robinson, Carol V.; Olsen, Johan G.; Lindorff-Larsen, Kresten; Kragelund, Birthe B.

    2016-01-01

    The prolactin receptor is an archetype member of the class I cytokine receptor family, comprising receptors with fundamental functions in biology as well as key drug targets. Structurally, each of these receptors represents an intriguing diversity, providing an exceptionally challenging target for structural biology. Here, we access the molecular architecture of the monomeric human prolactin receptor by combining experimental and computational efforts. We solve the NMR structure of its transmembrane domain in micelles and collect structural data on overlapping fragments of the receptor with small-angle X-ray scattering, native mass spectrometry and NMR spectroscopy. Along with previously published data, these are integrated by molecular modelling to generate a full receptor structure. The result provides the first full view of a class I cytokine receptor, exemplifying the architecture of more than 40 different receptor chains, and reveals that the extracellular domain is merely the tip of a molecular iceberg. PMID:27174498

  1. First Release of Gravimetric Geoid Model over Saudi Arabia Based on Terrestrial Gravity and GOCE Satellite Data: KSAG01

    NASA Astrophysics Data System (ADS)

    Alothman, A. O.; Elsaka, B.

    2015-12-01

    A new gravimetric quasi-geoid, known as KSAG01, has been developed recently by the Remove-Compute-Restore (RCR) technique, as provided by the GRAVSOFT software, using gravimetric free-air anomalies. The terrestrial gravity data used in these computations are 1145 gravity field anomalies observed by ARAMCO (Saudi Arabian Oil Company) and 2470 gravity measurements from BGI (Bureau Gravimétrique International). The computations were carried out by implementing the least-squares collocation method within the RCR technique. In addition to the terrestrial gravity observations, the GOCE-based satellite model (Eigen-6C4) and the global gravity model (EGM2008) were merged into the computations. The long-, medium- and short-wavelength spectrum of the height anomalies was compensated from the Eigen-6C4 and EGM2008 geoid models truncated at degree and order (d/o) 2190. The KSAG01 geoid covers 100 percent of the kingdom, with geoid heights ranging from -37.513 m in the southeast to 23.183 m in the northwest of the country. The accuracy of the geoid is governed by the accuracy, distribution, and spacing of the observations. The standard deviation of the predicted geoid heights is 0.115 m, with maximum errors of about 0.612 m. The RMS of the geoid noise ranges from 0.019 m to 0.04 m. Comparison of the predicted gravimetric geoid with the EGM, GOCE, and GPS/levelling geoids reveals a considerable improvement of the quasi-geoid heights over Saudi Arabia.
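
    In the usual notation, the remove-compute-restore scheme referred to above splits the signal as follows (a sketch of the standard decomposition, not the authors' exact formulation):

    \[
      \Delta g_{\mathrm{res}} = \Delta g_{\mathrm{obs}} - \Delta g_{\mathrm{GGM}},
      \qquad
      \zeta = \zeta_{\mathrm{GGM}} + \zeta_{\mathrm{res}},
    \]

    where the long-wavelength terms are synthesized from the global models (here Eigen-6C4/EGM2008 to d/o 2190) and the residual height anomaly ζ_res is predicted from Δg_res by least-squares collocation; terrain reductions, when applied, are removed and restored in the same way.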

  2. First Release of Gravimetric Geoid Model over Saudi Arabia Based on Terrestrial Gravity and GOCE Satellite Data: KSAG01

    NASA Astrophysics Data System (ADS)

    Alothman, Abdulaziz; Elsaka, Basem

    2016-04-01

    A new gravimetric quasi-geoid, known as KSAG01, has been developed recently by the Remove-Compute-Restore (RCR) technique, as provided by the GRAVSOFT software, using gravimetric free-air anomalies. The terrestrial gravity data used in these computations are 1145 gravity field anomalies observed by ARAMCO (Saudi Arabian Oil Company) and 2470 gravity measurements from BGI (Bureau Gravimétrique International). The computations were carried out by implementing the least-squares collocation method within the RCR technique. In addition to the terrestrial gravity observations, the GOCE-based satellite model (Eigen-6C4) and the global gravity model (EGM2008) were merged into the computations. The long-, medium- and short-wavelength spectrum of the height anomalies was compensated from the Eigen-6C4 and EGM2008 geoid models truncated at degree and order (d/o) 2190. The KSAG01 geoid covers 100 percent of the kingdom, with geoid heights ranging from -37.513 m in the southeast to 23.183 m in the northwest of the country. The accuracy of the geoid is governed by the accuracy, distribution, and spacing of the observations. The standard deviation of the predicted geoid heights is 0.115 m, with maximum errors of about 0.612 m. The RMS of the geoid noise ranges from 0.019 m to 0.04 m. Comparison of the predicted gravimetric geoid with the EGM, GOCE, and GPS/levelling geoids reveals a considerable improvement of the quasi-geoid heights over Saudi Arabia.

  3. BEM-based simulation of lung respiratory deformation for CT-guided biopsy.

    PubMed

    Chen, Dong; Chen, Weisheng; Huang, Lipeng; Feng, Xuegang; Peters, Terry; Gu, Lixu

    2017-09-01

    Accurate and real-time prediction of lung and lung tumor deformation during respiration is an important consideration when performing a peripheral biopsy procedure. However, most existing work has focused on offline whole-lung simulation using 4D image data, which is not applicable to real-time image-guided biopsy with limited image resources. In this paper, we propose a patient-specific biomechanical model based on the boundary element method (BEM), computed from CT images, to estimate the respiratory motion of the local target lesion region, vessel tree and lung surface for real-time biopsy guidance. This approach pre-computes various BEM parameters to meet the requirement for real-time lung motion simulation. The boundary condition at the end-inspiratory phase is obtained using nonparametric discrete registration with convex optimization, and the simulation of the internal tissue is achieved by applying a tetrahedron-based interpolation method that depends on expert-determined feature points on the vessel tree model. A reference needle is tracked to update the simulated lung motion during biopsy guidance. We evaluate the model by applying it to respiratory motion estimation for ten patients. The average symmetric surface distance (ASSD) and the mean target registration error (TRE) are employed to evaluate the proposed model. Results reveal that it is possible to predict the lung motion with an ASSD of [Formula: see text] mm and a mean TRE of [Formula: see text] mm at largest over the entire respiratory cycle. In the CT-/electromagnetic-guided biopsy experiment, the whole process was assisted by our BEM model and the final puncture errors in two studies were 3.1 and 2.0 mm, respectively. The experimental results reveal that both the accuracy of the simulation and the real-time performance meet the demands of clinical biopsy guidance.
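
    The tetrahedron-based interpolation mentioned above is, in the usual finite-element sense, barycentric: a point inside a tetrahedron inherits the weighted average of the four vertex displacements. A minimal sketch (the names and the unit-tetrahedron example are ours, not the paper's implementation):

    import numpy as np

    def barycentric_interpolate(vertices, values, point):
        """Interpolate vertex `values` (e.g. displacement vectors) at `point`
        inside the tetrahedron given by the 4x3 array `vertices`."""
        v0 = vertices[0]
        T = (vertices[1:] - v0).T                        # 3x3 edge matrix
        w123 = np.linalg.solve(T, point - v0)            # last three barycentric weights
        w = np.concatenate(([1.0 - w123.sum()], w123))   # all four weights sum to 1
        return w @ values

    # toy usage: unit tetrahedron with vector-valued displacements at the vertices
    verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
    disp = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
    print(barycentric_interpolate(verts, disp, np.array([0.25, 0.25, 0.25])))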

  4. Emotor control: computations underlying bodily resource allocation, emotions, and confidence.

    PubMed

    Kepecs, Adam; Mensh, Brett D

    2015-12-01

    Emotional processes are central to behavior, yet their deeply subjective nature has been a challenge for neuroscientific study as well as for psychiatric diagnosis. Here we explore the relationships between subjective feelings and their underlying brain circuits from a computational perspective. We apply recent insights from systems neuroscience-approaching subjective behavior as the result of mental computations instantiated in the brain-to the study of emotions. We develop the hypothesis that emotions are the product of neural computations whose motor role is to reallocate bodily resources mostly gated by smooth muscles. This "emotor" control system is analogous to the more familiar motor control computations that coordinate skeletal muscle movements. To illustrate this framework, we review recent research on "confidence." Although familiar as a feeling, confidence is also an objective statistical quantity: an estimate of the probability that a hypothesis is correct. This model-based approach helped reveal the neural basis of decision confidence in mammals and provides a bridge to the subjective feeling of confidence in humans. These results have important implications for psychiatry, since disorders of confidence computations appear to contribute to a number of psychopathologies. More broadly, this computational approach to emotions resonates with the emerging view that psychiatric nosology may be best parameterized in terms of disorders of the cognitive computations underlying complex behavior.
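
    The statistical reading of confidence invoked above has a standard formalization: the posterior probability that the chosen hypothesis is correct given the evidence e (a sketch in generic notation, not the authors' specific model):

    \[
      \mathrm{confidence} = P(\hat{H} \mid e)
      = \frac{P(e \mid \hat{H})\,P(\hat{H})}{\sum_{k} P(e \mid H_k)\,P(H_k)},
    \]

    where the chosen hypothesis is the one with the highest posterior among the candidates H_k.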

  5. Saudi high school students' attitudes and barriers toward the use of computer technologies in learning English.

    PubMed

    Sabti, Ahmed Abdulateef; Chaichan, Rasha Sami

    2014-01-01

    This study examines the attitudes of Saudi Arabian high school students toward the use of computer technologies in learning English. The study also discusses the possible barriers that affect and limit the actual usage of computers. A quantitative approach was applied in this research, which involved 30 Saudi Arabian students of a high school in Kuala Lumpur, Malaysia. The respondents comprised 15 males and 15 females aged between 16 and 18 years. Two instruments, namely the Scale of Attitude toward Computer Technologies (SACT) and Barriers affecting Students' Attitudes and Use (BSAU), were used to collect data. The Technology Acceptance Model (TAM) of Davis (1989) was utilized. The analysis revealed gender differences in attitudes toward the use of computer technologies in learning English: female students showed more positive attitudes than males. Both male and female participants demonstrated a high and positive perception of the Usefulness and perceived Ease of Use of computer technologies in learning English. The participants identified three barriers that affected and limited the use of computer technologies in learning English: skill, equipment, and motivation. Among these, skill had the greatest effect, whereas motivation had the least.

  6. DirtyGrid I: 3D Dust Radiative Transfer Modeling of Spectral Energy Distributions of Dusty Stellar Populations

    NASA Astrophysics Data System (ADS)

    Law, Ka-Hei; Gordon, Karl D.; Misselt, Karl A.

    2018-06-01

    Understanding the properties of stellar populations and interstellar dust has important implications for galaxy evolution. In normal star-forming galaxies, stars and the interstellar medium dominate the radiation from ultraviolet (UV) to infrared (IR). In particular, interstellar dust absorbs and scatters UV and optical light, re-emitting the absorbed energy in the IR. This is a strongly nonlinear process that makes independent studies of the UV-optical and IR susceptible to large uncertainties and degeneracies. Over the years, UV to IR spectral energy distribution (SED) fitting utilizing varying approximations has revealed important results on the stellar and dust properties of galaxies. Yet the approximations limit the fidelity of the derived properties. Sufficient computer power is now available to remove these approximations and map out the landscape of galaxy SEDs using full dust radiative transfer. This improves upon previous work by directly connecting the UV, optical, and IR through dust grain physics. We present the DIRTYGrid, a grid of radiative transfer models of SEDs of dusty stellar populations in galactic environments designed to span the full range of physical parameters of galaxies. Using the stellar and gas radiation input from the stellar population synthesis model PEGASE, our radiative transfer model DIRTY self-consistently computes the UV to far-IR/sub-mm SEDs for each set of parameters in our grid. DIRTY computes the dust absorption, scattering, and emission from the local radiation field and a dust grain model, thereby physically connecting the UV-optical to the IR. We describe the computational method and explain the choices of parameters in DIRTYGrid. The computation took millions of CPU hours on supercomputers, and the SEDs produced are an invaluable tool for fitting multi-wavelength data sets. We provide the complete set of SEDs in an online table.

  7. Patient-specific computational modeling of blood flow in the pulmonary arterial circulation.

    PubMed

    Kheyfets, Vitaly O; Rios, Lourdes; Smith, Triston; Schroeder, Theodore; Mueller, Jeffrey; Murali, Srinivas; Lasorda, David; Zikos, Anthony; Spotti, Jennifer; Reilly, John J; Finol, Ender A

    2015-07-01

    Computational fluid dynamics (CFD) modeling of the pulmonary vasculature has the potential to reveal continuum metrics associated with the hemodynamic stress acting on the vascular endothelium. It is widely accepted that the endothelium responds to flow-induced stress by releasing vasoactive substances that can dilate and constrict blood vessels locally. The objectives of this study are to examine the extent of patient specificity required to obtain a significant association of CFD output metrics and clinical measures in models of the pulmonary arterial circulation, and to evaluate the potential correlation of wall shear stress (WSS) with established metrics indicative of right ventricular (RV) afterload in pulmonary hypertension (PH). Right Heart Catheterization (RHC) hemodynamic data and contrast-enhanced computed tomography (CT) imaging were retrospectively acquired for 10 PH patients and processed to simulate blood flow in the pulmonary arteries. While conducting CFD modeling of the reconstructed patient-specific vasculatures, we experimented with three different outflow boundary conditions to investigate the potential for using computationally derived spatially averaged wall shear stress (SAWSS) as a metric of RV afterload. SAWSS was correlated with both pulmonary vascular resistance (PVR) (R(2)=0.77, P<0.05) and arterial compliance (C) (R(2)=0.63, P<0.05), but the extent of the correlation was affected by the degree of patient specificity incorporated in the fluid flow boundary conditions. We found that decreasing the distal PVR alters the flow distribution and changes the local velocity profile in the distal vessels, thereby increasing the local WSS. Nevertheless, implementing generic outflow boundary conditions still resulted in statistically significant SAWSS correlations with respect to both metrics of RV afterload, suggesting that the CFD model could be executed without the need for complex outflow boundary conditions that require invasively obtained patient-specific data. A preliminary study investigating the relationship between outlet diameter and flow distribution in the pulmonary tree offers a potential computationally inexpensive alternative to pressure based outflow boundary conditions. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
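
    In the usual notation, the spatially averaged wall shear stress used above as a candidate afterload metric is defined as (a sketch of the standard definitions, not quoted from the paper):

    \[
      \mathrm{SAWSS} = \frac{1}{A}\int_{S} \lVert \boldsymbol{\tau}_w \rVert \, dA,
      \qquad
      \boldsymbol{\tau}_w = \mu\,\left.\frac{\partial \mathbf{u}_t}{\partial n}\right|_{\mathrm{wall}},
    \]

    where A is the area of the luminal surface S, μ the dynamic viscosity, u_t the wall-tangential velocity and n the wall-normal direction.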

  8. Sensitivity to the Sampling Process Emerges From the Principle of Efficiency.

    PubMed

    Jara-Ettinger, Julian; Sun, Felix; Schulz, Laura; Tenenbaum, Joshua B

    2018-05-01

    Humans can seamlessly infer other people's preferences, based on what they do. Broadly, two types of accounts have been proposed to explain different aspects of this ability. The first account focuses on spatial information: Agents' efficient navigation in space reveals what they like. The second account focuses on statistical information: Uncommon choices reveal stronger preferences. Together, these two lines of research suggest that we have two distinct capacities for inferring preferences. Here we propose that this is not the case, and that spatial-based and statistical-based preference inferences can be explained by the assumption that agents are efficient alone. We show that people's sensitivity to spatial and statistical information when they infer preferences is best predicted by a computational model of the principle of efficiency, and that this model outperforms dual-system models, even when the latter are fit to participant judgments. Our results suggest that, as adults, a unified understanding of agency under the principle of efficiency underlies our ability to infer preferences. Copyright © 2018 Cognitive Science Society, Inc.
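
    The efficiency-based model referred to above is typically formalized as Bayesian inverse planning; schematically (our notation, not necessarily the authors' exact likelihood):

    \[
      P(\mathrm{preference} \mid \mathrm{behavior}) \propto
      P(\mathrm{behavior} \mid \mathrm{preference})\,P(\mathrm{preference}),
      \qquad
      P(\mathrm{behavior} \mid \mathrm{preference}) \propto
      \exp\!\bigl(-\beta\,\mathrm{Cost}(\mathrm{behavior})\bigr).
    \]

    Under this single likelihood, both spatial detours and statistically uncommon choices are improbable unless a strong preference offsets their cost, which is how one mechanism can cover both kinds of inference.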

  9. Applying a Particle-only Model to the HL Tau Disk

    NASA Astrophysics Data System (ADS)

    Tabeshian, Maryam; Wiegert, Paul A.

    2018-04-01

    Observations have revealed rich structures in protoplanetary disks, offering clues about their embedded planets. Due to the complexities introduced by the abundance of gas in these disks, modeling their structure in detail is computationally intensive, requiring complex hydrodynamic codes and substantial computing power. It would be advantageous if computationally simpler models could provide some preliminary information on these disks. Here we apply a particle-only model (that we developed for gas-poor debris disks) to the gas-rich disk, HL Tauri, to address the question of whether such simple models can inform the study of these systems. Assuming three potentially embedded planets, we match HL Tau’s radial profile fairly well and derive best-fit planetary masses and orbital radii (0.40, 0.02, 0.21 Jupiter masses for the planets orbiting a 0.55 M⊙ star at 11.22, 29.67, 64.23 au). Our derived parameters are comparable to those estimated by others, except for the mass of the second planet. Our simulations also reproduce some narrower gaps seen in the ALMA image away from the orbits of the planets. The nature of these gaps is debated but, based on our simulations, we argue they could result from planet–disk interactions via mean-motion resonances, and need not contain planets. Our results suggest that a simple particle-only model can be used as a first step to understanding dynamical structures in gas disks, particularly those formed by planets, and determine some parameters of their hidden planets, serving as useful initial inputs to hydrodynamic models which are needed to investigate disk and planet properties more thoroughly.
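
    The mean-motion-resonance reading of the extra gaps can be checked with Kepler's third law: an exterior (j+1):j resonance with a planet of semi-major axis a_p sits at (a sketch of the standard relation):

    \[
      a_{\mathrm{res}} = a_p \left(\frac{j+1}{j}\right)^{2/3},
    \]

    so, for example, the exterior 2:1 resonance of the 29.67 au planet falls near 29.67 x 2^(2/3) ≈ 47 au.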

  10. Haplotype-Based Genome-Wide Prediction Models Exploit Local Epistatic Interactions Among Markers

    PubMed Central

    Jiang, Yong; Schmidt, Renate H.; Reif, Jochen C.

    2018-01-01

    Genome-wide prediction approaches represent versatile tools for the analysis and prediction of complex traits. Mostly they rely on marker-based information, but scenarios have been reported in which models capitalizing on closely-linked markers that were combined into haplotypes outperformed marker-based models. Detailed comparisons were undertaken to reveal under which circumstances haplotype-based genome-wide prediction models are superior to marker-based models. Specifically, it was of interest to analyze whether and how haplotype-based models may take local epistatic effects between markers into account. Assuming that populations consisted of fully homozygous individuals, a marker-based model in which local epistatic effects inside haplotype blocks were exploited (LEGBLUP) was linearly transformable into a haplotype-based model (HGBLUP). This theoretical derivation formally revealed that haplotype-based genome-wide prediction models capitalize on local epistatic effects among markers. Simulation studies corroborated this finding. Due to its computational efficiency the HGBLUP model promises to be an interesting tool for studies in which ultra-high-density SNP data sets are studied. Applying the HGBLUP model to empirical data sets revealed higher prediction accuracies than for marker-based models for both traits studied using a mouse panel. In contrast, only a small subset of the traits analyzed in crop populations showed such a benefit. Cases in which higher prediction accuracies are observed for HGBLUP than for marker-based models are expected to be of immediate relevance for breeders, due to the tight linkage a beneficial haplotype will be preserved for many generations. In this respect the inheritance of local epistatic effects very much resembles the one of additive effects. PMID:29549092
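
    A minimal sketch of the haplotype-based design step behind HGBLUP follows; the block boundaries, the one-hot coding of haplotype alleles, and the ridge-style shrinkage stand in for the paper's mixed-model machinery and are our assumptions.

    import numpy as np

    def haplotype_design(snps, block_size):
        """One-hot design over haplotype alleles per block, for an
        (individuals x markers) 0/1 matrix of fully homozygous genotypes."""
        n, m = snps.shape
        columns = []
        for start in range(0, m, block_size):
            block = snps[:, start:start + block_size]
            # each distinct local marker combination is one haplotype allele
            for allele in sorted({tuple(row) for row in block.tolist()}):
                columns.append((block == np.array(allele)).all(axis=1).astype(float))
        return np.column_stack(columns)

    def ridge_predict(X_train, y_train, X_test, lam=1.0):
        """Shrinkage estimate of haplotype effects (ridge regression)."""
        p = X_train.shape[1]
        beta = np.linalg.solve(X_train.T @ X_train + lam * np.eye(p),
                               X_train.T @ y_train)
        return X_test @ beta

    # toy usage: 6 homozygous individuals, 4 markers, blocks of 2 markers
    snps = np.array([[0, 0, 1, 1], [0, 0, 1, 0], [1, 1, 0, 0],
                     [1, 1, 0, 1], [0, 0, 1, 1], [1, 1, 0, 0]])
    X = haplotype_design(snps, block_size=2)
    y = np.array([1.0, 0.8, -1.0, -0.7, 1.1, -0.9])
    print(ridge_predict(X[:4], y[:4], X[4:]))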

  11. Haplotype-Based Genome-Wide Prediction Models Exploit Local Epistatic Interactions Among Markers.

    PubMed

    Jiang, Yong; Schmidt, Renate H; Reif, Jochen C

    2018-05-04

    Genome-wide prediction approaches represent versatile tools for the analysis and prediction of complex traits. Mostly they rely on marker-based information, but scenarios have been reported in which models capitalizing on closely-linked markers that were combined into haplotypes outperformed marker-based models. Detailed comparisons were undertaken to reveal under which circumstances haplotype-based genome-wide prediction models are superior to marker-based models. Specifically, it was of interest to analyze whether and how haplotype-based models may take local epistatic effects between markers into account. Assuming that populations consisted of fully homozygous individuals, a marker-based model in which local epistatic effects inside haplotype blocks were exploited (LEGBLUP) was linearly transformable into a haplotype-based model (HGBLUP). This theoretical derivation formally revealed that haplotype-based genome-wide prediction models capitalize on local epistatic effects among markers. Simulation studies corroborated this finding. Due to its computational efficiency the HGBLUP model promises to be an interesting tool for studies in which ultra-high-density SNP data sets are studied. Applying the HGBLUP model to empirical data sets revealed higher prediction accuracies than for marker-based models for both traits studied using a mouse panel. In contrast, only a small subset of the traits analyzed in crop populations showed such a benefit. Cases in which higher prediction accuracies are observed for HGBLUP than for marker-based models are expected to be of immediate relevance for breeders, due to the tight linkage a beneficial haplotype will be preserved for many generations. In this respect the inheritance of local epistatic effects very much resembles the one of additive effects. Copyright © 2018 Jiang et al.

  12. Computational prediction of hemolysis in a centrifugal ventricular assist device.

    PubMed

    Pinotti, M; Rosa, E S

    1995-03-01

    This paper describes the use of computational fluid dynamics (CFD) to predict numerically the hemolysis in centrifugal pumps. A numerical hydrodynamical model, based on the full Navier-Stokes equation, was used to obtain the flow in a vaneless centrifugal pump (of corotating disks type). After proper postprocessing, critical zones in the channel were identified by means of two-dimensional color-coded maps of %Hb release. Simulation of different conditions revealed that flow behavior at the entrance region of the channel is the main cause of blood trauma in such devices. A useful feature resulting from the CFD simulation is the visualization of critical flow zones that are impossible to determine experimentally with in vitro hemolysis tests.
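
    The %Hb release maps described above are typically built from a stress-exposure power law. A commonly used form is the Giersiepen-Wurzinger correlation; we quote it here as an assumption, since the paper's exact constants are not given in the abstract:

    \[
      \frac{\Delta Hb}{Hb}\,[\%] = C\,\tau^{\alpha}\,t^{\beta},
      \qquad
      C = 3.62\times10^{-5},\quad \alpha = 2.416,\quad \beta = 0.785,
    \]

    with scalar shear stress τ in Pa and exposure time t in s, evaluated along the computed flow paths.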

  13. Coexistence of Native and Denatured Phases in a Single Proteinlike Molecule

    NASA Astrophysics Data System (ADS)

    Du, Rose; Grosberg, Alexander Yu.; Tanaka, Toyoichi

    1999-11-01

    In order to understand the nuclei which develop during the course of protein folding and unfolding, we examine equilibrium coexistence of phases within a single heteropolymer chain. We computationally generate the phase segregation by applying a "folding pressure," or adding an energetic bonus for native monomer-monomer contacts. The computer models reveal that in a polymer system some nuclei hinder folding via topological constraints. Using this insight, we show that the critical nucleus size is of the order of the entire chain and that the unfolding time scales as exp(cN^(2/3)) in the large-N limit, N and c being the chain length and a constant, respectively.
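
    The exp(cN^(2/3)) scaling is what one expects from a surface-dominated free-energy barrier: if the critical nucleus is of the order of the whole chain, its interface scales like the surface of a compact globule of N monomers, whose radius goes as R ~ N^(1/3). Sketching the argument:

    \[
      \Delta F^{\ddagger} \sim \gamma R^{2} \sim \gamma N^{2/3}
      \quad\Longrightarrow\quad
      \tau_{\mathrm{unfold}} \sim \exp\!\left(\frac{\Delta F^{\ddagger}}{k_{B}T}\right)
      = \exp\!\left(c\,N^{2/3}\right),
    \]

    with γ an effective surface tension.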

  14. Two Dimensional Mechanism for Insect Hovering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jane Wang, Z.

    2000-09-04

    Resolved computation of two dimensional insect hovering shows for the first time that a two dimensional hovering motion can generate enough lift to support a typical insect weight. The computation reveals a two dimensional mechanism of creating a downward dipole jet of counterrotating vortices, which are formed from leading and trailing edge vortices. The vortex dynamics further elucidates the role of the phase relation between the wing translation and rotation in lift generation and explains why the instantaneous forces can reach a periodic state after only a few strokes. The model predicts the lower limits in Reynolds number and amplitude above which the averaged forces are sufficient. (c) 2000 The American Physical Society.

  15. Facilitating Preschoolers' Scientific Knowledge Construction via Computer Games Regarding Light and Shadow: The Effect of the Prediction-Observation-Explanation (POE) Strategy

    NASA Astrophysics Data System (ADS)

    Hsu, Chung-Yuan; Tsai, Chin-Chung; Liang, Jyh-Chong

    2011-10-01

    Educational researchers have suggested that computer games have a profound influence on students' motivation, knowledge construction, and learning performance, but little empirical research has targeted preschoolers. Thus, the purpose of the present study was to investigate the effects of implementing a computer game that integrates the prediction-observation-explanation (POE) strategy (White and Gunstone in Probing understanding. Routledge, New York, 1992) on facilitating preschoolers' acquisition of scientific concepts regarding light and shadow. The children's alternative conceptions were explored as well. Fifty participants were randomly assigned into either an experimental group that played a computer game integrating the POE model or a control group that played a non-POE computer game. By assessing the students' conceptual understanding through interviews, this study revealed that the students in the experimental group significantly outperformed their counterparts in the concepts regarding "shadow formation in daylight" and "shadow orientation." However, children in both groups, after playing the games, still expressed some alternative conceptions such as "Shadows always appear behind a person" and "Shadows should be on the same side as the sun."

  16. The role of alpha-rhythm states in perceptual learning: insights from experiments and computational models

    PubMed Central

    Sigala, Rodrigo; Haufe, Sebastian; Roy, Dipanjan; Dinse, Hubert R.; Ritter, Petra

    2014-01-01

    During the past two decades, growing evidence has indicated that brain oscillations in the alpha band (~10 Hz) not only reflect an “idle” state of cortical activity, but also take a more active role in the generation of complex cognitive functions. A recent study shows that more than 60% of the observed inter-subject variability in perceptual learning can be ascribed to ongoing alpha activity. This evidence indicates a significant role of alpha oscillations in perceptual learning and motivates exploration of the potential underlying mechanisms. It is therefore the purpose of this review to highlight existing evidence that ascribes a role to intrinsic alpha oscillations in shaping our ability to learn. In the review, we disentangle the alpha rhythm into different neural signatures that control information processing within individual functional building blocks of perceptual learning. We further highlight computational studies that shed light on potential mechanisms by which alpha oscillations may modulate information transfer and connectivity changes relevant for learning. To enable testing of those model-based hypotheses, we emphasize the need for multidisciplinary approaches combining assessment of behavior and multi-scale neuronal activity, active modulation of ongoing brain states, and computational modeling to reveal the mathematical principles of the complex neuronal interactions. In particular, we highlight the relevance of multi-scale modeling frameworks such as the one currently being developed by “The Virtual Brain” project. PMID:24772077

  17. Parallel tempering simulation of the three-dimensional Edwards-Anderson model with compact asynchronous multispin coding on GPU

    NASA Astrophysics Data System (ADS)

    Fang, Ye; Feng, Sheng; Tam, Ka-Ming; Yun, Zhifeng; Moreno, Juana; Ramanujam, J.; Jarrell, Mark

    2014-10-01

    Monte Carlo simulations of the Ising model play an important role in computational statistical physics, and they have revealed many properties of the model over the past few decades. However, the effect of frustration due to random disorder, in particular the possible spin glass phase, remains a crucial but poorly understood problem. One of the obstacles in the Monte Carlo simulation of random frustrated systems is their long relaxation time, which makes an efficient parallel implementation on state-of-the-art computation platforms highly desirable. The Graphics Processing Unit (GPU) is one such platform, providing an opportunity to significantly enhance computational performance and thus gain new insight into this problem. In this paper, we present optimization and tuning approaches for the CUDA implementation of the spin glass simulation on GPUs. We discuss the integration of various design alternatives, such as GPU kernel construction with minimal communication, memory tiling, and look-up tables. We present a binary data format, Compact Asynchronous Multispin Coding (CAMSC), which provides an additional 28.4% speedup compared with the traditionally used Asynchronous Multispin Coding (AMSC). Our overall design sustains a performance of 33.5 ps per spin-flip attempt for simulating the three-dimensional Edwards-Anderson model with parallel tempering, which significantly improves on existing GPU implementations.
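
    To make the multispin-coding idea concrete, here is a toy sketch, in Python rather than CUDA, of a one-dimensional ±J Ising chain in which 64 independent replicas share one machine word per site and a whole Metropolis sweep is carried out with bitwise operations. The paper's three-dimensional CAMSC layout and parallel-tempering machinery are far more involved, and giving each bit lane its own random bonds is a simplification made here for brevity.

```python
import math
import random

# Toy multispin coding for a 1D +/-J Ising chain: bit b of spins[i] holds the
# spin (0 -> -1, 1 -> +1) of replica b at site i, so one word updates 64 replicas.
L, W = 32, 64                      # chain length, replicas per word
MASK = (1 << W) - 1
rng = random.Random(1)

spins = [rng.getrandbits(W) for _ in range(L)]
jneg = [rng.getrandbits(W) for _ in range(L)]  # bit = 1: bond (i, i+1) antiferromagnetic

def random_mask(p):
    """Word whose bits are independently 1 with probability p."""
    m = 0
    for _ in range(W):
        m = (m << 1) | (rng.random() < p)
    return m

def metropolis_sweep(T):
    p4 = math.exp(-4.0 / T)        # acceptance probability for the dE = +4 case
    for i in range(L):
        left, right = (i - 1) % L, (i + 1) % L
        u_l = spins[i] ^ spins[left] ^ jneg[left]   # bit = 1: bond unsatisfied
        u_r = spins[i] ^ spins[right] ^ jneg[i]
        # Flipping a spin with k unsatisfied bonds gives dE = 4 - 4k, so accept
        # unconditionally when k >= 1 and with probability exp(-4/T) when k = 0.
        accept = (u_l | u_r) | random_mask(p4)
        spins[i] = (spins[i] ^ accept) & MASK       # flip all accepted lanes at once

for _ in range(100):
    metropolis_sweep(T=1.5)
```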

  18. Study of the counting efficiency of a WBC setup by using a computational 3D human body library in sitting position based on polygonal mesh surfaces.

    PubMed

    Fonseca, T C Ferreira; Bogaerts, R; Lebacq, A L; Mihailescu, C L; Vanhavere, F

    2014-04-01

    A realistic computational 3D human body library, called MaMP and FeMP (Male and Female Mesh Phantoms), based on polygonal mesh surface geometry, has been created to be used for numerical calibration of the whole body counter (WBC) system of the nuclear power plant (NPP) in Doel, Belgium. The main objective was to create flexible computational models varying in gender, body height, and mass for studying the morphology-induced variation of the detector counting efficiency (CE) and reducing the measurement uncertainties. First, the counting room and an HPGe detector were modeled using MCNPX (a Monte Carlo radiation transport code). The validation of the model was carried out for different sample-detector geometries with point sources and a physical phantom. Second, CE values were calculated for a total of 36 different mesh phantoms in a seated position using the validated Monte Carlo model. This paper reports on the validation process of the in vivo whole body system and the CE calculated for different body heights and weights. The results reveal that the CE is strongly dependent on the individual body shape, size, and gender and may vary by a factor of 1.5 to 3 depending on the morphological characteristics of the individual to be measured.

  19. Evaluating the role of coherent delocalized phonon-like modes in DNA cyclization

    DOE PAGES

    Alexandrov, Ludmil B.; Rasmussen, Kim Ø.; Bishop, Alan R.; ...

    2017-08-29

    The innate flexibility of a DNA sequence is quantified by the Jacobson-Stockmayer J-factor, which measures the propensity for DNA loop formation. Recent studies of ultra-short DNA sequences revealed a discrepancy of up to six orders of magnitude between experimentally measured and theoretically predicted J-factors. These large differences suggest that, in addition to the elastic moduli of the double helix, other factors contribute to loop formation. We develop a new theoretical model that explores how coherent delocalized phonon-like modes in DNA provide single-stranded "flexible hinges" that assist in loop formation. We also combine the Czapla-Swigon-Olson structural model of DNA with our extended Peyrard-Bishop-Dauxois model and, without changing any of the parameters of the two models, apply this new computational framework to 86 experimentally characterized DNA sequences. Our results demonstrate that the new computational framework can predict J-factors within an order of magnitude of experimental measurements for most ultra-short DNA sequences, while continuing to accurately describe the J-factors of longer sequences. Furthermore, we demonstrate that our computational framework can be used to describe the cyclization of DNA sequences that contain a base pair mismatch. Overall, our results support the conclusion that coherent delocalized phonon-like modes play an important role in DNA cyclization.

  20. Evaluating the role of coherent delocalized phonon-like modes in DNA cyclization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alexandrov, Ludmil B.; Rasmussen, Kim Ø.; Bishop, Alan R.

    The innate flexibility of a DNA sequence is quantified by the Jacobson-Stockmayer J-factor, which measures the propensity for DNA loop formation. Recent studies of ultra-short DNA sequences revealed a discrepancy of up to six orders of magnitude between experimentally measured and theoretically predicted J-factors. These large differences suggest that, in addition to the elastic moduli of the double helix, other factors contribute to loop formation. We develop a new theoretical model that explores how coherent delocalized phonon-like modes in DNA provide single-stranded "flexible hinges" that assist in loop formation. We also combine the Czapla-Swigon-Olson structural model of DNA with our extended Peyrard-Bishop-Dauxois model and, without changing any of the parameters of the two models, apply this new computational framework to 86 experimentally characterized DNA sequences. Our results demonstrate that the new computational framework can predict J-factors within an order of magnitude of experimental measurements for most ultra-short DNA sequences, while continuing to accurately describe the J-factors of longer sequences. Furthermore, we demonstrate that our computational framework can be used to describe the cyclization of DNA sequences that contain a base pair mismatch. Overall, our results support the conclusion that coherent delocalized phonon-like modes play an important role in DNA cyclization.

  1. Femoral articular shape and geometry. A three-dimensional computerized analysis of the knee.

    PubMed

    Siu, D; Rudan, J; Wevers, H W; Griffiths, P

    1996-02-01

    An average, three-dimensional anatomic shape and geometry of the distal femur were generated from x-ray computed tomography data of five fresh asymptomatic cadaver knees using AutoCAD (AutoDesk, Sausalito, CA), a computer-aided design and drafting software. Each femur model was graphically repositioned to a standardized orientation using a series of alignment templates and scaled to a nominal size of 85 mm in mediolateral and 73 mm in anteroposterior dimensions. An average generic shape of the distal femur was synthesized by combining these pseudosolid models and reslicing the composite structure at different elevations using clipping and smoothing techniques in interactive computer graphics. The resulting distal femoral geometry was imported into a computer-aided manufacturing system, and anatomic prototypes of the distal femur were produced. Quantitative geometric analyses of the generic femur in the coronal and transverse planes revealed definite condylar camber (3 degrees-6 degrees) and toe-in (8 degrees-10 degrees) with an oblique patellofemoral groove (15 degrees) with respect to the mechanical axis of the femur. In the sagittal plane, each condyle could be approximated by three concatenated circular arcs (anterior, distal, and posterior) with slope continuity and a single arc for the patellofemoral groove. The results of this study may have important implications in future femoral prosthesis design and clinical applications.

  2. Computational determination of the binding mode of α-conotoxin to nicotinic acetylcholine receptor

    NASA Astrophysics Data System (ADS)

    Tabassum, Nargis; Yu, Rilei; Jiang, Tao

    2016-12-01

    Conotoxins belong to the large families of disulfide-rich peptide toxins from cone snail venom and can act on a broad spectrum of ion channels and receptors. They are classified into different subtypes based on their targets. The α-conotoxins selectively inhibit the current of nicotinic acetylcholine receptors (nAChRs). Because of their unique selectivity towards distinct nAChR subtypes, α-conotoxins have become valuable tools in nAChR studies. In addition to the X-ray structures of α-conotoxins in complex with the acetylcholine-binding protein, a homolog of the nAChR ligand-binding domain, high-resolution crystal structures of the extracellular domains of the α1 and α9 subunits have also been obtained. Such structures not only revealed the details of the configuration of the nAChR, but also provided higher-sequence-identity templates for modeling the binding modes of α-conotoxins to the nAChR. This mini-review summarizes recent modeling studies aimed at determining the binding modes of α-conotoxins to the nAChR. As there are no crystal structures of the nAChR in complex with conotoxins, computational modeling combined with mutagenesis data is expected to reveal the molecular recognition mechanisms that govern the interactions between α-conotoxins and the nAChR at the molecular level. An accurate determination of the binding modes of α-conotoxins on nAChRs allows rational design of α-conotoxin analogues with improved potency or selectivity for nAChRs.

  3. Experimentally valid predictions of muscle force and EMG in models of motor-unit function are most sensitive to neural properties.

    PubMed

    Keenan, Kevin G; Valero-Cuevas, Francisco J

    2007-09-01

    Computational models of motor-unit populations are the objective implementations of the hypothesized mechanisms by which neural and muscle properties give rise to electromyograms (EMGs) and force. However, the variability and uncertainty of the parameters used in these models, and how they affect predictions, confound assessment of these hypothesized mechanisms. We perform a large-scale computational sensitivity analysis on the state-of-the-art computational model of surface EMG, force, and force variability by combining a comprehensive review of published experimental data with Monte Carlo simulations. To exhaustively explore model performance and robustness, we ran numerous simulations, each using a random set of values for nine commonly measured motor neuron and muscle parameters. Parameter values were sampled across their reported experimental ranges. Convergence after 439 simulations found that only 3 simulations met our two fitness criteria: approximating the well-established experimental relations for the scaling of EMG amplitude and force variability with mean force. An additional 424 simulations preferentially sampling the neighborhood of those 3 valid simulations converged to reveal 65 additional sets of parameter values for which the model predictions approximate the experimentally known relations. We find the model is not sensitive to muscle properties but very sensitive to several motor neuron properties, especially peak discharge rates and recruitment ranges. Therefore, to advance our understanding of EMG and muscle force, it is critical to evaluate the hypothesized neural mechanisms as implemented in today's state-of-the-art models of motor-unit function. We discuss experimental and analytical avenues to do so, as well as new features that may be added in future implementations of motor-unit models to improve their experimental validity.
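
    The sampling loop at the core of such an analysis is easy to sketch. Everything below (the parameter names, ranges, surrogate model, and fitness test) is a placeholder chosen for illustration, not the model or values used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters and ranges, standing in for the nine motor neuron
# and muscle parameters sampled across their reported experimental ranges.
PARAM_RANGES = {
    "peak_discharge_rate_hz": (20.0, 50.0),
    "recruitment_range_pct": (50.0, 90.0),
    "twitch_tetanus_ratio": (0.1, 0.3),
}

def run_model(p):
    # Stand-in surrogate: the real study simulates a motor-unit population's
    # surface EMG and force here.
    force = np.linspace(0.05, 1.0, 20)
    cv = 0.01 * p["twitch_tetanus_ratio"] * p["peak_discharge_rate_hz"] / force**0.5
    return force, cv

def meets_fitness_criteria(force, cv):
    # Stand-in for the two experimental scaling relations used as fitness tests.
    return cv.max() < 0.3 and np.all(np.diff(cv) < 0)

valid = []
for _ in range(439):   # the paper reports convergence after 439 simulations
    p = {k: rng.uniform(lo, hi) for k, (lo, hi) in PARAM_RANGES.items()}
    if meets_fitness_criteria(*run_model(p)):
        valid.append(p)
print(f"{len(valid)} parameter sets passed")
```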

  4. 1D Atmosphere Models from Inversion of Fe i 630 nm Observations with an Application to Solar Irradiance Studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cristaldi, Alice; Ermolli, Ilaria, E-mail: alice.cristaldi@oaroma.inaf.it

    Present-day semi-empirical models of solar irradiance (SI) variations reconstruct SI changes measured on timescales greater than a day by using spectra computed in one-dimensional atmosphere models (1D models), which are representative of various solar surface features. Various recent studies have pointed out, however, that the spectra synthesized in 1D models do not reflect the radiative emission of the inhomogeneous atmosphere revealed by high-resolution solar observations. We aimed to derive observation-based atmospheres from such observations and test their accuracy for SI estimates. We analyzed spectropolarimetric data of the Fe i 630 nm line pair in photospheric regions that are representative of the granular quiet-Sun pattern (QS) and of small- and large-scale magnetic features, both bright and dark with respect to the QS. The data were taken on 2011 August 6, with the CRisp Imaging Spectropolarimeter at the Swedish Solar Telescope, under excellent seeing conditions. We derived atmosphere models of the observed regions from data inversion with the SIR code. We studied the sensitivity of the results to spatial resolution and temporal evolution, and discuss the obtained atmospheres with respect to several 1D models. The atmospheres derived from our study agree well with most of the 1D models we compare our results with, both qualitatively and quantitatively (within 10%), except for pore regions. Spectral synthesis computations of the atmosphere obtained from the QS observations return an SI between 400 and 2400 nm that agrees, on average, within 2.2% with standard reference measurements, and within −0.14% with the SI computed on the QS atmosphere employed by the most advanced semi-empirical model of SI variations.

  5. Validation of a Solid Rocket Motor Internal Environment Model

    NASA Technical Reports Server (NTRS)

    Martin, Heath T.

    2017-01-01

    In a prior effort, a thermal/fluid model of the interior of Penn State University's laboratory-scale Insulation Test Motor (ITM) was constructed to predict both the convective and radiative heat transfer to the interior walls of the ITM with a minimum of empiricism. These predictions were then compared to values of total and radiative heat flux measured in a previous series of ITM test firings to assess the capabilities and shortcomings of the chosen modeling approach. Though the calculated fluxes reasonably agreed with those measured during testing, this exercise revealed means of improving the fidelity of the model: in the case of the thermal radiation, enabling direct comparison of the measured and calculated fluxes and, for the total heat flux, computing a value indicative of the average measured condition. By replacing the P1 approximation with the discrete ordinates (DO) model for the solution of the gray radiative transfer equation, the radiation intensity field in the optically thin region near the radiometer is accurately estimated, allowing the thermal radiation flux to be calculated on the heat-flux sensor itself and compared directly to the measured values. Though fully coupling the wall thermal response with the flow model was not attempted due to the excessive computational time required, a separate wall thermal response model was used to better estimate the average temperature of the graphite surfaces upstream of the heat flux gauges and improve the accuracy of both the total and radiative heat flux computations. The success of this modeling approach increases confidence in the ability of state-of-the-art thermal and fluid modeling to accurately predict SRM internal environments, offers corrections to older methods, and supplies a tool for further studies of the dynamics of SRM interiors.

  6. An oculomotor and computational study of a patient with diagonistic dyspraxia.

    PubMed

    Pouget, Pierre; Pradat-Diehl, Pascale; Rivaud-Péchoux, Sophie; Wattiez, Nicolas; Gaymard, Bertrand

    2011-04-01

    Diagonistic dyspraxia (DD) is a behavioural disorder encountered in split-brain subjects in which the left arm acts against the subject's will, deliberately counteracting what the right arm does. We report here an oculomotor and computational study of a patient with a long-lasting form of DD. A first series of oculomotor paradigms revealed marked and unprecedented saccade impairments. We used a computational model in order to provide information about the impaired decision-making process: the analysis of saccade latencies revealed that variations in decision times were explained by adjustments of the response criterion. This result, together with paradoxical impairments observed in additional oculomotor paradigms, allowed us to propose that this adjustment of the criterion level resulted from the co-existence of counteracting oculomotor programs, consistent with the existence of antagonist programs in homotopic cortical areas. In the intact brain, trans-hemispheric inhibition would allow suppression of these counter-programs. Depending on the topography of the disconnected areas, various motor and/or behavioural impairments would arise in split-brain subjects. In motor systems, such conflict would result in increased criteria for desired movement execution (oculomotor system) or in simultaneous execution of counteracting movements (skeletal motor system). At higher cognitive levels, it may result in conflict of intentions. Copyright © 2010 Elsevier Srl. All rights reserved.

  7. Irreversible electroporation of the pancreas is feasible and safe in a porcine survival model.

    PubMed

    Fritz, Stefan; Sommer, Christof M; Vollherbst, Dominik; Wachter, Miguel F; Longerich, Thomas; Sachsenmeier, Milena; Knapp, Jürgen; Radeleff, Boris A; Werner, Jens

    2015-07-01

    Use of thermal tumor ablation in the pancreatic parenchyma is limited because of the risk of pancreatitis, pancreatic fistula, or hemorrhage. This study aimed to evaluate the feasibility and safety of irreversible electroporation (IRE) in a porcine model. Ten pigs were divided into 2 study groups. In the first group, animals received IRE of the pancreatic tail and were killed after 60 minutes. In the second group, animals received IRE at the head of the pancreas and were followed up for 7 days. Clinical parameters, computed tomography imaging, laboratory results, and histology were obtained. All animals survived IRE ablation, and no cardiac adverse effects were noted. Sixty minutes after IRE, a hypodense lesion on computed tomography imaging indicated the ablation zone. None of the animals developed clinical signs of acute pancreatitis. Only small amounts of ascites fluid, with a transient increase in amylase and lipase levels, were observed, indicating that no pancreatic fistula occurred. This porcine model shows that IRE is feasible and safe in the pancreatic parenchyma. Computed tomography imaging reveals significant changes at 60 minutes after IRE and therefore might serve as an early indicator of therapeutic success. Clinical studies are needed to evaluate the efficacy of IRE in pancreatic cancer.

  8. Inference of dust opacities for the 1977 Martian great dust storms from Viking Lander 1 pressure data

    NASA Technical Reports Server (NTRS)

    Zurek, R. W.

    1981-01-01

    The tidal heating components for the dusty Martian atmosphere are computed based on dust optical parameters estimated from Viking Lander imaging data, and are used to compute the variation of the tidal surface pressure components at the Viking Lander sites as a function of season and the total vertical extinction optical depth of the atmosphere. An atmospheric tidal model is used which is based on the inviscid, hydrostatic primitive equations linearized about a motionless basic state whose temperature varies only with height, and the profiles of the tidal forcing components are computed using a delta-Eddington approximation to the radiative transfer equations. Comparison of the model results with the observed variations of surface pressure and overhead dust opacity at the Viking Lander 1 site reveals that the dust opacities and optical parameters derived from imaging data are roughly representative of the global dust haze necessary to reproduce the observed surface pressure amplitudes, with the exception of the model-inferred asymmetry parameter, which is smaller during the onset of a great storm. The observed preferential enhancement of the semidiurnal tide with respect to the diurnal tide during dust storm onset is shown to be due primarily to the elevation of the tidal heating source in a very dusty atmosphere.

  9. Comparative structural and computational analysis supports eighteen cellulose synthases in the plant cellulose synthesis complex

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nixon, B. Tracy; Mansouri, Katayoun; Singh, Abhishek

    A six-lobed, membrane-spanning cellulose synthesis complex (CSC) containing multiple cellulose synthase (CESA) glycosyltransferases mediates cellulose microfibril formation. The number of CESAs in the CSC has been debated for decades in light of changing estimates of the diameter of the smallest microfibril formed from the β-1,4 glucan chains synthesized by one CSC. We obtained more direct evidence by generating improved transmission electron microscopy (TEM) images and image averages of the rosette-type CSC, revealing the frequent triangularity and average cross-sectional area in the plasma membrane of its individual lobes. Trimeric oligomers of two alternative CESA computational models corresponded well with individual lobe geometry. A six-fold assembly of the trimeric computational oligomer had the lowest potential energy per monomer and was consistent with rosette CSC morphology. Negative stain TEM and image averaging showed the triangularity of a recombinant CESA cytosolic domain, consistent with previous modeling of its trimeric nature from small-angle X-ray scattering (SAXS) data. Six trimeric SAXS models nearly filled the space below an average FF-TEM image of the rosette CSC. In conclusion, the multifaceted data support a rosette CSC with 18 CESAs that mediates the synthesis of a fundamental microfibril composed of 18 glucan chains.

  10. Comparative structural and computational analysis supports eighteen cellulose synthases in the plant cellulose synthesis complex

    DOE PAGES

    Nixon, B. Tracy; Mansouri, Katayoun; Singh, Abhishek; ...

    2016-06-27

    A six-lobed, membrane-spanning cellulose synthesis complex (CSC) containing multiple cellulose synthase (CESA) glycosyltransferases mediates cellulose microfibril formation. The number of CESAs in the CSC has been debated for decades in light of changing estimates of the diameter of the smallest microfibril formed from the β-1,4 glucan chains synthesized by one CSC. We obtained more direct evidence by generating improved transmission electron microscopy (TEM) images and image averages of the rosette-type CSC, revealing the frequent triangularity and average cross-sectional area in the plasma membrane of its individual lobes. Trimeric oligomers of two alternative CESA computational models corresponded well with individual lobe geometry. A six-fold assembly of the trimeric computational oligomer had the lowest potential energy per monomer and was consistent with rosette CSC morphology. Negative stain TEM and image averaging showed the triangularity of a recombinant CESA cytosolic domain, consistent with previous modeling of its trimeric nature from small-angle X-ray scattering (SAXS) data. Six trimeric SAXS models nearly filled the space below an average FF-TEM image of the rosette CSC. In conclusion, the multifaceted data support a rosette CSC with 18 CESAs that mediates the synthesis of a fundamental microfibril composed of 18 glucan chains.

  11. Comparative Structural and Computational Analysis Supports Eighteen Cellulose Synthases in the Plant Cellulose Synthesis Complex

    PubMed Central

    Nixon, B. Tracy; Mansouri, Katayoun; Singh, Abhishek; Du, Juan; Davis, Jonathan K.; Lee, Jung-Goo; Slabaugh, Erin; Vandavasi, Venu Gopal; O’Neill, Hugh; Roberts, Eric M.; Roberts, Alison W.; Yingling, Yaroslava G.; Haigler, Candace H.

    2016-01-01

    A six-lobed membrane spanning cellulose synthesis complex (CSC) containing multiple cellulose synthase (CESA) glycosyltransferases mediates cellulose microfibril formation. The number of CESAs in the CSC has been debated for decades in light of changing estimates of the diameter of the smallest microfibril formed from the β-1,4 glucan chains synthesized by one CSC. We obtained more direct evidence through generating improved transmission electron microscopy (TEM) images and image averages of the rosette-type CSC, revealing the frequent triangularity and average cross-sectional area in the plasma membrane of its individual lobes. Trimeric oligomers of two alternative CESA computational models corresponded well with individual lobe geometry. A six-fold assembly of the trimeric computational oligomer had the lowest potential energy per monomer and was consistent with rosette CSC morphology. Negative stain TEM and image averaging showed the triangularity of a recombinant CESA cytosolic domain, consistent with previous modeling of its trimeric nature from small angle scattering (SAXS) data. Six trimeric SAXS models nearly filled the space below an average FF-TEM image of the rosette CSC. In summary, the multifaceted data support a rosette CSC with 18 CESAs that mediates the synthesis of a fundamental microfibril composed of 18 glucan chains. PMID:27345599

  12. Bayesian spatial transformation models with applications in neuroimaging data

    PubMed Central

    Miranda, Michelle F.; Zhu, Hongtu; Ibrahim, Joseph G.

    2013-01-01

    The aim of this paper is to develop a class of spatial transformation models (STM) to spatially model the varying association between imaging measures in a three-dimensional (3D) volume (or 2D surface) and a set of covariates. Our STMs include a varying Box-Cox transformation model for dealing with the issue of non-Gaussian distributed imaging data and a Gaussian Markov Random Field model for incorporating spatial smoothness of the imaging data. Posterior computation proceeds via an efficient Markov chain Monte Carlo algorithm. Simulations and real data analysis demonstrate that the STM significantly outperforms the voxel-wise linear model with Gaussian noise in recovering meaningful geometric patterns. Our STM is able to reveal important brain regions with morphological changes in children with attention deficit hyperactivity disorder. PMID:24128143
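
    The Box-Cox component is simple to state. The sketch below shows the transform and a per-voxel maximum-likelihood fit of its parameter with scipy on synthetic data; the spatially varying, GMRF-smoothed treatment of the parameter that the paper describes is omitted.

```python
import numpy as np
from scipy import stats

def box_cox(y, lam, eps=1e-8):
    """Box-Cox transform of positive data y with parameter lam."""
    return np.log(y) if abs(lam) < eps else (y**lam - 1.0) / lam

# One "voxel": skewed, positive intensities across subjects (synthetic data).
rng = np.random.default_rng(0)
y = rng.lognormal(mean=0.0, sigma=0.8, size=500)

y_gauss, lam_hat = stats.boxcox(y)   # maximum-likelihood lambda for this voxel
print(f"lambda ~ {lam_hat:.2f}")     # near 0 here, i.e. a log transform Gaussianizes y
```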

  13. A study of structural properties of gene network graphs for mathematical modeling of integrated mosaic gene networks.

    PubMed

    Petrovskaya, Olga V; Petrovskiy, Evgeny D; Lavrik, Inna N; Ivanisenko, Vladimir A

    2017-04-01

    Gene network modeling is one of the widely used approaches in systems biology. It allows for the study of the function of complex genetic systems, including so-called mosaic gene networks, which consist of functionally interacting subnetworks. We conducted a study of a method for modeling mosaic gene networks that is based on integrating models of gene subnetworks by linear control functionals. An automatic modeling of 10,000 synthetic mosaic gene regulatory networks was carried out using computer experiments on gene knockdowns/knockouts. Structural analysis of the graphs of the generated mosaic gene regulatory networks revealed that the most important factor for building accurate integrated mathematical models, among those analyzed in the study, is data on the expression of genes corresponding to vertices with high centrality.

  14. Application of a computational decision model to examine acute drug effects on human risk taking.

    PubMed

    Lane, Scott D; Yechiam, Eldad; Busemeyer, Jerome R

    2006-05-01

    In 3 previous experiments, high doses of alcohol, marijuana, and alprazolam acutely increased risky decision making by adult humans in a 2-choice (risky vs. nonrisky) laboratory task. In this study, a computational modeling analysis known as the expectancy valence model (J. R. Busemeyer & J. C. Stout, 2002) was applied to individual-participant data from these studies, for the highest administered dose of all 3 drugs and corresponding placebo doses, to determine changes in decision-making processes that may be uniquely engendered by each drug. The model includes 3 parameters: responsiveness to rewards and losses (valence or motivation); the rate of updating expectancies about the value of risky alternatives (learning/memory); and the consistency with which trial-by-trial choices match expected outcomes (sensitivity). Parameter estimates revealed 3 key outcomes: Alcohol increased responsiveness to risky rewards and decreased responsiveness to risky losses (motivation) but did not alter expectancy updating (learning/memory); both marijuana and alprazolam produced increases in risk taking that were related to learning/memory but not motivation; and alcohol and marijuana (but not alprazolam) produced more random response patterns that were less consistently related to expected outcomes on the 2 choices. No significant main effects of gender or dose by gender interactions were obtained, but 2 dose by gender interactions approached significance. These outcomes underscore the utility of using a computational modeling approach to deconstruct decision-making processes and thus better understand drug effects on risky decision making in humans.
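
    For concreteness, here is a minimal sketch of how the three-parameter expectancy valence model scores a sequence of observed choices, following the commonly published form of the model (attention weight w for losses versus rewards, updating rate a, consistency exponent c); the parameter values and toy trials below are placeholders.

```python
import numpy as np

def ev_model_probs(wins, losses, choices, w=0.3, a=0.2, c=1.0, n_options=2):
    """Model probability of each observed choice under the expectancy valence model.

    w: attention to losses vs. rewards (valence/motivation)
    a: expectancy updating rate (learning/memory)
    c: exponent of the trial-dependent choice consistency (sensitivity)
    """
    ev = np.zeros(n_options)                  # expectancies, one per option
    probs = []
    for t, (win, loss, j) in enumerate(zip(wins, losses, choices), start=1):
        theta = (t / 10.0) ** c               # choice consistency changes with trial
        e = np.exp(theta * (ev - ev.max()))   # softmax, numerically stable
        probs.append(e[j] / e.sum())
        valence = (1 - w) * win - w * abs(loss)
        ev[j] += a * (valence - ev[j])        # delta-rule update of the chosen option
    return np.array(probs)

# Toy usage: 5 trials of a two-option (risky vs. nonrisky) task.
p = ev_model_probs(wins=[10, 10, 2, 10, 2], losses=[0, -12, 0, -12, 0],
                   choices=[0, 0, 1, 0, 1])
print(p.round(3))
```

    In a fitting context, the logs of these probabilities are summed and maximized over (w, a, c) separately for each participant, which is how parameter estimates of the kind compared across drug conditions above are obtained.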

  15. Hybrid simplified spherical harmonics with diffusion equation for light propagation in tissues.

    PubMed

    Chen, Xueli; Sun, Fangfang; Yang, Defu; Ren, Shenghan; Zhang, Qian; Liang, Jimin

    2015-08-21

    To address the limitations of the simplified spherical harmonics approximation (SPN) and the diffusion equation (DE) in describing light propagation in tissues, a hybrid simplified spherical harmonics with diffusion equation (HSDE) based diffuse light transport model is proposed. In the HSDE model, the living body is first segmented into several major organs, and the organs are then divided into high-scattering tissues and other tissues. The DE and SPN are employed to describe light propagation in these two kinds of tissues, respectively, and the two are coupled through an established boundary coupling condition. The HSDE model makes full use of the advantages of the SPN and DE while avoiding their disadvantages, providing a good balance between accuracy and computation time. Using the finite element method, the HSDE is solved for the light flux density map on the body surface. The accuracy and efficiency of the HSDE are validated with simulations based on both regular geometries and a digital mouse model. The results reveal that comparable accuracy and much less computation time are achieved compared with the SPN model, as well as much better accuracy compared with the DE.

  16. A specialized face-processing model inspired by the organization of monkey face patches explains several face-specific phenomena observed in humans.

    PubMed

    Farzmahdi, Amirhossein; Rajaei, Karim; Ghodrati, Masoud; Ebrahimpour, Reza; Khaligh-Razavi, Seyed-Mahdi

    2016-04-26

    Converging reports indicate that face images are processed through specialized neural networks in the brain, i.e., face patches in monkeys and the fusiform face area (FFA) in humans. These studies were designed to find out how faces are processed in the visual system compared to other objects. Yet, the underlying mechanism of face processing has not been completely revealed. Here, we show that a hierarchical computational model, inspired by electrophysiological evidence on face processing in primates, is able to generate representational properties similar to those observed in monkey face patches (posterior, middle and anterior patches). Since the most important goal of sensory neuroscience is linking neural responses with behavioral outputs, we test whether the proposed model, which is designed to account for neural responses in monkey face patches, is also able to predict well-documented behavioral face phenomena observed in humans. We show that the proposed model reproduces several cognitive face effects, such as the composite face effect and the idea of canonical face views. Our model provides insights about the underlying computations that transfer visual information from posterior to anterior face patches.

  17. A specialized face-processing model inspired by the organization of monkey face patches explains several face-specific phenomena observed in humans

    PubMed Central

    Farzmahdi, Amirhossein; Rajaei, Karim; Ghodrati, Masoud; Ebrahimpour, Reza; Khaligh-Razavi, Seyed-Mahdi

    2016-01-01

    Converging reports indicate that face images are processed through specialized neural networks in the brain, i.e., face patches in monkeys and the fusiform face area (FFA) in humans. These studies were designed to find out how faces are processed in the visual system compared to other objects. Yet, the underlying mechanism of face processing has not been completely revealed. Here, we show that a hierarchical computational model, inspired by electrophysiological evidence on face processing in primates, is able to generate representational properties similar to those observed in monkey face patches (posterior, middle and anterior patches). Since the most important goal of sensory neuroscience is linking neural responses with behavioral outputs, we test whether the proposed model, which is designed to account for neural responses in monkey face patches, is also able to predict well-documented behavioral face phenomena observed in humans. We show that the proposed model reproduces several cognitive face effects, such as the composite face effect and the idea of canonical face views. Our model provides insights about the underlying computations that transfer visual information from posterior to anterior face patches. PMID:27113635

  18. Supporting BPMN choreography with system integration artefacts for enterprise process collaboration

    NASA Astrophysics Data System (ADS)

    Nie, Hongchao; Lu, Xudong; Duan, Huilong

    2014-07-01

    Business Process Model and Notation (BPMN) choreography modelling depicts externally visible message exchanges between collaborating processes of enterprise information systems. Implementation of a choreography relies on designing system integration solutions to realise message exchanges between independently developed systems. Enterprise integration patterns (EIPs) are widely accepted artefacts for designing integration solutions. If the choreography model represents coordination requirements between processes with behaviour mismatches, the integration designer needs to analyse the routing requirements and address them by manually designing EIP message routers. As collaboration scales and complexity increases, manual design becomes inefficient. Thus, the research problem of this paper is to explore a method to automatically identify routing requirements from a BPMN choreography model and to design routing in the integration solution accordingly. To achieve this goal, recurring behaviour-mismatch scenarios are analysed as patterns, and corresponding solutions are proposed as EIP routers. Using this method, a choreography model can be analysed by computer to identify occurrences of mismatch patterns, leading to corresponding router selection. A case study demonstrates that the proposed method enables computer-assisted integration design to implement choreography. A further experiment reveals that the method is effective in improving design quality and reducing time cost.

  19. NMR Crystallography of Enzyme Active Sites: Probing Chemically-Detailed, Three-Dimensional Structure in Tryptophan Synthase

    PubMed Central

    Dunn, Michael F.

    2013-01-01

    NMR crystallography – the synergistic combination of X-ray diffraction, solid-state NMR spectroscopy, and computational chemistry – offers unprecedented insight into three-dimensional, chemically-detailed structure. From its initial role in refining diffraction data of organic and inorganic solids, NMR crystallography is now being developed for application to active sites in biomolecules, where it reveals chemically-rich detail concerning the interactions between enzyme site residues and the reacting substrate that is not achievable when X-ray, NMR, or computational methodologies are applied in isolation. For example, typical X-ray crystal structures (1.5 to 2.5 Å resolution) of enzyme-bound intermediates identify possible hydrogen-bonding interactions between site residues and substrate, but do not directly identify the protonation state of either. Solid-state NMR can provide chemical shifts for selected atoms of enzyme-substrate complexes, but without a larger structural framework in which to interpret them, only empirical correlations with local chemical structure are possible. Ab initio calculations and molecular mechanics can build models for enzymatic processes, but rely on chemical details that must be specified. Together, however, X-ray diffraction, solid-state NMR spectroscopy, and computational chemistry can provide consistent and testable models for structure and function of enzyme active sites: X-ray crystallography provides a coarse framework upon which models of the active site can be developed using computational chemistry; these models can be distinguished by comparison of their calculated NMR chemical shifts with the results of solid-state NMR spectroscopy experiments. Conceptually, each technique is a puzzle piece offering a generous view of the big picture. Only when correctly pieced together, however, can they reveal the big picture at highest resolution. In this Account, we detail our first steps in the development of NMR crystallography for application to enzyme catalysis. We begin with a brief introduction to NMR crystallography and then define the process that we have employed to probe the active site in the β-subunit of tryptophan synthase with unprecedented atomic-level resolution. This approach has resulted in a novel structural hypothesis for the protonation state of the quinonoid intermediate in tryptophan synthase and its surprising role in directing the next step in the catalysis of L-Trp formation. PMID:23537227

  20. Computational prediction of formulation strategies for beyond-rule-of-5 compounds.

    PubMed

    Bergström, Christel A S; Charman, William N; Porter, Christopher J H

    2016-06-01

    The physicochemical properties of some contemporary drug candidates are moving towards higher molecular weight, and coincidentally also higher lipophilicity in the quest for biological selectivity and specificity. These physicochemical properties move the compounds towards beyond rule-of-5 (B-r-o-5) chemical space and often result in lower water solubility. For such B-r-o-5 compounds non-traditional delivery strategies (i.e. those other than conventional tablet and capsule formulations) typically are required to achieve adequate exposure after oral administration. In this review, we present the current status of computational tools for prediction of intestinal drug absorption, models for prediction of the most suitable formulation strategies for B-r-o-5 compounds and models to obtain an enhanced understanding of the interplay between drug, formulation and physiological environment. In silico models are able to identify the likely molecular basis for low solubility in physiologically relevant fluids such as gastric and intestinal fluids. With this baseline information, a formulation scientist can, at an early stage, evaluate different orally administered, enabling formulation strategies. Recent computational models have emerged that predict glass-forming ability and crystallisation tendency and therefore the potential utility of amorphous solid dispersion formulations. Further, computational models of loading capacity in lipids, and therefore the potential for formulation as a lipid-based formulation, are now available. Whilst such tools are useful for rapid identification of suitable formulation strategies, they do not reveal drug localisation and molecular interaction patterns between drug and excipients. For the latter, Molecular Dynamics simulations provide an insight into the interplay between drug, formulation and intestinal fluid. These different computational approaches are reviewed. Additionally, we analyse the molecular requirements of different targets, since these can provide an early signal that enabling formulation strategies will be required. Based on the analysis we conclude that computational biopharmaceutical profiling can be used to identify where non-conventional gateways, such as prediction of 'formulate-ability' during lead optimisation and early development stages, are important and may ultimately increase the number of orally tractable contemporary targets. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.

  1. A neurally plausible parallel distributed processing model of event-related potential word reading data.

    PubMed

    Laszlo, Sarah; Plaut, David C

    2012-03-01

    The Parallel Distributed Processing (PDP) framework has significant potential for producing models of cognitive tasks that approximate how the brain performs the same tasks. To date, however, there has been relatively little contact between PDP modeling and data from cognitive neuroscience. In an attempt to advance the relationship between explicit, computational models and physiological data collected during the performance of cognitive tasks, we developed a PDP model of visual word recognition which simulates key results from the ERP reading literature, while simultaneously being able to successfully perform lexical decision, a benchmark task for reading models. Simulations reveal that the model's success depends on the implementation of several neurally plausible features in its architecture, which are sufficiently domain-general to be relevant to cognitive modeling more generally. Copyright © 2011 Elsevier Inc. All rights reserved.

  2. Computational Aerodynamic Simulations of a 1484 ft/sec Tip Speed Quiet High-Speed Fan System Model for Acoustic Methods Assessment and Development

    NASA Technical Reports Server (NTRS)

    Tweedt, Daniel L.

    2014-01-01

    Computational aerodynamic simulations of a 1484 ft/sec tip speed quiet high-speed fan system were performed at five different operating points on the fan operating line in order to provide detailed internal flow field information for use with fan acoustic prediction methods presently being developed, assessed, and validated. The fan system is a sub-scale, low-noise research fan/nacelle model that has undergone experimental testing in the 9- by 15-foot Low Speed Wind Tunnel at the NASA Glenn Research Center. Details of the fan geometry, the computational fluid dynamics methods, the computational grids, and various computational parameters relevant to the numerical simulations are discussed. Flow field results for three of the five operating points simulated are presented in order to provide a representative look at the computed solutions. Each of the five fan aerodynamic simulations involved the entire fan system, which includes a core duct and a bypass duct that merge upstream of the fan system nozzle. As a result, only fan rotational speed and the system bypass ratio, set by means of a translating nozzle plug, were adjusted in order to set the fan operating point, leading to operating points that lie on a fan operating line and making mass flow rate a fully dependent parameter. The resulting mass flow rates are in good agreement with measured values. Computed blade row flow fields at all fan operating points are, in general, aerodynamically healthy. Rotor blade and fan exit guide vane flow characteristics are good, including incidence and deviation angles, chordwise static pressure distributions, blade surface boundary layers, secondary flow structures, and blade wakes. Examination of the computed flow fields reveals no excessive or critical boundary layer separations or related secondary-flow problems, with the exception of the hub boundary layer at the core duct entrance, where a significant flow separation is present. The region of local flow recirculation extends through a mixing plane, however, and the particular mixing-plane model used is now known to exaggerate the recirculation. In any case, the flow separation has relatively little impact on the computed rotor and FEGV flow fields.

  3. A signaling visualization toolkit to support rational design of combination therapies and biomarker discovery: SiViT.

    PubMed

    Bown, James L; Shovman, Mark; Robertson, Paul; Boiko, Andrei; Goltsov, Alexey; Mullen, Peter; Harrison, David J

    2017-05-02

    Targeted cancer therapy aims to disrupt aberrant cellular signalling pathways. Biomarkers are surrogates of pathway state, but there is limited success in translating candidate biomarkers to clinical practice due to the intrinsic complexity of pathway networks. Systems biology approaches afford better understanding of complex, dynamical interactions in signalling pathways targeted by anticancer drugs. However, adoption of dynamical modelling by clinicians and biologists is impeded by model inaccessibility. Drawing on computer games technology, we present a novel visualization toolkit, SiViT, that converts systems biology models of cancer cell signalling into interactive simulations that can be used without specialist computational expertise. SiViT allows clinicians and biologists to directly introduce for example loss of function mutations and specific inhibitors. SiViT animates the effects of these introductions on pathway dynamics, suggesting further experiments and assessing candidate biomarker effectiveness. In a systems biology model of Her2 signalling we experimentally validated predictions using SiViT, revealing the dynamics of biomarkers of drug resistance and highlighting the role of pathway crosstalk. No model is ever complete: the iteration of real data and simulation facilitates continued evolution of more accurate, useful models. SiViT will make accessible libraries of models to support preclinical research, combinatorial strategy design and biomarker discovery.

  4. Neural mechanisms underlying auditory feedback control of speech

    PubMed Central

    Reilly, Kevin J.; Guenther, Frank H.

    2013-01-01

    The neural substrates underlying auditory feedback control of speech were investigated using a combination of functional magnetic resonance imaging (fMRI) and computational modeling. Neural responses were measured while subjects spoke monosyllabic words under two conditions: (i) normal auditory feedback of their speech, and (ii) auditory feedback in which the first formant frequency of their speech was unexpectedly shifted in real time. Acoustic measurements showed compensation to the shift within approximately 135 ms of onset. Neuroimaging revealed increased activity in bilateral superior temporal cortex during shifted feedback, indicative of neurons coding mismatches between expected and actual auditory signals, as well as right prefrontal and Rolandic cortical activity. Structural equation modeling revealed increased influence of bilateral auditory cortical areas on right frontal areas during shifted speech, indicating that projections from auditory error cells in posterior superior temporal cortex to motor correction cells in right frontal cortex mediate auditory feedback control of speech. PMID:18035557

  5. The computational modeling of supercritical carbon dioxide flow in solid wood material

    NASA Astrophysics Data System (ADS)

    Gething, Brad Allen

    The use of supercritical carbon dioxide (SC CO2) as a solvent to deliver chemicals to porous media has shown promise in various industries. Recently, efforts have been made by the wood-treating industry to use SC CO2 as a replacement for more traditional methods of chemical preservative delivery. Previous studies have shown that the SC CO2 pressure treatment process is capable of impregnating solid wood materials with chemical preservatives, but concentration gradients of preservative often develop during treatment. Widespread application of the treatment process is unlikely unless the treatment inconsistencies can be improved for greater overall treating homogeneity. The development of a computational flow model to accurately predict the internal pressure of CO2 during treatment is integral to a more consistent treatment process. While similar models that attempt to describe the flow process have been proposed by Ward (1989) and Sahle-Demessie (1994), neither has been evaluated for accuracy. The present study was an evaluation of those models. More specifically, the present study evaluated the performance of a computational flow model based on the viscous flow of compressible CO2 as a single phase through a porous medium at the macroscopic scale; a standard form of the governing equation is sketched below. Flow model performance was evaluated through comparisons between predicted pressures and the corresponding internal pressure development measured with inserted sensor probes during treatment of specimens. Pressure measurements were made using a technique developed by Schneider (2000), which utilizes epoxy-sealed stainless steel tubes that are inserted into the wood as pressure probes. Two different wood species were investigated as treating specimens, Douglas-fir and shortleaf pine. Evaluation of the computational flow model revealed that it is sensitive to input parameters that relate to both processing conditions and material properties, particularly treating temperature and wood permeability, respectively. This sensitivity requires that the input parameters, principally permeability, be relatively accurate in order to evaluate the appropriateness of the phenomenological relationships of the computational flow model. Given this stipulation, it was observed that below the region of transition from CO2 gas to supercritical fluid, the computational flow model has the potential to predict the flow accurately. However, above the transition region, the model does not fully account for the physics of the flow process, resulting in prediction inaccuracy. One potential cause for the loss of prediction accuracy in the supercritical region was attributed to a dynamic change in permeability, likely caused by an interaction between the flowing SC CO2 and the wood material. Furthermore, a hysteresis was observed between the pressurization and depressurization stages of treatment, which cannot be explained by the current flow model. If greater accuracy in the computational flow model is desired, a more complex approach is necessary, which would include non-constant input parameters of temperature and permeability. Furthermore, the implications of a multi-scale methodology for the flow model were explored from a qualitative standpoint.
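
    The abstract does not spell out the governing equations; a standard macroscopic statement consistent with its description (single-phase viscous flow of compressible CO2 through a porous medium) combines mass conservation with Darcy's law, and makes clear why constant parameters break down near the critical point.

```latex
% Mass conservation plus Darcy's law for single-phase compressible flow through
% wood of porosity \phi and permeability K (a standard form; our gloss, not
% necessarily the exact formulation evaluated in the study):
\frac{\partial (\phi \rho)}{\partial t}
  = \nabla \cdot \left( \frac{\rho K}{\mu}\, \nabla P \right),
\qquad \rho = \rho(P, T).
% Near and above the critical point, \rho(P,T) and \mu(P,T) vary strongly, and
% the results above suggest K itself changes dynamically during treatment.
```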

  6. Computed Tomography Perfusion, Magnetic Resonance Imaging, and Histopathological Findings After Laparoscopic Renal Cryoablation: An In Vivo Pig Model.

    PubMed

    Nielsen, Tommy Kjærgaard; Østraat, Øyvind; Graumann, Ole; Pedersen, Bodil Ginnerup; Andersen, Gratien; Høyer, Søren; Borre, Michael

    2017-08-01

    The present study investigates how computed tomography perfusion scans and magnetic resonance imaging correlate with the histopathological alterations in renal tissue after cryoablation. A total of 15 pigs were subjected to laparoscopic-assisted cryoablation on both kidneys. After the intervention, each animal was randomized to a postoperative follow-up period of 1, 2, or 4 weeks, after which computed tomography perfusion and magnetic resonance imaging scans were performed. Immediately after imaging, open bilateral nephrectomy was performed, allowing for histopathological examination of the cryolesions. On computed tomography perfusion and magnetic resonance imaging examinations, rim enhancement was observed in the transition zone of the cryolesion 1 week after laparoscopic-assisted cryoablation. This rim enhancement was found to subside after 2 and 4 weeks of follow-up, which was consistent with the microscopic examinations revealing fibrotic scar tissue formation in the peripheral zone of the cryolesion. On T2 magnetic resonance imaging sequences, a thin hypointense rim surrounded the cryolesion, separating it from the adjacent renal parenchyma. Microscopic examinations revealed hemorrhage, and later hemosiderin, located in the peripheral zone. No nodular or diffuse contrast enhancement was found in the central zone of the cryolesions at any follow-up stage on either computed tomography perfusion or magnetic resonance imaging. On microscopic examination, the central zone was found to consist of coagulative necrosis 1 week after laparoscopic-assisted cryoablation, which was partially replaced by fibrotic scar tissue 4 weeks following laparoscopic-assisted cryoablation. Both computed tomography perfusion and magnetic resonance imaging found the renal collecting system to be involved at all 3 stages of follow-up, but on microscopic examination, the urothelium was found to be intact in all cases. In conclusion, cryoablation effectively destroyed renal parenchyma, leaving the urothelium intact. Both computed tomography perfusion and magnetic resonance imaging reflect the microscopic findings, but with some differences, especially regarding the peripheral zone. Magnetic resonance imaging seems an attractive modality for early postoperative follow-up.

  7. Volume of the steady-state space of financial flows in a monetary stock-flow-consistent model

    NASA Astrophysics Data System (ADS)

    Hazan, Aurélien

    2017-05-01

    We show that a steady-state stock-flow consistent macro-economic model can be represented as a Constraint Satisfaction Problem (CSP). The set of solutions is a polytope, whose volume depends on the constraints applied and reveals the potential fragility of the economic circuit, with no need to study the dynamics. Several exact and approximate methods to compute the volume are compared, inspired by operations research methods and the analysis of metabolic networks. We also introduce a random transaction matrix, and study the particular case of linear flows with respect to money stocks.
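
    As a toy illustration of the volume computation, the snippet below estimates the volume of a two-dimensional polytope {x : Ax <= b, 0 <= x <= ub} by rejection sampling from a bounding box. The constraint matrix is hypothetical, and the exact and hit-and-run methods compared in the paper scale far better with dimension.

```python
import numpy as np

rng = np.random.default_rng(0)

A = np.array([[1.0, 1.0],      # hypothetical flow constraints: x0 + x1 <= 1.5
              [-1.0, 2.0]])    #                                -x0 + 2*x1 <= 1
b = np.array([1.5, 1.0])
ub = np.array([1.0, 1.0])      # box 0 <= x <= ub encloses the polytope

n = 200_000
x = rng.uniform(0.0, ub, size=(n, 2))          # uniform samples in the box
inside = np.all(x @ A.T <= b, axis=1)          # which samples satisfy Ax <= b
vol_box = np.prod(ub)
vol_est = vol_box * inside.mean()              # box volume times hit fraction
err = vol_box * inside.std(ddof=1) / np.sqrt(n)   # Monte Carlo standard error
print(f"volume ~ {vol_est:.4f} +/- {err:.4f}")
```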

  8. Preliminary description of the area navigation software for a microcomputer-based Loran-C receiver

    NASA Technical Reports Server (NTRS)

    Oguri, F.

    1983-01-01

    The development of new navigation software, and its implementation on a microcomputer (MOS 6502) to provide high-quality navigation information, is described. The software derives Area/Route Navigation (RNAV) information from Time Differences (TDs) in raw form using both an elliptical Earth model and a spherical model, and is prepared for the microcomputer-based Loran-C receiver. To compute navigation information, the MOS 6502 microcomputer and a mathematical chip (AM 9511A) were combined with the Loran-C receiver. Final data reveal that this software does indeed provide accurate information with reasonable execution times.

  9. Paradox of integration-A computational model

    NASA Astrophysics Data System (ADS)

    Krawczyk, Małgorzata J.; Kułakowski, Krzysztof

    2017-02-01

    The paradoxical aspect of the integration of a social group was highlighted by Blau (1964). During the integration process, the group members simultaneously compete for social status and play the role of the audience. Here we show that when the competition prevails over the desire for approval, a sharp transition breaks all friendly relations. However, as described by Blau, people with high status are inclined to bother more with the acceptance of others; this is achieved by praising others and revealing their own weak points. In our model, this action smooths the transition and improves interpersonal relations.

  10. Letting the ‘cat’ out of the bag: pouch young development of the extinct Tasmanian tiger revealed by X-ray computed tomography

    PubMed Central

    Spoutil, Frantisek; Prochazka, Jan; Black, Jay R.; Medlock, Kathryn; Paddle, Robert N.; Knitlova, Marketa; Hipsley, Christy A.

    2018-01-01

    The Tasmanian tiger or thylacine (Thylacinus cynocephalus) was an iconic Australian marsupial predator that was hunted to extinction in the early 1900s. Despite sharing striking similarities with canids, they failed to evolve many of the specialized anatomical features that characterize carnivorous placental mammals. These evolutionary limitations are thought to arise from functional constraints associated with the marsupial mode of reproduction, in which otherwise highly altricial young use their well-developed forelimbs to climb to the pouch and mouth to suckle. Here we present the first three-dimensional digital developmental series of the thylacine throughout its pouch life using X-ray computed tomography on all known ethanol-preserved specimens. Based on detailed skeletal measurements, we refine the species growth curve to improve age estimates for the individuals. Comparison of allometric growth trends in the appendicular skeleton (fore- and hindlimbs) with those of other placental and marsupial mammals revealed that despite their unique adult morphologies, thylacines retained a generalized early marsupial ontogeny. Our approach also revealed mislabelled specimens that possessed large epipubic bones (vestigial in the thylacine) and differing vertebral numbers. All of our generated CT models are publicly available, preserving their developmental morphology and providing a novel digital resource for future studies of this unique marsupial. PMID:29515893

  11. Assessing Coupled Social Ecological Flood Vulnerability from Uttarakhand, India, to the State of New York with Google Earth Engine

    NASA Astrophysics Data System (ADS)

    Tellman, B.; Schwarz, B.

    2014-12-01

    This talk describes the development of a web application to predict and communicate vulnerability to floods using publicly available data, disaster science, and geotech cloud capabilities. The proof of concept in the Google Earth Engine API, with initial testing on case studies in New York and Uttarakhand, India, demonstrates the potential of highly parallelized cloud computing to model socio-ecological disaster vulnerability at high spatial and temporal resolution and in near real time. Cloud computing facilitates statistical modeling with variables derived from large public social and ecological data sets, including census data, nighttime lights (NTL), and WorldPop to derive social parameters, together with elevation, satellite imagery, rainfall, and observed flood data from the Dartmouth Flood Observatory to derive biophysical parameters. While more traditional, physically based hydrological models that rely on flow algorithms and numerical methods are currently unavailable in parallelized computing platforms like Google Earth Engine, there is high potential to explore "data driven" modeling that trades physics for statistics in a parallelized environment. A data-driven approach to flood modeling with geographically weighted logistic regression has been initially tested on Hurricane Irene in southeastern New York. Comparison of model results with observed flood data reveals a 97% accuracy of the model in predicting flooded pixels. Testing on multiple storms is required to further validate this initially promising approach. A statistical social-ecological flood model that could produce rapid vulnerability assessments, predicting who might require immediate evacuation and where, could serve as an early warning. This type of early warning system would be especially relevant in data-poor places lacking the computing power, high-resolution data such as LiDAR and stream gauges, or hydrologic expertise to run physically based models in real time. As the data-driven model presented relies on globally available data, the only real-time data input required would be typical data from a weather service, e.g. precipitation or coarse-resolution flood prediction. However, model uncertainty will vary locally depending upon the resolution and frequency of observed flood and socio-economic damage impact data.
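
    A minimal sketch of what geographically weighted logistic regression can look like: one logistic model is fitted per query location, with training pixels weighted by a Gaussian kernel on distance. The feature set and all numbers below are synthetic stand-ins, not the talk's actual predictors:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Synthetic "pixels": coordinates, features, and a flooded/not-flooded label.
n = 2000
coords = rng.uniform(0.0, 100.0, size=(n, 2))    # pixel locations (km)
X = rng.normal(size=(n, 3))                      # e.g. elevation, rain, NTL
flooded = (X @ np.array([-1.5, 2.0, 0.5]) + rng.normal(size=n) > 0).astype(int)

def local_flood_model(site, bandwidth=25.0):
    """Fit a logistic model at one location, weighting nearby pixels more."""
    d2 = ((coords - site) ** 2).sum(axis=1)
    w = np.exp(-d2 / (2.0 * bandwidth**2))       # spatial kernel weights
    return LogisticRegression().fit(X, flooded, sample_weight=w)

model = local_flood_model(np.array([50.0, 50.0]))
print("local coefficients:", model.coef_.round(2))
```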

  12. Accuracy of RGD approximation for computing light scattering properties of diffusing and motile bacteria. [Rayleigh-Gans-Debye

    NASA Technical Reports Server (NTRS)

    Kottarchyk, M.; Chen, S.-H.; Asano, S.

    1979-01-01

    The study tests the accuracy of the Rayleigh-Gans-Debye (RGD) approximation against a rigorous scattering theory calculation for a simplified model of E. coli (about 1 micron in size) - a solid spheroid. A general procedure is formulated whereby the scattered field amplitude correlation function, for both polarized and depolarized contributions, can be computed for a collection of particles. An explicit formula is presented for the scattered intensity, both polarized and depolarized, for a collection of randomly diffusing or moving particles. Two specific cases for the intermediate scattering functions are considered: diffusing particles and freely moving particles with a Maxwellian speed distribution. The formalism is applied to microorganisms suspended in a liquid medium. Sensitivity studies revealed that for values of the relative index of refraction greater than 1.03, RGD could be in serious error in computing the intensity as well as correlation functions.
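
    For a sense of the quantity being approximated, the sketch below evaluates the RGD scattered intensity for the simplest comparable scatterer, a homogeneous sphere, where the form factor has a closed form; the paper's spheroid calculation generalizes this. The radius and refractive indices are illustrative:

```python
import numpy as np

def rgd_sphere_intensity(q, radius, m):
    """Polarized scattered intensity (arbitrary units) of a homogeneous
    sphere in the Rayleigh-Gans-Debye approximation: I ~ |m-1|^2 V^2 P(q)."""
    x = q * radius
    P = (3.0 * (np.sin(x) - x * np.cos(x)) / x**3) ** 2   # sphere form factor
    V = (4.0 / 3.0) * np.pi * radius**3
    return np.abs(m - 1.0) ** 2 * V**2 * P

q = np.linspace(1e-3, 30.0, 400)       # scattering vector magnitude (1/um)
for m in (1.02, 1.05):                 # relative refractive index
    I = rgd_sphere_intensity(q, radius=0.5, m=m)   # ~1 um cell => R = 0.5 um
    print(f"m = {m}: forward intensity {I[0]:.3e}")
```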

  13. Contextual modulation of value signals in reward and punishment learning

    PubMed Central

    Palminteri, Stefano; Khamassi, Mehdi; Joffily, Mateus; Coricelli, Giorgio

    2015-01-01

    Compared with reward seeking, punishment avoidance learning is less clearly understood at both the computational and neurobiological levels. Here we demonstrate, using computational modelling and fMRI in humans, that learning option values in a relative—context-dependent—scale offers a simple computational solution for avoidance learning. The context (or state) value sets the reference point to which an outcome should be compared before updating the option value. Consequently, in contexts with an overall negative expected value, successful punishment avoidance acquires a positive value, thus reinforcing the response. As revealed by post-learning assessment of option values, contextual influences are enhanced when subjects are informed about the result of the forgone alternative (counterfactual information). This is mirrored at the neural level by a shift in negative outcome encoding from the anterior insula to the ventral striatum, suggesting that value contextualization also limits the need to mobilize an opponent punishment learning system. PMID:26302782
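
    A minimal sketch of the paper's central idea, learning option values relative to a context value, in a simple bandit setting; the learning rate, outcome magnitudes, and choice rule are illustrative, not the paper's fitted model:

```python
import numpy as np

rng = np.random.default_rng(2)

# Punishment context: outcomes are 0 (avoided) or -1 (punished). Because
# option values are updated relative to the context value V, successful
# avoidance acquires a positive relative value.
alpha = 0.1                      # learning rate
Q = np.zeros(2)                  # option values in a punishment context
V = 0.0                          # context value (the reference point)

for t in range(1000):
    choice = int(np.argmax(Q + rng.normal(0.0, 0.1, 2)))   # noisy greedy
    p_avoid = 0.8 if choice == 0 else 0.2    # option 0 avoids punishment more
    outcome = 0.0 if rng.random() < p_avoid else -1.0
    V += alpha * (outcome - V)                        # update reference point
    Q[choice] += alpha * ((outcome - V) - Q[choice])  # relative value update

print("context value:", round(V, 2), "option values:", Q.round(2))
```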

  14. GPU accelerated implementation of NCI calculations using promolecular density.

    PubMed

    Rubez, Gaëtan; Etancelin, Jean-Matthieu; Vigouroux, Xavier; Krajecki, Michael; Boisson, Jean-Charles; Hénon, Eric

    2017-05-30

    The NCI approach is a modern tool to reveal chemical noncovalent interactions. It is particularly attractive for describing ligand-protein binding. A custom implementation of NCI using promolecular density is presented. It is designed to leverage the computational power of NVIDIA graphics processing unit (GPU) accelerators through the CUDA programming model. The code performances of three versions are examined on a test set of 144 systems. NCI calculations are particularly well suited to the GPU architecture, which drastically reduces the computational time. On a single compute node, the dual-GPU version leads to a 39-fold improvement for the biggest instance compared to the optimal OpenMP parallel run (C code, icc compiler) with 16 CPU cores. Energy consumption measurements carried out on both CPU and GPU NCI tests show that the GPU approach provides substantial energy savings. © 2017 Wiley Periodicals, Inc.
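
    The quantity NCI maps is the reduced density gradient computed from a promolecular (sum-of-atoms) density. A CPU-bound numpy sketch of that computation on a small grid; real promolecular densities use fitted multi-shell exponentials, so the single-exponential atoms, zeta, and geometry here are illustrative:

```python
import numpy as np

# Reduced density gradient, the quantity NCI maps:
#     s(r) = |grad rho| / ( 2 (3 pi^2)^(1/3) rho^(4/3) )
atoms = np.array([[0.0, 0.0, -0.7], [0.0, 0.0, 0.7]])   # positions (bohr)
zeta = 1.0

axis = np.linspace(-3.0, 3.0, 64)
h = axis[1] - axis[0]
X, Y, Z = np.meshgrid(axis, axis, axis, indexing="ij")

# Promolecular density: one exponential per atom
rho = np.zeros_like(X)
for a in atoms:
    r = np.sqrt((X - a[0]) ** 2 + (Y - a[1]) ** 2 + (Z - a[2]) ** 2)
    rho += np.exp(-r / zeta)

gx, gy, gz = np.gradient(rho, h)
s = np.sqrt(gx**2 + gy**2 + gz**2) / (
    2.0 * (3.0 * np.pi**2) ** (1.0 / 3.0) * rho ** (4.0 / 3.0))
print("minimum reduced density gradient:", float(s.min()))
# low-s regions between the atoms are the noncovalent interaction signature
```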

  15. Contextual modulation of value signals in reward and punishment learning.

    PubMed

    Palminteri, Stefano; Khamassi, Mehdi; Joffily, Mateus; Coricelli, Giorgio

    2015-08-25

    Compared with reward seeking, punishment avoidance learning is less clearly understood at both the computational and neurobiological levels. Here we demonstrate, using computational modelling and fMRI in humans, that learning option values in a relative (context-dependent) scale offers a simple computational solution for avoidance learning. The context (or state) value sets the reference point to which an outcome should be compared before updating the option value. Consequently, in contexts with an overall negative expected value, successful punishment avoidance acquires a positive value, thus reinforcing the response. As revealed by post-learning assessment of option values, contextual influences are enhanced when subjects are informed about the result of the forgone alternative (counterfactual information). This is mirrored at the neural level by a shift in negative outcome encoding from the anterior insula to the ventral striatum, suggesting that value contextualization also limits the need to mobilize an opponent punishment learning system.

  16. Ab Initio Structural Modeling of and Experimental Validation for Chlamydia trachomatis Protein CT296 Reveal Structural Similarity to Fe(II) 2-Oxoglutarate-Dependent Enzymes

    PubMed Central

    Kemege, Kyle E.; Hickey, John M.; Lovell, Scott; Battaile, Kevin P.; Zhang, Yang; Hefty, P. Scott

    2011-01-01

    Chlamydia trachomatis is a medically important pathogen that encodes a relatively high percentage of proteins with unknown function. The three-dimensional structure of a protein can be very informative regarding the protein's functional characteristics; however, determining protein structures experimentally can be very challenging. Computational methods that model protein structures with sufficient accuracy to facilitate functional studies have had notable successes. To evaluate the accuracy and potential impact of computational protein structure modeling of hypothetical proteins encoded by Chlamydia, a successful computational method termed I-TASSER was utilized to model the three-dimensional structure of a hypothetical protein encoded by open reading frame (ORF) CT296. CT296 has been reported to exhibit functional properties of a divalent cation transcription repressor (DcrA), with similarity to the Escherichia coli iron-responsive transcriptional repressor, Fur. Unexpectedly, the I-TASSER model of CT296 exhibited no structural similarity to any DNA-interacting proteins or motifs. To validate the I-TASSER-generated model, the structure of CT296 was solved experimentally using X-ray crystallography. Impressively, the ab initio I-TASSER-generated model closely matched (2.72-Å Cα root mean square deviation [RMSD]) the high-resolution (1.8-Å) crystal structure of CT296. Modeled and experimentally determined structures of CT296 share structural characteristics of non-heme Fe(II) 2-oxoglutarate-dependent enzymes, although key enzymatic residues are not conserved, suggesting a unique biochemical process is likely associated with CT296 function. Additionally, functional analyses did not support prior reports that CT296 has properties shared with divalent cation repressors such as Fur. PMID:21965559
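
    The 2.72-Å Cα RMSD quoted above is computed after optimal superposition of the two structures. A small sketch of that calculation (the Kabsch algorithm) on synthetic coordinates:

```python
import numpy as np

def kabsch_rmsd(P, Q):
    """Calpha RMSD after optimal superposition (Kabsch algorithm).
    P, Q: (n_atoms, 3) coordinate arrays with matching atom order."""
    P = P - P.mean(axis=0)
    Q = Q - Q.mean(axis=0)
    U, _, Vt = np.linalg.svd(P.T @ Q)
    d = np.sign(np.linalg.det(U @ Vt))          # guard against reflections
    R = U @ np.diag([1.0, 1.0, d]) @ Vt         # optimal rotation for P
    return float(np.sqrt(((P @ R - Q) ** 2).sum() / len(P)))

# Self-check: a rotated copy of random "Calpha" coordinates gives RMSD ~ 0
rng = np.random.default_rng(3)
P = rng.normal(size=(50, 3))
t = 0.7
Rz = np.array([[np.cos(t), -np.sin(t), 0.0],
               [np.sin(t),  np.cos(t), 0.0],
               [0.0, 0.0, 1.0]])
print(kabsch_rmsd(P, P @ Rz.T))   # ~1e-15
```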

  17. Ab initio structural modeling of and experimental validation for Chlamydia trachomatis protein CT296 reveal structural similarity to Fe(II) 2-oxoglutarate-dependent enzymes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kemege, Kyle E.; Hickey, John M.; Lovell, Scott

    2012-02-13

    Chlamydia trachomatis is a medically important pathogen that encodes a relatively high percentage of proteins with unknown function. The three-dimensional structure of a protein can be very informative regarding the protein's functional characteristics; however, determining protein structures experimentally can be very challenging. Computational methods that model protein structures with sufficient accuracy to facilitate functional studies have had notable successes. To evaluate the accuracy and potential impact of computational protein structure modeling of hypothetical proteins encoded by Chlamydia, a successful computational method termed I-TASSER was utilized to model the three-dimensional structure of a hypothetical protein encoded by open reading frame (ORF) CT296. CT296 has been reported to exhibit functional properties of a divalent cation transcription repressor (DcrA), with similarity to the Escherichia coli iron-responsive transcriptional repressor, Fur. Unexpectedly, the I-TASSER model of CT296 exhibited no structural similarity to any DNA-interacting proteins or motifs. To validate the I-TASSER-generated model, the structure of CT296 was solved experimentally using X-ray crystallography. Impressively, the ab initio I-TASSER-generated model closely matched (2.72-Å Cα root mean square deviation [RMSD]) the high-resolution (1.8-Å) crystal structure of CT296. Modeled and experimentally determined structures of CT296 share structural characteristics of non-heme Fe(II) 2-oxoglutarate-dependent enzymes, although key enzymatic residues are not conserved, suggesting a unique biochemical process is likely associated with CT296 function. Additionally, functional analyses did not support prior reports that CT296 has properties shared with divalent cation repressors such as Fur.

  18. Biomes computed from simulated climatologies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Claussen, M.; Esch, M.

    1994-01-01

    The biome model of Prentice et al. is used to predict global patterns of potential natural plant formations, or biomes, from climatologies simulated by ECHAM, a model used for climate simulations at the Max-Planck-Institut fuer Meteorologie. This study was undertaken in order to show the advantage of this biome model in diagnosing the performance of a climate model and assessing effects of past and future climate changes predicted by a climate model. Good overall agreement is found between global patterns of biomes computed from observed and simulated data of present climate. But there are also major discrepancies, indicated by a difference in biomes in Australia, in the Kalahari Desert, and in the Middle West of North America. These discrepancies can be traced back to differences in simulated rainfall as well as summer or winter temperatures. Global patterns of biomes computed from an ice age simulation reveal that North America, Europe, and Siberia should have been covered largely by tundra and taiga, whereas only small differences are seen for the tropical rain forests. A potential northeast shift of biomes is expected from a simulation with enhanced CO2 concentration according to the IPCC Scenario A. Little change is seen in the tropical rain forest and the Sahara. Since the biome model used is not capable of predicting changes in vegetation patterns due to a rapid climate change, the latter simulation is to be taken as a prediction of changes in conditions favourable for the existence of certain biomes, not as a prediction of a future distribution of biomes. 15 refs., 8 figs., 2 tabs.

  19. Physical and numerical modeling of hydrophysical processes on the site of underwater pipelines

    NASA Astrophysics Data System (ADS)

    Garmakova, M. E.; Degtyarev, V. V.; Fedorova, N. N.; Shlychkov, V. A.

    2018-03-01

    The paper outlines issues related to ensuring the operational safety of underwater pipelines that are at risk of accidents. The research is based on physical and mathematical modeling of local bottom erosion in the area of pipeline location. The experimental studies were performed in the Hydraulics Laboratory of the Department of Hydraulic Engineering Construction, Safety and Ecology of NSUACE (Sibstrin). The physical experiments revealed that the intensity of bottom soil reformation depends on the depth to which the pipeline is embedded. The ANSYS software was used for numerical modeling, in which erosion of the sandy bottom beneath the pipeline was simulated. Computational results were compared at various mass flow rates.

  20. Equilibrium Propagation: Bridging the Gap between Energy-Based Models and Backpropagation

    PubMed Central

    Scellier, Benjamin; Bengio, Yoshua

    2017-01-01

    We introduce Equilibrium Propagation, a learning framework for energy-based models. It involves only one kind of neural computation, performed in both the first phase (when the prediction is made) and the second phase of training (after the target or prediction error is revealed). Although this algorithm computes the gradient of an objective function just like Backpropagation, it does not need a special computation or circuit for the second phase, where errors are implicitly propagated. Equilibrium Propagation shares similarities with Contrastive Hebbian Learning and Contrastive Divergence while solving the theoretical issues of both algorithms: our algorithm computes the gradient of a well-defined objective function. Because the objective function is defined in terms of local perturbations, the second phase of Equilibrium Propagation corresponds to only nudging the prediction (fixed point or stationary distribution) toward a configuration that reduces prediction error. In the case of a recurrent multi-layer supervised network, the output units are slightly nudged toward their target in the second phase, and the perturbation introduced at the output layer propagates backward in the hidden layers. We show that the signal “back-propagated” during this second phase corresponds to the propagation of error derivatives and encodes the gradient of the objective function, when the synaptic update corresponds to a standard form of spike-timing dependent plasticity. This work makes it more plausible that a mechanism similar to Backpropagation could be implemented by brains, since leaky integrator neural computation performs both inference and error back-propagation in our model. The only local difference between the two phases is whether synaptic changes are allowed or not. We also show experimentally that multi-layer recurrently connected networks with 1, 2, and 3 hidden layers can be trained by Equilibrium Propagation on the permutation-invariant MNIST task. PMID:28522969
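
    A toy sketch of the two-phase scheme described above, on a tiny continuous Hopfield-style network with symmetric weights: a free phase relaxes to a fixed point, a nudged phase weakly pulls the outputs toward their targets, and the weight update contrasts the two fixed points. The network size, task, and constants are illustrative, not the paper's MNIST setup:

```python
import numpy as np

rng = np.random.default_rng(4)

# rho is the hard sigmoid used in the paper; W is kept symmetric.
n_in, n_hid, n_out = 2, 8, 1
n = n_in + n_hid + n_out
W = rng.normal(0.0, 0.1, (n, n))
W = 0.5 * (W + W.T)
np.fill_diagonal(W, 0.0)
rho = lambda u: np.clip(u, 0.0, 1.0)
out = slice(n_in + n_hid, n)

def relax(u, x, target=None, beta=0.0, steps=150, eps=0.1):
    """Gradient descent on the energy; the nudged phase adds beta*(target-u)."""
    u[:n_in] = x
    for _ in range(steps):
        dr = ((u >= 0.0) & (u <= 1.0)).astype(float)      # rho'(u)
        du = dr * (W @ rho(u)) - u                        # du/dt = -dE/du
        if target is not None:
            du[out] += beta * (target - u[out])           # weak clamping
        u = u + eps * du
        u[:n_in] = x                                      # inputs stay clamped
    return u

data = [([0, 0], 0.0), ([0, 1], 0.0), ([1, 0], 0.0), ([1, 1], 1.0)]  # AND
beta, lr = 0.5, 0.1
for epoch in range(300):
    for x, y in data:
        x = np.array(x, float)
        u_free = relax(np.zeros(n), x)                    # first (free) phase
        u_nudged = relax(u_free.copy(), x, y, beta)       # second (nudged) phase
        r0, rb = rho(u_free), rho(u_nudged)
        W += lr / beta * (np.outer(rb, rb) - np.outer(r0, r0))  # EP update
        np.fill_diagonal(W, 0.0)

for x, y in data:
    u = relax(np.zeros(n), np.array(x, float))
    print(x, y, "->", float(u[out][0]))
```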

  1. Evolutionary versatility of eukaryotic protein domains revealed by their bigram networks

    PubMed Central

    2011-01-01

    Background: Protein domains are globular structures of independently folded polypeptides that exert catalytic or binding activities. Their sequences are recognized as evolutionary units that, through genome recombination, constitute protein repertoires of linkage patterns. Via mutations, domains acquire modified functions that contribute to the fitness of cells and organisms. Recent studies have addressed the evolutionary selection that may have shaped the functions of individual domains and the emergence of particular domain combinations, which led to new cellular functions in multi-cellular animals. This study focuses on modeling domain linkage globally and investigates evolutionary implications that may be revealed by novel computational analysis. Results: A survey of 77 completely sequenced eukaryotic genomes implies a potential hierarchical and modular organization of biological functions in most living organisms. Domains in a genome or multiple genomes are modeled as a network of hetero-duplex covalent linkages, termed bigrams. A novel computational technique is introduced to decompose such networks, whereby the notion of domain "networking versatility" is derived and measured. The most and least "versatile" domains (termed "core domains" and "peripheral domains" respectively) are examined both computationally via sequence conservation measures and experimentally using selected domains. Our study suggests that such a versatility measure extracted from the bigram networks correlates with the adaptivity of domains during evolution, where the network core domains are highly adaptive, significantly contrasting the network peripheral domains. Conclusions: Domain recombination has played a major part in the evolution of eukaryotes, contributing to genome complexity. From a system point of view, as the result of selection and constant refinement, networks of domain linkage are structured in a hierarchical modular fashion. Domains with a high degree of networking versatility appear to be evolutionarily adaptive, potentially through functional innovations. Domain bigram networks are informative as a model of biological functions. The networking versatility indices extracted from such networks for individual domains reflect the strength of evolutionary selection that the domains have experienced. PMID:21849086
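
    A tiny illustration of the bigram construction on made-up domain architectures; node degree is used here as a crude stand-in for the paper's versatility measure, which is derived from a network decomposition:

```python
import networkx as nx

# Each protein is a sequence of domains; adjacent (covalently linked)
# domain pairs become edges of the bigram network. Architectures below
# are invented for illustration.
proteins = [
    ["Kinase", "SH2", "SH3"],
    ["SH2", "SH3", "PH"],
    ["Kinase", "PH"],
    ["PDZ", "SH3"],
]

G = nx.Graph()
for domains in proteins:
    for a, b in zip(domains, domains[1:]):
        G.add_edge(a, b)

versatility = sorted(G.degree, key=lambda kv: -kv[1])
print(versatility)   # high-degree "core" vs low-degree "peripheral" domains
```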

  2. Evolutionary versatility of eukaryotic protein domains revealed by their bigram networks.

    PubMed

    Xie, Xueying; Jin, Jing; Mao, Yongyi

    2011-08-18

    Protein domains are globular structures of independently folded polypeptides that exert catalytic or binding activities. Their sequences are recognized as evolutionary units that, through genome recombination, constitute protein repertoires of linkage patterns. Via mutations, domains acquire modified functions that contribute to the fitness of cells and organisms. Recent studies have addressed the evolutionary selection that may have shaped the functions of individual domains and the emergence of particular domain combinations, which led to new cellular functions in multi-cellular animals. This study focuses on modeling domain linkage globally and investigates evolutionary implications that may be revealed by novel computational analysis. A survey of 77 completely sequenced eukaryotic genomes implies a potential hierarchical and modular organization of biological functions in most living organisms. Domains in a genome or multiple genomes are modeled as a network of hetero-duplex covalent linkages, termed bigrams. A novel computational technique is introduced to decompose such networks, whereby the notion of domain "networking versatility" is derived and measured. The most and least "versatile" domains (termed "core domains" and "peripheral domains" respectively) are examined both computationally via sequence conservation measures and experimentally using selected domains. Our study suggests that such a versatility measure extracted from the bigram networks correlates with the adaptivity of domains during evolution, where the network core domains are highly adaptive, significantly contrasting the network peripheral domains. Domain recombination has played a major part in the evolution of eukaryotes, contributing to genome complexity. From a system point of view, as the result of selection and constant refinement, networks of domain linkage are structured in a hierarchical modular fashion. Domains with a high degree of networking versatility appear to be evolutionarily adaptive, potentially through functional innovations. Domain bigram networks are informative as a model of biological functions. The networking versatility indices extracted from such networks for individual domains reflect the strength of evolutionary selection that the domains have experienced.

  3. The activation strain model and molecular orbital theory

    PubMed Central

    Wolters, Lando P; Bickelhaupt, F Matthias

    2015-01-01

    The activation strain model is a powerful tool for understanding reactivity, or inertness, of molecular species. This is done by relating the relative energy of a molecular complex along the reaction energy profile to the structural rigidity of the reactants and the strength of their mutual interactions: ΔE(ζ) = ΔEstrain(ζ) + ΔEint(ζ). We provide a detailed discussion of the model, and elaborate on its strong connection with molecular orbital theory. Using these approaches, a causal relationship is revealed between the properties of the reactants and their reactivity, e.g., reaction barriers and plausible reaction mechanisms. This methodology may reveal intriguing parallels between completely different types of chemical transformations. Thus, the activation strain model constitutes a unifying framework that furthers the development of cross-disciplinary concepts throughout various fields of chemistry. We illustrate the activation strain model in action with selected examples from literature. These examples demonstrate how the methodology is applied to different research questions, how results are interpreted, and how insights into one chemical phenomenon can lead to an improved understanding of another, seemingly completely different chemical process. WIREs Comput Mol Sci 2015, 5:324–343. doi: 10.1002/wcms.1221 PMID:26753009

  4. Dynamics of Compressible Convection and Thermochemical Mantle Convection

    NASA Astrophysics Data System (ADS)

    Liu, Xi

    The Earth's long-wavelength geoid anomalies have long been used to constrain the dynamics and viscosity structure of the mantle in an isochemical, whole-mantle convection model. However, there is strong evidence that the seismically observed large low shear velocity provinces (LLSVPs) in the lowermost mantle are chemically distinct and denser than the ambient mantle. In this thesis, I investigated how chemically distinct and dense piles influence the geoid. I formulated dynamically self-consistent 3D spherical convection models with realistic mantle viscosity structure which reproduce Earth's dominantly spherical harmonic degree-2 convection. The models revealed a compensation effect of the chemically dense LLSVPs. Next, I formulated instantaneous flow models based on seismic tomography to compute the geoid and constrain mantle viscosity assuming thermochemical convection with the compensation effect. Thermochemical models reconcile the geoid observations. The viscosity structure inverted for thermochemical models is nearly identical to that of whole-mantle models, and both prefer weak transition zone. Our results have implications for mineral physics, seismic tomographic studies, and mantle convection modelling. Another part of this thesis describes analyses of the influence of mantle compressibility on thermal convection in an isoviscous and compressible fluid with infinite Prandtl number. A new formulation of the propagator matrix method is implemented to compute the critical Rayleigh number and the corresponding eigenfunctions for compressible convection. Heat flux and thermal boundary layer properties are quantified in numerical models and scaling laws are developed.

  5. Integration of Local Observations into the One Dimensional Fog Model PAFOG

    NASA Astrophysics Data System (ADS)

    Thoma, Christina; Schneider, Werner; Masbou, Matthieu; Bott, Andreas

    2012-05-01

    The numerical prediction of fog requires a very high vertical resolution of the atmosphere. Owing to the prohibitive computational effort of high-resolution three-dimensional models, operational fog forecasting is usually done by means of one-dimensional fog models. An important condition for a successful fog forecast with one-dimensional models is the proper integration of observational data into the numerical simulations. The goal of the present study is to introduce new methods for the consideration of these data in the one-dimensional radiation fog model PAFOG. First, it will be shown how PAFOG may be initialized with observed visibilities. Second, a nudging scheme will be presented for the inclusion of measured temperature and humidity profiles in the PAFOG simulations. The new features of PAFOG have been tested by comparing the model results with observations of the German Meteorological Service. A case study will be presented that reveals the importance of including local observations in the model calculations. Numerical results obtained with the modified PAFOG model show a distinct improvement of fog forecasts regarding the times of fog formation and dissipation as well as the vertical extent of the investigated fog events. However, the results also reveal that a further improvement of PAFOG might be possible if several empirical model parameters are optimized. This tuning can only be realized by comprehensive comparisons of model simulations with corresponding fog observations.
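
    The nudging scheme mentioned above is, in essence, Newtonian relaxation of the model state toward observations. A minimal sketch; the relaxation time and profiles are illustrative, not PAFOG's actual configuration:

```python
import numpy as np

# Newtonian relaxation ("nudging") of a 1-D temperature profile toward
# an observed profile, added to the model's own physics tendencies.
tau = 3600.0                       # nudging time scale (s)
dt = 60.0                          # model time step (s)
z = np.linspace(0.0, 500.0, 51)    # model levels (m)
T_model = 285.0 - 0.0065 * z       # first-guess temperature profile (K)
T_obs = 284.0 - 0.0060 * z         # interpolated observations (K)

for _ in range(120):               # two hours of integration
    physics_tendency = 0.0         # stand-in for the model's own tendencies
    T_model += dt * (physics_tendency + (T_obs - T_model) / tau)

print("max departure from obs after 2 h: %.2f K" % np.abs(T_obs - T_model).max())
```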

  6. Computational Approaches for Revealing the Structure of Membrane Transporters: Case Study on Bilitranslocase.

    PubMed

    Venko, Katja; Roy Choudhury, A; Novič, Marjana

    2017-01-01

    The structural and functional details of transmembrane proteins are vastly underexplored, mostly due to experimental difficulties regarding their solubility and stability. Currently, the majority of transmembrane protein structures are still unknown, and this presents a huge experimental and computational challenge. Nowadays, thanks to X-ray crystallography and NMR spectroscopy, over 3000 structures of membrane proteins have been solved, among them only a few hundred unique ones. Due to the vast biological and pharmaceutical interest in elucidating the structure and functional mechanisms of transmembrane proteins, several computational methods have been developed to overcome the experimental gap. Combined with experimental data, this computational information enables rapid, low-cost, and successful predictions of the molecular structure of unsolved proteins. The reliability of the predictions depends on the availability and accuracy of the experimental data associated with the structural information. In this review, the following methods are proposed for in silico structure elucidation: sequence-based prediction of transmembrane regions, prediction of transmembrane helix-helix interactions, helix arrangement in membrane models, and testing of their stability with molecular dynamics simulations. We also demonstrate the use of the computational methods listed above by proposing a model for the molecular structure of the transmembrane protein bilitranslocase. Bilitranslocase is a bilirubin membrane transporter that shares similar tissue distribution and functional properties with some members of the Organic Anion Transporter family and is the only member classified in the Bilirubin Transporter Family. Given its unique properties, bilitranslocase is a potentially interesting drug target.
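
    A toy version of the first of those steps, sequence-based prediction of transmembrane regions, using a Kyte-Doolittle hydropathy sliding window. The window length and threshold are common conventions, and the sequence fragment is an arbitrary example, not bilitranslocase:

```python
# Kyte-Doolittle hydropathy values per residue
KD = {'A': 1.8, 'R': -4.5, 'N': -3.5, 'D': -3.5, 'C': 2.5, 'Q': -3.5,
      'E': -3.5, 'G': -0.4, 'H': -3.2, 'I': 4.5, 'L': 3.8, 'K': -3.9,
      'M': 1.9, 'F': 2.8, 'P': -1.6, 'S': -0.8, 'T': -0.7, 'W': -0.9,
      'Y': -1.3, 'V': 4.2}

def tm_segments(seq, window=19, threshold=1.6):
    """Return 1-based (start, end) of windows hydrophobic enough to be
    candidate transmembrane helices."""
    scores = [sum(KD[a] for a in seq[i:i + window]) / window
              for i in range(len(seq) - window + 1)]
    return [(i + 1, i + window) for i, s in enumerate(scores) if s > threshold]

seq = "MKTLLILAVLLAIVGILLLIGSKRRNDEEQSTAFILIVLVFGLLALWLIPFV"
print(tm_segments(seq))
```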

  7. Computational analysis of nonlinearities within dynamics of cable-based driving systems

    NASA Astrophysics Data System (ADS)

    Anghelache, G. D.; Nastac, S.

    2017-08-01

    This paper deals with the computational nonlinear dynamics of mechanical systems that contain flexural parts within the actuating scheme, treating in particular cable-based driving systems. Both functional nonlinearities and the real characteristic of the power supply were assumed, in order to obtain a realistic computer simulation model able to provide reliable results regarding the system dynamics. Transitory and steady regimes during a regular operating cycle were taken into account. The authors present the particular case of a lift system, taken as representative for the objective of this study. The simulations were based on values of the essential parameters acquired from experimental tests and/or regular practice in the field. The results analysis and the final discussion reveal correlated dynamic aspects of the mechanical parts, the driving system, and the power supply, all of which are potential sources of particular resonances within some transitory phases of the working cycle that can affect structural and functional dynamics. In addition, the influence of the computational hypotheses on both the quantitative and qualitative behaviour of the system is underlined. The most significant outcome of this theoretical and computational research is the development of a unified and feasible model, useful for characterizing nonlinear dynamic effects in systems with cable-based driving schemes, and thereby for helping to optimize the operating regime, including dynamic control measures.

  8. An Experimental Study of the Ground Transportation System (GTS) Model in the NASA Ames 7- by 10-Ft Wind Tunnel

    NASA Technical Reports Server (NTRS)

    Storms, Bruce L.; Ross, James C.; Heineck, James T.; Walker, Stephen M.; Driver, David M.; Zilliac, Gregory G.; Bencze, Daniel P. (Technical Monitor)

    2001-01-01

    The 1/8-scale Ground Transportation System (GTS) model was studied experimentally in the NASA Ames 7- by 10-Ft Wind Tunnel. Designed for validation of computational fluid dynamics (CFD), the GTS model has a simplified geometry with a cab-over-engine design and no tractor-trailer gap. As a further simplification, all measurements of the GTS model were made without wheels. Aerodynamic boattail plates were also tested on the rear of the trailer to provide a simple geometry modification for computation. The experimental measurements include body-axis drag, surface pressures, surface hot-film anemometry, oil-film interferometry, and 3-D particle image velocimetry (PIV). The wind-averaged drag coefficient with and without boattail plates was 0.225 and 0.277, respectively. PIV measurements behind the model reveal a significant reduction in the wake size due to the flow turning provided by the boattail plates. Hot-film measurements on the side of the cab indicate laminar separation with turbulent reattachment within 0.08 trailer width for zero and +/- 10 degrees yaw. Oil film interferometry provided quantitative measurements of skin friction and qualitative oil flow images. A complete set of the experimental data and the surface definition of the model are included on a CD-ROM for further analysis and comparison.

  9. Characteristic analysis-1981: Final program and a possible discovery

    USGS Publications Warehouse

    McCammon, R.B.; Botbol, J.M.; Sinding-Larsen, R.; Bowen, R.W.

    1983-01-01

    The newest version of the characteristic analysis (NCHARAN) computer program offers the exploration geologist a wide variety of options for integrating regionalized multivariate data. The options include the selection of regional cells for characterizing deposit models, the selection of variables that constitute the models, and the choice of logical combinations of variables that best represent these models. Moreover, the program provides for the display of results, which, in turn, makes possible review, reselection, and refinement of a model. Most important, performing the above-mentioned steps in an interactive computing mode can result in a timely and meaningful interpretation of the data available to the exploration geologist. The most recent application of characteristic analysis has resulted in the possible discovery of economic sulfide mineralization in the Grong area in central Norway. Exploration data for 27 geophysical, geological, and geochemical variables were used to construct a mineralized and a lithogeochemical model for an area that contained a known massive sulfide deposit. The models were applied to exploration data collected from the Gjersvik area in the Grong mining district and resulted in the identification of two localities of possible mineralization. Detailed field examination revealed the presence of a sulfide vein system and a partially inverted stratigraphic sequence indicating the possible presence of a massive sulfide deposit at depth. © 1983 Plenum Publishing Corporation.

  10. Combining Experiments and Simulations Using the Maximum Entropy Principle

    PubMed Central

    Boomsma, Wouter; Ferkinghoff-Borg, Jesper; Lindorff-Larsen, Kresten

    2014-01-01

    A key component of computational biology is to compare the results of computer modelling with experimental measurements. Despite substantial progress in the models and algorithms used in many areas of computational biology, such comparisons sometimes reveal that the computations are not in quantitative agreement with experimental data. The principle of maximum entropy is a general procedure for constructing probability distributions in the light of new data, making it a natural tool in cases when an initial model provides results that are at odds with experiments. The number of maximum entropy applications in our field has grown steadily in recent years, in areas as diverse as sequence analysis, structural modelling, and neurobiology. In this Perspectives article, we give a broad introduction to the method, in an attempt to encourage its further adoption. The general procedure is explained in the context of a simple example, after which we proceed with a real-world application in the field of molecular simulations, where the maximum entropy procedure has recently provided new insight. Given the limited accuracy of force fields, macromolecular simulations sometimes produce results that are not in complete quantitative accordance with experiments. A common solution to this problem is to explicitly ensure agreement between the two by perturbing the potential energy function towards the experimental data. So far, a general consensus for how such perturbations should be implemented has been lacking. Three very recent papers have explored this problem using the maximum entropy approach, providing both new theoretical and practical insights to the problem. We highlight each of these contributions in turn and conclude with a discussion on remaining challenges. PMID:24586124
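
    A minimal sketch of the maximum entropy reweighting discussed above: given per-frame values of an observable from a simulation, find the minimally perturbed frame weights (in the relative-entropy sense) whose average matches the experimental value. With a single restraint this reduces to solving for one Lagrange multiplier; all data below are synthetic:

```python
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(5)

obs = rng.normal(3.0, 1.0, size=5000)   # observable per simulation frame
target = 3.4                            # experimental average

def reweighted_mean(lmbda):
    """Maximum entropy weights are exponential in the observable."""
    w = np.exp(-lmbda * obs)
    w /= w.sum()
    return np.dot(w, obs)

# Solve <O>_w = target for the Lagrange multiplier
lmbda = brentq(lambda l: reweighted_mean(l) - target, -5.0, 5.0)
w = np.exp(-lmbda * obs)
w /= w.sum()
print("lambda = %.3f, reweighted mean = %.3f" % (lmbda, np.dot(w, obs)))
```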

  11. fMRI Analysis-by-Synthesis Reveals a Dorsal Hierarchy That Extracts Surface Slant.

    PubMed

    Ban, Hiroshi; Welchman, Andrew E

    2015-07-08

    The brain's skill in estimating the 3-D orientation of viewed surfaces supports a range of behaviors, from placing an object on a nearby table, to planning the best route when hill walking. This ability relies on integrating depth signals across extensive regions of space that exceed the receptive fields of early sensory neurons. Although hierarchical selection and pooling is central to understanding of the ventral visual pathway, the successive operations in the dorsal stream are poorly understood. Here we use computational modeling of human fMRI signals to probe the computations that extract 3-D surface orientation from binocular disparity. To understand how representations evolve across the hierarchy, we developed an inference approach using a series of generative models to explain the empirical fMRI data in different cortical areas. Specifically, we simulated the responses of candidate visual processing algorithms and tested how well they explained fMRI responses. Thereby we demonstrate a hierarchical refinement of visual representations moving from the representation of edges and figure-ground segmentation (V1, V2) to spatially extensive disparity gradients in V3A. We show that responses in V3A are little affected by low-level image covariates, and have a partial tolerance to the overall depth position. Finally, we show that responses in V3A parallel perceptual judgments of slant. This reveals a relatively short computational hierarchy that captures key information about the 3-D structure of nearby surfaces, and more generally demonstrates an analysis approach that may be of merit in a diverse range of brain imaging domains. Copyright © 2015 Ban and Welchman.

  12. Experimental and numerical characterisation of the elasto-plastic properties of bovine trabecular bone and a trabecular bone analogue.

    PubMed

    Kelly, Nicola; McGarry, J Patrick

    2012-05-01

    The inelastic pressure dependent compressive behaviour of bovine trabecular bone is investigated through experimental and computational analysis. Two loading configurations are implemented, uniaxial and confined compression, providing two distinct loading paths in the von Mises-pressure stress plane. Experimental results reveal distinctive yielding followed by a constant nominal stress plateau for both uniaxial and confined compression. Computational simulation of the experimental tests using the Drucker-Prager and Mohr-Coulomb plasticity models fails to capture the confined compression behaviour of trabecular bone. The high pressure developed during confined compression does not result in plastic deformation using these formulations, and a near elastic response is computed. In contrast, the crushable foam plasticity models provide accurate simulation of the confined compression tests, with distinctive yield and plateau behaviour being predicted. The elliptical yield surfaces of the crushable foam formulations in the von Mises-pressure stress plane accurately characterise the plastic behaviour of trabecular bone. Results reveal that the hydrostatic yield stress is equal to the uniaxial yield stress for trabecular bone, demonstrating the importance of accurate characterisation and simulation of the pressure dependent plasticity. It is also demonstrated in this study that a commercially available trabecular bone analogue material, cellular rigid polyurethane foam, exhibits similar pressure dependent yield behaviour, despite having a lower stiffness and strength than trabecular bone. This study provides a novel insight into the pressure dependent yield behaviour of trabecular bone, demonstrating the inadequacy of uniaxial testing alone. For the first time, crushable foam plasticity formulations are implemented for trabecular bone. The enhanced understanding of the inelastic behaviour of trabecular bone established in this study will allow for more realistic simulation of orthopaedic device implantation and failure. Copyright © 2011 Elsevier Ltd. All rights reserved.
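
    A sketch of the elliptical yield surface in the von Mises versus pressure plane that distinguishes the crushable foam formulations from pressure-independent plasticity. The ellipse below is constructed so that the hydrostatic and uniaxial yield stresses are equal, as the study reports for trabecular bone; the magnitude itself is illustrative:

```python
import numpy as np

sigma_y = 10.0                       # uniaxial = hydrostatic yield (MPa)
a = sigma_y                          # semi-axis along the pressure axis
# uniaxial compression sits at (p, q) = (sigma_y/3, sigma_y) on the surface:
b = sigma_y / np.sqrt(1.0 - (sigma_y / 3.0) ** 2 / a**2)

def yield_function(p, q):
    """f < 0: elastic; f = 0: yield surface reached; f > 0: outside."""
    return (q / b) ** 2 + (p / a) ** 2 - 1.0

print(yield_function(sigma_y / 3.0, sigma_y))   # uniaxial point: ~0
print(yield_function(sigma_y, 0.0))             # hydrostatic point: ~0
# high pressure at modest shear (confined compression) now yields too,
# which a pressure-independent von Mises cylinder would never predict:
print(yield_function(0.95 * sigma_y, 0.4 * sigma_y) > 0.0)   # True
```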

  13. Motion perception: behavior and neural substrate.

    PubMed

    Mather, George

    2011-05-01

    Visual motion perception is vital for survival. Single-unit recordings in primate primary visual cortex (V1) have revealed the existence of specialized motion sensing neurons; perceptual effects such as the motion after-effect demonstrate their importance for motion perception. Human psychophysical data on motion detection can be explained by a computational model of cortical motion sensors. Both psychophysical and physiological data reveal at least two classes of motion sensor capable of sensing motion in luminance-defined and texture-defined patterns, respectively. Psychophysical experiments also reveal that motion can be seen independently of motion sensor output, based on attentive tracking of visual features. Sensor outputs are inherently ambiguous, due to the problem of univariance in neural responses. In order to compute stimulus direction and speed, the visual system must compare the responses of many different sensors sensitive to different directions and speeds. Physiological data show that this computation occurs in the visual middle temporal (MT) area. Recent psychophysical studies indicate that information about spatial form may also play a role in motion computations. Adaptation studies show that the human visual system is selectively sensitive to large-scale optic flow patterns, and physiological studies indicate that cells in the middle superior temporal (MST) area derive this sensitivity from the combined responses of many MT cells. Extraretinal signals used to control eye movements are an important source of signals to cancel out the retinal motion responses generated by eye movements, though visual information also plays a role. A number of issues remain to be resolved at all levels of the motion-processing hierarchy. WIREs Cogn Sci 2011, 2, 305-314. DOI: 10.1002/wcs.110. Additional Supporting Information may be found at http://www.lifesci.sussex.ac.uk/home/George_Mather/Motion/index.html. Copyright © 2010 John Wiley & Sons, Ltd.

  14. Shuttle Debris Impact Tool Assessment Using the Modern Design of Experiments

    NASA Technical Reports Server (NTRS)

    DeLoach, R.; Rayos, E. M.; Campbell, C. H.; Rickman, S. L.

    2006-01-01

    Computational tools have been developed to estimate thermal and mechanical reentry loads experienced by the Space Shuttle Orbiter as the result of cavities in the Thermal Protection System (TPS). Such cavities can be caused by impact from ice or insulating foam debris shed from the External Tank (ET) on liftoff. The reentry loads depend on cavity geometry and certain Shuttle state variables, among other factors. Certain simplifying assumptions have been made in the tool development about the cavity geometry variables. For example, the cavities are all modeled as "shoeboxes", with rectangular cross-sections and planar walls. So an actual cavity is typically approximated with an idealized cavity described in terms of its length, width, and depth, as well as its entry angle, exit angle, and side angles (assumed to be the same for both sides). As part of a comprehensive assessment of the uncertainty in reentry loads estimated by the debris impact assessment tools, an effort has been initiated to quantify the component of the uncertainty that is due to imperfect geometry specifications for the debris impact cavities. The approach is to compute predicted loads for a set of geometry factor combinations sufficient to develop polynomial approximations to the complex, nonparametric underlying computational models. Such polynomial models are continuous and feature estimable, continuous derivatives, conditions that facilitate the propagation of independent variable errors. As an additional benefit, once the polynomial models have been developed, they require fewer computational resources to execute than the underlying finite element and computational fluid dynamics codes, and can generate reentry loads estimates in significantly less time. This provides a practical screening capability, in which a large number of debris impact cavities can be quickly classified either as harmless, or subject to additional analysis with the more comprehensive underlying computational tools. The polynomial models also provide useful insights into the sensitivity of reentry loads to various cavity geometry variables, and reveal complex interactions among those variables that indicate how the sensitivity of one variable depends on the level of one or more other variables. For example, the effect of cavity length on certain reentry loads depends on the depth of the cavity. Such interactions are clearly displayed in the polynomial response models.

  15. Color opponent receptive fields self-organize in a biophysical model of visual cortex via spike-timing dependent plasticity

    PubMed Central

    Eguchi, Akihiro; Neymotin, Samuel A.; Stringer, Simon M.

    2014-01-01

    Although many computational models have been proposed to explain orientation maps in primary visual cortex (V1), it is not yet known how similar clusters of color-selective neurons in macaque V1/V2 are connected and develop. In this work, we address the problem of understanding the cortical processing of color information and propose a possible mechanism for the development of the patchy distribution of color selectivity via computational modeling. Each color input is decomposed into a red, green, and blue representation and transmitted to the visual cortex via a simulated optic nerve in a luminance channel and red–green and blue–yellow opponent color channels. Our model of the early visual system consists of multiple topographically-arranged layers of excitatory and inhibitory neurons, with sparse intra-layer connectivity and feed-forward connectivity between layers. Layers are arranged based on anatomy of early visual pathways, and include a retina, lateral geniculate nucleus, and layered neocortex. Each neuron in the V1 output layer makes synaptic connections to neighboring neurons and receives the three types of signals in the different channels from the corresponding photoreceptor position. Synaptic weights are randomized and learned using spike-timing-dependent plasticity (STDP). After training with natural images, the neurons display heightened sensitivity to specific colors. Information-theoretic analysis reveals mutual information between particular stimuli and responses, and that the information reaches a maximum with fewer neurons in the higher layers, indicating that estimations of the input colors can be done using the output of fewer cells in the later stages of cortical processing. In addition, cells with similar color receptive fields form clusters. Analysis of spiking activity reveals increased firing synchrony between neurons when particular color inputs are presented or removed (ON-cell/OFF-cell). PMID:24659956
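
    A minimal sketch of the pair-based STDP rule of the kind used to learn the synaptic weights in such models; the amplitudes and time constants are generic textbook values, not the paper's parameters:

```python
import numpy as np

A_plus, A_minus = 0.01, 0.012      # potentiation / depression amplitudes
tau_plus, tau_minus = 20.0, 20.0   # time constants (ms)

def stdp_dw(t_pre, t_post):
    """Weight change for one pre/post spike pair."""
    dt = t_post - t_pre
    if dt >= 0:    # pre before post: potentiate
        return A_plus * np.exp(-dt / tau_plus)
    else:          # post before pre: depress
        return -A_minus * np.exp(dt / tau_minus)

w = 0.5
for t_pre, t_post in [(10.0, 15.0), (40.0, 38.0), (60.0, 61.0)]:
    w = np.clip(w + stdp_dw(t_pre, t_post), 0.0, 1.0)   # bounded weight
    print(f"pair ({t_pre}, {t_post}) ms -> w = {w:.4f}")
```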

  16. Computational models of the Posner simple and choice reaction time tasks

    PubMed Central

    Feher da Silva, Carolina; Baldo, Marcus V. C.

    2015-01-01

    The landmark experiments by Posner in the late 1970s have shown that reaction time (RT) is faster when the stimulus appears in an expected location, as indicated by a cue; since then, the so-called Posner task has been considered a “gold standard” test of spatial attention. It is thus fundamental to understand the neural mechanisms involved in performing it. To this end, we have developed a Bayesian detection system and small integrate-and-fire neural networks, which modeled sensory and motor circuits, respectively, and optimized them to perform the Posner task under different cue type proportions and noise levels. In doing so, main findings of experimental research on RT were replicated: the relative frequency effect, suboptimal RTs and significant error rates due to noise and invalid cues, slower RT for choice RT tasks than for simple RT tasks, fastest RTs for valid cues and slowest RTs for invalid cues. Analysis of the optimized systems revealed that the employed mechanisms were consistent with related findings in neurophysiology. Our models predict that (1) the results of a Posner task may be affected by the relative frequency of valid and neutral trials, (2) in simple RT tasks, input from multiple locations are added together to compose a stronger signal, and (3) the cue affects motor circuits more strongly in choice RT tasks than in simple RT tasks. In discussing the computational demands of the Posner task, attention has often been described as a filter that protects the nervous system, whose capacity is limited, from information overload. Our models, however, reveal that the main problems that must be overcome to perform the Posner task effectively are distinguishing signal from external noise and selecting the appropriate response in the presence of internal noise. PMID:26190997

  17. A Three-Dimensional Kinematic and Kinetic Study of the College-Level Female Softball Swing

    PubMed Central

    Milanovich, Monica; Nesbit, Steven M.

    2014-01-01

    This paper quantifies and discusses the three-dimensional kinematic and kinetic characteristics of the female softball swing as performed by fourteen female collegiate amateur subjects. The analyses were performed using a three-dimensional computer model. The model was driven kinematically from subject swings data that were recorded with a multi-camera motion analysis system. Each subject used two distinct bats with significantly different inertial properties. Model output included bat trajectories, subject/bat interaction forces and torques, work, and power. These data formed the basis for a detailed analysis and description of fundamental swing kinematic and kinetic quantities. The analyses revealed that the softball swing is a highly coordinated and individual three-dimensional motion and subject-to-subject variations were significant in all kinematic and kinetic quantities. In addition, the potential effects of bat properties on swing mechanics are discussed. The paths of the hands and the centre-of-curvature of the bat relative to the horizontal plane appear to be important trajectory characteristics of the swing. Descriptions of the swing mechanics and practical implications are offered based upon these findings. Key Points: The female softball swing is a highly coordinated and individual three-dimensional motion and subject-to-subject variations were significant in all kinematic and kinetic quantities. The paths of the grip point, bat centre-of-curvature, CG, and COP are complex yet reveal consistent patterns among subjects indicating that these patterns are fundamental components of the swing. The most important mechanical quantity relative to generating bat speed is the total work applied to the bat from the batter. Computer modeling of the softball swing is a viable means for study of the fundamental mechanics of the swing motion, the interactions between the batter and the bat, and the energy transfers between the two. PMID:24570623

  18. A three-dimensional kinematic and kinetic study of the college-level female softball swing.

    PubMed

    Milanovich, Monica; Nesbit, Steven M

    2014-01-01

    This paper quantifies and discusses the three-dimensional kinematic and kinetic characteristics of the female softball swing as performed by fourteen female collegiate amateur subjects. The analyses were performed using a three-dimensional computer model. The model was driven kinematically from subject swings data that were recorded with a multi-camera motion analysis system. Each subject used two distinct bats with significantly different inertial properties. Model output included bat trajectories, subject/bat interaction forces and torques, work, and power. These data formed the basis for a detailed analysis and description of fundamental swing kinematic and kinetic quantities. The analyses revealed that the softball swing is a highly coordinated and individual three-dimensional motion and subject-to-subject variations were significant in all kinematic and kinetic quantities. In addition, the potential effects of bat properties on swing mechanics are discussed. The paths of the hands and the centre-of-curvature of the bat relative to the horizontal plane appear to be important trajectory characteristics of the swing. Descriptions of the swing mechanics and practical implications are offered based upon these findings. Key Points: The female softball swing is a highly coordinated and individual three-dimensional motion and subject-to-subject variations were significant in all kinematic and kinetic quantities. The paths of the grip point, bat centre-of-curvature, CG, and COP are complex yet reveal consistent patterns among subjects indicating that these patterns are fundamental components of the swing. The most important mechanical quantity relative to generating bat speed is the total work applied to the bat from the batter. Computer modeling of the softball swing is a viable means for study of the fundamental mechanics of the swing motion, the interactions between the batter and the bat, and the energy transfers between the two.

  19. A computer model simulating human glucose absorption and metabolism in health and metabolic disease states

    PubMed Central

    Naftalin, Richard J.

    2016-01-01

    A computer model designed to simulate integrated glucose-dependent changes in splanchnic blood flow with small intestinal glucose absorption, hormonal and incretin circulation, and hepatic and systemic metabolism in health and in metabolic disease states, e.g. non-alcoholic fatty liver disease (NAFLD), non-alcoholic steatohepatitis (NASH) and type 2 diabetes mellitus (T2DM), demonstrates how, when glucagon-like peptide-1 (GLP-1) is synchronously released into the splanchnic blood during intestinal glucose absorption, it stimulates superior mesenteric arterial (SMA) blood flow and, by increasing passive intestinal glucose absorption, harmonizes absorption with its distribution and metabolism. GLP-1 also synergises insulin-dependent net hepatic glucose uptake (NHGU). When GLP-1 secretion is deficient, post-prandial SMA blood flow is not increased and, as NHGU is also reduced, hyperglycaemia follows. Portal venous glucose concentration is also raised, thereby retarding the passive component of intestinal glucose absorption. Increased pre-hepatic sinusoidal resistance combined with portal hypertension, leading to the opening of intrahepatic portosystemic collateral vessels, are NASH-related mechanical defects that alter the balance between splanchnic and systemic distributions of glucose, hormones and incretins. The model reveals the latent contribution of portosystemic shunting in the development of metabolic disease. This diverts splanchnic blood content away from the hepatic sinuses to the systemic circulation, particularly during the glucose absorptive phase of digestion, resulting in inappropriate increases in insulin-dependent systemic glucose metabolism. This hastens the onset of hypoglycaemia and thence hyperglucagonaemia. The model reveals that low rates of GLP-1 secretion, frequently associated with T2DM and NASH, may also be caused by splanchnic hypoglycaemia rather than by an intrinsic loss of incretin secretory capacity. These findings may have therapeutic implications for GLP-1 agonist or glucagon antagonist usage. PMID:27347379

  20. Switch of Sensitivity Dynamics Revealed with DyGloSA Toolbox for Dynamical Global Sensitivity Analysis as an Early Warning for System's Critical Transition

    PubMed Central

    Baumuratova, Tatiana; Dobre, Simona; Bastogne, Thierry; Sauter, Thomas

    2013-01-01

    Systems with bifurcations may experience abrupt, irreversible and often unwanted shifts in their performance, called critical transitions. For many systems, such as the climate, the economy and ecosystems, it is highly desirable to identify indicators serving as early warnings of such regime shifts. Several statistical measures were recently proposed as early warnings of critical transitions, including increased variance, autocorrelation and skewness of experimental or model-generated data. The lack of an automated tool for model-based prediction of critical transitions led to the design of DyGloSA, a MATLAB toolbox for dynamical global parameter sensitivity analysis (GPSA) of ordinary differential equation models. We suggest that the switch in dynamics of parameter sensitivities revealed by our toolbox is an early warning that a system is approaching a critical transition. We illustrate the efficiency of our toolbox by analyzing several models with bifurcations and predicting the time periods when systems can still avoid going to a critical transition by manipulating certain parameter values, which is not detectable with the existing SA techniques. DyGloSA is based on the SBToolbox2 and contains functions that dynamically compute the global sensitivity indices of the system by applying four main GPSA methods: eFAST, Sobol's ANOVA, PRCC and WALS. It includes parallelized versions of the functions, enabling a significant reduction in computational time (up to 12-fold). DyGloSA is freely available as a set of MATLAB scripts at http://bio.uni.lu/systems_biology/software/dyglosa. It requires installation of MATLAB (versions R2008b or later) and the Systems Biology Toolbox2 available at www.sbtoolbox2.org. DyGloSA can be run on Windows and Linux systems, both 32- and 64-bit. PMID:24367574
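
    DyGloSA itself is a MATLAB toolbox, but the underlying idea of dynamical GPSA, recomputing global sensitivity indices at successive time points and watching for a switch in the parameter ranking, can be illustrated in a few lines. The sketch below uses SALib and SciPy on a hypothetical one-state, two-parameter model; the model, bounds and sample size are placeholders for illustration, not anything taken from DyGloSA.

        # Dynamical global sensitivity analysis in the spirit of DyGloSA:
        # compute Sobol indices at successive time points of an ODE model and
        # watch for a switch in the parameter sensitivity ranking.
        import numpy as np
        from SALib.sample import saltelli
        from SALib.analyze import sobol
        from scipy.integrate import solve_ivp

        problem = {"num_vars": 2, "names": ["k1", "k2"],
                   "bounds": [[0.1, 2.0], [0.1, 2.0]]}
        params = saltelli.sample(problem, 256)      # N*(2D+2) parameter sets
        t_eval = np.linspace(0.0, 10.0, 21)

        def run(p):
            k1, k2 = p
            rhs = lambda t, y: [k1 * y[0] * (1.0 - y[0]) - k2 * y[0]]
            return solve_ivp(rhs, (0.0, 10.0), [0.05], t_eval=t_eval).y[0]

        Y = np.array([run(p) for p in params])      # shape: (samples, time points)
        for j, t in enumerate(t_eval):
            S = sobol.analyze(problem, Y[:, j])
            print(f"t={t:4.1f}  S1={np.round(S['S1'], 2)}")  # a ranking switch is the warning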

  1. Switch of sensitivity dynamics revealed with DyGloSA toolbox for dynamical global sensitivity analysis as an early warning for system's critical transition.

    PubMed

    Baumuratova, Tatiana; Dobre, Simona; Bastogne, Thierry; Sauter, Thomas

    2013-01-01

    Systems with bifurcations may experience abrupt, irreversible and often unwanted shifts in their performance, called critical transitions. For many systems, such as the climate, the economy and ecosystems, it is highly desirable to identify indicators serving as early warnings of such regime shifts. Several statistical measures were recently proposed as early warnings of critical transitions, including increased variance, autocorrelation and skewness of experimental or model-generated data. The lack of an automated tool for model-based prediction of critical transitions led to the design of DyGloSA, a MATLAB toolbox for dynamical global parameter sensitivity analysis (GPSA) of ordinary differential equation models. We suggest that the switch in dynamics of parameter sensitivities revealed by our toolbox is an early warning that a system is approaching a critical transition. We illustrate the efficiency of our toolbox by analyzing several models with bifurcations and predicting the time periods when systems can still avoid going to a critical transition by manipulating certain parameter values, which is not detectable with the existing SA techniques. DyGloSA is based on the SBToolbox2 and contains functions that dynamically compute the global sensitivity indices of the system by applying four main GPSA methods: eFAST, Sobol's ANOVA, PRCC and WALS. It includes parallelized versions of the functions, enabling a significant reduction in computational time (up to 12-fold). DyGloSA is freely available as a set of MATLAB scripts at http://bio.uni.lu/systems_biology/software/dyglosa. It requires installation of MATLAB (versions R2008b or later) and the Systems Biology Toolbox2 available at www.sbtoolbox2.org. DyGloSA can be run on Windows and Linux systems, both 32- and 64-bit.

  2. Inferring Regulatory Networks from Experimental Morphological Phenotypes: A Computational Method Reverse-Engineers Planarian Regeneration

    PubMed Central

    Lobo, Daniel; Levin, Michael

    2015-01-01

    Transformative applications in biomedicine require the discovery of complex regulatory networks that explain the development and regeneration of anatomical structures, and reveal what external signals will trigger desired changes of large-scale pattern. Despite recent advances in bioinformatics, extracting mechanistic pathway models from experimental morphological data is a key open challenge that has resisted automation. The fundamental difficulty of manually predicting emergent behavior of even simple networks has limited the models invented by human scientists to pathway diagrams that show necessary subunit interactions but do not reveal the dynamics that are sufficient for complex, self-regulating pattern to emerge. To finally bridge the gap between high-resolution genetic data and the ability to understand and control patterning, it is critical to develop computational tools to efficiently extract regulatory pathways from the resultant experimental shape phenotypes. For example, planarian regeneration has been studied for over a century, but despite increasing insight into the pathways that control its stem cells, no constructive, mechanistic model has yet been found by human scientists that explains more than one or two key features of its remarkable ability to regenerate its correct anatomical pattern after drastic perturbations. We present a method to infer the molecular products, topology, and spatial and temporal non-linear dynamics of regulatory networks recapitulating in silico the rich dataset of morphological phenotypes resulting from genetic, surgical, and pharmacological experiments. We demonstrated our approach by inferring complete regulatory networks explaining the outcomes of the main functional regeneration experiments in the planarian literature. By analyzing all the datasets together, our system inferred the first comprehensive systems-biology dynamical model explaining patterning in planarian regeneration. This method provides an automated, highly generalizable framework for identifying the underlying control mechanisms responsible for the dynamic regulation of growth and form. PMID:26042810

  3. Competition of information channels in the spreading of innovations

    NASA Astrophysics Data System (ADS)

    Kocsis, Gergely; Kun, Ferenc

    2011-08-01

    We study the spreading of information on technological developments in socioeconomic systems where the social contacts of agents are represented by a network of connections. In the model, agents get informed about the existence and advantages of new innovations through advertising activities of producers, which are then followed by an interagent information transfer. Computer simulations revealed that, as the strength of the external driving, the strength of the interagent coupling, and the topology of social contacts are varied, the model presents complex behavior with interesting novel features: On the macrolevel the system exhibits logistic behavior typical for the diffusion of innovations. The time evolution can be described analytically by an integral equation that captures the nucleation and growth of clusters of informed agents. On the microlevel, small clusters are found to be compact with a crossover to fractal structures with increasing size. The distribution of cluster sizes has a power-law behavior with a crossover to a higher exponent when long-range social contacts are present in the system. Based on computer simulations we construct an approximate phase diagram of the model on a regular square lattice of agents.
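
    A minimal agent-based rendition of the mechanism described above (external advertising plus interagent transfer on a square lattice) is sketched below in Python. The parameter names p_ad and p_peer and all values are assumptions for illustration, not the paper's notation; the macroscopic adoption curve the sketch produces is logistic-like.

        # Advertising seeds random agents; informed neighbours then pass the
        # information on. The fraction informed vs. time traces a logistic curve.
        import numpy as np

        rng = np.random.default_rng(0)
        L, p_ad, p_peer, steps = 100, 0.001, 0.2, 200
        informed = np.zeros((L, L), dtype=bool)

        for t in range(steps):
            # external driving: producers' advertising reaches random agents
            informed |= rng.random((L, L)) < p_ad
            # interagent transfer: each informed neighbour transmits with prob. p_peer
            neighbours = sum(np.roll(informed, s, axis=a) for s in (1, -1) for a in (0, 1))
            informed |= rng.random((L, L)) < 1.0 - (1.0 - p_peer) ** neighbours
            if t % 20 == 0:
                print(t, informed.mean())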

  4. Experimental and Computational Interrogation of Fast SCR Mechanism and Active Sites on H-Form SSZ-13

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Sichi; Zheng, Yang; Gao, Feng

    Experiment and density functional theory (DFT) models are combined to develop a unified, quantitative model of the mechanism and kinetics of fast selective catalytic reduction (SCR) of NO/NO2 mixtures over H-SSZ-13 zeolite. Rates, rate orders, and apparent activation energies collected under differential conditions reveal two distinct kinetic regimes. First-principles thermodynamics simulations are used to determine the relative coverages of free Brønsted sites, chemisorbed NH4+ and physisorbed NH3 as a function of reaction conditions. First-principles metadynamics calculations show that all three sites can contribute to the rate-limiting N-N bond forming step in fast SCR. The results are used to parameterize a kinetic model that encompasses the full range of reaction conditions and recovers observed rate orders and apparent activation energies. Observed kinetic regimes are related to changes in most-abundant surface intermediates. Financial support was provided by the National Science Foundation GOALI program under award number 1258690-CBET. We thank the Center for Research Computing at Notre Dame.

  5. HomoTarget: a new algorithm for prediction of microRNA targets in Homo sapiens.

    PubMed

    Ahmadi, Hamed; Ahmadi, Ali; Azimzadeh-Jamalkandi, Sadegh; Shoorehdeli, Mahdi Aliyari; Salehzadeh-Yazdi, Ali; Bidkhori, Gholamreza; Masoudi-Nejad, Ali

    2013-02-01

    MiRNAs play an essential role in the networks of gene regulation by inhibiting the translation of target mRNAs. Several computational approaches have been proposed for the prediction of miRNA target-genes. Reports reveal a large fraction of under-predicted or falsely predicted target genes. Thus, there is an imperative need to develop a computational method by which the target mRNAs of existing miRNAs can be correctly identified. In this study, a combined pattern recognition neural network (PRNN) and principal component analysis (PCA) architecture is proposed in order to model the complicated relationship between miRNAs and their target mRNAs in humans. The results of several types of intelligent classifiers and our proposed model were compared, showing that our algorithm outperformed them with higher sensitivity and specificity. Using the recent release of the miRBase database to find potential targets of miRNAs, this model incorporated twelve structural, thermodynamic and positional features of miRNA:mRNA binding sites to select target candidates. Copyright © 2012 Elsevier Inc. All rights reserved.
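
    The combination of dimensionality reduction and a neural classifier can be prototyped with scikit-learn, as in the hedged sketch below. This is not the authors' PRNN; the random X and y are placeholders for the twelve structural, thermodynamic and positional features per candidate miRNA:mRNA site and the true-target labels.

        # PCA + neural-network target classifier (schematic stand-in for PRNN/PCA).
        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.decomposition import PCA
        from sklearn.neural_network import MLPClassifier
        from sklearn.model_selection import cross_val_score

        X = np.random.rand(500, 12)           # placeholder site features
        y = np.random.randint(0, 2, 500)      # placeholder target labels
        clf = make_pipeline(StandardScaler(), PCA(n_components=6),
                            MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000))
        print(cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean())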

  6. Competition of information channels in the spreading of innovations.

    PubMed

    Kocsis, Gergely; Kun, Ferenc

    2011-08-01

    We study the spreading of information on technological developments in socioeconomic systems where the social contacts of agents are represented by a network of connections. In the model, agents get informed about the existence and advantages of new innovations through advertising activities of producers, which are then followed by an interagent information transfer. Computer simulations revealed that, as the strength of the external driving, the strength of the interagent coupling, and the topology of social contacts are varied, the model presents complex behavior with interesting novel features: On the macrolevel the system exhibits logistic behavior typical for the diffusion of innovations. The time evolution can be described analytically by an integral equation that captures the nucleation and growth of clusters of informed agents. On the microlevel, small clusters are found to be compact with a crossover to fractal structures with increasing size. The distribution of cluster sizes has a power-law behavior with a crossover to a higher exponent when long-range social contacts are present in the system. Based on computer simulations we construct an approximate phase diagram of the model on a regular square lattice of agents.

  7. FIT: statistical modeling tool for transcriptome dynamics under fluctuating field conditions

    PubMed Central

    Iwayama, Koji; Aisaka, Yuri; Kutsuna, Natsumaro

    2017-01-01

    Motivation: Considerable attention has been given to the quantification of environmental effects on organisms. In natural conditions, environmental factors are continuously changing in a complex manner. To reveal the effects of such environmental variations on organisms, transcriptome data in field environments have been collected and analyzed. Nagano et al. proposed a model that describes the relationship between transcriptomic variation and environmental conditions and demonstrated the capability to predict transcriptome variation in rice plants. However, the computational cost of parameter optimization has prevented its wide application. Results: We propose a new statistical model and efficient parameter optimization based on the previous study. We developed and released FIT, an R package that offers functions for parameter optimization and transcriptome prediction. The proposed method achieves comparable or better prediction performance within a shorter computational time than the previous method. The package will facilitate the study of the environmental effects on transcriptomic variation in field conditions. Availability and Implementation: Freely available from CRAN (https://cran.r-project.org/web/packages/FIT/). Contact: anagano@agr.ryukoku.ac.jp Supplementary information: Supplementary data are available at Bioinformatics online. PMID:28158396

  8. Analysis of Composite Skin-Stiffener Debond Specimens Using Volume Elements and a Shell/3D Modeling Technique

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald; Minguet, Pierre J.; Bushnell, Dennis M. (Technical Monitor)

    2002-01-01

    The debonding of a skin/stringer specimen subjected to tension was studied using three-dimensional volume element modeling and computational fracture mechanics. Mixed mode strain energy release rates were calculated from finite element results using the virtual crack closure technique. The simulations revealed an increase in total energy release rate in the immediate vicinity of the free edges of the specimen. Correlation of the computed mixed-mode strain energy release rates along the delamination front contour with a two-dimensional mixed-mode interlaminar fracture criterion suggested that in spite of peak total energy release rates at the free edge the delamination would not advance at the edges first. The qualitative prediction of the shape of the delamination front was confirmed by X-ray photographs of a specimen taken during testing. The good correlation between prediction based on analysis and experiment demonstrated the efficiency of a mixed-mode failure analysis for the investigation of skin/stiffener separation due to delamination in the adherents. The application of a shell/3D modeling technique for the simulation of skin/stringer debond in a specimen subjected to three-point bending is also demonstrated. The global structure was modeled with shell elements. A local three-dimensional model, extending to about three specimen thicknesses on either side of the delamination front was used to capture the details of the damaged section. Computed total strain energy release rates and mixed-mode ratios obtained from shell/3D simulations were in good agreement with results obtained from full solid models. The good correlations of the results demonstrated the effectiveness of the shell/3D modeling technique for the investigation of skin/stiffener separation due to delamination in the adherents.
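
    For orientation, the quantities involved can be written in two textbook fracture-mechanics relations; these are standard assumed forms for illustration, not necessarily the exact two-dimensional criterion used in the paper:

        % Total energy release rate and a common mixed-mode failure criterion
        % (Benzeggagh-Kenane form; eta is a fitted material exponent)
        G_T = G_I + G_{II} + G_{III}, \qquad
        G_c = G_{Ic} + \left( G_{IIc} - G_{Ic} \right)
              \left( \frac{G_{II} + G_{III}}{G_T} \right)^{\eta}

    Delamination is predicted to advance wherever the computed G_T along the front reaches the mixed-mode toughness G_c, which is why a peak G_T at the free edge does not by itself imply that growth starts there.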

  9. FACE computer simulation. [Flexible Arm Controls Experiment

    NASA Technical Reports Server (NTRS)

    Sadeh, Willy Z.; Szmyd, Jeffrey A.

    1990-01-01

    A computer simulation of the FACE (Flexible Arm Controls Experiment) was conducted to assess its design for use in the Space Shuttle. The FACE is planned as a 14-ft-long articulated structure with four degrees of freedom, consisting of shoulder pitch and yaw, elbow pitch, and wrist pitch. The kinematics of the FACE were simulated to obtain data on arm operation, function, workspace and interaction. Payload capture ability was modeled. The simulation indicates the capability for detailed kinematic simulation and payload capture ability analysis, and the feasibility of real-time simulation was determined. In addition, the potential for interactive real-time training through integration of the simulation with various interface controllers was revealed. At this stage, the flexibility of the arm was not yet considered.

  10. Computational analysis of the binding ability of heterocyclic and conformationally constrained epibatidine analogs in the neuronal nicotinic acetylcholine receptor.

    PubMed

    Soriano, Elena; Marco-Contelles, José; Colmena, Inés; Gandía, Luis

    2010-05-01

    One of the most critical issues in the study of ligand-receptor interactions in drug design is knowledge of the bioactive conformation of the ligand. In this study, we describe a computational approach aimed at estimating the ability of epibatidine analogs to bind to the neuronal nicotinic acetylcholine receptor (nAChR) and at gaining insight into the bioactive conformation. The protocol followed consists of a docking analysis and evaluation of pharmacophore parameters of the docked structures. On the basis of the biological data, the results have revealed that the docking analysis is able to predict active ligands, whereas further efforts are needed to develop a suitable and solid pharmacophore model.

  11. Numerical simulation of the helium gas spin-up channel performance of the relativity gyroscope

    NASA Technical Reports Server (NTRS)

    Karr, Gerald R.; Edgell, Josephine; Zhang, Burt X.

    1991-01-01

    The dependence of the spin-up system efficiency on each geometrical parameter of the spin-up channel and the exhaust passage of the Gravity Probe-B (GPB) is individually investigated. The spin-up model is coded into a computer program which simulates the spin-up process. Numerical results reveal optimal combinations of the geometrical parameters for the ultimate spin-up performance. Comparisons are also made between the numerical results and experimental data. The experimental leakage rate can only be reached when the gap between the channel lip and the rotor surface increases beyond its physical limit. The computed rotation frequency is roughly twice as high as the measured one, although the spin-up torques match fairly well.

  12. Experimental and computational analysis of a large protein network that controls fat storage reveals the design principles of a signaling network.

    PubMed

    Al-Anzi, Bader; Arpp, Patrick; Gerges, Sherif; Ormerod, Christopher; Olsman, Noah; Zinn, Kai

    2015-05-01

    An approach combining genetic, proteomic, computational, and physiological analysis was used to define a protein network that regulates fat storage in budding yeast (Saccharomyces cerevisiae). A computational analysis of this network shows that it is not scale-free, and is best approximated by the Watts-Strogatz model, which generates "small-world" networks with high clustering and short path lengths. The network is also modular, containing energy level sensing proteins that connect to four output processes: autophagy, fatty acid synthesis, mRNA processing, and MAP kinase signaling. The importance of each protein to network function is dependent on its Katz centrality score, which is related both to the protein's position within a module and to the module's relationship to the network as a whole. The network is also divisible into subnetworks that span modular boundaries and regulate different aspects of fat metabolism. We used a combination of genetics and pharmacology to simultaneously block output from multiple network nodes. The phenotypic results of this blockage define patterns of communication among distant network nodes, and these patterns are consistent with the Watts-Strogatz model.
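
    The two graph diagnostics named above, Watts-Strogatz small-world structure and Katz centrality, are directly available in networkx. The sketch below runs them on a generated small-world surrogate rather than the yeast protein network, which is not reproduced here.

        # Small-world statistics and Katz centrality with networkx.
        import networkx as nx

        G = nx.watts_strogatz_graph(n=100, k=6, p=0.1, seed=1)   # surrogate network
        print("average clustering:", nx.average_clustering(G))
        print("mean shortest path:", nx.average_shortest_path_length(G))
        katz = nx.katz_centrality(G, alpha=0.05)   # alpha must be < 1/lambda_max
        print(sorted(katz, key=katz.get, reverse=True)[:5])      # top-5 central nodes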

  13. Analysis of film cooling in rocket nozzles

    NASA Technical Reports Server (NTRS)

    Woodbury, Keith A.; Karr, Gerald R.

    1992-01-01

    Progress during the reporting period is summarized. Analysis of film cooling in rocket nozzles by computational fluid dynamics (CFD) computer codes is desirable for two reasons. First, it allows prediction of resulting flow fields within the rocket nozzle, in particular the interaction of the coolant boundary layer with the main flow. This facilitates evaluation of potential cooling configurations with regard to total thrust, etc., before construction and testing of any prototype. Secondly, CFD simulation of film cooling allows for assessment of the effectiveness of the proposed cooling in limiting nozzle wall temperature rises. This latter objective is the focus of the current work. The desired objective is to use the Finite Difference Navier Stokes (FDNS) code to predict wall heat fluxes or wall temperatures in rocket nozzles. As prior work has revealed that the FDNS code is deficient in the thermal modeling of boundary conditions, the first step is to correct these deficiencies in the FDNS code. Next, these changes must be tested against available data. Finally, the code will be used to model film cooling of a particular rocket nozzle. The third task of this research, using the modified code to compute the flow of hot gases through a nozzle, is described.

  14. Optical coherence tomography and computer-aided diagnosis of a murine model of chronic kidney disease

    NASA Astrophysics Data System (ADS)

    Wang, Bohan; Wang, Hsing-Wen; Guo, Hengchang; Anderson, Erik; Tang, Qinggong; Wu, Tongtong; Falola, Reuben; Smith, Tikina; Andrews, Peter M.; Chen, Yu

    2017-12-01

    Chronic kidney disease (CKD) is characterized by a progressive loss of renal function over time. Histopathological analysis of the condition of glomeruli and the proximal convolutional tubules over time can provide valuable insights into the progression of CKD. Optical coherence tomography (OCT) is a technology that can analyze the microscopic structures of a kidney in a nondestructive manner. Recently, we have shown that OCT can provide real-time imaging of kidney microstructures in vivo without administering exogenous contrast agents. A murine model of CKD induced by intravenous Adriamycin (ADR) injection is evaluated by OCT. OCT images of the rat kidneys have been captured every week up to eight weeks. Tubular diameter and hypertrophic tubule population of the kidneys at multiple time points after ADR injection have been evaluated through a fully automated computer-vision system. Results revealed that mean tubular diameter and hypertrophic tubule population increase with time in the post-injection period. The results suggest that OCT images of the kidney contain abundant information about kidney histopathology. Fully automated computer-aided diagnosis based on OCT has the potential for clinical evaluation of CKD conditions.

  15. A Computational and Experimental Investigation of Shear Coaxial Jet Atomization

    NASA Technical Reports Server (NTRS)

    Ibrahim, Essam A.; Kenny, R. Jeremy; Walker, Nathan B.

    2006-01-01

    The instability and subsequent atomization of a viscous liquid jet emanating into a high-pressure gaseous surrounding is studied both computationally and experimentally. Liquid water issued into nitrogen gas at elevated pressures is used to simulate the flow conditions in a coaxial shear injector element relevant to liquid propellant rocket engines. The theoretical analysis is based on a simplified mathematical formulation of the continuity and momentum equations in their conservative form. Numerical solutions of the governing equations subject to appropriate initial and boundary conditions are obtained via a robust finite difference scheme. The computations yield real-time evolution and subsequent breakup characteristics of the liquid jet. The experimental investigation utilizes a digital imaging technique to measure resultant drop sizes. Data were collected for liquid Reynolds numbers between 2,500 and 25,000, an aerodynamic Weber number range of 50-500, and ambient gas pressures from 150 to 1200 psia. Comparison of the model predictions and experimental data for drop sizes at gas pressures of 150 and 300 psia reveals satisfactory agreement, particularly for lower values of the investigated Weber number. The present model is intended as a component of a practical tool to facilitate design and optimization of coaxial shear atomizers.

  16. New Numerical Approaches to Thermal Convection in a Compositionally Stratified Fluid

    NASA Astrophysics Data System (ADS)

    Puckett, E. G.; Turcotte, D. L.; Kellogg, L. H.; Lokavarapu, H. V.; He, Y.; Robey, J.

    2016-12-01

    Seismic imaging of the mantle has revealed large- and small-scale heterogeneities in the lower mantle, specifically structures known as large low shear velocity provinces (LLSVPs) below Africa and the South Pacific. Most interpretations propose that the heterogeneities are compositional in nature, differing from the overlying mantle, an interpretation that would be consistent with chemical geodynamic models. The LLSVPs are thought to be very old, meaning they have persisted throughout much of Earth's history. Numerical modeling of persistent compositional interfaces presents challenges to even state-of-the-art numerical methodology. It is extremely difficult to maintain sharp compositional boundaries, which migrate and distort with time-dependent fingering, without compositional and/or artificial diffusion, and the compositional boundary must persist indefinitely. In this work we present computations of an initial compositionally stratified fluid that is subject to a thermal gradient ΔT = T1 - T0 across the height D of a rectangular domain over a range of buoyancy numbers B and Rayleigh numbers Ra. In these computations we compare three numerical approaches to modeling the movement of two distinct, thermally driven, compositional fields; namely, a high-order Finite Element Method (FEM) that employs artificial viscosity to preserve the maximum and minimum values of the compositional field, a Discontinuous Galerkin (DG) method with a Bound Preserving (BP) limiter, and a Volume-of-Fluid (VOF) interface tracking algorithm. Our computations demonstrate that the FEM approach has far too much numerical diffusion to yield meaningful results, the DGBP method yields much better results but with small amounts of each compositional field being (numerically) entrained within the other compositional field, while the VOF method maintains a sharp interface between the two compositions throughout the computation. In the figure we show a comparison between the three methods for a computation made with B = 1.111 and Ra = 10,000 after the flow has reached 'steady state': (R) the images computed with the standard FEM method (with artificial viscosity), (C) the images computed with the DGBP method (with no artificial viscosity or diffusion due to discretization errors) and (L) the images computed with the VOF algorithm.

  17. Emotor control: computations underlying bodily resource allocation, emotions, and confidence

    PubMed Central

    Kepecs, Adam; Mensh, Brett D.

    2015-01-01

    Emotional processes are central to behavior, yet their deeply subjective nature has been a challenge for neuroscientific study as well as for psychiatric diagnosis. Here we explore the relationships between subjective feelings and their underlying brain circuits from a computational perspective. We apply recent insights from systems neuroscience—approaching subjective behavior as the result of mental computations instantiated in the brain—to the study of emotions. We develop the hypothesis that emotions are the product of neural computations whose motor role is to reallocate bodily resources mostly gated by smooth muscles. This “emotor” control system is analogous to the more familiar motor control computations that coordinate skeletal muscle movements. To illustrate this framework, we review recent research on “confidence.” Although familiar as a feeling, confidence is also an objective statistical quantity: an estimate of the probability that a hypothesis is correct. This model-based approach helped reveal the neural basis of decision confidence in mammals and provides a bridge to the subjective feeling of confidence in humans. These results have important implications for psychiatry, since disorders of confidence computations appear to contribute to a number of psychopathologies. More broadly, this computational approach to emotions resonates with the emerging view that psychiatric nosology may be best parameterized in terms of disorders of the cognitive computations underlying complex behavior. PMID:26869840

  18. Gestalt isomorphism and the primacy of subjective conscious experience: a Gestalt Bubble model.

    PubMed

    Lehar, Steven

    2003-08-01

    A serious crisis is identified in theories of neurocomputation, marked by a persistent disparity between the phenomenological or experiential account of visual perception and the neurophysiological level of description of the visual system. In particular, conventional concepts of neural processing offer no explanation for the holistic global aspects of perception identified by Gestalt theory. The problem is paradigmatic and can be traced to contemporary concepts of the functional role of the neural cell, known as the Neuron Doctrine. In the absence of an alternative neurophysiologically plausible model, I propose a perceptual modeling approach, to model the percept as experienced subjectively, rather than modeling the objective neurophysiological state of the visual system that supposedly subserves that experience. A Gestalt Bubble model is presented to demonstrate how the elusive Gestalt principles of emergence, reification, and invariance can be expressed in a quantitative model of the subjective experience of visual consciousness. That model in turn reveals a unique computational strategy underlying visual processing, which is unlike any algorithm devised by man, and certainly unlike the atomistic feed-forward model of neurocomputation offered by the Neuron Doctrine paradigm. The perceptual modeling approach reveals the primary function of perception as that of generating a fully spatial virtual-reality replica of the external world in an internal representation. The common objections to this "picture-in-the-head" concept of perceptual representation are shown to be ill founded.

  19. Bayesian spatial transformation models with applications in neuroimaging data.

    PubMed

    Miranda, Michelle F; Zhu, Hongtu; Ibrahim, Joseph G

    2013-12-01

    The aim of this article is to develop a class of spatial transformation models (STM) to spatially model the varying association between imaging measures in a three-dimensional (3D) volume (or 2D surface) and a set of covariates. The proposed STM include a varying Box-Cox transformation model for dealing with the issue of non-Gaussian distributed imaging data and a Gaussian Markov random field model for incorporating spatial smoothness of the imaging data. Posterior computation proceeds via an efficient Markov chain Monte Carlo algorithm. Simulations and real data analysis demonstrate that the STM significantly outperforms the voxel-wise linear model with Gaussian noise in recovering meaningful geometric patterns. Our STM is able to reveal important brain regions with morphological changes in children with attention deficit hyperactivity disorder. © 2013, The International Biometric Society.

  20. Direction of Coupling from Phases of Interacting Oscillators: A Permutation Information Approach

    NASA Astrophysics Data System (ADS)

    Bahraminasab, A.; Ghasemi, F.; Stefanovska, A.; McClintock, P. V. E.; Kantz, H.

    2008-02-01

    We introduce a directionality index for a time series based on a comparison of neighboring values. It can distinguish unidirectional from bidirectional coupling, as well as reveal and quantify asymmetry in bidirectional coupling. It is tested on a numerical model of coupled van der Pol oscillators, and applied to cardiorespiratory data from healthy subjects. There is no need for preprocessing and fine-tuning the parameters, which makes the method very simple, computationally fast and robust.

  1. A synchrotron-based local computed tomography combined with data-constrained modelling approach for quantitative analysis of anthracite coal microstructure

    PubMed Central

    Chen, Wen Hao; Yang, Sam Y. S.; Xiao, Ti Qiao; Mayo, Sherry C.; Wang, Yu Dan; Wang, Hai Peng

    2014-01-01

    Quantifying three-dimensional spatial distributions of pores and material compositions in samples is a key materials characterization challenge, particularly in samples where compositions are distributed across a range of length scales, and where such compositions have similar X-ray absorption properties, such as in coal. Consequently, obtaining detailed information within sub-regions of a multi-length-scale sample by conventional approaches may not provide the resolution and level of detail one might desire. Herein, an approach for quantitative high-definition determination of material compositions from X-ray local computed tomography combined with a data-constrained modelling method is proposed. The approach is capable of dramatically improving the spatial resolution, revealing far finer details within a region of interest of a sample larger than the field of view than conventional techniques can. A coal sample containing distributions of porosity and several mineral compositions is employed to demonstrate the approach. The optimal experimental parameters are pre-analyzed. The quantitative results demonstrated that the approach can reveal significantly finer details of compositional distributions in the sample region of interest. The elevated spatial resolution is crucial for coal-bed methane reservoir evaluation and understanding the transformation of the minerals during coal processing. The method is generic and can be applied for three-dimensional compositional characterization of other materials. PMID:24763649

  2. Intrinsic islet heterogeneity and gap junction coupling determine spatiotemporal Ca²⁺ wave dynamics.

    PubMed

    Benninger, Richard K P; Hutchens, Troy; Head, W Steven; McCaughey, Michael J; Zhang, Min; Le Marchand, Sylvain J; Satin, Leslie S; Piston, David W

    2014-12-02

    Insulin is released from the islets of Langerhans in discrete pulses that are linked to synchronized oscillations of intracellular free calcium ([Ca(2+)]i). Associated with each synchronized oscillation is a propagating calcium wave mediated by Connexin36 (Cx36) gap junctions. A computational islet model predicted that waves emerge due to heterogeneity in β-cell function throughout the islet. To test this, we applied defined patterns of glucose stimulation across the islet using a microfluidic device and measured how these perturbations affect calcium wave propagation. We further investigated how gap junction coupling regulates spatiotemporal [Ca(2+)]i dynamics in the face of heterogeneous glucose stimulation. Calcium waves were found to originate in regions of the islet having elevated excitability, and this heterogeneity is an intrinsic property of islet β-cells. The extent of [Ca(2+)]i elevation across the islet in the presence of heterogeneity is gap-junction dependent, which reveals a glucose dependence of gap junction coupling. To better describe these observations, we had to modify the computational islet model to consider the electrochemical gradient between neighboring β-cells. These results reveal how the spatiotemporal [Ca(2+)]i dynamics of the islet depend on β-cell heterogeneity and cell-cell coupling, and are important for understanding the regulation of coordinated insulin release across the islet. Copyright © 2014 Biophysical Society. Published by Elsevier Inc. All rights reserved.

  3. Cost-effectiveness of breast cancer screening policies using simulation.

    PubMed

    Gocgun, Y; Banjevic, D; Taghipour, S; Montgomery, N; Harvey, B J; Jardine, A K S; Miller, A B

    2015-08-01

    In this paper, we study breast cancer screening policies using computer simulation. We developed a multi-state Markov model for breast cancer progression, considering both the screening and treatment stages of breast cancer. The parameters of our model were estimated through data from the Canadian National Breast Cancer Screening Study as well as data in the relevant literature. Using computer simulation, we evaluated various screening policies to study the impact of mammography screening for age-based subpopulations in Canada. We also performed sensitivity analysis to examine the impact of certain parameters on number of deaths and total costs. The analysis comparing screening policies reveals that a policy in which women belonging to the 40-49 age group are not screened, whereas those belonging to the 50-59 and 60-69 age groups are screened once every 5 years, outperforms others with respect to cost per life saved. Our analysis also indicates that increasing the screening frequencies for the 50-59 and 60-69 age groups decrease mortality, and that the average number of deaths generally decreases with an increase in screening frequency. We found that screening annually for all age groups is associated with the highest costs per life saved. Our analysis thus reveals that cost per life saved increases with an increase in screening frequency. Copyright © 2015 Elsevier Ltd. All rights reserved.
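
    The shape of such a policy comparison can be conveyed with a toy multi-state Markov simulation, sketched below. All states, transition probabilities, screening sensitivity and costs are invented placeholders, not the estimates fitted from the Canadian National Breast Cancer Screening Study.

        # Toy cohort simulation: sample yearly state transitions, apply a
        # screening policy, and tally deaths and screening cost per policy.
        import numpy as np

        rng = np.random.default_rng(2)
        HEALTHY, PRECLINICAL, CLINICAL, DEAD = range(4)
        P = np.array([[0.995, 0.005, 0.000, 0.000],   # invented yearly transitions
                      [0.000, 0.900, 0.095, 0.005],
                      [0.000, 0.000, 0.960, 0.040],
                      [0.000, 0.000, 0.000, 1.000]])

        def simulate(screen_every, n=100_000, years=30, sens=0.85, cost_screen=50.0):
            state = np.full(n, HEALTHY)
            cost = 0.0
            for year in range(years):
                rows = P[state]
                state = (rows.cumsum(axis=1) > rng.random((n, 1))).argmax(axis=1)
                if screen_every and year % screen_every == 0:
                    found = (state == PRECLINICAL) & (rng.random(n) < sens)
                    state[found] = HEALTHY       # crude stand-in for successful treatment
                    cost += n * cost_screen
            return (state == DEAD).sum(), cost

        for policy in (0, 5, 2, 1):              # never vs. every 5/2/1 years
            deaths, cost = simulate(policy)
            print(f"screen every {policy} yr: deaths={deaths}, cost={cost:.0f}")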

  4. F-18-Fluorodeoxyglucose Positron Emission Tomography/Computed Tomography Appearance of Extramedullary Hematopoiesis in a Case of Primary Myelofibrosis

    PubMed Central

    Mukherjee, Anirban; Bal, Chandrasekhar; Tripathi, Madhavi; Das, Chandan Jyoti; Shamim, Shamim Ahmed

    2017-01-01

    A 44-year-old female with known primary myelofibrosis presented with shortness of breath. High-resolution computed tomography of the thorax revealed a large, heterogeneously enhancing extraparenchymal soft-tissue density mass involving bilateral lung fields. F-18-fluorodeoxyglucose (FDG) positron emission tomography/computed tomography revealed a mildly FDG-avid soft-tissue density mass with specks of calcification involving bilateral lung fields, liver, and spleen. Subsequent histopathologic evaluation of the right lung mass was suggestive of extramedullary hematopoiesis. PMID:28533647

  5. Assessing the effect of adding interactive modeling to the geoscience curriculum

    NASA Astrophysics Data System (ADS)

    Castillo, A.; Marshall, J.; Cardenas, M.

    2013-12-01

    Technology and computer models enhance the learning experience when appropriately utilized. Moreover, learning is significantly improved when effective visualization is combined with models of processes allowing for inquiry-based problem solving. Still, hands-on experiences in real scenarios result in better contextualization of related problems compared to virtual laboratories. Therefore, the role of scientific visualization, technology, and computer modeling is to enhance, not displace, the learning experience by supplementing real-world problem solving and experiences, although in some circumstances they can adequately take the place of reality. The key to improving scientific education is to embrace an inquiry-based approach that makes favorable use of technology. This study evaluates the effect of adding interactive modeling to the geological sciences curriculum. An assessment tool, designed to assess student understanding of physical hydrology, was used to evaluate a curriculum intervention based on student learning with a data- and modeling-driven approach using COMSOL Multiphysics software. This intervention was implemented in an upper-division and graduate physical hydrology course in fall 2012. Students enrolled in the course in fall 2011 served as the control group. Interactive modeling was added to the curriculum in fall 2012 to replace the analogous mathematical modeling done by hand in fall 2011. Pre- and post-test results were used to assess and report its effectiveness, and student interviews were used to probe student reactions to both the experimental and control curricula. The pre- and post-tests asked students to describe the significant processes in the hydrological cycle and the laws governing these processes; their ability to apply their knowledge to a real-world problem was also assessed. Since the pre- and post-test data failed to meet the assumption of normality, a non-parametric Kruskal-Wallis test was run to determine if there were differences in scores between the 2011 and 2012 groups. Results reveal significant differences in pre-test and post-test scores between the 2011 and 2012 groups. Interview data revealed that students experience both affordances and barriers to using geoscience learning tools. Important affordances included COMSOL's modeling capabilities, the visualizations it offers, and the opportunity to use the software in the course. Barriers included lack of COMSOL experience, difficulty with COMSOL instructions, and lack of instruction with the software. This study shows that a well-designed pre- and post-assessment can be used to infer whether a given instructional intervention has caused a change in understanding in a given group of students, although the results are not necessarily generalizable given the number of participants, all from one institution. Supported by NSF CAREER grant (EAR-0955750).

  6. Laser interferometer skin-friction measurements of crossing-shock-wave/turbulent-boundary-layer interactions

    NASA Technical Reports Server (NTRS)

    Garrison, T. J.; Settles, G. S.; Narayanswami, N.; Knight, D. D.

    1994-01-01

    Wall shear stress measurements beneath crossing-shock-wave/turbulent boundary-layer interactions have been made for three interactions of different strengths. The interactions are generated by two sharp fins at symmetric angles of attack mounted on a flat plate. The shear stress measurements were made for fin angles of 7 and 11 deg at Mach 3 and 15 deg at Mach 3.85. The measurements were made using a laser interferometer skin-friction meter, a device that determines the wall shear by optically measuring the time rate of thinning of an oil film placed on the test model surface. Results of the measurements reveal high skin-friction coefficients in the vicinity of the fin/plate junction and the presence of quasi-two-dimensional flow separation on the interaction center line. Additionally, two Navier-Stokes computations, one using a Baldwin-Lomax turbulence model and one using a k-epsilon model, are compared with the experimental results for the Mach 3.85, 15-deg interaction case. Although the k-epsilon model did a reasonable job of predicting the overall trend in portions of the skin-friction distribution, neither computation fully captured the physics of the near-surface flow in this complex interaction.

  7. A study of the thermal and optical characteristics of radiometric channels for Earth radiation budget applications

    NASA Technical Reports Server (NTRS)

    Mahan, J. R.; Tira, Nour E.

    1991-01-01

    An improved dynamic electrothermal model for the Earth Radiation Budget Experiment (ERBE) total, nonscanning channels is formulated. This model is then used to accurately simulate two types of dynamic solar observation: the solar calibration and the so-called pitchover maneuver. Using a second model, the nonscanner active cavity radiometer (ACR) thermal noise is studied. This study reveals that radiative emission and scattering by the surrounding parts of the nonscanner cavity are acceptably small. The dynamic electrothermal model is also used to compute the ACR instrument transfer function. Accurate in-flight measurement of this transfer function is shown to depend on the energy distribution over the frequency spectrum of the radiation input function. A new array-type field-of-view limiter, whose geometry controls the input function, is proposed for in-flight calibration of an ACR and other types of radiometers. The point spread function (PSF) of the ERBE and the Clouds and Earth's Radiant Energy System (CERES) scanning radiometers is computed. The PSF is useful in characterizing the channel optics. It also has potential for recovering the distribution of the radiative flux from Earth by deconvolution.

  8. Laser Interferometer Skin-Friction measurements of crossing-shock wave/turbulent boundary-layer interactions

    NASA Technical Reports Server (NTRS)

    Garrison, T. J.; Settles, G. S.

    1993-01-01

    Wall shear stress measurements beneath crossing-shock wave/turbulent boundary-layer interactions have been made for three interactions of different strengths. The interactions are generated by two sharp fins at symmetric angles of attack mounted on a flat plate. The shear stress measurements were made for fin angles of 7 and 11 degrees at Mach 3 and 15 degrees at Mach 4. The measurements were made using a Laser Interferometer Skin Friction (LISF) meter, a device that determines the wall shear by optically measuring the time rate of thinning of an oil film placed on the test model surface. Results of the measurements reveal high skin friction coefficients in the vicinity of the fin/plate junction and the presence of quasi-two-dimensional flow separation on the interaction centerline. Additionally, two Navier-Stokes computations, one using a Baldwin-Lomax turbulence model and one using a k-epsilon model, are compared to the experimental results for the Mach 4, 15 degree interaction case. While the k-epsilon model did a reasonable job of predicting the overall trend in portions of the skin friction distribution, neither computation fully captured the physics of the near-surface flow in this complex interaction.

  9. A surface spherical harmonic expansion of gravity anomalies on the ellipsoid

    NASA Astrophysics Data System (ADS)

    Claessens, S. J.; Hirt, C.

    2015-10-01

    A surface spherical harmonic expansion of gravity anomalies with respect to a geodetic reference ellipsoid can be used to model the global gravity field and reveal its spectral properties. In this paper, a direct and rigorous transformation between solid spherical harmonic coefficients of the Earth's disturbing potential and surface spherical harmonic coefficients of gravity anomalies in ellipsoidal approximation with respect to a reference ellipsoid is derived. This transformation cannot rigorously be achieved by the Hotine-Jekeli transformation between spherical and ellipsoidal harmonic coefficients. The method derived here is used to create a surface spherical harmonic model of gravity anomalies with respect to the GRS80 ellipsoid from the EGM2008 global gravity model. Internal validation of the model shows a global RMS precision of 1 nGal. This is significantly more precise than previous solutions based on the spherical approximation or on low-order ellipsoidal approximations, which are shown to be insufficient for the generation of surface spherical harmonic coefficients with respect to a geodetic reference ellipsoid. Numerical results of two applications of the new method (the computation of ellipsoidal corrections to gravimetric geoid computation, and area means of gravity anomalies in ellipsoidal approximation) are provided.
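
    For orientation, a surface spherical harmonic expansion of gravity anomalies has the generic form below; the notation is standard rather than the paper's, and in the paper the coefficients refer to anomalies given on the GRS80 ellipsoid:

        \Delta g(\theta, \lambda) \approx \sum_{n=0}^{N} \sum_{m=0}^{n}
        \left( \bar{a}_{nm} \cos m\lambda + \bar{b}_{nm} \sin m\lambda \right)
        \bar{P}_{nm}(\cos\theta)

    where the \bar{P}_{nm} are fully normalized associated Legendre functions. The transformation derived in the paper links such surface coefficients to the solid spherical harmonic coefficients of the disturbing potential.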

  10. Computational design of hepatitis C vaccines using maximum entropy models and population dynamics

    NASA Astrophysics Data System (ADS)

    Hart, Gregory; Ferguson, Andrew

    Hepatitis C virus (HCV) afflicts 170 million people and kills 350,000 annually. Vaccination offers the most realistic and cost-effective hope of controlling this epidemic. Despite 20 years of research, no vaccine is available. A major obstacle is the virus' extreme genetic variability and rapid mutational escape from immune pressure. Improvements in the vaccine design process are urgently needed. Coupling data mining with spin glass models and maximum entropy inference, we have developed a computational approach to translate sequence databases into empirical fitness landscapes. These landscapes explicitly connect viral genotype to phenotypic fitness and reveal vulnerable targets that can be exploited to rationally design immunogens. Viewing these landscapes as the mutational "playing field" over which the virus is constrained to evolve, we have integrated them with agent-based models of the viral mutational and host immune response dynamics, establishing a data-driven immune simulator of HCV infection. We have employed this simulator to perform in silico screening of HCV immunogens. By systematically identifying a small number of promising vaccine candidates, these models can accelerate the search for a vaccine by massively reducing the experimental search space.
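
    The "spin-glass" part of this approach has a compact form: the inferred energy of a sequence is a sum of single-site field terms and pairwise coupling terms, with lower energy read as higher fitness. The sketch below evaluates such an energy with random placeholder parameters standing in for coefficients actually fitted to a sequence database by maximum entropy inference.

        # Potts-like maximum-entropy energy:
        # E(s) = -sum_i h_i(s_i) - sum_{i<j} J_ij(s_i, s_j)
        import numpy as np

        rng = np.random.default_rng(3)
        L, q = 50, 2                                  # sites and states per site
        h = rng.normal(size=(L, q))                   # placeholder field parameters
        J = rng.normal(scale=0.1, size=(L, L, q, q))  # placeholder couplings

        def energy(s):
            e = -h[np.arange(L), s].sum()
            for i in range(L):
                for j in range(i + 1, L):
                    e -= J[i, j, s[i], s[j]]
            return e                                  # fitness ~ exp(-E)

        print(energy(rng.integers(0, q, size=L)))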

  11. Computational fluid dynamic modeling of a medium-sized surface mine blasthole drill shroud

    PubMed Central

    Zheng, Y.; Reed, W.R.; Zhou, L.; Rider, J.P.

    2016-01-01

    The Pittsburgh Mining Research Division of the U.S. National Institute for Occupational Safety and Health (NIOSH) recently developed a series of models using computational fluid dynamics (CFD) to study airflows and respirable dust distribution associated with a medium-sized surface blasthole drill shroud with a dry dust collector system. Previously run experiments conducted in NIOSH’s full-scale drill shroud laboratory were used to validate the models. The setup values in the CFD models were calculated from experimental data obtained from the drill shroud laboratory and measurements of test material particle size. Subsequent simulation results were compared with the experimental data for several test scenarios, including 0.14 m3/s (300 cfm) and 0.24 m3/s (500 cfm) bailing airflow with 2:1, 3:1 and 4:1 dust collector-to-bailing airflow ratios. For the 2:1 and 3:1 ratios, the calculated dust concentrations from the CFD models were within the 95 percent confidence intervals of the experimental data. This paper describes the methodology used to develop the CFD models, to calculate the model input and to validate the models based on the experimental data. Problem regions were identified and revealed by the study. The simulation results could be used for future development of dust control methods for a surface mine blasthole drill shroud. PMID:27932851

  12. Analytical model for ion stopping power and range in the therapeutic energy interval for beams of hydrogen and heavier ions

    NASA Astrophysics Data System (ADS)

    Donahue, William; Newhauser, Wayne D.; Ziegler, James F.

    2016-09-01

    Many different approaches exist to calculate stopping power and range of protons and heavy charged particles. These methods may be broadly categorized as physically complete theories (widely applicable and complex) or semi-empirical approaches (narrowly applicable and simple). However, little attention has been paid in the literature to approaches that are both widely applicable and simple. We developed simple analytical models of stopping power and range for ions of hydrogen, carbon, iron, and uranium that spanned intervals of ion energy from 351 keV u-1 to 450 MeV u-1 or wider. The analytical models typically reproduced the best-available evaluated stopping powers within 1% and ranges within 0.1 mm. The computational speed of the analytical stopping power model was 28% faster than a full-theoretical approach. The calculation of range using the analytic range model was 945 times faster than a widely-used numerical integration technique. The results of this study revealed that the new, simple analytical models are accurate, fast, and broadly applicable. The new models require just 6 parameters to calculate stopping power and range for a given ion and absorber. The proposed model may be useful as an alternative to traditional approaches, especially in applications that demand fast computation speed, small memory footprint, and simplicity.
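
    The abstract does not spell out the 6-parameter model, so as a stand-in the sketch below fits the classic two-parameter Bragg-Kleeman power law R = alpha * E^p to made-up range data with SciPy; it illustrates the general analytic range-energy idea, not the authors' model.

        # Fit R(E) = alpha * E**p to synthetic range data (Bragg-Kleeman form).
        import numpy as np
        from scipy.optimize import curve_fit

        rng = np.random.default_rng(4)
        E = np.array([10.0, 50.0, 100.0, 200.0, 450.0])       # MeV/u, hypothetical grid
        R = 0.0022 * E**1.77 + rng.normal(0.0, 0.01, E.size)  # fake measurements

        bragg_kleeman = lambda E, alpha, p: alpha * E**p
        (alpha, p), _ = curve_fit(bragg_kleeman, E, R, p0=(0.002, 1.8))
        print(alpha, p)   # units of R depend on the absorber (e.g. g/cm^2)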

  13. Analytical model for ion stopping power and range in the therapeutic energy interval for beams of hydrogen and heavier ions.

    PubMed

    Donahue, William; Newhauser, Wayne D; Ziegler, James F

    2016-09-07

    Many different approaches exist to calculate stopping power and range of protons and heavy charged particles. These methods may be broadly categorized as physically complete theories (widely applicable and complex) or semi-empirical approaches (narrowly applicable and simple). However, little attention has been paid in the literature to approaches that are both widely applicable and simple. We developed simple analytical models of stopping power and range for ions of hydrogen, carbon, iron, and uranium that spanned intervals of ion energy from 351 keV u(-1) to 450 MeV u(-1) or wider. The analytical models typically reproduced the best-available evaluated stopping powers within 1% and ranges within 0.1 mm. The computational speed of the analytical stopping power model was 28% faster than a full-theoretical approach. The calculation of range using the analytic range model was 945 times faster than a widely-used numerical integration technique. The results of this study revealed that the new, simple analytical models are accurate, fast, and broadly applicable. The new models require just 6 parameters to calculate stopping power and range for a given ion and absorber. The proposed model may be useful as an alternative to traditional approaches, especially in applications that demand fast computation speed, small memory footprint, and simplicity.

  14. Perceptual control models of pursuit manual tracking demonstrate individual specificity and parameter consistency.

    PubMed

    Parker, Maximilian G; Tyson, Sarah F; Weightman, Andrew P; Abbott, Bruce; Emsley, Richard; Mansell, Warren

    2017-11-01

    Computational models that simulate individuals' movements in pursuit-tracking tasks have been used to elucidate mechanisms of human motor control. Whilst there is evidence that individuals demonstrate idiosyncratic control-tracking strategies, it remains unclear whether models can be sensitive to these idiosyncrasies. Perceptual control theory (PCT) provides a unique model architecture with an internally set reference value parameter, and can be optimized to fit an individual's tracking behavior. The current study investigated whether PCT models could show temporal stability and individual specificity over time. Twenty adults completed three blocks of 15 one-minute pursuit-tracking trials. Two blocks (training and post-training) were completed in one session and the third was completed after 1 week (follow-up). The target moved in a one-dimensional, pseudorandom pattern. PCT models were optimized to the training data using a least-mean-squares algorithm, and validated with data from post-training and follow-up. We found significant inter-individual variability (partial η²: .464-.697) and intra-individual consistency (Cronbach's α: .880-.976) in parameter estimates. Polynomial regression revealed that all model parameters, including the reference value parameter, contribute to simulation accuracy. Participants' tracking performances were significantly more accurately simulated by models developed from their own tracking data than by models developed from other participants' data. We conclude that PCT models can be optimized to simulate the performance of an individual and that the test-retest reliability of individual models is a necessary criterion for evaluating computational models of human performance.
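
    The core of a PCT tracking model is a small negative-feedback loop: the output is driven to reduce the error between an internally set reference value and the perceived cursor-target relation. The sketch below is a minimal loop of that kind; gain, slowing and reference are assumed free parameters of the sort fitted per participant, not the study's estimates.

        # Minimal perceptual control loop for pursuit tracking.
        import numpy as np

        def pct_track(target, gain=8.0, slowing=0.05, reference=0.0, dt=0.01):
            cursor, output, trace = 0.0, 0.0, []
            for x in target:
                perception = cursor - x                       # perceived relation to target
                error = reference - perception                # reference value is controlled
                output += slowing * (gain * error - output)   # leaky-integrator output stage
                cursor += output * dt
                trace.append(cursor)
            return np.array(trace)

        t = np.arange(0.0, 60.0, 0.01)
        target = np.sin(0.4 * t) + 0.5 * np.sin(1.1 * t)      # pseudorandom-like path
        print(np.mean((pct_track(target) - target) ** 2))     # fit parameters against this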

  15. Evaluating variability with atomistic simulations: the effect of potential and calculation methodology on the modeling of lattice and elastic constants

    NASA Astrophysics Data System (ADS)

    Hale, Lucas M.; Trautt, Zachary T.; Becker, Chandler A.

    2018-07-01

    Atomistic simulations using classical interatomic potentials are powerful investigative tools linking atomic structures to dynamic properties and behaviors. It is well known that different interatomic potentials produce different results, thus making it necessary to characterize potentials based on how they predict basic properties. Doing so makes it possible to compare existing interatomic models in order to select those best suited for specific use cases, and to identify any limitations of the models that may lead to unrealistic responses. While the methods for obtaining many of these properties are often thought of as simple calculations, there are many underlying aspects that can lead to variability in the reported property values. For instance, multiple methods may exist for computing the same property and values may be sensitive to certain simulation parameters. Here, we introduce a new high-throughput computational framework that encodes various simulation methodologies as Python calculation scripts. Three distinct methods for evaluating the lattice and elastic constants of bulk crystal structures are implemented and used to evaluate the properties across 120 interatomic potentials, 18 crystal prototypes, and all possible combinations of unique lattice site and elemental model pairings. Analysis of the results reveals which potentials and crystal prototypes are sensitive to the calculation methods and parameters, and it assists with the verification of potentials, methods, and molecular dynamics software. The results, calculation scripts, and computational infrastructure are self-contained and openly available to support researchers in performing meaningful simulations.
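
    One of the encoded methodologies can be sketched generically: an elastic constant estimated by a central second difference of the strain-energy density, where the hypothetical energy_density callback stands in for an evaluation by an MD engine driven from a Python calculation script.

        # Sketch: C11 from a second difference of strain-energy density U(e)
        # under small uniaxial strain e; 'energy_density' is a stand-in for
        # a call into an interatomic-potential evaluation.
        def c11_from_energy(energy_density, eps=1e-4):
            u_plus = energy_density(+eps)
            u_zero = energy_density(0.0)
            u_minus = energy_density(-eps)
            return (u_plus - 2.0 * u_zero + u_minus) / eps**2

        toy = lambda e: 0.5 * 160.0 * e**2   # quadratic toy with C11 = 160 GPa
        print(c11_from_energy(toy))          # ~160.0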

  16. Within- and across-trial dynamics of human EEG reveal cooperative interplay between reinforcement learning and working memory.

    PubMed

    Collins, Anne G E; Frank, Michael J

    2018-03-06

    Learning from rewards and punishments is essential to survival and facilitates flexible human behavior. It is widely appreciated that multiple cognitive and reinforcement learning systems contribute to decision-making, but the nature of their interactions is elusive. Here, we leverage methods for extracting trial-by-trial indices of reinforcement learning (RL) and working memory (WM) in human electro-encephalography to reveal single-trial computations beyond that afforded by behavior alone. Neural dynamics confirmed that increases in neural expectation were predictive of reduced neural surprise in the following feedback period, supporting central tenets of RL models. Within- and cross-trial dynamics revealed a cooperative interplay between systems for learning, in which WM contributes expectations to guide RL, despite competition between systems during choice. Together, these results provide a deeper understanding of how multiple neural systems interact for learning and decision-making and facilitate analysis of their disruption in clinical populations.
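
    As a schematic of the trial-by-trial RL indices such analyses regress against EEG (a sketch only; the study's full model also includes a working-memory component), a delta-rule learner yields an expectation before feedback and a signed surprise, the reward prediction error, after it.

        # Delta-rule sketch: q is the expectation carried into feedback;
        # rpe is the prediction-error 'surprise' signal.
        def q_update(q, reward, alpha=0.1):
            rpe = reward - q
            return q + alpha * rpe, rpe

        q = 0.0
        for r in (1, 0, 1, 1):       # toy reward sequence
            q, rpe = q_update(q, r)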

  17. Nonlinear Modeling of Radial Stellar Pulsations

    NASA Astrophysics Data System (ADS)

    Smolec, R.

    2009-09-01

    In this thesis, I present the results of my work concerning the nonlinear modeling of radial stellar pulsations. I focus on classical Cepheids, particularly on the double-mode phenomenon. The history of nonlinear modeling of radial stellar pulsations begins in the sixties of the previous century. At the beginning, convection was disregarded in the model equations. Qualitatively, almost all features of the radial pulsators were successfully modeled with purely radiative hydrocodes. Among the problems that remained, the most disturbing was the modeling of the double-mode phenomenon. This long-standing problem seemed to be finally solved with the inclusion of turbulent convection into the model equations (Kollath et al. 1998, Feuchtinger 1998). Although dynamical aspects of the double-mode behaviour were extensively studied, its origin, particularly the specific role played by convection, remained obscure. To study this and other problems of radial stellar pulsations, I implemented convection into pulsation hydrocodes. The codes adopt the Kuhfuss (1986) convection model. In other codes, particularly in the Florida-Budapest hydrocode (e.g. Kollath et al. 2002), used in the computation of most of the published double-mode models, different approximations concerning e.g. eddy-viscous terms or the treatment of convectively stable regions are adopted. In particular, the neglect of negative buoyancy effects in the Florida-Budapest code, and its consequences, was never discussed in the literature. These consequences are severe. Concerning single-mode pulsators, the neglect of negative buoyancy leads to smaller pulsation amplitudes in comparison to amplitudes computed with a code including these effects; it reduces the amplitude of the fundamental mode particularly strongly. This property of the Florida-Budapest models is crucial in bringing up the stable non-resonant double-mode Cepheid pulsation involving the fundamental and first overtone modes (F/1O). Such pulsation is not observed in models computed including negative buoyancy. Since the neglect of negative buoyancy is physically incorrect, so are the double-mode Cepheid models computed with the Florida-Budapest hydrocode. An extensive search for F/1O double-mode Cepheid pulsation with codes including negative buoyancy effects yielded a null result. Some resonant double-mode F/1O Cepheid models were found, but their occurrence was restricted to a very narrow domain in the Hertzsprung-Russell diagram. Model computations intended to model the double-overtone (1O/2O) Cepheids in the Large Magellanic Cloud also revealed some stable double-mode pulsations, however restricted to a narrow period range. Resonances are most likely conducive in bringing up the double-mode behaviour observed in these models. However, the majority of the double-overtone LMC Cepheids cannot be reproduced with our codes. Hence, the modeling of double-overtone Cepheids with convective hydrocodes is not satisfactory either. Double-mode pulsation still lacks a satisfactory explanation, and the problem of its modeling remains open.

  18. The impact on midlevel vision of statistically optimal divisive normalization in V1.

    PubMed

    Coen-Cagli, Ruben; Schwartz, Odelia

    2013-07-15

    The first two areas of the primate visual cortex (V1, V2) provide a paradigmatic example of hierarchical computation in the brain. However, neither the functional properties of V2 nor the interactions between the two areas are well understood. One key aspect is that the statistics of the inputs received by V2 depend on the nonlinear response properties of V1. Here, we focused on divisive normalization, a canonical nonlinear computation that is observed in many neural areas and modalities. We simulated V1 responses with (and without) different forms of surround normalization derived from statistical models of natural scenes, including canonical normalization and a statistically optimal extension that accounted for image nonhomogeneities. The statistics of the V1 population responses differed markedly across models. We then addressed how V2 receptive fields pool the responses of V1 model units with different tuning. We assumed this is achieved by learning without supervision a linear representation that removes correlations, which could be accomplished with principal component analysis. This approach revealed V2-like feature selectivity when we used the optimal normalization and, to a lesser extent, the canonical one but not in the absence of both. We compared the resulting two-stage models on two perceptual tasks; while models encompassing V1 surround normalization performed better at object recognition, only statistically optimal normalization provided systematic advantages in a task more closely matched to midlevel vision, namely figure/ground judgment. Our results suggest that experiments probing midlevel areas might benefit from using stimuli designed to engage the computations that characterize V1 optimality.
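
    The canonical computation that the study builds on can be written compactly; in the sketch below, each model response is divided by a pooled measure of population activity, with sigma, the exponent, and the uniform pooling as illustrative choices.

        import numpy as np

        # Canonical divisive normalization sketch with illustrative constants.
        def divisive_normalize(responses, sigma=0.1, p=2.0):
            r = np.asarray(responses, dtype=float)
            pool = np.mean(np.abs(r) ** p)            # pooled surround activity
            return r / (sigma**p + pool) ** (1.0 / p)

        print(divisive_normalize([0.2, 1.5, 0.1, 0.8]))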

  19. Guided wave radiation from a point source in the proximity of a pipe bend

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brath, A. J.; Nagy, P. B.; Simonetti, F.

    Throughout the oil and gas industry corrosion and erosion damage monitoring play a central role in managing asset integrity. Recently, the use of guided wave technology in conjunction with tomography techniques has provided the possibility of obtaining point-by-point maps of wall thickness loss over the entire volume of a pipeline section between two ring arrays of ultrasonic transducers. However, current research has focused on straight pipes while little work has been done on pipe bends which are also the most susceptible to developing damage. Tomography of the bend is challenging due to the complexity and computational cost of the 3-D elastic model required to accurately describe guided wave propagation. To overcome this limitation, we introduce a 2-D anisotropic inhomogeneous acoustic model which represents a generalization of the conventional unwrapping used for straight pipes. The shortest-path ray-tracing method is then applied to the 2-D model to compute ray paths and predict the arrival times of the fundamental flexural mode, A0, excited by a point source on the straight section of pipe entering the bend and detected on the opposite side. Good agreement is found between predictions and experiments performed on an 8" diameter (D) pipe with 1.5 D bend radius. The 2-D model also reveals the existence of an acoustic lensing effect which leads to a focusing phenomenon also confirmed by the experiments. The computational efficiency of the 2-D model makes it ideally suited for tomography algorithms.
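
    The shortest-path ray-tracing step can be illustrated with Dijkstra's algorithm on a gridded slowness model (isotropic here for brevity; the paper's 2-D model makes the slowness direction-dependent). All names are illustrative.

        import heapq, math

        # First-arrival times from a source node by shortest-path ray tracing;
        # slowness[i][j] is travel time per unit length near grid node (i, j).
        def first_arrivals(slowness, src):
            ni, nj = len(slowness), len(slowness[0])
            dist = {src: 0.0}
            pq = [(0.0, src)]
            while pq:
                t, (i, j) = heapq.heappop(pq)
                if t > dist.get((i, j), math.inf):
                    continue
                for di in (-1, 0, 1):
                    for dj in (-1, 0, 1):
                        if di == 0 and dj == 0:
                            continue
                        u, v = i + di, j + dj
                        if 0 <= u < ni and 0 <= v < nj:
                            s = 0.5 * (slowness[i][j] + slowness[u][v])
                            nt = t + math.hypot(di, dj) * s
                            if nt < dist.get((u, v), math.inf):
                                dist[(u, v)] = nt
                                heapq.heappush(pq, (nt, (u, v)))
            return dist

        times = first_arrivals([[1.0] * 20 for _ in range(20)], (0, 0))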

  20. Recovering Long-wavelength Velocity Models using Spectrogram Inversion with Single- and Multi-frequency Components

    NASA Astrophysics Data System (ADS)

    Ha, J.; Chung, W.; Shin, S.

    2015-12-01

    Many waveform inversion algorithms have been proposed in order to construct subsurface velocity structures from seismic data sets. These algorithms have suffered from computational burden, local minima problems, and the lack of low-frequency components. Computational efficiency can be improved by the application of back-propagation techniques and advances in computing hardware. In addition, waveform inversion algorithms for obtaining long-wavelength velocity models could avoid both the local minima problem and the effect of the lack of low-frequency components in seismic data. In this study, we proposed spectrogram inversion as a technique for recovering long-wavelength velocity models. In spectrogram inversion, decomposed frequency components from spectrograms of traces, in the observed and calculated data, are utilized to generate traces with reproduced low-frequency components. Moreover, since each decomposed component can reveal the different characteristics of a subsurface structure, several frequency components were utilized to analyze the velocity features in the subsurface. We performed the spectrogram inversion using a modified SEG/EAGE salt A-A' line. Numerical results demonstrate that spectrogram inversion could also recover the long-wavelength velocity features. However, inversion results varied according to the frequency components utilized. Based on the results of inversion using a decomposed single-frequency component, we noticed that robust inversion results are obtained when a dominant frequency component of the spectrogram was utilized. In addition, detailed information on recovered long-wavelength velocity models was obtained using a multi-frequency component combined with single-frequency components. Numerical examples indicate that various detailed analyses of long-wavelength velocity models can be carried out utilizing several frequency components.
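
    The decomposition step can be sketched with an off-the-shelf spectrogram (assuming SciPy; all parameters arbitrary): compute a trace's spectrogram and extract a single low-frequency component of the kind used as inversion input.

        import numpy as np
        from scipy.signal import spectrogram

        fs = 500.0                                   # samples per second
        t = np.arange(0.0, 4.0, 1.0 / fs)
        trace = np.sin(2 * np.pi * 8 * t) + 0.5 * np.sin(2 * np.pi * 30 * t)
        f, tau, Sxx = spectrogram(trace, fs=fs, nperseg=128)
        low_band = Sxx[f <= 10.0].sum(axis=0)        # one decomposed component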

  1. Nonlinear computations shaping temporal processing of precortical vision.

    PubMed

    Butts, Daniel A; Cui, Yuwei; Casti, Alexander R R

    2016-09-01

    Computations performed by the visual pathway are constructed by neural circuits distributed over multiple stages of processing, and thus it is challenging to determine how different stages contribute on the basis of recordings from single areas. In the current article, we address this problem in the lateral geniculate nucleus (LGN), using experiments combined with nonlinear modeling capable of isolating various circuit contributions. We recorded cat LGN neurons presented with temporally modulated spots of various sizes, which drove temporally precise LGN responses. We utilized simultaneously recorded S-potentials, corresponding to the primary retinal ganglion cell (RGC) input to each LGN cell, to distinguish the computations underlying temporal precision in the retina from those in the LGN. Nonlinear models with excitatory and delayed suppressive terms were sufficient to explain temporal precision in the LGN, and we found that models of the S-potentials were nearly identical, although with a lower threshold. To determine whether additional influences shaped the response at the level of the LGN, we extended this model to use the S-potential input in combination with stimulus-driven terms to predict the LGN response. We found that the S-potential input "explained away" the major excitatory and delayed suppressive terms responsible for temporal patterning of LGN spike trains but revealed additional contributions, largely PULL suppression, to the LGN response. Using this novel combination of recordings and modeling, we were thus able to dissect multiple circuit contributions to LGN temporal responses across retina and LGN, and set the foundation for targeted study of each stage. Copyright © 2016 the American Physiological Society.
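
    The model class described can be sketched as a rectified combination of an excitatory filter output and a delayed suppressive one; the kernels, delay, weight, and threshold below are placeholders rather than the fitted quantities.

        import numpy as np

        # Excitation-plus-delayed-suppression LN sketch.
        def ln_response(stimulus, k_exc, k_sup, delay, w_sup=0.8, theta=0.2):
            exc = np.convolve(stimulus, k_exc)[:stimulus.size]
            sup = np.convolve(stimulus, k_sup)[:stimulus.size]
            sup = np.roll(sup, delay)
            sup[:delay] = 0.0
            return np.maximum(exc - w_sup * sup - theta, 0.0)  # rectified rate

        stim = np.sin(np.linspace(0, 20 * np.pi, 2000))  # modulated-spot stand-in
        k = np.exp(-np.arange(50) / 10.0)
        k /= k.sum()
        rate = ln_response(stim, k, k, delay=15)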

  2. Computational search for hypotheses concerning the endocannabinoid contribution to the extinction of fear conditioning.

    PubMed

    Anastasio, Thomas J

    2013-01-01

    Fear conditioning, in which a cue is conditioned to elicit a fear response, and extinction, in which a previously conditioned cue no longer elicits a fear response, depend on neural plasticity occurring within the amygdala. Projection neurons in the basolateral amygdala (BLA) learn to respond to the cue during fear conditioning, and they mediate fear responding by transferring cue signals to the output stage of the amygdala. Some BLA projection neurons retain their cue responses after extinction. Recent work shows that activation of the endocannabinoid system is necessary for extinction, and it leads to long-term depression (LTD) of the GABAergic synapses that inhibitory interneurons make onto BLA projection neurons. Such GABAergic LTD would enhance the responses of the BLA projection neurons that mediate fear responding, so it would seem to oppose, rather than promote, extinction. To address this paradox, a computational analysis of two well-known conceptual models of amygdaloid plasticity was undertaken. The analysis employed exhaustive state-space search conducted within a declarative programming environment. The analysis reveals that GABAergic LTD actually increases the number of synaptic strength configurations that achieve extinction while preserving the cue responses of some BLA projection neurons in both models. The results suggest that GABAergic LTD helps the amygdala retain cue memory during extinction even as the amygdala learns to suppress the previously conditioned response. The analysis also reveals which features of both models are essential for their ability to achieve extinction with some cue memory preservation, and suggests experimental tests of those features.
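
    The search itself is straightforward to sketch: enumerate every configuration of discretized synaptic strengths and keep those satisfying the extinction-with-memory criterion. The evaluate callback below is a hypothetical stand-in for the model equations.

        from itertools import product

        LEVELS = (0.0, 0.5, 1.0)          # discretized synaptic strengths

        # Exhaustive state-space search over strength configurations.
        def search(evaluate, n_synapses=4):
            return [cfg for cfg in product(LEVELS, repeat=n_synapses)
                    if evaluate(cfg)]

        # Toy criterion standing in for 'extinction with cue-memory preserved'.
        hits = search(lambda c: c[0] <= 0.5 and sum(c) >= 1.0)
        print(len(hits))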

  3. Computational search for hypotheses concerning the endocannabinoid contribution to the extinction of fear conditioning

    PubMed Central

    Anastasio, Thomas J.

    2013-01-01

    Fear conditioning, in which a cue is conditioned to elicit a fear response, and extinction, in which a previously conditioned cue no longer elicits a fear response, depend on neural plasticity occurring within the amygdala. Projection neurons in the basolateral amygdala (BLA) learn to respond to the cue during fear conditioning, and they mediate fear responding by transferring cue signals to the output stage of the amygdala. Some BLA projection neurons retain their cue responses after extinction. Recent work shows that activation of the endocannabinoid system is necessary for extinction, and it leads to long-term depression (LTD) of the GABAergic synapses that inhibitory interneurons make onto BLA projection neurons. Such GABAergic LTD would enhance the responses of the BLA projection neurons that mediate fear responding, so it would seem to oppose, rather than promote, extinction. To address this paradox, a computational analysis of two well-known conceptual models of amygdaloid plasticity was undertaken. The analysis employed exhaustive state-space search conducted within a declarative programming environment. The analysis reveals that GABAergic LTD actually increases the number of synaptic strength configurations that achieve extinction while preserving the cue responses of some BLA projection neurons in both models. The results suggest that GABAergic LTD helps the amygdala retain cue memory during extinction even as the amygdala learns to suppress the previously conditioned response. The analysis also reveals which features of both models are essential for their ability to achieve extinction with some cue memory preservation, and suggests experimental tests of those features. PMID:23761759

  4. Optimized statistical parametric mapping procedure for NIRS data contaminated by motion artifacts: Neurometric analysis of body schema extension.

    PubMed

    Suzuki, Satoshi

    2017-09-01

    This study investigated the spatial distribution of brain activity on body schema (BS) modification induced by natural body motion using two versions of a hand-tracing task. In Task 1, participants traced Japanese Hiragana characters using the right forefinger, requiring no BS expansion. In Task 2, participants performed the tracing task with a long stick, requiring BS expansion. Spatial distribution was analyzed using general linear model (GLM)-based statistical parametric mapping of near-infrared spectroscopy data contaminated with motion artifacts caused by the hand-tracing task. Three methods were utilized in series to counter the artifacts, and optimal conditions and modifications were investigated: a model-free method (Step 1), a convolution matrix method (Step 2), and a boxcar-function-based Gaussian convolution method (Step 3). The results revealed four methodological findings: (1) Deoxyhemoglobin was suitable for the GLM because both Akaike information criterion and the variance against the averaged hemodynamic response function were smaller than for other signals, (2) a high-pass filter with a cutoff frequency of .014 Hz was effective, (3) the hemodynamic response function computed from a Gaussian kernel function and its first- and second-derivative terms should be included in the GLM model, and (4) correction of non-autocorrelation and use of effective degrees of freedom were critical. Investigating z-maps computed according to these guidelines revealed that contiguous areas of BA7-BA40-BA21 in the right hemisphere became significantly activated ([Formula: see text], [Formula: see text], and [Formula: see text], respectively) during BS modification while performing the hand-tracing task.
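
    Finding (3) can be sketched directly: a task boxcar convolved with a Gaussian kernel, plus the first and second temporal derivatives of the result, assembled into a GLM design matrix (all timing values illustrative).

        import numpy as np

        def gaussian_kernel(t, mu=6.0, sigma=2.0):
            g = np.exp(-0.5 * ((t - mu) / sigma) ** 2)
            return g / g.sum()

        dt = 0.1
        t = np.arange(0.0, 30.0, dt)
        boxcar = ((t > 5.0) & (t < 15.0)).astype(float)     # task period
        x0 = np.convolve(boxcar, gaussian_kernel(t))[:t.size]
        x1 = np.gradient(x0, dt)                            # first derivative
        x2 = np.gradient(x1, dt)                            # second derivative
        X = np.column_stack([x0, x1, x2, np.ones_like(t)])  # design matrix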

  5. Monte Carlo based investigation of berry phase for depth resolved characterization of biomedical scattering samples

    NASA Astrophysics Data System (ADS)

    Baba, J. S.; Koju, V.; John, D.

    2015-03-01

    The propagation of light in turbid media is an active area of research with relevance to numerous investigational fields, e.g., biomedical diagnostics and therapeutics. The statistical random-walk nature of photon propagation through turbid media is ideal for computation-based modeling and simulation. Ready access to supercomputing resources provides a means for attaining brute force solutions to stochastic light-matter interactions entailing scattering by facilitating timely propagation of sufficient (>10^7) photons while tracking characteristic parameters based on the incorporated physics of the problem. One such model that works well for isotropic but fails for anisotropic scatter, which is the case for many biomedical sample scattering problems, is the diffusion approximation. In this report, we address this by utilizing Berry phase (BP) evolution as a means for capturing anisotropic scattering characteristics of samples in the preceding depth where the diffusion approximation fails. We extend the polarization sensitive Monte Carlo method of Ramella-Roman et al. to include the computationally intensive tracking of photon trajectory in addition to polarization state at every scattering event. To speed up the computations, which entail the appropriate rotations of reference frames, the code was parallelized using OpenMP. The results presented reveal that BP is strongly correlated to the photon penetration depth, thus potentiating the possibility of polarimetric depth-resolved characterization of highly scattering samples, e.g., biological tissues.

  6. BioSig3D: High Content Screening of Three-Dimensional Cell Culture Models

    PubMed Central

    Bilgin, Cemal Cagatay; Fontenay, Gerald; Cheng, Qingsu; Chang, Hang; Han, Ju; Parvin, Bahram

    2016-01-01

    BioSig3D is a computational platform for high-content screening of three-dimensional (3D) cell culture models that are imaged in full 3D volume. It provides an end-to-end solution for designing high content screening assays, based on colony organization that is derived from segmentation of nuclei in each colony. BioSig3D also enables visualization of raw and processed 3D volumetric data for quality control, and integrates advanced bioinformatics analysis. The system consists of multiple computational and annotation modules that are coupled together with a strong use of controlled vocabularies to reduce ambiguities between different users. It is a web-based system that allows users to: design an experiment by defining experimental variables, upload a large set of volumetric images into the system, analyze and visualize the dataset, and either display computed indices as a heatmap, or phenotypic subtypes for heterogeneity analysis, or download computed indices for statistical analysis or integrative biology. BioSig3D has been used to profile baseline colony formations with two experiments: (i) morphogenesis of a panel of human mammary epithelial cell lines (HMEC), and (ii) heterogeneity in colony formation using an immortalized non-transformed cell line. These experiments reveal intrinsic growth properties of well-characterized cell lines that are routinely used for biological studies. BioSig3D is being released with seed datasets and video-based documentation. PMID:26978075

  7. Monte Carlo based investigation of Berry phase for depth resolved characterization of biomedical scattering samples

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baba, Justin S; John, Dwayne O; Koju, Vijay

    The propagation of light in turbid media is an active area of research with relevance to numerous investigational fields, e.g., biomedical diagnostics and therapeutics. The statistical random-walk nature of photon propagation through turbid media is ideal for computation-based modeling and simulation. Ready access to supercomputing resources provides a means for attaining brute force solutions to stochastic light-matter interactions entailing scattering by facilitating timely propagation of sufficient (>10 million) photons while tracking characteristic parameters based on the incorporated physics of the problem. One such model that works well for isotropic but fails for anisotropic scatter, which is the case for many biomedical sample scattering problems, is the diffusion approximation. In this report, we address this by utilizing Berry phase (BP) evolution as a means for capturing anisotropic scattering characteristics of samples in the preceding depth where the diffusion approximation fails. We extend the polarization sensitive Monte Carlo method of Ramella-Roman et al. to include the computationally intensive tracking of photon trajectory in addition to polarization state at every scattering event. To speed up the computations, which entail the appropriate rotations of reference frames, the code was parallelized using OpenMP. The results presented reveal that BP is strongly correlated to the photon penetration depth, thus potentiating the possibility of polarimetric depth-resolved characterization of highly scattering samples, e.g., biological tissues.

  8. Computational Identification of Genomic Features That Influence 3D Chromatin Domain Formation.

    PubMed

    Mourad, Raphaël; Cuvier, Olivier

    2016-05-01

    Recent advances in long-range Hi-C contact mapping have revealed the importance of the 3D structure of chromosomes in gene expression. A current challenge is to identify the key molecular drivers of this 3D structure. Several genomic features, such as architectural proteins and functional elements, were shown to be enriched at topological domain borders using classical enrichment tests. Here we propose multiple logistic regression to identify those genomic features that positively or negatively influence domain border establishment or maintenance. The model is flexible, and can account for statistical interactions among multiple genomic features. Using both simulated and real data, we show that our model outperforms enrichment tests and non-parametric models, such as random forests, for the identification of genomic features that influence domain borders. Using Drosophila Hi-C data at a very high resolution of 1 kb, our model suggests that, among architectural proteins, BEAF-32 and CP190 are the main positive drivers of 3D domain borders. In humans, our model identifies well-known architectural proteins CTCF and cohesin, as well as ZNF143 and Polycomb group proteins, as positive drivers of domain borders. The model also reveals the existence of several negative drivers that counteract the presence of domain borders, including P300, RXRA, BCL11A and ELK1.
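
    The approach can be sketched on simulated data (assuming scikit-learn): regress border presence per genomic bin on several feature tracks at once, so that signed coefficients separate positive from negative drivers in a way marginal enrichment tests cannot.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        X = rng.random((5000, 3))                      # three feature tracks
        logit = 2.5 * X[:, 0] - 1.5 * X[:, 1] - 1.0    # 0 positive, 1 negative
        y = rng.random(5000) < 1.0 / (1.0 + np.exp(-logit))
        model = LogisticRegression().fit(X, y)
        print(model.coef_)   # signs recover the simulated drivers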

  9. Computational Identification of Genomic Features That Influence 3D Chromatin Domain Formation

    PubMed Central

    Mourad, Raphaël; Cuvier, Olivier

    2016-01-01

    Recent advances in long-range Hi-C contact mapping have revealed the importance of the 3D structure of chromosomes in gene expression. A current challenge is to identify the key molecular drivers of this 3D structure. Several genomic features, such as architectural proteins and functional elements, were shown to be enriched at topological domain borders using classical enrichment tests. Here we propose multiple logistic regression to identify those genomic features that positively or negatively influence domain border establishment or maintenance. The model is flexible, and can account for statistical interactions among multiple genomic features. Using both simulated and real data, we show that our model outperforms enrichment tests and non-parametric models, such as random forests, for the identification of genomic features that influence domain borders. Using Drosophila Hi-C data at a very high resolution of 1 kb, our model suggests that, among architectural proteins, BEAF-32 and CP190 are the main positive drivers of 3D domain borders. In humans, our model identifies well-known architectural proteins CTCF and cohesin, as well as ZNF143 and Polycomb group proteins, as positive drivers of domain borders. The model also reveals the existence of several negative drivers that counteract the presence of domain borders, including P300, RXRA, BCL11A and ELK1. PMID:27203237

  10. A model of the temporal dynamics of multisensory enhancement

    PubMed Central

    Rowland, Benjamin A.; Stein, Barry E.

    2014-01-01

    The senses transduce different forms of environmental energy, and the brain synthesizes information across them to enhance responses to salient biological events. We hypothesize that the potency of multisensory integration is attributable to the convergence of independent and temporally aligned signals derived from cross-modal stimulus configurations onto multisensory neurons. The temporal profile of multisensory integration in neurons of the deep superior colliculus (SC) is consistent with this hypothesis. The responses of these neurons to visual, auditory, and combinations of visual–auditory stimuli reveal that multisensory integration takes place in real-time; that is, the input signals are integrated as soon as they arrive at the target neuron. Interactions between cross-modal signals may appear to reflect linear or nonlinear computations on a moment-by-moment basis, the aggregate of which determines the net product of multisensory integration. Modeling observations presented here suggest that the early nonlinear components of the temporal profile of multisensory integration can be explained with a simple spiking neuron model, and do not require more sophisticated assumptions about the underlying biology. A transition from nonlinear “super-additive” computation to linear, additive computation can be accomplished via scaled inhibition. The findings provide a set of design constraints for artificial implementations seeking to exploit the basic principles and potency of biological multisensory integration in contexts of sensory substitution or augmentation. PMID:24374382

  11. Reward salience and risk aversion underlie differential ACC activity in substance dependence

    PubMed Central

    Alexander, William H.; Fukunaga, Rena; Finn, Peter; Brown, Joshua W.

    2015-01-01

    The medial prefrontal cortex, especially the dorsal anterior cingulate cortex (ACC), has long been implicated in cognitive control and error processing. Although the association between ACC and behavior has been established, it is less clear how ACC contributes to dysfunctional behavior such as substance dependence. Evidence from neuroimaging studies investigating ACC function in substance users is mixed, with some studies showing disengagement of ACC in substance dependent individuals (SDs), while others show increased ACC activity related to substance use. In this study, we investigate ACC function in SDs and healthy individuals performing a change signal task for monetary rewards. Using a priori predictions derived from a recent computational model of ACC, we find that ACC activity related to reward salience and risk aversion differs between SDs and healthy individuals. Quantitative fits of a computational model to fMRI data reveal significant differences in best-fit parameters for reward salience and risk preferences. Specifically, the ACC in SDs shows greater risk aversion, defined as concavity in the utility function, and greater attention to rewards relative to reward omission. Furthermore, across participants risk aversion and reward salience are positively correlated. The results clarify the role that ACC plays in both the reduced sensitivity to omitted rewards and greater reward valuation in SDs. Clinical implications of applying computational modeling in psychiatry are also discussed. PMID:26106528

  12. Reward salience and risk aversion underlie differential ACC activity in substance dependence.

    PubMed

    Alexander, William H; Fukunaga, Rena; Finn, Peter; Brown, Joshua W

    2015-01-01

    The medial prefrontal cortex, especially the dorsal anterior cingulate cortex (ACC), has long been implicated in cognitive control and error processing. Although the association between ACC and behavior has been established, it is less clear how ACC contributes to dysfunctional behavior such as substance dependence. Evidence from neuroimaging studies investigating ACC function in substance users is mixed, with some studies showing disengagement of ACC in substance dependent individuals (SDs), while others show increased ACC activity related to substance use. In this study, we investigate ACC function in SDs and healthy individuals performing a change signal task for monetary rewards. Using a priori predictions derived from a recent computational model of ACC, we find that ACC activity related to reward salience and risk aversion differs between SDs and healthy individuals. Quantitative fits of a computational model to fMRI data reveal significant differences in best-fit parameters for reward salience and risk preferences. Specifically, the ACC in SDs shows greater risk aversion, defined as concavity in the utility function, and greater attention to rewards relative to reward omission. Furthermore, across participants risk aversion and reward salience are positively correlated. The results clarify the role that ACC plays in both the reduced sensitivity to omitted rewards and greater reward valuation in SDs. Clinical implications of applying computational modeling in psychiatry are also discussed.

  13. Combining H/D exchange mass spectroscopy and computational docking reveals extended DNA-binding surface on uracil-DNA glycosylase

    PubMed Central

    Roberts, Victoria A.; Pique, Michael E.; Hsu, Simon; Li, Sheng; Slupphaug, Geir; Rambo, Robert P.; Jamison, Jonathan W.; Liu, Tong; Lee, Jun H.; Tainer, John A.; Ten Eyck, Lynn F.; Woods, Virgil L.

    2012-01-01

    X-ray crystallography provides excellent structural data on protein–DNA interfaces, but crystallographic complexes typically contain only small fragments of large DNA molecules. We present a new approach that can use longer DNA substrates and reveal new protein–DNA interactions even in extensively studied systems. Our approach combines rigid-body computational docking with hydrogen/deuterium exchange mass spectrometry (DXMS). DXMS identifies solvent-exposed protein surfaces; docking is used to create a 3-dimensional model of the protein–DNA interaction. We investigated the enzyme uracil-DNA glycosylase (UNG), which detects and cleaves uracil from DNA. UNG was incubated with a 30 bp DNA fragment containing a single uracil, giving the complex with the abasic DNA product. Compared with free UNG, the UNG–DNA complex showed increased solvent protection at the UNG active site and at two regions outside the active site: residues 210–220 and 251–264. Computational docking also identified these two DNA-binding surfaces, but neither shows DNA contact in UNG–DNA crystallographic structures. Our results can be explained by separation of the two DNA strands on one side of the active site. These non-sequence-specific DNA-binding surfaces may aid local uracil search, contribute to binding the abasic DNA product and help present the DNA product to APE-1, the next enzyme on the DNA-repair pathway. PMID:22492624

  14. Schizophrenia interactome with 504 novel protein–protein interactions

    PubMed Central

    Ganapathiraju, Madhavi K; Thahir, Mohamed; Handen, Adam; Sarkar, Saumendra N; Sweet, Robert A; Nimgaonkar, Vishwajit L; Loscher, Christine E; Bauer, Eileen M; Chaparala, Srilakshmi

    2016-01-01

    Genome-wide association studies of schizophrenia (GWAS) have revealed the role of rare and common genetic variants, but the functional effects of the risk variants remain to be understood. Protein interactome-based studies can facilitate the study of molecular mechanisms by which the risk genes relate to schizophrenia (SZ) genesis, but protein–protein interactions (PPIs) are unknown for many of the liability genes. We developed a computational model to discover PPIs, which is found to be highly accurate according to computational evaluations and experimental validations of selected PPIs. We present here 365 novel PPIs of liability genes identified by the SZ Working Group of the Psychiatric Genomics Consortium (PGC). Seventeen genes that had no previously known interactions have 57 novel interactions by our method. Among the new interactors are 19 drug targets that are targeted by 130 drugs. In addition, we computed 147 novel PPIs of 25 candidate genes investigated in the pre-GWAS era. While there is little overlap between the GWAS genes and the pre-GWAS genes, the interactomes reveal that they largely belong to the same pathways, thus reconciling the apparent disparities between the GWAS and prior gene association studies. The interactome, including 504 novel PPIs overall, could motivate other systems biology studies and trials with repurposed drugs. The PPIs are made available on a webserver, called Schizo-Pi, at http://severus.dbmi.pitt.edu/schizo-pi with advanced search capabilities. PMID:27336055

  15. Adaptive non-interventional heuristics for covariation detection in causal induction: model comparison and rational analysis.

    PubMed

    Hattori, Masasi; Oaksford, Mike

    2007-09-10

    In this article, 41 models of covariation detection from 2 × 2 contingency tables were evaluated against past data in the literature and against data from new experiments. A new model was also included based on a limiting case of the normative phi-coefficient under an extreme rarity assumption, which has been shown to be an important factor in covariation detection (McKenzie & Mikkelsen, 2007) and data selection (Hattori, 2002; Oaksford & Chater, 1994, 2003). The results were supportive of the new model. To investigate its explanatory adequacy, a rational analysis using two computer simulations was conducted. These simulations revealed the environmental conditions and the memory restrictions under which the new model best approximates the normative model of covariation detection in these tasks. They thus demonstrated the adaptive rationality of the new model. 2007 Cognitive Science Society, Inc.
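
    For reference, the normative phi coefficient on a 2 × 2 contingency table with cells a, b, c, d (cause and effect each present or absent) is computed as below; the new model is a limiting case of this quantity under a rarity assumption.

        import math

        # phi = (ad - bc) / sqrt((a+b)(c+d)(a+c)(b+d))
        def phi(a, b, c, d):
            num = a * d - b * c
            den = math.sqrt((a + b) * (c + d) * (a + c) * (b + d))
            return num / den

        print(phi(12, 3, 4, 81))   # strong positive covariation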

  16. Error Estimates of the Ares I Computed Turbulent Ascent Longitudinal Aerodynamic Analysis

    NASA Technical Reports Server (NTRS)

    Abdol-Hamid, Khaled S.; Ghaffari, Farhad

    2012-01-01

    Numerical predictions of the longitudinal aerodynamic characteristics for the Ares I class of vehicles, along with the associated error estimate derived from an iterative convergence grid refinement, are presented. Computational results are based on an unstructured grid, Reynolds-averaged Navier-Stokes analysis. The validity of the approach to compute the associated error estimates, derived from a base grid to an extrapolated infinite-size grid, was first demonstrated on a sub-scaled wind tunnel model at representative ascent flow conditions for which the experimental data existed. Such analysis at the transonic flow conditions revealed a maximum deviation of about 23% between the computed longitudinal aerodynamic coefficients with the base grid and the measured data across the entire range of roll angles. This maximum deviation from the wind tunnel data was associated with the computed normal force coefficient at the transonic flow condition and was reduced to approximately 16% based on the infinite-size grid. However, all the computed aerodynamic coefficients with the base grid at the supersonic flow conditions showed a maximum deviation of only about 8%, with that level being improved to approximately 5% for the infinite-size grid. The results and the error estimates based on the established procedure are also presented for the flight flow conditions.
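
    A generic grid-convergence estimate of this kind (a sketch, not the exact procedure used in the paper) extrapolates a coefficient computed on fine, medium, and coarse grids to an infinite-size grid via the observed order of convergence.

        import math

        # f1, f2, f3: coefficient on fine, medium, coarse grids; r: refinement ratio.
        def richardson(f1, f2, f3, r=2.0):
            p = math.log(abs((f3 - f2) / (f2 - f1))) / math.log(r)  # observed order
            f_inf = f1 + (f1 - f2) / (r**p - 1.0)   # infinite-grid estimate
            return f_inf, p

        print(richardson(1.02, 1.05, 1.11))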

  17. Climate Analytics as a Service. Chapter 11

    NASA Technical Reports Server (NTRS)

    Schnase, John L.

    2016-01-01

    Exascale computing, big data, and cloud computing are driving the evolution of large-scale information systems toward a model of data-proximal analysis. In response, we are developing a concept of climate analytics as a service (CAaaS) that represents a convergence of data analytics and archive management. With this approach, high-performance compute-storage implemented as an analytic system is part of a dynamic archive comprising both static and computationally realized objects. It is a system whose capabilities are framed as behaviors over a static data collection, but where queries cause results to be created, not found and retrieved. Those results can be the product of a complex analysis, but, importantly, they also can be tailored responses to the simplest of requests. NASA's MERRA Analytic Service and associated Climate Data Services API provide a real-world example of climate analytics delivered as a service in this way. Our experiences reveal several advantages to this approach, not the least of which is orders-of-magnitude time reduction in the data assembly task common to many scientific workflows.

  18. A heterogeneous computing accelerated SCE-UA global optimization method using OpenMP, OpenCL, CUDA, and OpenACC.

    PubMed

    Kan, Guangyuan; He, Xiaoyan; Ding, Liuqian; Li, Jiren; Liang, Ke; Hong, Yang

    2017-10-01

    The shuffled complex evolution optimization developed at the University of Arizona (SCE-UA) has been successfully applied in various kinds of scientific and engineering optimization applications, such as hydrological model parameter calibration, for many years. The algorithm possesses good global optimality, convergence stability and robustness. However, benchmark and real-world applications reveal the poor computational efficiency of the SCE-UA. This research aims at the parallelization and acceleration of the SCE-UA method based on powerful heterogeneous computing technology. The parallel SCE-UA is implemented on an Intel Xeon multi-core CPU (by using OpenMP and OpenCL) and an NVIDIA Tesla many-core GPU (by using OpenCL, CUDA, and OpenACC). The serial and parallel SCE-UA were tested based on the Griewank benchmark function. Comparison results indicate the parallel SCE-UA significantly improves computational efficiency compared to the original serial version. The OpenCL implementation obtains the best overall acceleration results, however, at the cost of the most complex source code. The parallel SCE-UA has bright prospects to be applied in real-world applications.
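
    The data-parallel core of such an acceleration is easy to sketch in Python (a process pool standing in for OpenMP/OpenCL/CUDA/OpenACC): evaluate the Griewank benchmark over a whole population of candidate points concurrently.

        import numpy as np
        from multiprocessing import Pool

        def griewank(x):
            i = np.arange(1, x.size + 1)
            return 1.0 + (x**2).sum() / 4000.0 - np.prod(np.cos(x / np.sqrt(i)))

        if __name__ == "__main__":
            pop = np.random.uniform(-600, 600, size=(64, 10))
            with Pool() as pool:
                costs = pool.map(griewank, list(pop))   # parallel evaluations
            print(min(costs))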

  19. Biomechanical effects of mobile computer location in a vehicle cab.

    PubMed

    Saginus, Kyle A; Marklin, Richard W; Seeley, Patricia; Simoneau, Guy G; Freier, Stephen

    2011-10-01

    The objective of this research is to determine the best location to place a conventional mobile computer supported by a commercially available mount in a light truck cab. U.S. and Canadian electric utility companies are in the process of integrating mobile computers into their fleet vehicle cabs. There are no publications on the effect of mobile computer location in a vehicle cab on biomechanical loading, performance, and subjective assessment. The authors tested four locations of mobile computers in a light truck cab in a laboratory study to determine how location affected muscle activity of the lower back and shoulders; joint angles of the shoulders, elbows, and wrist; user performance; and subjective assessment. A total of 22 participants were tested in this study. Placing the mobile computer closer to the steering wheel reduced low back and shoulder muscle activity. Joint angles of the shoulders, elbows, and wrists were also closer to neutral angle. Biomechanical modeling revealed substantially less spinal compression and trunk muscle force. In general, there were no practical differences in performance between the locations. Subjective assessment indicated that users preferred the mobile computer to be as close as possible to the steering wheel. Locating the mobile computer close to the steering wheel reduces risk of injuries, such as low back pain and shoulder tendonitis. Results from the study can guide electric utility companies in the installation of mobile computers into vehicle cabs. Results may also be generalized to other industries that use trucklike vehicles, such as construction.

  20. Composite operators in cubic field theories and link-overlap fluctuations in spin-glass models

    NASA Astrophysics Data System (ADS)

    Altieri, Ada; Parisi, Giorgio; Rizzo, Tommaso

    2016-01-01

    We present a complete characterization of the fluctuations and correlations of the squared overlap in the Edwards-Anderson spin-glass model in zero field. The analysis reveals that the energy-energy correlation (and thus the specific heat) has a different critical behavior than the fluctuations of the link overlap in spite of the fact that the average energy and average link overlap have the same critical properties. More precisely, the link-overlap fluctuations are larger than the specific heat according to a computation at first order in the 6 - ε expansion. An unexpected outcome is that the link-overlap fluctuations have a subdominant power-law contribution characterized by an anomalous logarithmic prefactor which is absent in the specific heat. In order to compute the ε expansion we consider the problem of the renormalization of quadratic composite operators in a generic multicomponent cubic field theory: the results obtained have a range of applicability beyond spin-glass theory.

  1. Computational procedures for probing interactions in OLS and logistic regression: SPSS and SAS implementations.

    PubMed

    Hayes, Andrew F; Matthes, Jörg

    2009-08-01

    Researchers often hypothesize moderated effects, in which the effect of an independent variable on an outcome variable depends on the value of a moderator variable. Such an effect reveals itself statistically as an interaction between the independent and moderator variables in a model of the outcome variable. When an interaction is found, it is important to probe the interaction, for theories and hypotheses often predict not just interaction but a specific pattern of effects of the focal independent variable as a function of the moderator. This article describes the familiar pick-a-point approach and the much less familiar Johnson-Neyman technique for probing interactions in linear models and introduces macros for SPSS and SAS to simplify the computations and facilitate the probing of interactions in ordinary least squares and logistic regression. A script version of the SPSS macro is also available for users who prefer a point-and-click user interface rather than command syntax.
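
    The Johnson-Neyman technique reduces to solving a quadratic: for y = b0 + b1*x + b2*m + b3*x*m, the conditional effect b1 + b3*m is significant wherever its ratio to its standard error exceeds the critical t, and the boundary values of the moderator m solve (b1 + b3*m)^2 = t^2 * Var(b1 + b3*m). A sketch of that computation, not the macros' own code:

        import numpy as np

        # Roots of (b1 + b3*m)^2 = t^2 (var_b1 + 2m*cov_b1b3 + m^2*var_b3).
        def jn_boundaries(b1, b3, var_b1, var_b3, cov_b1b3, t_crit):
            a = b3**2 - t_crit**2 * var_b3
            b = 2.0 * (b1 * b3 - t_crit**2 * cov_b1b3)
            c = b1**2 - t_crit**2 * var_b1
            disc = b**2 - 4.0 * a * c
            if disc < 0:
                return None   # no real boundaries in the moderator's range
            return np.sort((-b + np.array([-1.0, 1.0]) * np.sqrt(disc)) / (2.0 * a))

        print(jn_boundaries(0.5, 0.3, 0.04, 0.01, 0.002, 1.97))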

  2. Computational substrates of social norm enforcement by unaffected third parties.

    PubMed

    Zhong, Songfa; Chark, Robin; Hsu, Ming; Chew, Soo Hong

    2016-04-01

    Enforcement of social norms by impartial bystanders in the human species reveals a possibly unique capacity to sense and to enforce norms from a third party perspective. Such behavior, however, cannot be accounted for by current computational models based on an egocentric notion of norms. Here, using a combination of model-based fMRI and third party punishment games, we show that brain regions previously implicated in egocentric norm enforcement critically extend to the important case of norm enforcement by unaffected third parties. Specifically, we found that responses in the ACC and insula cortex were positively associated with detection of distributional inequity, while those in the anterior DLPFC were associated with assessment of intentionality to the violator. Moreover, during sanction decisions, the subjective value of sanctions modulated activity in both vmPFC and rTPJ. These results shed light on the neurocomputational underpinnings of third party punishment and the evolutionary origin of human norm enforcement. Copyright © 2016 Elsevier Inc. All rights reserved.

  3. Decoding the Regulatory Network for Blood Development from Single-Cell Gene Expression Measurements

    PubMed Central

    Haghverdi, Laleh; Lilly, Andrew J.; Tanaka, Yosuke; Wilkinson, Adam C.; Buettner, Florian; Macaulay, Iain C.; Jawaid, Wajid; Diamanti, Evangelia; Nishikawa, Shin-Ichi; Piterman, Nir; Kouskoff, Valerie; Theis, Fabian J.; Fisher, Jasmin; Göttgens, Berthold

    2015-01-01

    Here we report the use of diffusion maps and network synthesis from state transition graphs to better understand developmental pathways from single cell gene expression profiling. We map the progression of mesoderm towards blood in the mouse by single-cell expression analysis of 3,934 cells, capturing cells with blood-forming potential at four sequential developmental stages. By adapting the diffusion map methodology for dimensionality reduction to single-cell data, we reconstruct the developmental journey to blood at single-cell resolution. Using transitions between individual cellular states as input, we develop a single-cell network synthesis toolkit to generate a computationally executable transcriptional regulatory network model that recapitulates blood development. Model predictions were validated by showing that Sox7 inhibits primitive erythropoiesis, and that Sox and Hox factors control early expression of Erg. We therefore demonstrate that single-cell analysis of a developing organ coupled with computational approaches can reveal the transcriptional programs that control organogenesis. PMID:25664528

  4. Modeling Political Populations with Bacteria

    NASA Astrophysics Data System (ADS)

    Cleveland, Chris; Liao, David

    2011-03-01

    Results from lattice-based simulations of micro-environments with heterogeneous nutrient resources reveal that competition between wild-type and GASP rpoS819 strains of E. coli offers mutual benefit, particularly in nutrient-deprived regions. Our computational model spatially maps bacteria populations and energy sources onto a set of 3D lattices that collectively resemble the topology of North America. By implementing Wright-Fisher reproduction into a probabilistic leap-frog scheme, we observe populations of wild-type and GASP rpoS819 cells compete for resources and, yet, aid each other's long-term survival. The connection to how spatial political ideologies map in a similar way is discussed.
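
    A single Wright-Fisher generation is compact to state: offspring counts are a multinomial draw with probabilities proportional to fitness-weighted strain frequencies (values below illustrative).

        import numpy as np

        def wright_fisher_step(counts, fitness, rng):
            w = counts * fitness
            return rng.multinomial(counts.sum(), w / w.sum())

        rng = np.random.default_rng(1)
        print(wright_fisher_step(np.array([500, 500]), np.array([1.0, 1.05]), rng))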

  5. Surface electromyogram for the control of anthropomorphic teleoperator fingers.

    PubMed

    Gupta, V; Reddy, N P

    1996-01-01

    The growing importance of telesurgery has led to the need for the development of synergistic control of anthropomorphic teleoperators. Synergistic systems can be developed using direct biological control. The purpose of this study was to develop techniques for direct biocontrol of anthropomorphic teleoperators using surface electromyogram (EMG). A computer model of a two-finger teleoperator was developed and controlled using surface EMG from the flexor digitorum superficialis during flexion-extension of the index finger. The results of the study revealed a linear relationship between the RMS EMG and the flexion-extension of the finger model. Therefore, surface EMG can be used as a direct biocontrol for teleoperators and in VR applications.

  6. (Extreme) Core-collapse Supernova Simulations

    NASA Astrophysics Data System (ADS)

    Mösta, Philipp

    2017-01-01

    In this talk I will present recent progress on modeling core-collapse supernovae with massively parallel simulations on the largest supercomputers available. I will discuss the unique challenges in both input physics and computational modeling that come with a problem involving all four fundamental forces and relativistic effects and will highlight recent breakthroughs overcoming these challenges in full 3D simulations. I will pay particular attention to how these simulations can be used to reveal the engines driving some of the most extreme explosions and conclude by discussing what remains to be done in simulation work to maximize what we can learn from current and future time-domain astronomy transient surveys.

  7. Epidemics and dimensionality in hierarchical networks

    NASA Astrophysics Data System (ADS)

    Zheng, Da-Fang; Hui, P. M.; Trimper, Steffen; Zheng, Bo

    2005-07-01

    Epidemiological processes are studied within a recently proposed hierarchical network model using the susceptible-infected-refractory dynamics of an epidemic. Within the network model, a population may be characterized by H independent hierarchies or dimensions, each of which consists of groupings of individuals into layers of subgroups. Detailed numerical simulations reveal that for H>1, global spreading results regardless of the degree of homophily of the individuals forming a social circle. For H=1, a transition from global to local spread occurs as the population becomes decomposed into increasingly homophilous groups. Multiple dimensions in classifying individuals (nodes) thus make a society (computer network) highly susceptible to large-scale outbreaks of infectious diseases (viruses).

  8. Macro- and micro-chaotic structures in the Hindmarsh-Rose model of bursting neurons

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barrio, Roberto, E-mail: rbarrio@unizar.es; Serrano, Sergio; Angeles Martínez, M.

    2014-06-01

    We study a plethora of chaotic phenomena in the Hindmarsh-Rose neuron model with the use of several computational techniques including the bifurcation parameter continuation, spike-quantification, and evaluation of Lyapunov exponents in bi-parameter diagrams. Such an aggregated approach allows for detecting regions of simple and chaotic dynamics, and demarcating borderlines—exact bifurcation curves. We demonstrate how the organizing centers—points corresponding to codimension-two homoclinic bifurcations—along with fold and period-doubling bifurcation curves structure the biparametric plane, thus forming macro-chaotic regions of onion bulb shapes and revealing spike-adding cascades that generate micro-chaotic structures due to the hysteresis.

  9. Foundations of anticipatory logic in biology and physics.

    PubMed

    Bettinger, Jesse S; Eastman, Timothy E

    2017-12-01

    Recent advances in modern physics and biology reveal several scenarios in which top-down effects (Ellis, 2016) and anticipatory systems (Rosen, 1980) indicate processes at work enabling active modeling and inference such that anticipated effects project onto potential causes. We extrapolate a broad landscape of anticipatory systems in the natural sciences extending to computational neuroscience of perception in the capacity of Bayesian inferential models of predictive processing. This line of reasoning also comes with philosophical foundations, which we develop in terms of counterfactual reasoning and possibility space, Whitehead's process thought, and correlations with Eastern wisdom traditions. Copyright © 2017 Elsevier Ltd. All rights reserved.

  10. State estimation of stochastic non-linear hybrid dynamic system using an interacting multiple model algorithm.

    PubMed

    Elenchezhiyan, M; Prakash, J

    2015-09-01

    In this work, state estimation schemes for non-linear hybrid dynamic systems subjected to stochastic state disturbances and random errors in measurements using interacting multiple-model (IMM) algorithms are formulated. In order to compute both discrete modes and continuous state estimates of a hybrid dynamic system, either an IMM extended Kalman filter (IMM-EKF) or an IMM-based derivative-free Kalman filter is proposed in this study. The efficacy of the proposed IMM based state estimation schemes is demonstrated by conducting Monte-Carlo simulation studies on the two-tank hybrid system and switched non-isothermal continuous stirred tank reactor system. Extensive simulation studies reveal that the proposed IMM based state estimation schemes are able to generate fairly accurate continuous state estimates and discrete modes. In the presence and absence of sensor bias, the simulation studies reveal that the proposed IMM unscented Kalman filter (IMM-UKF) based simultaneous state and parameter estimation scheme outperforms the multiple-model UKF (MM-UKF) based simultaneous state and parameter estimation scheme. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
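
    The step that distinguishes IMM estimators can be sketched on its own: mode probabilities are propagated through the Markov mode-transition matrix, and the per-mode estimates are blended before each mode-matched filter runs. A generic sketch, with illustrative values:

        import numpy as np

        # IMM mixing: mu = mode probabilities, trans[i, j] = P(mode j | mode i),
        # states/covs = per-mode means and covariances from the previous cycle.
        def imm_mix(mu, trans, states, covs):
            c = trans.T @ mu                            # predicted mode probs
            w = (trans * mu[:, None]) / c[None, :]      # mixing weights w[i, j]
            x_mix = np.einsum('ij,ik->jk', w, states)   # blended means
            P_mix = np.zeros_like(covs)
            for j in range(mu.size):
                for i in range(mu.size):
                    d = states[i] - x_mix[j]
                    P_mix[j] += w[i, j] * (covs[i] + np.outer(d, d))
            return c, x_mix, P_mix

        mu = np.array([0.7, 0.3])
        T = np.array([[0.95, 0.05], [0.10, 0.90]])
        xs = np.array([[0.0, 1.0], [0.2, 0.8]])
        Ps = np.stack([np.eye(2), np.eye(2)])
        c, x_mix, P_mix = imm_mix(mu, T, xs, Ps)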

  11. Mechanism of Kinetically Controlled Capillary Condensation in Nanopores: A Combined Experimental and Monte Carlo Approach.

    PubMed

    Hiratsuka, Tatsumasa; Tanaka, Hideki; Miyahara, Minoru T

    2017-01-24

    Based on transition state theory, we find the rule governing capillary condensation from the metastable state in nanoscale pores. Conventional thermodynamic theories cannot capture it because metastable capillary condensation inherently involves an activated process. We thus compute argon adsorption isotherms on cylindrical pore models and atomistic silica pore models mimicking the MCM-41 materials by the grand canonical Monte Carlo and the gauge cell Monte Carlo methods and evaluate the rate constant for the capillary condensation by the transition state theory. The results reveal that the rate drastically increases with a small increase in the chemical potential of the system, and the metastable capillary condensation occurs for any mesopores when the rate constant reaches a universal critical value. Furthermore, a careful comparison between experimental adsorption isotherms and the simulated ones on the atomistic silica pore models reveals that the rate constant of the real system also has a universal value. With this finding, we can successfully estimate the experimental capillary condensation pressure over a wide range of temperatures and pore sizes by simply applying the critical rate constant.

  12. Autotransplantation of immature third molars using a computer-aided rapid prototyping model: a report of 4 cases.

    PubMed

    Jang, Ji-Hyun; Lee, Seung-Jong; Kim, Euiseong

    2013-11-01

    Autotransplantation of immature teeth can be an option for premature tooth loss in young patients as an alternative to immediate replacement with fixed or implant-supported prostheses. The present case series reports 4 successful autotransplantation cases using computer-aided rapid prototyping (CARP) models with immature third molars. The compromised upper and lower molars (n = 4) of patients aged 15-21 years were replaced by transplanted third molars using CARP models. Postoperatively, pulp vitality and root development were examined clinically and radiographically. The follow-up period was 2-7.5 years after surgery. Long-term follow-up showed that all of the transplants were asymptomatic and functional. Radiographic examination indicated that the apices developed continuously and that root length and thickness increased. The final follow-up examination revealed that all of the transplants retained their vitality and that the apices were fully developed, with normal periodontal ligaments and trabecular bony patterns. Based on these long-term follow-up observations, our 4 cases of autotransplantation of immature teeth using CARP models had favorable prognoses. The CARP model helped minimize extraoral time and possible injury to Hertwig's epithelial root sheath in the transplanted tooth. Copyright © 2013 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.

  13. Molecular Dynamics based on a Generalized Born solvation model: application to protein folding

    NASA Astrophysics Data System (ADS)

    Onufriev, Alexey

    2004-03-01

    An accurate description of the aqueous environment is essential for realistic biomolecular simulations but may become computationally very expensive. We have developed a version of the Generalized Born model suitable for describing large conformational changes in macromolecules. The model represents the solvent implicitly, as a continuum with the dielectric properties of water, and includes the charge-screening effects of salt. The computational cost of using this model in Molecular Dynamics simulations is generally considerably smaller than that of representing water explicitly. Also, compared to traditional Molecular Dynamics simulations based on an explicit water representation, conformational changes occur much faster in the implicit solvation environment due to the absence of viscosity. The combined speed-up allows one to probe conformational changes that occur on much longer effective time-scales. We apply the model to the folding of a 46-residue three-helix-bundle protein (residues 10-55 of protein A, PDB ID 1BDD). Starting from an unfolded structure at 450 K, the protein folds to the lowest-energy state in 6 ns of simulation time, which takes about a day on a 16-processor SGI machine. The predicted structure differs from the native one by 2.4 Å (backbone RMSD). Analysis of the structures seen on the folding pathway reveals details of the folding process unavailable from experiment.
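
    For reference, the pairwise Generalized Born polarization energy in the widely used Still et al. functional form, with Debye-Hückel salt screening folded in, can be sketched in a few lines. The charges, coordinates, effective Born radii, and screening constant below are illustrative inputs; computing the effective radii themselves, which is where models of this kind differ, is the hard part and is not shown.

      import numpy as np

      def gb_energy(q, xyz, R, eps_w=78.5, kappa=0.1):
          """Generalized Born polarization energy, Still et al. form.
          q: charges, xyz: (n, 3) coordinates, R: effective Born radii."""
          n = len(q)
          E = 0.0
          for i in range(n):
              for j in range(i, n):
                  r2 = float(np.sum((xyz[i] - xyz[j]) ** 2))
                  # smooth interpolation between Coulomb (far) and Born (self) limits
                  f = np.sqrt(r2 + R[i] * R[j] * np.exp(-r2 / (4.0 * R[i] * R[j])))
                  # salt screening via a Debye-Hueckel factor; kappa = 0 recovers 1 - 1/eps_w
                  pref = 1.0 - np.exp(-kappa * f) / eps_w
                  pair = -pref * q[i] * q[j] / f
                  E += pair if i != j else 0.5 * pair   # self terms carry the 1/2
          return E

      # toy three-charge "molecule" with assumed Born radii (arbitrary units)
      q = np.array([0.5, -1.0, 0.5])
      xyz = np.array([[0.0, 0.0, 0.0], [3.0, 0.0, 0.0], [6.0, 0.0, 0.0]])
      print("GB polarization energy:", gb_energy(q, xyz, R=np.array([1.5, 1.7, 1.5])))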

  14. Moho Modeling Using FFT Technique

    NASA Astrophysics Data System (ADS)

    Chen, Wenjin; Tenzer, Robert

    2017-04-01

    To improve numerical efficiency, the Fast Fourier Transform (FFT) technique has been incorporated into Parker-Oldenburg's method for regional gravimetric Moho recovery, which assumes a planar approximation of the Earth. In this study, we extend this formulation to global applications under a spherical approximation of the Earth. In particular, we utilize the FFT technique for a global Moho recovery, which is realized in two numerical steps. Gravimetric forward modeling is first applied, based on methods for spherical harmonic analysis and synthesis of global gravity and lithospheric structure models, to compute the refined gravity field, which comprises mainly the gravitational signature of the Moho geometry. The gravimetric inverse problem is then solved iteratively to determine the Moho depth. Applying the FFT technique to both numerical steps reduces the computation time to a fraction of that required without this fast algorithm. The developed numerical procedures are used to estimate the Moho depth globally, and the gravimetric result is validated against the global (CRUST1.0) and regional (ESC) seismic Moho models. The comparison reveals a relatively good agreement between the gravimetric and seismic models, with an RMS of differences of 4-5 km, at the level of the expected uncertainties of the input datasets, and without significant systematic bias.
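
    The classical planar Parker-Oldenburg scheme that this work generalizes can be sketched compactly: the first-order Moho undulation comes from downward-continuing the gravity anomaly in the wavenumber domain, and higher-order Parker terms are iterated to convergence. Grid spacing, mean Moho depth, density contrast, truncation order, and sign conventions below are assumptions for illustration, and the planar (not spherical) geometry is deliberate.

      import numpy as np

      G = 6.674e-11                                # gravitational constant (SI)

      def parker_oldenburg(dg, dx, z0, drho, n_terms=5, n_iter=10):
          """Planar Parker-Oldenburg inversion: gravity anomaly dg (SI, gridded
          at spacing dx) -> interface undulation h about mean depth z0."""
          ny, nx = dg.shape
          kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
          ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
          k = np.hypot(*np.meshgrid(kx, ky))       # radial wavenumber |k|
          Dg = np.fft.fft2(dg)
          first = -Dg * np.exp(k * z0) / (2 * np.pi * G * drho)
          H = first.copy()
          H[0, 0] = 0.0                            # zero-mean undulation
          for _ in range(n_iter):
              h = np.real(np.fft.ifft2(H))
              corr = np.zeros_like(H)              # sum_n k^(n-1)/n! * FFT(h^n)
              fact = 1.0
              for n in range(2, n_terms + 1):
                  fact *= n
                  corr += k ** (n - 1) / fact * np.fft.fft2(h ** n)
              H = first - corr
              H[0, 0] = 0.0
          return np.real(np.fft.ifft2(H))

    Because the exp(k*z0) downward-continuation factor amplifies short wavelengths, practical implementations low-pass filter the anomaly before inverting.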

  15. Toward Better Modeling of Supercritical Turbulent Mixing

    NASA Technical Reports Server (NTRS)

    Selle, Laurent; Okong'o, Nora; Bellan, Josette; Harstad, Kenneth

    2008-01-01

    A study was done as part of an effort to develop computational models representing turbulent mixing under thermodynamically supercritical (here, high-pressure) conditions. The question was whether the large-eddy simulation (LES) approach, developed previously for atmospheric-pressure compressible-perfect-gas and incompressible flows, can be extended to real-gas, non-ideal (including supercritical) fluid mixtures. [In LES, the governing equations are approximated such that the flow field is spatially filtered and subgrid-scale (SGS) phenomena are represented by models.] The study included analyses of results from direct numerical simulation (DNS) of several such mixing layers based on the Navier-Stokes, total-energy, and conservation-of-chemical-species governing equations. Comparison of LES and DNS results revealed the need to augment the atmospheric-pressure LES equations with additional SGS momentum and energy terms. These new terms are the direct result of regions of high density-gradient magnitude found in the DNS and observed experimentally under fully turbulent flow conditions. A model was derived for the new term in the momentum equation; it performs well at small filter sizes but deteriorates with increasing filter size. Several alternative models were derived for the new SGS term in the energy equation; further investigation is needed to determine whether they are too computationally intensive for LES.
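
    The notion of spatial filtering and of the SGS terms it exposes can be illustrated with an "a priori" test: filter DNS-like fields and measure the residual stress that the LES equations must model. The box filter, filter width, and synthetic random fields below are illustrative assumptions; actual a priori tests use real DNS data.

      import numpy as np

      def box_filter(f, w):
          """Periodic top-hat filter over 2*(w//2)+1 cells along each axis."""
          taps = range(-(w // 2), w // 2 + 1)
          g = f.copy()
          for ax in range(f.ndim):
              g = sum(np.roll(g, s, axis=ax) for s in taps) / len(taps)
          return g

      rng = np.random.default_rng(1)
      u = rng.standard_normal((64, 64))            # stand-in for a DNS velocity component
      v = rng.standard_normal((64, 64))

      w = 5                                        # filter width in cells (assumed)
      # SGS stress: tau_uv = bar(u v) - bar(u) bar(v); this is what must be modeled
      tau_uv = box_filter(u * v, w) - box_filter(u, w) * box_filter(v, w)
      print("rms of the unresolved (SGS) stress:", float(np.sqrt(np.mean(tau_uv**2))))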

  16. Automated analysis of biological oscillator models using mode decomposition.

    PubMed

    Konopka, Tomasz

    2011-04-01

    Oscillating signals produced by biological systems have shapes, described by their Fourier spectra, that can potentially reveal the mechanisms that generate them. Extracting this information from measured signals is of interest for validating theoretical models, discovering and classifying interaction types, and designing optimal experiments. An automated workflow is described for the analysis of oscillating signals. A software package is developed that matches signal shapes to hundreds of a priori viable model structures defined by a class of first-order differential equations. The package computes parameter values for each model by exploiting the mode decomposition of oscillating signals and formulating the matching problem as systems of simultaneous polynomial equations. On the basis of the computed parameter values, the software returns a list of models consistent with the data. In validation tests with synthetic datasets, it not only shortlists the model structures used to generate the data but also shows that excellent fits can sometimes be achieved with alternative equations. The listing of all consistent equations indicates how further invalidation might be achieved with additional information. When applied to data from a microarray experiment on mice, the procedure finds several candidate model structures describing interactions related to the circadian rhythm. This shows that experimental data on oscillators is indeed rich in information about gene regulation mechanisms. The software package is available at http://babylone.ulb.ac.be/autoosc/.
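
    The core idea can be sketched in a few lines: in Fourier space, a candidate first-order structure turns into simultaneous equations for its parameters, one per mode, and the residual of the solved system says whether the structure is consistent with the data. The toy below checks the linear candidate dx/dt = a*x + b*y against two synthetic signals; the signals and the candidate structure are illustrative assumptions (polynomial terms would give polynomial rather than linear systems, as in the paper).

      import numpy as np

      N, T = 256, 2 * np.pi
      t = np.linspace(0, T, N, endpoint=False)
      x, y = np.cos(t), np.sin(t)                  # synthetic oscillating "measurements"

      k = np.fft.fftfreq(N, d=T / N) * T           # integer mode numbers
      X, Y = np.fft.fft(x), np.fft.fft(y)
      dX = 1j * k * (2 * np.pi / T) * X            # exact Fourier modes of dx/dt

      # candidate structure dx/dt = a*x + b*y: one linear equation per mode
      B = np.stack([X, Y], axis=1)
      (a, b), *_ = np.linalg.lstsq(B, dX, rcond=None)
      resid = np.linalg.norm(B @ np.array([a, b]) - dX)
      print(f"a = {a.real:.3f}, b = {b.real:.3f}, residual = {resid:.2e}")
      # a ~ 0 and b ~ -1 with a tiny residual: this structure is consistent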

  17. A sediment resuspension and water quality model of Lake Okeechobee

    USGS Publications Warehouse

    James, R.T.; Martin, J.; Wool, T.; Wang, P.-F.

    1997-01-01

    The influence of sediment resuspension on the water quality of shallow lakes is well documented. However, a search of the literature reveals no deterministic mass-balance eutrophication models that explicitly include resuspension. We modified the Lake Okeechobee water quality model - which uses the Water Analysis Simulation Package (WASP) to simulate algal dynamics and the phosphorus, nitrogen, and oxygen cycles - to include inorganic suspended solids and algorithms that: (1) define changes in depth with changes in volume; (2) compute sediment resuspension based on bottom shear stress; (3) compute partition coefficients for ammonia and ortho-phosphorus to solids; and (4) relate light attenuation to solids concentrations. Model calibration and validation were successful, with the exception of the dissolved inorganic nitrogen species, which did not correspond well to observed data in the validation phase. This could be attributed to an inaccurate formulation of algal nitrogen preference and/or the absence of nitrogen fixation in the model. The model correctly predicted that the lake is light-limited by resuspended solids and that algae are primarily nitrogen-limited. The model simulation suggested that biological fluxes greatly exceed external loads of dissolved nutrients, and that sediment-water interactions of organic nitrogen and phosphorus far exceed external loads. A sensitivity analysis demonstrated that parameters affecting resuspension, settling, sediment nutrient and solids concentrations, mineralization, algal productivity, and algal stoichiometry are the factors requiring further study to improve our understanding of the Lake Okeechobee ecosystem.
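
    As an illustration of the bottom-shear-stress resuspension mechanism (item 2 above), the sketch below combines a wave-induced bed shear stress with a linear excess-stress erosion law that switches on above a critical stress. The friction factor, critical stress, and erodibility constant are illustrative assumptions, not the calibrated Lake Okeechobee values.

      def bottom_shear_stress(u_orb, rho=1000.0, fw=0.03):
          """Wave-induced bed shear stress: tau_b = 0.5 * rho * fw * u_orb**2 (Pa)."""
          return 0.5 * rho * fw * u_orb ** 2

      def resuspension_flux(tau_b, tau_c=0.1, M=5e-5):
          """Erosion flux (kg m-2 s-1): zero below the critical stress tau_c,
          then linear in the normalized excess stress."""
          return M * max(0.0, (tau_b - tau_c) / tau_c)

      tau = bottom_shear_stress(u_orb=0.25)        # near-bed orbital velocity, m/s (assumed)
      print(f"tau_b = {tau:.3f} Pa, erosion flux = {resuspension_flux(tau):.2e} kg/m2/s")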

  18. A mixed SIR-SIS model to contain a virus spreading through networks with two degrees

    NASA Astrophysics Data System (ADS)

    Essouifi, Mohamed; Achahbar, Abdelfattah

    Because the “nodes” and “links” of real networks are heterogeneous, we borrow the recently introduced idea of a reduced scale-free network to model the prevalence of computer viruses across the Internet. The purpose of this paper is to extend the earlier deterministic two-subchain Susceptible-Infected-Susceptible (SIS) model into a mixed Susceptible-Infected-Recovered and Susceptible-Infected-Susceptible (SIR-SIS) model for containing computer virus spread over networks with two degrees, and to develop its stochastic counterpart. Because of the high protection and security measures applied to the hub class, we propose treating it with an SIR epidemic model rather than an SIS one. The analytical study reveals that the proposed model admits a stable viral equilibrium, and it is shown numerically that the mean dynamic behavior of the stochastic model agrees with the deterministic one. Unlike the infection densities i2 and i, which both tend to a viral equilibrium under both approaches, as in the previous study, i1 tends to the virus-free equilibrium. Furthermore, since a proportion of infectives are recovered, the global infection density i is reduced. The permanent presence of viruses in the network is therefore due to the class of lower-degree nodes. Many suggestions are put forward for containing virus propagation and minimizing its damage.
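
    A minimal sketch of such a mixed two-class mean-field model is given below: the low-degree class follows SIS dynamics while the well-protected hubs follow SIR, coupled through a degree-weighted probability that a random link points at an infected node. Degrees, class fractions, and rates are illustrative assumptions rather than the paper's equations; with these values the SIS class settles at an endemic level while hub infection dies out.

      import numpy as np

      k_low, k_hub = 2, 20          # node degrees of the two classes (assumed)
      p_low, p_hub = 0.9, 0.1       # fractions of nodes in each class (assumed)
      beta, gamma = 0.5, 0.2        # per-link infection rate, recovery rate (assumed)

      def theta(i_low, i_hub):
          """Probability that a randomly followed link ends at an infected node."""
          kbar = p_low * k_low + p_hub * k_hub
          return (p_low * k_low * i_low + p_hub * k_hub * i_hub) / kbar

      # state = [s_low, i_low, s_hub, i_hub, r_hub], densities within each class
      state = np.array([0.99, 0.01, 0.99, 0.01, 0.0])
      dt = 0.01
      for _ in range(10000):                       # explicit Euler to t = 100
          s_l, i_l, s_h, i_h, r_h = state
          lam = beta * theta(i_l, i_h)             # per-link infection pressure
          state = state + dt * np.array([
              -k_low * lam * s_l + gamma * i_l,    # SIS: recovered nodes rejoin S
              +k_low * lam * s_l - gamma * i_l,
              -k_hub * lam * s_h,                  # SIR: recovered hubs stay immune
              +k_hub * lam * s_h - gamma * i_h,
              +gamma * i_h,
          ])
      print("endemic low-degree infection:", round(float(state[1]), 3))
      print("hub infection (dies out):   ", round(float(state[3]), 6))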

  19. Population of computational rabbit-specific ventricular action potential models for investigating sources of variability in cellular repolarisation.

    PubMed

    Gemmell, Philip; Burrage, Kevin; Rodriguez, Blanca; Quinn, T Alexander

    2014-01-01

    Variability is observed at all levels of cardiac electrophysiology. Yet, the underlying causes and importance of this variability are generally unknown and difficult to investigate with current experimental techniques. The aim of the present study was to generate populations of computational ventricular action potential models that reproduce experimentally observed intercellular variability of repolarisation (represented by action potential duration) and to identify its potential causes. A systematic exploration of the effects of simultaneously varying the magnitude of six transmembrane current conductances (transient outward, rapid and slow delayed rectifier K(+), inward rectifying K(+), L-type Ca(2+), and Na(+)/K(+) pump currents) in two rabbit-specific ventricular action potential models (Shannon et al. and Mahajan et al.) at multiple cycle lengths (400, 600, 1,000 ms) was performed. This was accomplished with distributed computing software specialised for multi-dimensional parameter sweeps and grid execution. An initial population of 15,625 parameter sets was generated for both models at each cycle length. Action potential durations of these populations were compared to experimentally derived ranges for rabbit ventricular myocytes. 1,352 parameter sets for the Shannon model and 779 parameter sets for the Mahajan model yielded action potential durations within the experimental range, demonstrating that a wide array of ionic conductance values can be used to simulate a physiological rabbit ventricular action potential. Furthermore, by using clutter-based dimension reordering, a technique that allows visualisation of multi-dimensional spaces in two dimensions, the interactions of current conductances and their relative importance to the ventricular action potential at different cycle lengths were revealed. Overall, this work represents an important step towards a better understanding of the role that variability in current conductances may play in experimentally observed intercellular variability of rabbit ventricular action potential repolarisation.
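
    The calibration loop itself is simple to sketch. The toy below substitutes the two-variable Mitchell-Schaeffer cell model for the detailed rabbit models named above, sweeps two parameter scale factors on a grid, and accepts a parameter set if its action potential duration (time above a voltage threshold) falls inside an assumed experimental window; the stimulus, grid, threshold, and window are all illustrative assumptions.

      import numpy as np

      def apd(tau_in=0.3, tau_close=150.0, tau_out=6.0, tau_open=120.0,
              v_gate=0.13, dt=0.05, t_end=600.0):
          """Action potential duration (ms above v = 0.1) of a single paced
          Mitchell-Schaeffer cell, integrated with explicit Euler."""
          v, h, above = 0.0, 1.0, 0.0
          for step in range(int(t_end / dt)):
              t = step * dt
              stim = 0.5 if t < 2.0 else 0.0       # brief stimulus current (assumed)
              dv = h * v * v * (1 - v) / tau_in - v / tau_out + stim
              dh = (1 - h) / tau_open if v < v_gate else -h / tau_close
              v, h = v + dt * dv, h + dt * dh
              above += dt * (v > 0.1)              # accumulate time above threshold
          return above

      accepted = []
      for f_in in np.linspace(0.5, 1.5, 5):        # conductance-like scale factors
          for f_close in np.linspace(0.5, 1.5, 5):
              a = apd(tau_in=0.3 * f_in, tau_close=150.0 * f_close)
              if 150.0 <= a <= 300.0:              # assumed "experimental" APD window (ms)
                  accepted.append((f_in, f_close, a))
      print(f"{len(accepted)} of 25 parameter sets fall inside the APD window")

    The full study does exactly this with six conductance scale factors (a 5^6 = 15,625-point grid) and biophysically detailed cell models, which is why distributed grid execution was needed.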

  20. Are metastases from metastases clinically relevant? Computer modelling of cancer spread in a case of hepatocellular carcinoma.

    PubMed

    Bethge, Anja; Schumacher, Udo; Wree, Andreas; Wedemann, Gero

    2012-01-01

    Metastasis formation remains an enigmatic process, and one of the main questions asked recently is whether metastases are able to generate further metastases. Different models have been proposed to answer this question; however, their clinical significance remains unclear. A computer model was therefore developed that permits quantitative comparison of the different models with clinical data and that additionally predicts the outcome of treatment interventions. The computer model is based on a discrete-event simulation approach. On the basis of a case of an untreated patient with hepatocellular carcinoma and its multiple metastases in the liver, it was evaluated whether metastases are able to metastasise and, in particular, whether late-disseminated tumour cells are still capable of forming metastases. Additionally, resection of the primary tumour was simulated. The simulation results were compared with the clinical data. They reveal that the number of metastases varies significantly between scenarios in which metastases metastasise and scenarios in which they do not. In contrast, the total tumour mass is nearly unaffected by the two different modes of metastasis formation. Furthermore, the results provide evidence that metastasis formation is an early event and that late-disseminated tumour cells are still capable of forming metastases. The simulations also allow one to estimate how resection of the primary tumour delays the patient's death. The simulation results indicate that for this particular case of hepatocellular carcinoma, late metastases, i.e., metastases from metastases, are irrelevant in terms of total tumour mass. Hence metastases seeded from metastases are clinically irrelevant in our model system. Only the first metastases seeded from the primary tumour contribute significantly to the tumour burden and thus cause the patient's death.
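
    The comparison at the heart of the study can be sketched as a small stochastic simulation: tumours grow exponentially, seed new metastases at a size-dependent Poisson rate, and a flag toggles whether metastases themselves may seed. Growth rate, seeding coefficient and exponent, and the time horizon below are illustrative assumptions, not the fitted values from the hepatocellular carcinoma case, so only the mechanism, not the quantitative conclusion, carries over.

      import numpy as np

      rng = np.random.default_rng(7)

      def simulate(metas_can_seed, t_end=550.0, dt=1.0,
                   growth=0.05, c_seed=1e-7, alpha=0.66):
          """Grow tumours and seed metastases; returns (count, met mass, primary mass)."""
          sizes = [1.0]                            # cell counts; index 0 is the primary
          for _ in range(int(t_end / dt)):
              n_seeders = len(sizes) if metas_can_seed else 1
              for i in range(n_seeders):
                  rate = c_seed * sizes[i] ** alpha        # size-driven seeding intensity
                  for _ in range(rng.poisson(rate * dt)):
                      sizes.append(1.0)                    # new metastasis, one cell
              sizes = [s * np.exp(growth * dt) for s in sizes]
          return len(sizes) - 1, sum(sizes[1:]), sizes[0]

      for flag in (False, True):
          n, met_mass, primary = simulate(metas_can_seed=flag)
          print(f"metastases may seed: {flag!s:5}  count = {n:4d}  "
                f"met mass = {met_mass:.2e}  primary mass = {primary:.2e}")

    Comparing the two printed lines contrasts the metastasis count and the total metastatic mass under the two seeding hypotheses, which is the same readout the study compares against the clinical data.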
