Sample records for current computational models

  1. Multiscale Modeling in Computational Biomechanics: Determining Computational Priorities and Addressing Current Challenges

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tawhai, Merryn; Bischoff, Jeff; Einstein, Daniel R.

    2009-05-01

    Abstract In this article, we describe some current multiscale modeling issues in computational biomechanics from the perspective of the musculoskeletal and respiratory systems and mechanotransduction. First, we outline the necessity of multiscale simulations in these biological systems. Then we summarize challenges inherent to multiscale biomechanics modeling, regardless of the subdiscipline, followed by computational challenges that are system-specific. We discuss some of the current tools that have been utilized to aid research in multiscale mechanics simulations, and the priorities to further the field of multiscale biomechanics computation.

  2. Systems, methods and computer-readable media for modeling cell performance fade of rechargeable electrochemical devices

    DOEpatents

    Gering, Kevin L

    2013-08-27

    A system includes an electrochemical cell, monitoring hardware, and a computing system. The monitoring hardware periodically samples performance characteristics of the electrochemical cell. The computing system determines cell information from the performance characteristics of the electrochemical cell. The computing system also develops a mechanistic level model of the electrochemical cell to determine performance fade characteristics of the electrochemical cell, and analyzes the mechanistic level model to estimate performance fade characteristics of a similar electrochemical cell over aging. The mechanistic level model uses first constant-current pulses applied to the electrochemical cell at a first aging period and at three or more current values bracketing a first exchange current density. The mechanistic level model also is based on second constant-current pulses applied to the electrochemical cell at a second aging period and at three or more current values bracketing a second exchange current density.
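    The pulse-bracketing idea in this record can be sketched numerically: in the low-overpotential limit of the Butler-Volmer relation, overpotential is linear in current with slope RT/(nF·i0), so pulses at several currents bracketing i0 let one back out the exchange current density. The sketch below is an illustration of that idea with invented numbers, not the patented procedure.

```python
# Illustrative sketch (not the patented method): estimate exchange
# current density i0 from constant-current pulses, assuming the
# low-overpotential Butler-Volmer relation eta = (R*T/(n*F)) * (i/i0).
import numpy as np

R, T, F, n = 8.314, 298.15, 96485.0, 1.0  # gas const, temp (K), Faraday, electrons

def estimate_i0(currents, overpotentials):
    """Fit eta = slope * i through the origin; slope = R*T/(n*F*i0)."""
    i = np.asarray(currents, dtype=float)
    eta = np.asarray(overpotentials, dtype=float)
    slope = np.sum(i * eta) / np.sum(i * i)
    return R * T / (n * F * slope)

# Three pulse currents bracketing an assumed true i0 of 2.0 A/m^2
i0_true = 2.0
i_pulses = np.array([1.0, 2.0, 4.0])               # A/m^2
eta_meas = R * T / (n * F) * i_pulses / i0_true    # idealized linear response
i0_est = estimate_i0(i_pulses, eta_meas)           # recovers ~2.0 A/m^2
```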

  3. Mirror neurons and imitation: a computationally guided review.

    PubMed

    Oztop, Erhan; Kawato, Mitsuo; Arbib, Michael

    2006-04-01

    Neurophysiology reveals the properties of individual mirror neurons in the macaque while brain imaging reveals the presence of 'mirror systems' (not individual neurons) in the human. Current conceptual models attribute high level functions such as action understanding, imitation, and language to mirror neurons. However, only the first of these three functions is well-developed in monkeys. We thus distinguish current opinions (conceptual models) on mirror neuron function from more detailed computational models. We assess the strengths and weaknesses of current computational models in addressing the data and speculations on mirror neurons (macaque) and mirror systems (human). In particular, our mirror neuron system (MNS), mental state inference (MSI) and modular selection and identification for control (MOSAIC) models are analyzed in more detail. Conceptual models often overlook the computational requirements for posited functions, while too many computational models adopt the erroneous hypothesis that mirror neurons are interchangeable with imitation ability. Our meta-analysis underlines the gap between conceptual and computational models and points out the research effort required from both sides to reduce this gap.

  4. Computational Aeroelastic Modeling of Airframes and TurboMachinery: Progress and Challenges

    NASA Technical Reports Server (NTRS)

    Bartels, R. E.; Sayma, A. I.

    2006-01-01

    Computational analyses such as computational fluid dynamics and computational structural dynamics have made major advances toward maturity as engineering tools. Computational aeroelasticity is the integration of these disciplines. As computational aeroelasticity matures, it too finds an increasing role in the design and analysis of aerospace vehicles. This paper presents a survey of the current state of computational aeroelasticity with a discussion of recent research, successes, and continuing challenges in its progressive integration into multidisciplinary aerospace design. This paper approaches computational aeroelasticity from the perspective of the two main areas of application: airframe and turbomachinery design. An overview will be presented of the different prediction methods used for each field of application. Differing levels of nonlinear modeling will be discussed with insight into accuracy versus complexity and computational requirements. Subjects will include current advanced methods (linear and nonlinear), nonlinear flow models, use of order reduction techniques and future trends in incorporating structural nonlinearity. Examples in which computational aeroelasticity is currently being integrated into the design of airframes and turbomachinery will be presented.

  5. Profile modification computations for LHCD experiments on PBX-M using the TSC/LSC model

    NASA Astrophysics Data System (ADS)

    Kaita, R.; Ignat, D. W.; Jardin, S. C.; Okabayashi, M.; Sun, Y. C.

    1996-02-01

    The TSC-LSC computational model of the dynamics of lower hybrid current drive has been exercised extensively in comparison with data from a Princeton Beta Experiment-Modification (PBX-M) discharge where the measured q(0) attained values slightly above unity. Several significant, but plausible, assumptions had to be introduced to keep the computation from behaving pathologically over time, producing singular profiles of plasma current density and q. Addition of a heuristic current diffusion estimate, or more exactly, a smoothing of the rf-driven current with a diffusion-like equation, greatly improved the behavior of the computation, and brought theory and measurement into reasonable agreement. The model was then extended to longer pulse lengths and higher powers to investigate performance to be expected in future PBX-M current profile modification experiments.
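    The "smoothing of the rf-driven current with a diffusion-like equation" mentioned above can be sketched in a few lines: explicit time-stepping of a 1D diffusion equation spreads a near-singular current profile while conserving the total driven current. The grid, coefficient, and time step below are illustrative, not the actual TSC/LSC values.

```python
# Minimal sketch of diffusion-like smoothing of a driven-current
# profile j(x); parameters are illustrative, not from TSC/LSC.
import numpy as np

def smooth_current(j, D=1.0, dx=0.01, dt=1e-5, steps=200):
    """Explicit time-stepping of dj/dt = D * d2j/dx2, ends held fixed."""
    j = np.asarray(j, dtype=float).copy()
    for _ in range(steps):
        lap = (np.roll(j, -1) - 2.0 * j + np.roll(j, 1)) / dx**2
        lap[0] = lap[-1] = 0.0          # pin boundary values
        j += D * dt * lap               # stable: D*dt/dx^2 = 0.1 < 0.5
    return j

x = np.linspace(0.0, 1.0, 101)
j_spiky = np.where(np.abs(x - 0.5) < 0.02, 1.0, 0.0)  # near-singular profile
j_smooth = smooth_current(j_spiky)   # peak is lowered, total current kept
```

The pathological behavior described in the abstract corresponds to the sharp spike; the diffusion step regularizes it without changing the integrated current.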

  6. Computational Analysis of Static and Dynamic Behaviour of Magnetic Suspensions and Magnetic Bearings

    NASA Technical Reports Server (NTRS)

    Britcher, Colin P. (Editor); Groom, Nelson J.

    1996-01-01

    Static modelling of magnetic bearings is often carried out using magnetic circuit theory. This theory cannot easily include nonlinear effects such as magnetic saturation or the fringing of flux in air-gaps. Modern computational tools are able to accurately model complex magnetic bearing geometries, provided some care is exercised. In magnetic suspension applications, the magnetic fields are highly three-dimensional and require computational tools for the solution of most problems of interest. The dynamics of a magnetic bearing or magnetic suspension system can be strongly affected by eddy currents. Eddy currents are present whenever a time-varying magnetic flux penetrates a conducting medium. The direction of flow of the eddy current is such as to reduce the rate-of-change of flux. Analytic solutions for eddy currents are available for some simplified geometries, but complex geometries must be solved by computation. It is only in recent years that such computations have been considered truly practical. At NASA Langley Research Center, state-of-the-art finite-element computer codes, 'OPERA', 'TOSCA' and 'ELEKTRA', have recently been installed and applied to magnetostatic and eddy current problems. This paper reviews results of theoretical analyses which suggest general forms of mathematical models for eddy currents, together with computational results. A proposed simplified circuit-based eddy current model appears to predict the observed trends in the case of large eddy current circuits in conducting non-magnetic material. A much more difficult case is seen to be that of eddy currents in magnetic material, or in non-magnetic material at higher frequencies, due to the lower skin depths. Even here, the dissipative behavior has been shown to yield at least somewhat to linear modelling.
Magnetostatic and eddy current computations have been carried out relating to the Annular Suspension and Pointing System, a prototype for a space payload pointing and vibration isolation system, where the magnetic actuator geometry resembles a conventional magnetic bearing. Magnetostatic computations provide estimates of flux density within airgaps and the iron core material, fringing at the pole faces and the net force generated. Eddy current computations provide coil inductance, power dissipation and the phase lag in the magnetic field, all as functions of excitation frequency. Here, the dynamics of the magnetic bearings, notably the rise time of forces with changing currents, are found to be very strongly affected by eddy currents, even at quite low frequencies. Results are also compared to experimental measurements of the performance of a large-gap magnetic suspension system, the Large Angle Magnetic Suspension Test Fixture (LAMSTF). Eddy current effects are again shown to significantly affect the dynamics of the system. Some consideration is given to the ease and accuracy of computation, specifically relating to OPERA/TOSCA/ELEKTRA.
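    The skin-depth argument invoked above (lower skin depths make eddy currents in magnetic material harder to treat) reduces to the textbook relation δ = sqrt(2/(μσω)). A quick sketch with standard material constants, not values from the paper:

```python
# Skin depth delta = sqrt(2 / (mu * sigma * omega)); material constants
# below are textbook figures, not taken from the paper.
import math

MU0 = 4.0e-7 * math.pi  # vacuum permeability, H/m

def skin_depth(sigma, freq_hz, mu_r=1.0):
    """AC field penetration depth into a conductor, in meters."""
    omega = 2.0 * math.pi * freq_hz
    return math.sqrt(2.0 / (mu_r * MU0 * sigma * omega))

d_cu = skin_depth(5.8e7, 60.0)               # copper at 60 Hz: ~8.5 mm
d_fe = skin_depth(1.0e7, 60.0, mu_r=1000.0)  # iron at 60 Hz: well under 1 mm
```

The order-of-magnitude gap between the two results is why eddy effects in magnetic material, or at higher frequencies, demand much finer meshing.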

  7. Computer Models of Personality: Implications for Measurement

    ERIC Educational Resources Information Center

    Cranton, P. A.

    1976-01-01

    Current research on computer models of personality is reviewed and categorized under five headings: (1) models of belief systems; (2) models of interpersonal behavior; (3) models of decision-making processes; (4) prediction models; and (5) theory-based simulations of specific processes. The use of computer models in personality measurement is…

  8. Non-invasive brain stimulation and computational models in post-stroke aphasic patients: single session of transcranial magnetic stimulation and transcranial direct current stimulation. A randomized clinical trial.

    PubMed

    Santos, Michele Devido Dos; Cavenaghi, Vitor Breseghello; Mac-Kay, Ana Paula Machado Goyano; Serafim, Vitor; Venturi, Alexandre; Truong, Dennis Quangvinh; Huang, Yu; Boggio, Paulo Sérgio; Fregni, Felipe; Simis, Marcel; Bikson, Marom; Gagliardi, Rubens José

    2017-01-01

    Patients undergoing the same neuromodulation protocol may present different responses. Computational models may help in understanding such differences. The aims of this study were, firstly, to compare the performance of aphasic patients in naming tasks before and after one session of transcranial direct current stimulation (tDCS), transcranial magnetic stimulation (TMS) and sham, and analyze the results between these neuromodulation techniques; and secondly, through a computational model of the cortex and surrounding tissues, to assess current flow distribution and responses among patients who received tDCS and presented different levels of results from naming tasks. Prospective, descriptive, qualitative and quantitative, double blind, randomized and placebo-controlled study conducted at Faculdade de Ciências Médicas da Santa Casa de São Paulo. Patients with aphasia received one session of tDCS, TMS or sham stimulation. The time taken to name pictures and the response time were evaluated before and after neuromodulation. Selected patients from the first intervention underwent a computational model stimulation procedure that simulated tDCS. The results did not indicate any statistically significant differences from before to after the stimulation. The computational models showed different current flow distributions. The present study did not show any statistically significant difference between tDCS, TMS and sham stimulation regarding naming tasks. The patients' responses to the computational model showed different patterns of current distribution.

  9. Methodical Approaches to Teaching of Computer Modeling in Computer Science Course

    ERIC Educational Resources Information Center

    Rakhimzhanova, B. Lyazzat; Issabayeva, N. Darazha; Khakimova, Tiyshtik; Bolyskhanova, J. Madina

    2015-01-01

    The purpose of this study was to justify the technique for forming representations of modeling methodology in computer science lessons. The necessity of studying computer modeling lies in the fact that current trends toward strengthening the general educational and worldview functions of computer science define the necessity of additional research of the…

  10. Novel opportunities for computational biology and sociology in drug discovery☆

    PubMed Central

    Yao, Lixia; Evans, James A.; Rzhetsky, Andrey

    2013-01-01

    Current drug discovery is impossible without sophisticated modeling and computation. In this review we outline previous advances in computational biology and, by tracing the steps involved in pharmaceutical development, explore a range of novel, high-value opportunities for computational innovation in modeling the biological process of disease and the social process of drug discovery. These opportunities include text mining for new drug leads, modeling molecular pathways and predicting the efficacy of drug cocktails, analyzing genetic overlap between diseases and predicting alternative drug use. Computation can also be used to model research teams and innovative regions and to estimate the value of academy–industry links for scientific and human benefit. Attention to these opportunities could promise punctuated advance and will complement the well-established computational work on which drug discovery currently relies. PMID:20349528

  11. Enduring Influence of Stereotypical Computer Science Role Models on Women's Academic Aspirations

    ERIC Educational Resources Information Center

    Cheryan, Sapna; Drury, Benjamin J.; Vichayapai, Marissa

    2013-01-01

    The current work examines whether a brief exposure to a computer science role model who fits stereotypes of computer scientists has a lasting influence on women's interest in the field. One-hundred undergraduate women who were not computer science majors met a female or male peer role model who embodied computer science stereotypes in appearance…

  12. Collective Computation of Neural Network

    DTIC Science & Technology

    1990-03-15

    Sciences, Beijing. ABSTRACT: Computational neuroscience is a new branch of neuroscience originating from current research on the theory of computer...scientists working in artificial intelligence engineering and neuroscience. The paper introduces the collective computational properties of model neural...vision research. On this basis, the authors analyzed the significance of the Hopfield model. Key phrases: Computational Neuroscience, Neural Network, Model

  13. High-resolution Modeling Assisted Design of Customized and Individualized Transcranial Direct Current Stimulation Protocols

    PubMed Central

    Bikson, Marom; Rahman, Asif; Datta, Abhishek; Fregni, Felipe; Merabet, Lotfi

    2012-01-01

    Objectives Transcranial direct current stimulation (tDCS) is a neuromodulatory technique that delivers low-intensity currents facilitating or inhibiting spontaneous neuronal activity. tDCS is attractive since dose is readily adjustable by simply changing electrode number, position, size, shape, and current. In the recent past, computational models have been developed with increased precision with the goal to help customize tDCS dose. The aim of this review is to discuss the incorporation of high-resolution patient-specific computer modeling to guide and optimize tDCS. Methods In this review, we discuss the following topics: (i) The clinical motivation and rationale for models of transcranial stimulation is considered pivotal in order to leverage the flexibility of neuromodulation; (ii) The protocols and the workflow for developing high-resolution models; (iii) The technical challenges and limitations of interpreting modeling predictions, and (iv) Real cases merging modeling and clinical data illustrating the impact of computational models on the rational design of rehabilitative electrotherapy. Conclusions Though modeling for non-invasive brain stimulation is still in its development phase, it is predicted that with increased validation, dissemination, simplification and democratization of modeling tools, computational forward models of neuromodulation will become useful tools to guide the optimization of clinical electrotherapy. PMID:22780230

  14. Workshop on Computational Turbulence Modeling

    NASA Technical Reports Server (NTRS)

    1993-01-01

    This document contains presentations given at Workshop on Computational Turbulence Modeling held 15-16 Sep. 1993. The purpose of the meeting was to discuss the current status and future development of turbulence modeling in computational fluid dynamics for aerospace propulsion systems. Papers cover the following topics: turbulence modeling activities at the Center for Modeling of Turbulence and Transition (CMOTT); heat transfer and turbomachinery flow physics; aerothermochemistry and computational methods for space systems; computational fluid dynamics and the k-epsilon turbulence model; propulsion systems; and inlet, duct, and nozzle flow.

  15. Modeling of Photoionized Plasmas

    NASA Technical Reports Server (NTRS)

    Kallman, Timothy R.

    2010-01-01

    In this paper I review the motivation and current status of modeling of plasmas exposed to strong radiation fields, as it applies to the study of cosmic X-ray sources. This includes some of the astrophysical issues which can be addressed, the ingredients for the models, the current computational tools, the limitations imposed by currently available atomic data, and the validity of some of the standard assumptions. I will also discuss ideas for the future: challenges associated with future missions, opportunities presented by improved computers, and goals for atomic data collection.

  16. Ocean Modeling and Visualization on Massively Parallel Computer

    NASA Technical Reports Server (NTRS)

    Chao, Yi; Li, P. Peggy; Wang, Ping; Katz, Daniel S.; Cheng, Benny N.

    1997-01-01

    Climate modeling is one of the grand challenges of computational science, and ocean modeling plays an important role in both understanding the current climatic conditions and predicting future climate change.

  17. Climate Ocean Modeling on Parallel Computers

    NASA Technical Reports Server (NTRS)

    Wang, P.; Cheng, B. N.; Chao, Y.

    1998-01-01

    Ocean modeling plays an important role in both understanding the current climatic conditions and predicting future climate change. However, modeling the ocean circulation at various spatial and temporal scales is a very challenging computational task.

  18. Bringing computational models of bone regeneration to the clinic.

    PubMed

    Carlier, Aurélie; Geris, Liesbet; Lammens, Johan; Van Oosterwyck, Hans

    2015-01-01

    Although the field of bone regeneration has experienced great advancements in the last decades, integrating all the relevant, patient-specific information into a personalized diagnosis and optimal treatment remains a challenging task due to the large number of variables that affect bone regeneration. Computational models have the potential to cope with this complexity and to improve the fundamental understanding of the bone regeneration processes as well as to predict and optimize the patient-specific treatment strategies. However, the current use of computational models in daily orthopedic practice is very limited or nonexistent. We have identified three key hurdles that limit the translation of computational models of bone regeneration from bench to bedside. First, there exists a clear mismatch between the scope of the existing and the clinically required models. Second, most computational models are confronted with limited quantitative information of insufficient quality thereby hampering the determination of patient-specific parameter values. Third, current computational models are only corroborated with animal models, whereas a thorough (retrospective and prospective) assessment of the computational model will be crucial to convince the health care providers of the capabilities thereof. These challenges must be addressed so that computational models of bone regeneration can reach their true potential, resulting in the advancement of individualized care and reduction of the associated health care costs. © 2015 Wiley Periodicals, Inc.

  19. Conversion of IVA Human Computer Model to EVA Use and Evaluation and Comparison of the Result to Existing EVA Models

    NASA Technical Reports Server (NTRS)

    Hamilton, George S.; Williams, Jermaine C.

    1998-01-01

    This paper describes the methods, rationale, and comparative results of the conversion of an intravehicular (IVA) 3D human computer model (HCM) to extravehicular (EVA) use and compares the converted model to an existing model on another computer platform. The task of accurately modeling a spacesuited human figure in software is daunting: the suit restricts the human's joint range of motion (ROM) and does not have joints collocated with human joints. The modeling of the variety of materials needed to construct a space suit (e.g., metal bearings, rigid fiberglass torso, flexible cloth limbs and rubber coated gloves) attached to a human figure is currently out of reach of desktop computer hardware and software. Therefore a simplified approach was taken. The HCM's body parts were enlarged and the joint ROM was restricted to match the existing spacesuit model. This basic approach could be used to model other restrictive environments in industry such as chemical or fire protective clothing. In summary, the approach provides a moderate fidelity, usable tool which will run on current notebook computers.

  20. Investigation of the effects of external current systems on the MAGSAT data utilizing grid cell modeling techniques

    NASA Technical Reports Server (NTRS)

    Klumpar, D. M. (Principal Investigator)

    1981-01-01

    Progress is reported on reading MAGSAT tapes into the modeling procedure developed to compute the magnetic fields at satellite orbit due to current distributions in the ionosphere. The modeling technique utilizes a linear current element representation of the large-scale space-current system.
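    A "linear current element representation" of this kind amounts to summing Biot-Savart contributions from straight current segments. The following is a minimal sketch under that assumption, with invented geometry and current values, not the actual MAGSAT modeling code:

```python
# Hedged sketch of a linear-current-element field model: each straight
# segment contributes dB = (mu0/4pi) * I * dl x r / |r|^3 (midpoint rule).
import numpy as np

MU0 = 4.0e-7 * np.pi  # vacuum permeability, H/m

def field_from_elements(starts, ends, currents, point):
    """Sum Biot-Savart contributions of straight current segments."""
    B = np.zeros(3)
    p = np.asarray(point, dtype=float)
    for a, b, amps in zip(starts, ends, currents):
        a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
        dl = b - a
        r = p - 0.5 * (a + b)  # field point relative to segment midpoint
        B += MU0 / (4.0 * np.pi) * amps * np.cross(dl, r) / np.linalg.norm(r) ** 3
    return B

# Sanity check: many short z-directed segments approximate an infinite
# wire, whose field at distance d is mu0*I/(2*pi*d).
z = np.linspace(-100.0, 100.0, 2001)
starts = [(0.0, 0.0, z0) for z0 in z[:-1]]
ends = [(0.0, 0.0, z1) for z1 in z[1:]]
B = field_from_elements(starts, ends, [100.0] * 2000, (1.0, 0.0, 0.0))
# B[1] should be close to mu0*100/(2*pi*1) = 2.0e-5 tesla
```

Replacing the straight-wire geometry with ionospheric current arcs gives the satellite-orbit perturbation fields the record describes.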

  21. A FRAMEWORK FOR FINE-SCALE COMPUTATIONAL FLUID DYNAMICS AIR QUALITY MODELING AND ANALYSIS

    EPA Science Inventory

    Fine-scale Computational Fluid Dynamics (CFD) simulation of pollutant concentrations within roadway and building microenvironments is feasible using high performance computing. Unlike currently used regulatory air quality models, fine-scale CFD simulations are able to account rig...

  22. Current and Future Development of a Non-hydrostatic Unified Atmospheric Model (NUMA)

    DTIC Science & Technology

    2010-09-09

    following capabilities: 1. Highly scalable on current and future computer architectures (exascale computing and beyond, and GPUs); 2. Flexibility... Exascale Computing: 10 of the Top 500 are already in the petascale range; should also keep our eyes on GPUs (e.g., Mare Nostrum)... 2. Numerical

  23. Investigation of the effects of external current systems on the MAGSAT data utilizing grid cell modeling techniques

    NASA Technical Reports Server (NTRS)

    Klumpar, D. M. (Principal Investigator)

    1982-01-01

    The status of the initial testing of the modeling procedure developed to compute the magnetic fields at satellite orbit due to current distributions in the ionosphere and magnetosphere is reported. The modeling technique utilizes a linear current element representation of the large scale space-current system.

  24. A Lumped Computational Model for Sodium Sulfur Battery Analysis

    NASA Astrophysics Data System (ADS)

    Wu, Fan

    Due to the cost of materials and time consuming testing procedures, development of new batteries is a slow and expensive practice. The purpose of this study is to develop a computational model and assess the capabilities of such a model designed to aid in the design process and control of sodium sulfur batteries. To this end, a transient lumped computational model derived from an integral analysis of the transport of species, energy and charge throughout the battery has been developed. The computation processes are coupled with the use of Faraday's law, and solutions for the species concentrations, electrical potential and current are produced in a time marching fashion. Properties required for solving the governing equations are calculated and updated as a function of time based on the composition of each control volume. The proposed model is validated against multi-dimensional simulations and experimental results from the literature, and simulation results using the proposed model are presented and analyzed. The computational model and electrochemical model used to solve the equations for the lumped model are compared with similar ones found in the literature. The results obtained from the current model compare favorably with those from experiments and other models.
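    The Faraday's-law coupling in a lumped model, in its simplest time-marching form, depletes the reactant inventory of a control volume at the rate I/(nF). The inventory and current values below are illustrative only, not taken from the thesis:

```python
# Minimal time-marching sketch of Faraday's law in a lumped cell model:
# a constant discharge current I consumes reactant at I/(n*F) mol/s.
F = 96485.0  # Faraday constant, C/mol

def march_inventory(moles, current_a, dt, steps, n_electrons=1):
    """March reactant moles forward in time under a constant current (A)."""
    history = [moles]
    for _ in range(steps):
        moles -= current_a * dt / (n_electrons * F)  # Faraday's law
        history.append(moles)
    return history

# 2 mol of reactant discharged at 10 A for one hour consumes
# 36000 C / 96485 C/mol, i.e. about 0.373 mol.
h = march_inventory(2.0, 10.0, dt=1.0, steps=3600)
```

A full lumped model would update properties and cell potential from the evolving composition at each step, as the abstract describes.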

  25. Earth's external magnetic fields at low orbital altitudes

    NASA Technical Reports Server (NTRS)

    Klumpar, D. M.

    1990-01-01

    Under our Jun. 1987 proposal, Magnetic Signatures of Near-Earth Distributed Currents, we proposed to render operational a modeling procedure that had been previously developed to compute the magnetic effects of distributed currents flowing in the magnetosphere-ionosphere system. After adaptation of the software to our computing environment we would apply the model to low altitude satellite orbits and would utilize the MAGSAT data suite to guide the analysis. During the first year, basic computer codes to run model systems of Birkeland and ionospheric currents and several graphical output routines were made operational on a VAX 780 in our research facility. Software performance was evaluated using an input matchstick ionospheric current array, field aligned currents were calculated and magnetic perturbations along hypothetical satellite orbits were calculated. The basic operation of the model was verified. Software routines to analyze and display MAGSAT satellite data in terms of deviations with respect to the earth's internal field were also made operational during the first year effort. The complete set of MAGSAT data to be used for evaluation of the models was received at the end of the first year. A detailed annual report in May 1989 described these first year activities completely. That first annual report is included by reference in this final report. This document summarizes our additional activities during the second year of effort and describes the modeling software, its operation, and includes as an attachment the deliverable computer software specified under the contract.

  26. Effects of Combined Hands-on Laboratory and Computer Modeling on Student Learning of Gas Laws: A Quasi-Experimental Study

    ERIC Educational Resources Information Center

    Liu, Xiufeng

    2006-01-01

    Based on current theories of chemistry learning, this study intends to test a hypothesis that computer modeling enhanced hands-on chemistry laboratories are more effective than hands-on laboratories or computer modeling laboratories alone in facilitating high school students' understanding of chemistry concepts. Thirty-three high school chemistry…

  27. Computers, Modeling and Management Education. Technical Report No. 6.

    ERIC Educational Resources Information Center

    Bonini, Charles P.

    The report begins with a brief examination of the role of computer modeling in management decision-making. Then, some of the difficulties of implementing computer modeling are examined, and finally, the educational implications of these issues are raised, and a comparison is made between what is currently being done and what might be done to…

  28. A Study of Complex Deep Learning Networks on High Performance, Neuromorphic, and Quantum Computers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Potok, Thomas E; Schuman, Catherine D; Young, Steven R

    Current Deep Learning models use highly optimized convolutional neural networks (CNN) trained on large graphical processing units (GPU)-based computers with a fairly simple layered network topology, i.e., highly connected layers, without intra-layer connections. Complex topologies have been proposed, but are intractable to train on current systems. Building the topologies of the deep learning network requires hand tuning, and implementing the network in hardware is expensive in both cost and power. In this paper, we evaluate deep learning models using three different computing architectures to address these problems: quantum computing to train complex topologies, high performance computing (HPC) to automatically determine network topology, and neuromorphic computing for a low-power hardware implementation. Due to input size limitations of current quantum computers we use the MNIST dataset for our evaluation. The results show the possibility of using the three architectures in tandem to explore complex deep learning networks that are untrainable using a von Neumann architecture. We show that a quantum computer can find high quality values of intra-layer connections and weights, while yielding a tractable time result as the complexity of the network increases; a high performance computer can find optimal layer-based topologies; and a neuromorphic computer can represent the complex topology and weights derived from the other architectures in low power memristive hardware. This represents a new capability that is not feasible with current von Neumann architecture. It potentially enables the ability to solve very complicated problems unsolvable with current computing technologies.

  29. Some Observations on the Current Status of Performing Finite Element Analyses

    NASA Technical Reports Server (NTRS)

    Raju, Ivatury S.; Knight, Norman F., Jr; Shivakumar, Kunigal N.

    2015-01-01

    Aerospace structures are complex high-performance structures. Advances in reliable and efficient computing and modeling tools are enabling analysts to consider complex configurations, build complex finite element models, and perform analysis rapidly. Many of the early career engineers of today are very proficient in the usage of modern computers, computing engines, complex software systems, and visualization tools. These young engineers are becoming increasingly efficient in building complex 3D models of complicated aerospace components. However, current trends demonstrate blind acceptance of finite element analysis results. This paper is aimed at raising an awareness of this situation. Examples of common encounters are presented. To overcome the current trends, some guidelines and suggestions for analysts, senior engineers, and educators are offered.

  30. Computational Modeling and Treatment Identification in the Myelodysplastic Syndromes.

    PubMed

    Drusbosky, Leylah M; Cogle, Christopher R

    2017-10-01

    This review discusses the need for computational modeling in myelodysplastic syndromes (MDS) and early test results. As our evolving understanding of MDS reveals a molecularly complicated disease, the need for sophisticated computer analytics is required to keep track of the number and complex interplay among the molecular abnormalities. Computational modeling and digital drug simulations using whole exome sequencing data input have produced early results showing high accuracy in predicting treatment response to standard of care drugs. Furthermore, the computational MDS models serve as clinically relevant MDS cell lines for pre-clinical assays of investigational agents. MDS is an ideal disease for computational modeling and digital drug simulations. Current research is focused on establishing the prediction value of computational modeling. Future research will test the clinical advantage of computer-informed therapy in MDS.

  11. Teaching Using Computer Games

    ERIC Educational Resources Information Center

    Miller, Lee Dee; Shell, Duane; Khandaker, Nobel; Soh, Leen-Kiat

    2011-01-01

    Computer games have long been used for teaching. Current reviews lack categorization and analysis using learning models, which would help instructors assess the usefulness of computer games. We divide the use of games into two classes: game playing and game development. We discuss the Input-Process-Outcome (IPO) model for the learning process when…

  12. A review of computer evacuation models and their data needs.

    DOT National Transportation Integrated Search

    1994-05-01

    This document reviews the history and current status of computer models of the evacuation of an airliner cabin. Basic concepts upon which evacuation models are based are discussed, followed by a review of the Civil Aerospace Medical Institute's effor...

  13. Icing simulation: A survey of computer models and experimental facilities

    NASA Technical Reports Server (NTRS)

    Potapczuk, M. G.; Reinmann, J. J.

    1991-01-01

    A survey of the current methods for simulation of the response of an aircraft or aircraft subsystem to an icing encounter is presented. The topics discussed include a computer code modeling of aircraft icing and performance degradation, an evaluation of experimental facility simulation capabilities, and ice protection system evaluation tests in simulated icing conditions. Current research focused on upgrading simulation fidelity of both experimental and computational methods is discussed. The need for increased understanding of the physical processes governing ice accretion, ice shedding, and iced airfoil aerodynamics is examined.

  14. Icing simulation: A survey of computer models and experimental facilities

    NASA Technical Reports Server (NTRS)

    Potapczuk, M. G.; Reinmann, J. J.

    1991-01-01

    A survey of the current methods for simulation of the response of an aircraft or aircraft subsystem to an icing encounter is presented. The topics discussed include a computer code modeling of aircraft icing and performance degradation, an evaluation of experimental facility simulation capabilities, and ice protection system evaluation tests in simulated icing conditions. Current research focused on upgrading simulation fidelity of both experimental and computational methods is discussed. The need for increased understanding of the physical processes governing ice accretion, ice shedding, and iced airfoil aerodynamics is examined.

  15. Spin-neurons: A possible path to energy-efficient neuromorphic computers

    NASA Astrophysics Data System (ADS)

    Sharad, Mrigank; Fan, Deliang; Roy, Kaushik

    2013-12-01

    Recent years have witnessed growing interest in the field of brain-inspired computing based on neural-network architectures. In order to translate the related algorithmic models into powerful, yet energy-efficient cognitive-computing hardware, computing devices beyond CMOS may need to be explored. The suitability of such devices to this field of computing would strongly depend upon how closely their physical characteristics match the essential computing primitives employed in such models. In this work, we discuss the rationale for applying emerging spin-torque devices to bio-inspired computing. Recent spin-torque experiments have shown the path to low-current, low-voltage, and high-speed magnetization switching in nano-scale magnetic devices. Such magneto-metallic, current-mode spin-torque switches can mimic the analog summing and "thresholding" operation of an artificial neuron with high energy efficiency. Comparison with a CMOS-based analog circuit model of a neuron shows that "spin-neurons" (spin-based circuit models of neurons) can achieve more than two orders of magnitude lower energy and more than three orders of magnitude lower energy-delay product. Spin-neurons are therefore an attractive option for the neuromorphic computers of the future.
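    The analog summing-and-thresholding primitive such a spin-neuron implements can be sketched in ordinary software. A minimal sketch; the weights, inputs, and threshold below are hypothetical illustration values, not device parameters:

```python
# Software sketch of the analog summing-and-thresholding primitive a
# spin-neuron implements; weights, inputs, and threshold are
# hypothetical illustration values, not device parameters.
def spin_neuron_fire(weights, inputs, threshold):
    """Sum weighted input currents, then apply a hard threshold, as the
    magneto-metallic current-mode spin-torque switch does."""
    net_current = sum(w * x for w, x in zip(weights, inputs))
    return 1 if net_current >= threshold else 0

# hypothetical 3-input neuron: net current 0.3 exceeds the threshold
fired = spin_neuron_fire([0.5, -0.2, 0.8], [1, 1, 0], threshold=0.25)  # 1
```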

  16. Spin-neurons: A possible path to energy-efficient neuromorphic computers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sharad, Mrigank; Fan, Deliang; Roy, Kaushik

    Recent years have witnessed growing interest in the field of brain-inspired computing based on neural-network architectures. In order to translate the related algorithmic models into powerful, yet energy-efficient cognitive-computing hardware, computing devices beyond CMOS may need to be explored. The suitability of such devices to this field of computing would strongly depend upon how closely their physical characteristics match the essential computing primitives employed in such models. In this work, we discuss the rationale for applying emerging spin-torque devices to bio-inspired computing. Recent spin-torque experiments have shown the path to low-current, low-voltage, and high-speed magnetization switching in nano-scale magnetic devices. Such magneto-metallic, current-mode spin-torque switches can mimic the analog summing and “thresholding” operation of an artificial neuron with high energy efficiency. Comparison with a CMOS-based analog circuit model of a neuron shows that “spin-neurons” (spin-based circuit models of neurons) can achieve more than two orders of magnitude lower energy and more than three orders of magnitude lower energy-delay product. Spin-neurons are therefore an attractive option for the neuromorphic computers of the future.

  17. Application of a range of turbulence energy models to the determination of M4 tidal current profiles

    NASA Astrophysics Data System (ADS)

    Xing, Jiuxing; Davies, Alan M.

    1996-04-01

    A fully nonlinear, three-dimensional hydrodynamic model of the Irish Sea, using a range of turbulence energy sub-models, is used to examine the influence of the turbulence closure method upon the vertical variation of the current profile of the fundamental and higher harmonics of the tide in the region. Computed tidal current profiles are compared with previous calculations using a spectral model with eddy viscosity related to the flow field. The model has a sufficiently fine grid to resolve the advection terms, in particular the advection of turbulence and momentum. Calculations show that the advection of turbulence energy does not have a significant influence upon the current profile of either the fundamental or higher harmonic of the tide, although the advection of momentum is important in the region of headlands. The simplification of the advective terms by only including them in their vertically integrated form does not appear to make a significant difference to current profiles, but does reduce the computational effort by a significant amount. Computed current profiles for both the fundamental and the higher harmonic, determined with a prognostic equation for turbulence energy and an algebraic mixing-length formula, are as accurate as those determined with a model having two prognostic equations (the so-called q2-q2l model), provided the mixing length is specified correctly. A simple, flow-dependent eddy viscosity with a parabolic variation of viscosity also performs equally well.
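    The flow-dependent eddy viscosity with parabolic variation mentioned above has a standard closed form. A minimal sketch assuming the common profile nu(z) = kappa * u_star * z * (1 - z/h); the depth and friction velocity are illustrative values, and this is not necessarily the paper's exact formulation:

```python
# Parabolic, flow-dependent eddy-viscosity profile
# nu(z) = kappa * u_star * z * (1 - z / h), a common closure for tidal
# boundary layers; depth and friction velocity below are illustrative,
# not necessarily the paper's exact formulation.
def parabolic_eddy_viscosity(z, h, u_star, kappa=0.41):
    return kappa * u_star * z * (1.0 - z / h)

# profile through a 40 m water column with friction velocity 0.05 m/s;
# the viscosity vanishes at bed and surface and peaks at mid-depth
h, u_star = 40.0, 0.05
profile = [parabolic_eddy_viscosity(z, h, u_star)
           for z in (0.0, 10.0, 20.0, 30.0, 40.0)]
```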

  18. Applying mathematical modeling to create job rotation schedules for minimizing occupational noise exposure.

    PubMed

    Tharmmaphornphilas, Wipawee; Green, Benjamin; Carnahan, Brian J; Norman, Bryan A

    2003-01-01

    This research developed worker schedules by using administrative controls and a computer programming model to reduce the likelihood of worker hearing loss. By rotating the workers through different jobs during the day, it was possible to reduce their exposure to hazardous noise levels. Computer simulations were made based on data collected in a real setting. Worker schedules currently used at the site are compared with proposed worker schedules from the computer simulations. For the worker assignment plans found by the computer model, the authors calculate a significant decrease in time-weighted average (TWA) sound level exposure. The maximum daily dose that any worker is exposed to is reduced by 58.8%, and the maximum TWA value for the workers is reduced by 3.8 dB from the current schedule.
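    The dose and TWA quantities reported above can be computed with the standard OSHA 5-dB exchange-rate formulas; the rotation below is a hypothetical illustration, not a schedule from the study:

```python
import math

# Daily noise dose and 8-hour TWA under the standard OSHA 5-dB
# exchange-rate rules; the rotation below is a hypothetical
# illustration, not a schedule from the study.
def osha_dose(exposures):
    """Dose in percent; exposures is a list of (level_dBA, hours).
    Permissible time at level L is T = 8 / 2**((L - 90) / 5) hours."""
    return 100.0 * sum(hours / (8.0 / 2.0 ** ((level - 90.0) / 5.0))
                       for level, hours in exposures)

def twa(dose_percent):
    """Equivalent 8-hour time-weighted average sound level (dBA)."""
    return 16.61 * math.log10(dose_percent / 100.0) + 90.0

# hypothetical rotation: 4 h at 95 dBA, then 4 h at 85 dBA
dose = osha_dose([(95.0, 4.0), (85.0, 4.0)])   # 125 %
```

A rotation that splits time between loud and quiet jobs keeps the dose near 100 %, which is exactly the lever the scheduling model exploits.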

  19. FDTD Modeling of LEMP Propagation in the Earth-Ionosphere Waveguide With Emphasis on Realistic Representation of Lightning Source

    NASA Astrophysics Data System (ADS)

    Tran, Thang H.; Baba, Yoshihiro; Somu, Vijaya B.; Rakov, Vladimir A.

    2017-12-01

    The finite difference time domain (FDTD) method in the 2-D cylindrical coordinate system was used to compute the nearly full-frequency-bandwidth vertical electric field and azimuthal magnetic field waveforms produced on the ground surface by lightning return strokes. The lightning source was represented by the modified transmission-line model with linear current decay with height, which was implemented in the FDTD computations as an appropriate vertical phased-current-source array. The conductivity of the atmosphere was assumed to increase exponentially with height, with different conductivity profiles being used for daytime and nighttime conditions. The fields were computed at distances ranging from 50 to 500 km. Sky waves (reflections from the ionosphere) were identified in computed waveforms and used for estimation of apparent ionospheric reflection heights. It was found that our model reproduces reasonably well the daytime electric field waveforms measured at different distances and simulated (using a more sophisticated propagation model) by Qin et al. (2017). Sensitivity of model predictions to changes in the parameters of the atmospheric conductivity profile, as well as influences of the lightning source characteristics (current waveshape parameters, return-stroke speed, and channel length) and ground conductivity, were examined.
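    The FDTD leapfrog update at the core of such computations can be illustrated in one dimension. A minimal normalized 1-D sketch (Courant number 0.5); the grid size, source position, and step count are arbitrary illustration values, not the paper's 2-D cylindrical setup:

```python
import math

# Minimal 1-D FDTD leapfrog sketch in normalized units (Courant number
# 0.5), illustrating the staggered update scheme the paper applies in
# 2-D cylindrical coordinates; grid size, source position, and step
# count are arbitrary illustration values.
nz, nsteps = 200, 150
ez = [0.0] * nz   # normalized electric field
hy = [0.0] * nz   # normalized magnetic field

for n in range(nsteps):
    for k in range(1, nz):                  # E-field update
        ez[k] += 0.5 * (hy[k - 1] - hy[k])
    ez[100] += math.exp(-((n - 30.0) / 10.0) ** 2)   # soft Gaussian source
    for k in range(nz - 1):                 # H-field update
        hy[k] += 0.5 * (ez[k] - ez[k + 1])
```

After the time loop, the injected Gaussian pulse has split into two waves propagating away from the source cell, the 1-D analogue of the ground-wave/sky-wave fields computed in the paper.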

  20. The use of analytical models in human-computer interface design

    NASA Technical Reports Server (NTRS)

    Gugerty, Leo

    1991-01-01

    Some of the many analytical models in human-computer interface design that are currently being developed are described. The usefulness of analytical models for human-computer interface design is evaluated. Can the use of analytical models be recommended to interface designers? The answer, based on the empirical research summarized here, is: not at this time. There are too many unanswered questions concerning the validity of models and their ability to meet the practical needs of design organizations.

  1. Computational challenges in modeling gene regulatory events.

    PubMed

    Pataskar, Abhijeet; Tiwari, Vijay K

    2016-10-19

    Cellular transcriptional programs driven by genetic and epigenetic mechanisms could be better understood by integrating "omics" data and subsequently modeling the gene-regulatory events. Toward this end, computational biology should keep pace with evolving experimental procedures and data availability. This article gives an exemplified account of the current computational challenges in molecular biology.

  2. The inclusion of ocean-current effects in a tidal-current model as forcing in the convection term and its application to the mesoscale fate of CO2 seeping from the seafloor

    NASA Astrophysics Data System (ADS)

    Sakaizawa, Ryosuke; Kawai, Takaya; Sato, Toru; Oyama, Hiroyuki; Tsumune, Daisuke; Tsubono, Takaki; Goto, Koichi

    2018-03-01

    The target seas of tidal-current models are usually semi-closed bays, minimally affected by ocean currents. For these models, tidal currents are simulated in computational domains with a spatial scale of a couple hundred kilometers or less, by setting tidal elevations at their open boundaries. However, when ocean currents cannot be ignored in the sea areas of interest, such as in open seas near coastlines, it is necessary to include ocean-current effects in these tidal-current models. In this study, we developed a numerical method to analyze tidal currents near coasts by incorporating pre-calculated ocean-current velocities. First, a large regional-scale simulation with a spatial scale of several thousand kilometers was conducted and temporal changes in the ocean-current velocity at each grid point were stored. Next, the spatially and temporally interpolated ocean-current velocity was incorporated as forcing into the cross terms of the convection term of a tidal-current model having computational domains with spatial scales of hundreds of kilometers or less. Then, we applied this method to the diffusion of dissolved CO2 in a sea area off Tomakomai, Japan, and compared the numerical results and measurements to validate the proposed method.

  3. Parallel stochastic simulation of macroscopic calcium currents.

    PubMed

    González-Vélez, Virginia; González-Vélez, Horacio

    2007-06-01

    This work introduces MACACO, a macroscopic calcium currents simulator. It provides a parameter-sweep framework which computes macroscopic Ca(2+) currents from the individual aggregation of unitary currents, using a stochastic model for L-type Ca(2+) channels. MACACO uses a simplified 3-state Markov model to simulate the response of each Ca(2+) channel to different voltage inputs to the cell. In order to provide an accurate systematic view for the stochastic nature of the calcium channels, MACACO is composed of an experiment generator, a central simulation engine and a post-processing script component. Due to the computational complexity of the problem and the dimensions of the parameter space, the MACACO simulation engine employs a grid-enabled task farm. Having been designed as a computational biology tool, MACACO heavily borrows from the way cell physiologists conduct and report their experimental work.
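    The aggregation of stochastic unitary currents into a macroscopic current can be sketched with a discrete-time 3-state Markov chain. The transition probabilities and unitary current below are hypothetical illustration values, not MACACO's fitted parameters:

```python
import random

# Aggregation of stochastic unitary currents into a macroscopic current,
# using a discrete-time 3-state Markov channel (closed -> open ->
# inactivated) in the spirit of MACACO's L-type Ca2+ channel model; the
# transition probabilities and unitary current are hypothetical, not
# MACACO's fitted values.
P = {  # per-step transition probabilities; each row sums to 1
    "C": {"C": 0.95, "O": 0.05, "I": 0.00},
    "O": {"C": 0.02, "O": 0.90, "I": 0.08},
    "I": {"C": 0.01, "O": 0.00, "I": 0.99},
}
I_UNITARY = -0.3  # pA carried by one open channel (hypothetical)

def step(state, rng):
    """Draw the next state of one channel from its transition row."""
    r, acc = rng.random(), 0.0
    for nxt, p in P[state].items():
        acc += p
        if r < acc:
            return nxt
    return state

def macroscopic_current(n_channels=1000, n_steps=200, seed=1):
    """Sum unitary currents over a channel population at each step."""
    rng = random.Random(seed)
    states = ["C"] * n_channels
    trace = []
    for _ in range(n_steps):
        states = [step(s, rng) for s in states]
        trace.append(I_UNITARY * states.count("O"))
    return trace
```

Sweeping such a simulation over many parameter combinations is the task MACACO farms out to the grid.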

  4. Improving finite element results in modeling heart valve mechanics.

    PubMed

    Earl, Emily; Mohammadi, Hadi

    2018-06-01

    Finite element analysis is a well-established computational tool which can be used for the analysis of soft tissue mechanics. Due to the structural complexity of the leaflet tissue of the heart valve, the currently available finite element models do not adequately represent the leaflet tissue. A method of addressing this issue is to implement computationally expensive finite element models, characterized by precise constitutive models including high-order and high-density mesh techniques. In this study, we introduce a novel numerical technique that enhances the results obtained from coarse mesh finite element models to provide accuracy comparable to that of fine mesh finite element models while maintaining a relatively low computational cost. Introduced in this study is a method by which the computational expense required to solve linear and nonlinear constitutive models, commonly used in heart valve mechanics simulations, is reduced while continuing to account for large and infinitesimal deformations. This continuum model is developed based on the least square algorithm procedure coupled with the finite difference method adhering to the assumption that the components of the strain tensor are available at all nodes of the finite element mesh model. The suggested numerical technique is easy to implement, practically efficient, and requires less computational time compared to currently available commercial finite element packages such as ANSYS and/or ABAQUS.
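    The least-squares ingredient of such an enhancement can be illustrated in one dimension: fitting a smooth function to noisy nodal values via the normal equations. This is an illustrative 1-D analogue, not the paper's continuum formulation:

```python
# Straight-line least-squares fit to noisy nodal values via the normal
# equations -- the basic operation behind the least-squares enhancement
# described above (illustrative 1-D analogue, not the paper's continuum
# formulation).
def linear_lsq(xs, ys):
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return slope, intercept

# hypothetical nodal strain samples with small noise about a linear trend
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [0.0200, 0.0212, 0.0218, 0.0231, 0.0240]
slope, intercept = linear_lsq(xs, ys)
```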

  5. PREDICTING ATTENUATION OF VIRUSES DURING PERCOLATION IN SOILS: 2. USER'S GUIDE TO THE VIRULO 1.0 COMPUTER MODEL

    EPA Science Inventory

    In the EPA document "Predicting Attenuation of Viruses During Percolation in Soils: 1. Probabilistic Model," the conceptual, theoretical, and mathematical foundations for a predictive screening model were presented. In this current volume, we present a User's Guide for the computer mo...

  6. Computational Fluid Dynamics of Whole-Body Aircraft

    NASA Astrophysics Data System (ADS)

    Agarwal, Ramesh

    1999-01-01

    The current state of the art in computational aerodynamics for whole-body aircraft flowfield simulations is described. Recent advances in geometry modeling, surface and volume grid generation, and flow simulation algorithms have led to accurate flowfield predictions for increasingly complex and realistic configurations. As a result, computational aerodynamics has emerged as a crucial enabling technology for the design and development of flight vehicles. Examples illustrating the current capability for the prediction of transport and fighter aircraft flowfields are presented. Unfortunately, accurate modeling of turbulence remains a major difficulty in the analysis of viscosity-dominated flows. In the future, inverse design methods, multidisciplinary design optimization methods, artificial intelligence technology, and massively parallel computer technology will be incorporated into computational aerodynamics, opening up greater opportunities for improved product design at substantially reduced costs.

  7. Computational challenges in modeling gene regulatory events

    PubMed Central

    Pataskar, Abhijeet; Tiwari, Vijay K.

    2016-01-01

    Cellular transcriptional programs driven by genetic and epigenetic mechanisms could be better understood by integrating “omics” data and subsequently modeling the gene-regulatory events. Toward this end, computational biology should keep pace with evolving experimental procedures and data availability. This article gives an exemplified account of the current computational challenges in molecular biology. PMID:27390891

  8. A Computational Model of the Ionic Currents, Ca2+ Dynamics and Action Potentials Underlying Contraction of Isolated Uterine Smooth Muscle

    PubMed Central

    Tong, Wing-Chiu; Choi, Cecilia Y.; Karche, Sanjay; Holden, Arun V.; Zhang, Henggui; Taggart, Michael J.

    2011-01-01

    Uterine contractions during labor are discretely regulated by rhythmic action potentials (AP) of varying duration and form that serve to determine calcium-dependent force production. We have employed a computational biology approach to develop a fuller understanding of the complexity of excitation-contraction (E-C) coupling of uterine smooth muscle cells (USMC). Our overall aim is to establish a mathematical platform of sufficient biophysical detail to quantitatively describe known uterine E-C coupling parameters and thereby inform future empirical investigations of physiological and pathophysiological mechanisms governing normal and dysfunctional labors. From published and unpublished data we construct mathematical models for fourteen ionic currents of USMCs: Ca2+ currents (L- and T-type), Na+ current, a hyperpolarization-activated current, three voltage-gated K+ currents, two Ca2+-activated K+ currents, Ca2+-activated Cl- current, non-specific cation current, Na+-Ca2+ exchanger, Na+-K+ pump and background current. The magnitudes and kinetics of each current system in a spindle shaped single cell with a specified surface area:volume ratio are described by differential equations, in terms of maximal conductances, electrochemical gradient, voltage-dependent activation/inactivation gating variables and temporal changes in intracellular Ca2+ computed from known fluxes. These quantifications are validated by the reconstruction of the individual experimental ionic currents obtained under voltage-clamp. Phasic contraction is modeled in relation to the time constant of changing intracellular Ca2+. This integrated model is validated by its reconstruction of the different USMC AP configurations (spikes, plateau and bursts of spikes), the change from bursting to plateau type AP produced by estradiol, and simultaneous experimental recordings of spontaneous AP and phasic force.
In summary, our advanced mathematical model provides a powerful tool to investigate the physiological ionic mechanisms underlying the genesis of uterine electrical E-C coupling of labor and parturition. This will furnish the evolution of descriptive and predictive quantitative models of myometrial electrogenesis at the whole cell and tissue levels. PMID:21559514
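    The structure of such gating-variable models can be sketched with a single voltage-gated current plus leak, integrated by forward Euler. Every parameter below is a hypothetical illustration value, not a fitted USMC value:

```python
import math

# Forward-Euler integration of membrane voltage with one voltage-gated
# current plus a leak, sketching the gating-variable ODE structure the
# USMC model solves for fourteen currents; every parameter here is a
# hypothetical illustration value, not a fitted USMC value.
C_M = 1.0                    # membrane capacitance, uF/cm^2
G_MAX, E_REV = 4.0, -80.0    # channel conductance (mS/cm^2), reversal (mV)
G_LEAK, E_LEAK = 0.1, -55.0  # leak conductance and reversal
TAU_N = 5.0                  # activation time constant, ms

def n_inf(v):
    """Steady-state activation of the gating variable."""
    return 1.0 / (1.0 + math.exp(-(v + 30.0) / 9.0))

def simulate(i_stim=2.0, dt=0.01, t_end=50.0):
    """Integrate dV/dt = (I_stim - I_ion)/C_m and dn/dt = (n_inf - n)/tau."""
    v, n = -55.0, 0.0
    trace = []
    for _ in range(int(round(t_end / dt))):
        i_ion = G_MAX * n * (v - E_REV) + G_LEAK * (v - E_LEAK)
        v += dt * (i_stim - i_ion) / C_M
        n += dt * (n_inf(v) - n) / TAU_N
        trace.append(v)
    return trace
```

The full USMC model couples fourteen such current equations to intracellular Ca2+ dynamics; this sketch shows only the common ODE skeleton.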

  9. EVALUATION OF PHYSIOLOGY COMPUTER MODELS, AND THE FEASIBILITY OF THEIR USE IN RISK ASSESSMENT.

    EPA Science Inventory

    This project will evaluate the current state of quantitative models that simulate physiological processes, and how these models might be used in conjunction with the current use of PBPK and BBDR models in risk assessment. The work will include a literature search to identify...

  10. Computer simulation modeling of recreation use: Current status, case studies, and future directions

    Treesearch

    David N. Cole

    2005-01-01

    This report compiles information about recent progress in the application of computer simulation modeling to planning and management of recreation use, particularly in parks and wilderness. Early modeling efforts are described in a chapter that provides an historical perspective. Another chapter provides an overview of modeling options, common data input requirements,...

  11. Current problems in applied mathematics and mathematical modeling

    NASA Astrophysics Data System (ADS)

    Alekseev, A. S.

    Papers are presented on mathematical modeling, with applications to such fields as geophysics, chemistry, atmospheric optics, and immunology. Attention is also given to models of ocean current fluxes, atmospheric and marine interactions, and atmospheric pollution. The articles include studies of catalytic reactors, models of global climate phenomena, and computer-assisted atmospheric models.

  12. A Home Computer Primer.

    ERIC Educational Resources Information Center

    Stone, Antonia

    1982-01-01

    Provides general information on currently available microcomputers, computer programs (software), hardware requirements, software sources, costs, computer games, and programing. Includes a list of popular microcomputers, providing price category, model, list price, software (cassette, tape, disk), monitor specifications, amount of random access…

  13. Dispersive FDTD analysis of induced electric field in human models due to electrostatic discharge.

    PubMed

    Hirata, Akimasa; Nagai, Toshihiro; Koyama, Teruyoshi; Hattori, Junya; Chan, Kwok Hung; Kavet, Robert

    2012-07-07

    Contact currents flow from/into a charged human body when touching a grounded conductive object. An electrostatic discharge (ESD) or spark may occur just before contact or upon release. The current may stimulate muscles and peripheral nerves. In order to clarify the difference in the induced electric field between different sized human models, the in-situ electric fields were computed in anatomically based models of adults and a child for a contact current in a human body following ESD. A dispersive finite-difference time-domain method was used, in which biological tissue is assumed to obey a four-pole Debye model. From our computational results, the first peak of the discharge current was almost identical across adult and child models. The decay of the induced current in the child was also faster due mainly to its smaller body capacitance compared to the adult models. The induced electric fields in the forefingers were comparable across different models. However, the electric field induced in the arm of the child model was found to be greater than that in the adult models primarily because of its smaller cross-sectional area. The tendency for greater doses in the child has also been reported for power frequency sinusoidal contact current exposures as reported by other investigators.
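    The multi-pole Debye dispersion underlying such tissue models evaluates to a complex relative permittivity at each frequency. A sketch with hypothetical pole parameters, not the paper's fitted four-pole tissue values:

```python
import math

# Relative complex permittivity of a multi-pole Debye medium,
# eps(w) = eps_inf + sum_k d_eps_k/(1 + j*w*tau_k) + sigma/(j*w*eps0),
# the dispersion form a multi-pole Debye tissue model evaluates; the
# pole parameters below are hypothetical, not fitted tissue values.
EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def debye_eps(freq_hz, eps_inf, poles, sigma_s):
    w = 2.0 * math.pi * freq_hz
    eps = complex(eps_inf)
    for d_eps, tau in poles:          # relaxation poles (d_eps, tau)
        eps += d_eps / (1.0 + 1j * w * tau)
    eps += sigma_s / (1j * w * EPS0)  # static-conductivity term
    return eps

# hypothetical two-pole example evaluated at 1 MHz
val = debye_eps(1.0e6, 4.0, [(50.0, 8.0e-12), (30.0, 1.0e-8)], 0.2)
```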

  14. Modeling of anomalous electron mobility in Hall thrusters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koo, Justin W.; Boyd, Iain D.

    Accurate modeling of the anomalous electron mobility is absolutely critical for successful simulation of Hall thrusters. In this work, existing computational models for the anomalous electron mobility are used to simulate the UM/AFRL P5 Hall thruster (a 5 kW laboratory model) in a two-dimensional axisymmetric hybrid particle-in-cell Monte Carlo collision code. Comparison to experimental results indicates that, while these computational models can be tuned to reproduce the correct thrust or discharge current, it is very difficult to match all integrated performance parameters (thrust, power, discharge current, etc.) simultaneously. Furthermore, multiple configurations of these computational models can produce reasonable integrated performance parameters. A semiempirical electron mobility profile is constructed from a combination of internal experimental data and modeling assumptions. This semiempirical electron mobility profile is used in the code and results in more accurate simulation of both the integrated performance parameters and the mean potential profile of the thruster. Results indicate that the anomalous electron mobility, while absolutely necessary in the near-field region, provides a substantially smaller contribution to the total electron mobility in the high Hall current region near the thruster exit plane.

  15. Simulation of multi-pulse coaxial helicity injection in the Sustained Spheromak Physics Experiment

    NASA Astrophysics Data System (ADS)

    O'Bryan, J. B.; Romero-Talamás, C. A.; Woodruff, S.

    2018-03-01

    Nonlinear, numerical computation with the NIMROD code is used to explore magnetic self-organization during multi-pulse coaxial helicity injection in the Sustained Spheromak Physics eXperiment. We describe multiple distinct phases of spheromak evolution, starting from vacuum magnetic fields and the formation of the initial magnetic flux bubble through multiple refluxing pulses and the eventual onset of the column mode instability. Experimental and computational magnetic diagnostics agree on the onset of the column mode instability, which first occurs during the second refluxing pulse of the simulated discharge. Our computations also reproduce the injector voltage traces, despite only specifying the injector current and not explicitly modeling the external capacitor bank circuit. The computations demonstrate that global magnetic evolution is fairly robust to different transport models and, therefore, that a single fluid-temperature model is sufficient for a broader, qualitative assessment of spheromak performance. Although discharges with similar traces of normalized injector current produce similar global spheromak evolution, details of the current distribution during the column mode instability impact the relative degree of poloidal flux amplification and magnetic helicity content.

  16. Software For Computing Reliability Of Other Software

    NASA Technical Reports Server (NTRS)

    Nikora, Allen; Antczak, Thomas M.; Lyu, Michael

    1995-01-01

    Computer Aided Software Reliability Estimation (CASRE) computer program developed for use in measuring reliability of other software. Easier for non-specialists in reliability to use than many other currently available programs developed for the same purpose. CASRE incorporates mathematical modeling capabilities of public-domain Statistical Modeling and Estimation of Reliability Functions for Software (SMERFS) computer program and runs in Windows software environment. Provides menu-driven command interface; enabling and disabling of menu options guides user through (1) selection of set of failure data, (2) execution of mathematical model, and (3) analysis of results from model. Written in C language.
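    The kind of mathematical model such tools execute can be illustrated with the classic Goel-Okumoto reliability growth model; it is shown here only as an example of the class, and the parameter values are illustrative:

```python
import math

# Goel-Okumoto NHPP mean-value function, a classic example of the
# software reliability growth models that tools of the CASRE/SMERFS
# kind fit to failure data: m(t) = a * (1 - exp(-b * t)), with
# a = total expected failures and b = per-fault detection rate.
# Parameter values below are illustrative.
def go_mean_failures(t, a, b):
    return a * (1.0 - math.exp(-b * t))

def go_remaining(t, a, b):
    """Expected failures still latent after t units of testing."""
    return a - go_mean_failures(t, a, b)

a, b = 120.0, 0.05
expected_by_40h = go_mean_failures(40.0, a, b)   # 120 * (1 - e**-2)
```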

  17. Computer Games versus Maps before Reading Stories: Priming Readers' Spatial Situation Models

    ERIC Educational Resources Information Center

    Smith, Glenn Gordon; Majchrzak, Dan; Hayes, Shelley; Drobisz, Jack

    2011-01-01

    The current study investigated how computer games and maps compare as preparation for readers to comprehend and retain spatial relations in text narratives. Readers create situation models of five dimensions: spatial, temporal, causal, goal, and protagonist (Zwaan, Langston, & Graesser 1995). Of these five, readers mentally model the spatial…

  18. Computational modeling of cardiac hemodynamics: Current status and future outlook

    NASA Astrophysics Data System (ADS)

    Mittal, Rajat; Seo, Jung Hee; Vedula, Vijay; Choi, Young J.; Liu, Hang; Huang, H. Howie; Jain, Saurabh; Younes, Laurent; Abraham, Theodore; George, Richard T.

    2016-01-01

    The proliferation of four-dimensional imaging technologies, increasing computational speeds, improved simulation algorithms, and the widespread availability of powerful computing platforms are enabling simulations of cardiac hemodynamics with unprecedented speed and fidelity. Since cardiovascular disease is intimately linked to cardiovascular hemodynamics, accurate assessment of the patient's hemodynamic state is critical for the diagnosis and treatment of heart disease. Unfortunately, while a variety of invasive and non-invasive approaches for measuring cardiac hemodynamics are in widespread use, they still only provide an incomplete picture of the hemodynamic state of a patient. In this context, computational modeling of cardiac hemodynamics presents as a powerful non-invasive modality that can fill this information gap, and significantly impact the diagnosis as well as the treatment of cardiac disease. This article reviews the current status of this field as well as the emerging trends and challenges in cardiovascular health, computing, modeling and simulation that are expected to play a key role in its future development. Some recent advances in modeling and simulations of cardiac flow are described by using examples from our own work as well as the research of other groups.

  19. A modeling study of the time-averaged electric currents in the vicinity of isolated thunderstorms

    NASA Technical Reports Server (NTRS)

    Driscoll, Kevin T.; Blakeslee, Richard J.; Baginski, Michael E.

    1992-01-01

    A thorough examination of the results of a time-dependent computer model of a dipole thunderstorm revealed that there are numerous similarities between the time-averaged electrical properties and the steady-state properties of an active thunderstorm. Thus, the electrical behavior of the atmosphere in the vicinity of a thunderstorm can be determined with a formulation similar to what was first described by Holzer and Saxon (1952). From the Maxwell continuity equation of electric current, a simple analytical equation was derived that expresses a thunderstorm's average current contribution to the global electric circuit in terms of the generator current within the thundercloud, the intracloud lightning current, the cloud-to-ground lightning current, the altitudes of the charge centers, and the conductivity profile of the atmosphere. This equation was found to be nearly as accurate as the more computationally expensive numerical model, even when it is applied to a thunderstorm with a reduced conductivity thundercloud, a time-varying generator current, a varying flash rate, and a changing lightning mix.
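    The exponential conductivity profile enters such formulations through the columnar resistance of the atmosphere between the charge centers and the ionosphere. A minimal sketch assuming sigma(z) = sigma0 * exp(z/H); the values of sigma0 and the scale height H are illustrative, not the paper's profile:

```python
import math

# Columnar resistance (per unit area) of an atmosphere whose
# conductivity grows exponentially with height, sigma(z) =
# sigma0 * exp(z / H). Closed form of R = integral dz / sigma(z):
# R = (H / sigma0) * (exp(-z1 / H) - exp(-z2 / H)).
# sigma0 and the scale height H below are illustrative values.
def columnar_resistance(z1, z2, sigma0=1.0e-14, scale_h=6.0e3):
    return (scale_h / sigma0) * (math.exp(-z1 / scale_h)
                                 - math.exp(-z2 / scale_h))

# resistance of the column from a 10 km charge center up to 60 km altitude
r_up = columnar_resistance(10.0e3, 60.0e3)
```

Because conductivity grows with height, most of the columnar resistance sits in the first scale height above the charge center, which is why the charge-center altitudes appear explicitly in the analytical current expression described above.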

  20. An alternative low-loss stack topology for vanadium redox flow battery: Comparative assessment

    NASA Astrophysics Data System (ADS)

    Moro, Federico; Trovò, Andrea; Bortolin, Stefano; Del Col, Davide; Guarnieri, Massimo

    2017-02-01

    Two vanadium redox flow battery topologies have been compared. In the conventional series stack, bipolar plates connect cells electrically in series and hydraulically in parallel. The alternative topology consists of cells connected in parallel inside stacks by means of monopolar plates in order to reduce shunt currents along channels and manifolds. Channelled and flat current collectors interposed between cells were considered in both topologies. In order to compute the stack losses, an equivalent circuit model of a VRFB cell was built from a 2D FEM multiphysics numerical model based on Comsol®, accounting for coupled electrical, electrochemical, and charge and mass transport phenomena. Shunt currents were computed inside the cells with 3D FEM models and in the piping and manifolds by means of equivalent circuits solved with Matlab®. Hydraulic losses were computed with analytical models in piping and manifolds and with 3D numerical analyses based on ANSYS Fluent® in the cell porous electrodes. Total losses in the alternative topology were one order of magnitude lower than in an equivalent conventional battery. The alternative topology with channelled current collectors exhibits the lowest shunt currents and hydraulic losses, with round-trip efficiency higher by about 10% compared to the conventional topology.

  1. Dynamic interactions in neural networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arbib, M.A.; Amari, S.

    The study of neural networks is enjoying a great renaissance, both in computational neuroscience, the development of information processing models of living brains, and in neural computing, the use of neurally inspired concepts in the construction of intelligent machines. This volume presents models and data on the dynamic interactions occurring in the brain, and exhibits the dynamic interactions between research in computational neuroscience and in neural computing. The authors present current research, future trends and open problems.

  2. Comparison of Computer Based Instruction to Behavior Skills Training for Teaching Staff Implementation of Discrete-Trial Instruction with an Adult with Autism

    ERIC Educational Resources Information Center

    Nosik, Melissa R.; Williams, W. Larry; Garrido, Natalia; Lee, Sarah

    2013-01-01

    In the current study, behavior skills training (BST) is compared to a computer based training package for teaching discrete trial instruction to staff, teaching an adult with autism. The computer based training package consisted of instructions, video modeling and feedback. BST consisted of instructions, modeling, rehearsal and feedback. Following…

  3. A Cognitive Model for Problem Solving in Computer Science

    ERIC Educational Resources Information Center

    Parham, Jennifer R.

    2009-01-01

    According to industry representatives, computer science education needs to emphasize the processes involved in solving computing problems rather than their solutions. Most of the current assessment tools used by universities and computer science departments analyze student answers to problems rather than investigating the processes involved in…

  4. Fast solver for large scale eddy current non-destructive evaluation problems

    NASA Astrophysics Data System (ADS)

    Lei, Naiguang

    Eddy current testing plays a very important role in non-destructive evaluations of conducting test samples. Based on Faraday's law, an alternating magnetic field source generates induced currents, called eddy currents, in an electrically conducting test specimen. The eddy currents generate induced magnetic fields that oppose the direction of the inducing magnetic field in accordance with Lenz's law. In the presence of discontinuities in material property or defects in the test specimen, the induced eddy current paths are perturbed and the associated magnetic fields can be detected by coils or magnetic field sensors, such as Hall elements or magneto-resistance sensors. Due to the complexity of the test specimen and the inspection environments, the availability of theoretical simulation models is extremely valuable for studying the basic field/flaw interactions in order to obtain a fuller understanding of non-destructive testing phenomena. Theoretical models of the forward problem are also useful for training and validation of automated defect detection systems, since they can generate defect signatures that would be expensive to replicate experimentally. In general, modelling methods can be classified into two categories: analytical and numerical. Although analytical approaches offer closed-form solutions, these are generally impossible to obtain for complex sample and defect geometries, especially in three-dimensional space. Numerical modelling has become popular with advances in computer technology and computational methods. However, because large scale problems are extremely time-consuming, fast solvers and acceleration techniques are needed to make numerical models practical. This dissertation describes a numerical simulation model for eddy current problems using finite element analysis. Validation of the accuracy of this model is demonstrated via comparison with experimental measurements of steam generator tube wall defects.
These simulations, which generate two-dimensional raster scan data, typically take one to two days on a dedicated eight-core PC. A novel direct integral solver for eddy current problems, together with a GPU-based implementation, is also investigated in this research to reduce the computational time.
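One reason such forward models are costly is that the fields must be resolved over the electromagnetic skin depth, δ = √(2/(ωμσ)), which sets the mesh resolution needed near the specimen surface. That standard relation (not part of the dissertation abstract; the copper conductivity below is a textbook figure used only as an example) is easy to sanity-check:

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability, H/m

def skin_depth(freq_hz, sigma, mu_r=1.0):
    """Standard eddy current skin depth: delta = sqrt(2 / (omega * mu * sigma))."""
    omega = 2.0 * math.pi * freq_hz
    return math.sqrt(2.0 / (omega * mu_r * MU0 * sigma))

# Copper at 60 Hz (sigma ~ 5.8e7 S/m) gives a skin depth of roughly 8.5 mm.
delta_cu = skin_depth(60.0, 5.8e7)
```

Higher test frequencies shrink δ, concentrating the eddy currents near the surface and forcing finer finite elements there, which is part of why fast solvers matter for large scale problems.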

  5. INTERNATIONAL CONFERENCE ON SEMICONDUCTOR INJECTION LASERS SELCO-87: Computer model for quasioptic waveguide lasers

    NASA Astrophysics Data System (ADS)

    Wenzel, H.; Wünsche, H. J.

    1988-11-01

    A description is given of a numerical model of a semiconductor laser with a quasioptic waveguide (index guide). This model can be used on a personal computer. The model can be used to find the radiation field distributions in the vertical and lateral directions, the pump currents at the threshold, and also to solve dynamic rate equations.

  6. Comparison of rigorous and simple vibrational models for the CO2 gasdynamic laser

    NASA Technical Reports Server (NTRS)

    Monson, D. J.

    1977-01-01

    The accuracy of a simple vibrational model for computing the gain in a CO2 gasdynamic laser is assessed by comparing results computed from it with results computed from a rigorous vibrational model. The simple model is that of Anderson et al. (1971), in which the vibrational kinetics are modeled by grouping the nonequilibrium vibrational degrees of freedom into two modes, to each of which there corresponds an equation describing vibrational relaxation. The two models agree fairly well in the computed gain at low temperatures, but the simple model predicts too high a gain at the higher temperatures of current interest. The sources of error contributing to the overestimation given by the simple model are determined by examining the simplified relaxation equations.
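The two-mode simplification amounts to a pair of Landau-Teller relaxation equations, dE_i/dt = (E_i^eq(T) − E_i)/τ_i, one per grouped vibrational mode. A minimal forward-Euler sketch (the characteristic temperatures are near the textbook CO2 values for the bending and asymmetric-stretch modes, but the relaxation times and bath temperature are placeholders, not the paper's gasdynamic conditions):

```python
import math

def e_eq(theta, T):
    """Equilibrium energy of a harmonic mode with characteristic
    temperature theta, in units of (gas constant) * kelvin."""
    return theta / (math.exp(theta / T) - 1.0)

def relax_two_modes(T, thetas=(960.0, 3380.0), taus=(2e-6, 1e-5),
                    E0=(0.0, 0.0), dt=1e-8, t_end=1e-4):
    """Integrate dE_i/dt = (E_i^eq(T) - E_i)/tau_i for two grouped modes."""
    E = list(E0)
    for _ in range(int(t_end / dt)):
        for i in range(2):
            E[i] += dt * (e_eq(thetas[i], T) - E[i]) / taus[i]
    return E
```

In a gasdynamic laser nozzle the bath temperature T drops rapidly along the flow, so the slow mode lags its equilibrium value; that frozen vibrational energy is the population inversion the simple model tries to capture with just two relaxation equations.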

  7. CROSS-DISCIPLINARY PHYSICS AND RELATED AREAS OF SCIENCE AND TECHNOLOGY: Inverse computation for cardiac sources using single current dipole and current multipole models

    NASA Astrophysics Data System (ADS)

    Wang, Qian; Ma, Ping; Lu, Hong; Tang, Xue-Zheng; Hua, Ning; Tang, Fa-Kuan

    2009-12-01

    Two cardiac functional models are constructed in this paper. One is a single current dipole model and the other is a current multipole model. Parameters denoting the properties of these two models are calculated by a least-squares fit to the measurements using a simulated annealing algorithm. The measured signals are detected at 36 observation nodes by a superconducting quantum interference device (SQUID). By studying the trends of position, orientation and magnitude of the single current dipole model and the current multipole model in the QRS complex during one time span and comparing the reconstructed magnetocardiography (MCG) of these two cardiac models, we find that the current multipole model is a more appropriate model to represent cardiac electrophysiological activity.

  8. Electron kinematics in a plasma focus

    NASA Technical Reports Server (NTRS)

    Hohl, F.; Gary, S. P.

    1977-01-01

    The results of numerical integrations of the three-dimensional relativistic equations of motion of electrons subject to given electric and magnetic fields are presented. Fields due to two different models are studied: (1) a circular distribution of current filaments, and (2) a uniform current distribution; both the collapse and the current reduction phases are studied in each model. Decreasing current in the uniform current model yields 100 keV electrons accelerated toward the anode and, as for earlier ion computations, provides general agreement with experimental results.
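Orbit integrations of this kind are commonly carried out with the Boris scheme, which splits each step into a half electric kick, a magnetic rotation, and another half kick, and conserves kinetic energy exactly when E = 0. The sketch below is nonrelativistic with uniform fields for brevity (the study itself integrated the relativistic equations in spatially varying fields), so treat it as an illustration of the integrator, not of the plasma focus model:

```python
import numpy as np

def boris_push(x, v, E, B, q_m, dt, steps):
    """Advance a charged particle in given uniform E and B fields with the
    Boris scheme: half electric kick, magnetic rotation, half electric kick."""
    x, v = np.array(x, float), np.array(v, float)
    E, B = np.array(E, float), np.array(B, float)
    for _ in range(steps):
        v_minus = v + 0.5 * q_m * E * dt
        t = 0.5 * q_m * B * dt                 # rotation vector
        s = 2.0 * t / (1.0 + t @ t)
        v_prime = v_minus + np.cross(v_minus, t)
        v = v_minus + np.cross(v_prime, s) + 0.5 * q_m * E * dt
        x = x + v * dt
    return x, v
```

Because the magnetic substep is a pure rotation, the particle gyrates on a circle without spurious energy gain, which is why Boris-type pushers are the workhorse of particle simulations in given fields.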

  9. Two ways to model voltage current curves of adiabatic MgB2 wires

    NASA Astrophysics Data System (ADS)

    Stenvall, A.; Korpela, A.; Lehtonen, J.; Mikkonen, R.

    2007-08-01

    Usually overheating of the sample destroys attempts to measure voltage-current curves of conduction cooled high critical current MgB2 wires at low temperatures. Typically, when a quench occurs a wire burns out due to massive heat generation and negligible cooling. It has also been suggested that high n values measured with MgB2 wires and coils are not an intrinsic property of the material but arise due to heating during the voltage-current measurement. In addition, quite recently low n values for MgB2 wires have been reported. In order to find out the real properties of MgB2 an efficient computational model is required to simulate the voltage-current measurement. In this paper we go back to basics and consider two models to couple electromagnetic and thermal phenomena. In the first model the magnetization losses are computed according to the critical state model and the flux creep losses are considered separately. In the second model the superconductor resistivity is described by the widely used power law. Then the coupled current diffusion and heat conduction equations are solved with the finite element method. In order to compare the models, example runs are carried out with an adiabatic slab. Both models produce a similar significant temperature rise near the critical current which leads to fictitiously high n values.
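The power-law resistivity used in the second model is the standard E-J relation E = E_c (J/J_c)^n, and the n value is simply the log-log slope of the measured voltage-current curve. A brief sketch of both directions (the 1 µV/cm electric-field criterion is the conventional choice; J_c and n below are example numbers, not MgB2 data):

```python
import math

EC = 1e-4  # V/m, the conventional 1 uV/cm electric-field criterion

def e_field(J, Jc, n):
    """Power-law E-J characteristic of a superconductor."""
    return EC * (J / Jc) ** n

def n_value(J1, E1, J2, E2):
    """n value recovered as the log-log slope between two V-I points."""
    return math.log(E2 / E1) / math.log(J2 / J1)
```

Sample heating during the measurement steepens the apparent slope of the V-I curve, which is exactly the mechanism by which the paper's coupled thermal-electromagnetic models produce fictitiously high n values near the critical current.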

  10. An Automated Method for High-Definition Transcranial Direct Current Stimulation Modeling*

    PubMed Central

    Huang, Yu; Su, Yuzhuo; Rorden, Christopher; Dmochowski, Jacek; Datta, Abhishek; Parra, Lucas C.

    2014-01-01

    Targeted transcranial stimulation with electric currents requires accurate models of the current flow from scalp electrodes to the human brain. Idiosyncratic anatomy of individual brains and heads leads to significant variability in such current flows across subjects, thus necessitating accurate individualized head models. Here we report on an automated processing chain that computes current distributions in the head starting from a structural magnetic resonance image (MRI). The main purpose of automating this process is to reduce the substantial effort currently required for manual segmentation, electrode placement, and solving of finite element models. In doing so, several weeks of manual labor were reduced to no more than 4 hours of computation time and minimal user interaction, while current-flow results for the automated method deviated by less than 27.9% from the manual method. Key facilitating factors are the addition of three tissue types (skull, scalp and air) to a state-of-the-art automated segmentation process, morphological processing to correct small but important segmentation errors, and automated placement of small electrodes based on easily reproducible standard electrode configurations. We anticipate that such an automated processing will become an indispensable tool to individualize transcranial direct current stimulation (tDCS) therapy. PMID:23367144

  11. Industry-Wide Workshop on Computational Turbulence Modeling

    NASA Technical Reports Server (NTRS)

    Shabbir, Aamir (Compiler)

    1995-01-01

    This publication contains the presentations made at the Industry-Wide Workshop on Computational Turbulence Modeling which took place on October 6-7, 1994. The purpose of the workshop was to initiate the transfer of technology developed at Lewis Research Center to industry and to discuss the current status and the future needs of turbulence models in industrial CFD.

  12. Self-consistent modeling of the dynamic evolution of magnetic island growth in the presence of stabilizing electron-cyclotron current drive

    NASA Astrophysics Data System (ADS)

    Chatziantonaki, Ioanna; Tsironis, Christos; Isliker, Heinz; Vlahos, Loukas

    2013-11-01

    The most promising technique for the control of neoclassical tearing modes in tokamak experiments is the compensation of the missing bootstrap current with an electron-cyclotron current drive (ECCD). In this frame, the dynamics of magnetic islands has been studied extensively in terms of the modified Rutherford equation (MRE), including the presence of a current drive, either analytically described or computed by numerical methods. In this article, a self-consistent model for the dynamic evolution of the magnetic island and the driven current is derived, which takes into account the island's magnetic topology and its effect on the current drive. The model combines the MRE with a ray-tracing approach to electron-cyclotron wave-propagation and absorption. Numerical results exhibit a decrease in the time required for complete stabilization with respect to the conventional computation (not taking into account the island geometry), which increases by increasing the initial island size and radial misalignment of the deposition.

  13. Computational modeling of the effect of external electron injection into a direct-current microdischarge

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Panneer Chelvam, Prem Kumar; Raja, Laxminarayan L.

    2015-12-28

    Electron emission from the electrode surface plays an important role in determining the structure of a direct-current microdischarge. Here we have developed a computational model of a direct-current microdischarge to study the effect of external electron injection from the cathode surface into the discharge to manipulate its properties. The model provides a self-consistent, multi-species, multi-temperature fluid representation of the plasma. A microdischarge with a metal-insulator-metal configuration is chosen for this study. The effect of external electron injection on the structure and properties of the microdischarge is described. The transient behavior of the microdischarge during the electron injection is examined. The nonlinearities in the dynamics of the plasma result in a large increase of conduction current after active electron injection. For the conditions simulated a switching time of ∼100 ns from a low-current to high-current discharge state is realized.

  14. A plug flow reactor model of a vanadium redox flow battery considering the conductive current collectors

    NASA Astrophysics Data System (ADS)

    König, S.; Suriyah, M. R.; Leibfried, T.

    2017-08-01

    A lumped-parameter model for vanadium redox flow batteries, which use metallic current collectors, is extended into a one-dimensional model using the plug flow reactor principle. Thus, the commonly used simplification of a perfectly mixed cell is no longer required. The resistances of the cell components are derived in the in-plane and through-plane directions. The copper current collector is the only component with a significant in-plane conductance, which allows for a simplified electrical network. The division of a full-scale flow cell into 10 layers in the direction of fluid flow represents a reasonable compromise between computational effort and accuracy. Due to the variations in the state of charge and thus the open circuit voltage of the electrolyte, the currents in the individual layers vary considerably. Hence, there are situations in which the first layer, directly at the electrolyte input, carries a multiple of the last layer's current. The conventional model overestimates the cell performance. In the worst-case scenario, the more accurate 20-layer model yields a discharge capacity 9.4% smaller than that computed with the conventional model. The conductive current collector effectively eliminates the high over-potentials in the last layers of the plug flow reactor models that have been reported previously.
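The uneven layer currents follow from a simple constraint: every layer sees the same terminal voltage, while its open-circuit voltage depends on the local state of charge, which decreases along the flow path during discharge. A hedged sketch of that bookkeeping (one lumped resistance per layer, a simplified Nernst OCV, and illustrative parameter values; the paper's actual model resolves far more of the cell physics):

```python
import math

F, R_GAS, T = 96485.0, 8.314, 298.0  # Faraday const., gas const., temperature

def ocv(soc, E0=1.4):
    """Simplified Nernst open-circuit voltage of one cell layer (V)."""
    return E0 + 2.0 * R_GAS * T / F * math.log(soc / (1.0 - soc))

def layer_currents(socs, I_total, R_layer=0.02):
    """Split a discharge current I_total among parallel layers that share a
    common terminal voltage V_t, with i_k = (ocv_k - V_t) / R_layer."""
    g = 1.0 / R_layer
    n = len(socs)
    V_t = (sum(ocv(s) for s in socs) * g - I_total) / (n * g)
    return [(ocv(s) - V_t) * g for s in socs]

# Inlet layers carry fresher (higher-SOC) electrolyte than outlet layers.
currents = layer_currents([0.60, 0.55, 0.50, 0.45, 0.40], I_total=20.0)
```

Even this toy version reproduces the qualitative result: the inlet layer, fed with higher-SOC electrolyte, carries noticeably more current than the last layer.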

  15. Errors due to the truncation of the computational domain in static three-dimensional electrical impedance tomography.

    PubMed

    Vauhkonen, P J; Vauhkonen, M; Kaipio, J P

    2000-02-01

    In electrical impedance tomography (EIT), an approximation for the internal resistivity distribution is computed based on the knowledge of the injected currents and measured voltages on the surface of the body. The currents spread out in three dimensions and therefore off-plane structures have a significant effect on the reconstructed images. A question arises: how far from the current carrying electrodes should the discretized model of the object be extended? If the model is truncated too near the electrodes, errors are produced in the reconstructed images. On the other hand if the model is extended very far from the electrodes the computational time may become too long in practice. In this paper the model truncation problem is studied with the extended finite element method. Forward solutions obtained using so-called infinite elements, long finite elements and separable long finite elements are compared to the correct solution. The effects of the truncation of the computational domain on the reconstructed images are also discussed and results from the three-dimensional (3D) sensitivity analysis are given. We show that if the finite element method with ordinary elements is used in static 3D EIT, the dimension of the problem can become fairly large if the errors associated with the domain truncation are to be avoided.

  16. Methodology of modeling and measuring computer architectures for plasma simulations

    NASA Technical Reports Server (NTRS)

    Wang, L. P. T.

    1977-01-01

    A brief introduction to plasma simulation using computers and the difficulties on currently available computers is given. Through the use of an analyzing and measuring methodology - SARA, the control flow and data flow of a particle simulation model REM2-1/2D are exemplified. After recursive refinements the total execution time may be greatly shortened and a fully parallel data flow can be obtained. From this data flow, a matched computer architecture or organization could be configured to achieve the computation bound of an application problem. A sequential type simulation model, an array/pipeline type simulation model, and a fully parallel simulation model of a code REM2-1/2D are proposed and analyzed. This methodology can be applied to other application problems which have implicitly parallel nature.

  17. Dst Index in the 2008 GEM Modeling Challenge - Model Performance for Moderate and Strong Magnetic Storms

    NASA Technical Reports Server (NTRS)

    Rastaetter, Lutz; Kuznetsova, Maria; Hesse, Michael; Chulaki, Anna; Pulkkinen, Antti; Ridley, Aaron J.; Gombosi, Tamas; Vapirev, Alexander; Raeder, Joachim; Wiltberger, Michael James; hide

    2010-01-01

    The GEM 2008 modeling challenge efforts are expanding beyond comparing in-situ measurements in the magnetosphere and ionosphere to include the computation of indices to be compared. The Dst index measures the largest deviations of the horizontal magnetic field at 4 equatorial magnetometers from the quiet-time background field and is commonly used to track the strength of the magnetic disturbance of the magnetosphere during storms. Models can calculate a proxy Dst index in various ways, including using the Dessler-Parker-Sckopke relation and the energy of the ring current, or Biot-Savart integration of electric currents in the magnetosphere. The GEM modeling challenge investigates 4 space weather events, and we compare models available at CCMC against each other and against the observed values of Dst. Models used include SWMF/BATSRUS, OpenGGCM, LFM, GUMICS (3D magnetosphere MHD models), Fok-RC, CRCM, RAM-SCB (kinetic drift models of the ring current), WINDMI (magnetosphere-ionosphere electric circuit model), and predictions based on an impulse response function (IRF) model and analytic coupling functions with inputs of solar wind data. In addition to the analysis of model-observation comparisons, we look at the way Dst is computed in global magnetosphere models. The default value of Dst computed by the SWMF model is based on Bz at the Earth's center. In addition to this, we present results obtained at different locations on the Earth's surface. We choose equatorial locations at local noon, dusk (18:00 hours), midnight and dawn (6:00 hours). The different virtual observatory locations reveal the variation around the Earth-centered Dst value resulting from the distribution of electric currents in the magnetosphere during different phases of a storm.
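The Dessler-Parker-Sckopke relation mentioned above ties the total ring current energy to the equatorial field depression: ΔB/B₀ = −(2/3)·E_rc/E_mag, where E_mag ≈ 8×10¹⁷ J is the energy of the Earth's dipole field above the surface. A small order-of-magnitude sketch (standard textbook constants; this is the proxy idea only, not any of the challenge models):

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability, H/m
B0 = 3.1e-5           # equatorial surface dipole field, T
RE = 6.371e6          # Earth radius, m

# Energy of the dipole field external to the Earth's surface (~8e17 J).
E_MAG = 4.0 * math.pi * B0 ** 2 * RE ** 3 / (3.0 * MU0)

def dps_dst(e_ring_joules):
    """Dessler-Parker-Sckopke proxy: delta-B/B0 = -(2/3) * E_rc / E_mag,
    returned as a field depression in nT (negative during a storm)."""
    return -2.0 / 3.0 * e_ring_joules / E_MAG * B0 * 1e9
```

A ring current energy of about 4×10¹⁵ J maps to roughly −100 nT, the scale of a strong storm, which is why kinetic ring current models can convert their computed particle energy directly into a Dst proxy.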

  18. Impact of remote sensing upon the planning, management, and development of water resources

    NASA Technical Reports Server (NTRS)

    Loats, H. L.; Fowler, T. R.; Frech, S. L.

    1974-01-01

    A survey of the principal water resource users was conducted to determine the impact of new remote data streams on hydrologic computer models. The analysis of the responses and direct contact demonstrated that: (1) the majority of water resource effort of the type suitable to remote sensing inputs is conducted by major federal water resources agencies or through federally stimulated research, (2) the federal government develops most of the hydrologic models used in this effort; and (3) federal computer power is extensive. The computers, computer power, and hydrologic models in current use were determined.

  19. A computational model of the ionic currents, Ca2+ dynamics and action potentials underlying contraction of isolated uterine smooth muscle.

    PubMed

    Tong, Wing-Chiu; Choi, Cecilia Y; Kharche, Sanjay; Holden, Arun V; Zhang, Henggui; Taggart, Michael J

    2011-04-29

    Uterine contractions during labor are discretely regulated by rhythmic action potentials (AP) of varying duration and form that serve to determine calcium-dependent force production. We have employed a computational biology approach to develop a fuller understanding of the complexity of excitation-contraction (E-C) coupling of uterine smooth muscle cells (USMC). Our overall aim is to establish a mathematical platform of sufficient biophysical detail to quantitatively describe known uterine E-C coupling parameters and thereby inform future empirical investigations of physiological and pathophysiological mechanisms governing normal and dysfunctional labors. From published and unpublished data we construct mathematical models for fourteen ionic currents of USMCs: Ca2+ currents (L- and T-type), Na+ current, a hyperpolarization-activated current, three voltage-gated K+ currents, two Ca2+-activated K+ currents, Ca2+-activated Cl current, non-specific cation current, Na+-Ca2+ exchanger, Na+-K+ pump and background current. The magnitudes and kinetics of each current system in a spindle-shaped single cell with a specified surface area:volume ratio are described by differential equations, in terms of maximal conductances, electrochemical gradient, voltage-dependent activation/inactivation gating variables and temporal changes in intracellular Ca2+ computed from known Ca2+ fluxes. These quantifications are validated by the reconstruction of the individual experimental ionic currents obtained under voltage-clamp. Phasic contraction is modeled in relation to the time constant of changing [Ca2+]i. This integrated model is validated by its reconstruction of the different USMC AP configurations (spikes, plateau and bursts of spikes), of the change from bursting to plateau-type AP produced by estradiol, and of simultaneous experimental recordings of spontaneous AP, [Ca2+]i and phasic force.
In summary, our advanced mathematical model provides a powerful tool to investigate the physiological ionic mechanisms underlying the genesis of uterine electrical E-C coupling of labor and parturition. This will inform the development of descriptive and predictive quantitative models of myometrial electrogenesis at the whole-cell and tissue levels.
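Each of the current systems described takes the familiar Hodgkin-Huxley form I = g_max·m·h·(V − E_rev), with gating variables relaxing as dm/dt = (m_∞(V) − m)/τ. A generic single-gate voltage-clamp sketch (the sigmoid parameters, time constant, and conductance are illustrative placeholders, not the paper's fitted USMC values):

```python
import math

def m_inf(V, V_half=-30.0, k=8.0):
    """Steady-state activation: a Boltzmann sigmoid of membrane voltage (mV)."""
    return 1.0 / (1.0 + math.exp(-(V - V_half) / k))

def clamp_current(V, g_max=1.0, E_rev=60.0, tau=5.0, dt=0.01, t_end=50.0):
    """Voltage-clamp step from rest: integrate dm/dt = (m_inf - m)/tau by
    forward Euler and return the final current I = g_max * m * (V - E_rev)
    together with the gating variable m."""
    m = 0.0
    for _ in range(int(t_end / dt)):
        m += dt * (m_inf(V) - m) / tau
    return g_max * m * (V - E_rev), m
```

Summing a dozen such equations, each with its own kinetics and reversal potential, plus Ca2+ handling, is exactly how integrated models of this kind reconstruct whole-cell voltage-clamp records and AP shapes.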

  20. Crops in silico: A community wide multi-scale computational modeling framework of plant canopies

    NASA Astrophysics Data System (ADS)

    Srinivasan, V.; Christensen, A.; Borkiewic, K.; Yiwen, X.; Ellis, A.; Panneerselvam, B.; Kannan, K.; Shrivastava, S.; Cox, D.; Hart, J.; Marshall-Colon, A.; Long, S.

    2016-12-01

    Current crop models predict a looming gap between supply and demand for primary foodstuffs over the next 100 years. While significant yield increases were achieved in major food crops during the early years of the green revolution, the current rates of yield increases are insufficient to meet future projected food demand. Furthermore, with projected reduction in arable land, decrease in water availability, and increasing impacts of climate change on future food production, innovative technologies are required to sustainably improve crop yield. To meet these challenges, we are developing Crops in silico (Cis), a biologically informed, multi-scale, computational modeling framework that can facilitate whole plant simulations of crop systems. The Cis framework is capable of linking models of gene networks, protein synthesis, metabolic pathways, physiology, growth, and development in order to investigate crop response to different climate scenarios and resource constraints. This modeling framework will provide the mechanistic details to generate testable hypotheses toward accelerating directed breeding and engineering efforts to increase future food security. A primary objective for building such a framework is to create synergy among an inter-connected community of biologists and modelers to create a realistic virtual plant. This framework advantageously casts the detailed mechanistic understanding of individual plant processes across various scales in a common scalable framework that makes use of current advances in high performance and parallel computing. We are currently designing a user-friendly interface that will make this tool equally accessible to biologists and computer scientists.
Critically, this framework will provide the community with much needed tools for guiding future crop breeding and engineering, understanding the emergent implications of discoveries at the molecular level for whole plant behavior, and improved prediction of plant and ecosystem responses to the environment.

  1. Electrode Models for Electric Current Computed Tomography

    PubMed Central

    CHENG, KUO-SHENG; ISAACSON, DAVID; NEWELL, J. C.; GISSER, DAVID G.

    2016-01-01

    This paper develops a mathematical model for the physical properties of electrodes suitable for use in electric current computed tomography (ECCT). The model includes the effects of discretization, shunt, and contact impedance. The complete model was validated by experiment. Bath resistivities of 284.0, 139.7, 62.3, 29.5 Ω · cm were studied. Values of “effective” contact impedance z used in the numerical approximations were 58.0, 35.0, 15.0, and 7.5 Ω · cm2, respectively. Agreement between the calculated and experimentally measured values was excellent throughout the range of bath conductivities studied. It is desirable in electrical impedance imaging systems to model the observed voltages to the same precision as they are measured in order to be able to make the highest resolution reconstructions of the internal conductivity that the measurement precision allows. The complete electrode model, which includes the effects of discretization of the current pattern, the shunt effect due to the highly conductive electrode material, and the effect of an “effective” contact impedance, allows calculation of the voltages due to any current pattern applied to a homogeneous resistivity field. PMID:2777280
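The role of the "effective" contact impedance z can be seen in a two-terminal caricature of the complete electrode model: each electrode of area A adds a series term z/A on top of the bulk bath resistance. The z value below is taken from the range reported in the abstract, but the bath resistance and electrode area are invented for illustration:

```python
def measured_resistance(R_bath, z, area_cm2):
    """Two-electrode sketch: bulk bath resistance plus one contact-impedance
    layer z (ohm * cm^2) at each of the two electrodes of area area_cm2."""
    return R_bath + 2.0 * z / area_cm2

def measured_voltage(I, R_bath, z, area_cm2):
    """Voltage observed for an applied current I (Ohm's law on the above)."""
    return I * measured_resistance(R_bath, z, area_cm2)
```

If a reconstruction algorithm ignores the z/A terms, the extra voltage drop at the electrode faces is misattributed to the interior conductivity, which is precisely the kind of modelling error that the complete electrode model removes.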

  3. Computer modeling of high-voltage solar array experiment using the NASCAP/LEO (NASA Charging Analyzer Program/Low Earth Orbit) computer code

    NASA Astrophysics Data System (ADS)

    Reichl, Karl O., Jr.

    1987-06-01

    The relationship between the Interactions Measurement Payload for Shuttle (IMPS) flight experiment and the low Earth orbit plasma environment is discussed. Two interactions (parasitic current loss and electrostatic discharge on the array) may be detrimental to mission effectiveness. They result from the spacecraft's electrical potentials floating relative to plasma ground to achieve a charge flow equilibrium into the spacecraft. The floating potentials were driven by external biases applied to a solar array module of the Photovoltaic Array Space Power (PASP) experiment aboard the IMPS test pallet. The modeling was performed using the NASA Charging Analyzer Program/Low Earth Orbit (NASCAP/LEO) computer code which calculates the potentials and current collection of high-voltage objects in low Earth orbit. Models are developed by specifying the spacecraft, environment, and orbital parameters. Eight IMPS models were developed by varying the array's bias voltage and altering its orientation relative to its motion. The code modeled a typical low Earth equatorial orbit. NASCAP/LEO calculated a wide variety of possible floating potential and current collection scenarios. These varied directly with both the array bias voltage and with the vehicle's orbital orientation.
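The floating potential driving both interactions comes from a charge-balance condition: the surface charges up until the collected electron and ion currents cancel. A toy version of that balance (retarded-electron, saturated-ion probe model solved by bisection; the numbers are illustrative, and a real NASCAP/LEO run resolves spacecraft geometry and sheath physics far beyond this):

```python
import math

def floating_potential(I_e0, I_i0, Te_eV):
    """Bisection solve of I_e0 * exp(V / Te) = I_i0 for the (negative)
    surface potential V at which electron and ion collection balance.
    I_e0, I_i0 are the saturation currents; Te_eV the electron temperature."""
    f = lambda V: I_e0 * math.exp(V / Te_eV) - I_i0
    lo, hi = -100.0, 0.0  # bracket: f(lo) < 0 < f(hi) whenever I_e0 > I_i0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

Because electrons are far more mobile than ions (I_e0 >> I_i0), an unbiased surface floats a few electron temperatures negative; an external bias applied to an array string shifts this equilibrium and sets the parasitic current collected from the plasma.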

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lai, Jih-Sheng

    This paper introduces control system design software packages, SIMNON and MATLAB/SIMULINK, for power electronics system simulation. A complete power electronics system typically consists of a rectifier bridge along with its smoothing capacitor, an inverter, and a motor. The system components, whether discrete or continuous, linear or nonlinear, are modeled in mathematical equations. Inverter control methods, such as pulse-width modulation and hysteresis current control, are expressed in either computer algorithms or digital circuits. After describing component models and control methods, computer programs are then developed for complete system simulation. Simulation results are mainly used for studying system performances, such as input and output current harmonics, torque ripples, and speed responses. Key computer programs and simulation results are demonstrated for educational purposes.
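Of the two control methods named, hysteresis current control is the easier to express as a computer algorithm: switch the inverter output high when the current falls below the reference band, low when it rises above. A self-contained sketch on a single R-L load (in Python rather than SIMNON or SIMULINK; all circuit values are arbitrary teaching numbers):

```python
def hysteresis_sim(I_ref=5.0, band=0.4, Vdc=100.0, L=5e-3, R=1.0,
                   dt=1e-6, t_end=0.05):
    """Simulate di/dt = (u - R*i)/L, where the applied voltage u toggles
    between +Vdc and -Vdc to hold the current i inside a hysteresis band
    around the reference I_ref. Returns the final current."""
    i, u = 0.0, Vdc
    for _ in range(int(t_end / dt)):
        if i > I_ref + band / 2.0:
            u = -Vdc
        elif i < I_ref - band / 2.0:
            u = Vdc
        i += dt * (u - R * i) / L
    return i
```

The resulting waveform is the characteristic sawtooth ripple bounded by the band; unlike fixed-frequency PWM, the switching frequency here emerges from L, Vdc and the band width, which is one of the trade-offs such simulations are used to study.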

  5. A Sustainable Model for Integrating Current Topics in Machine Learning Research into the Undergraduate Curriculum

    ERIC Educational Resources Information Center

    Georgiopoulos, M.; DeMara, R. F.; Gonzalez, A. J.; Wu, A. S.; Mollaghasemi, M.; Gelenbe, E.; Kysilka, M.; Secretan, J.; Sharma, C. A.; Alnsour, A. J.

    2009-01-01

    This paper presents an integrated research and teaching model that has resulted from an NSF-funded effort to introduce results of current Machine Learning research into the engineering and computer science curriculum at the University of Central Florida (UCF). While in-depth exposure to current topics in Machine Learning has traditionally occurred…

  6. Radiogenomics and radiotherapy response modeling

    NASA Astrophysics Data System (ADS)

    El Naqa, Issam; Kerns, Sarah L.; Coates, James; Luo, Yi; Speers, Corey; West, Catharine M. L.; Rosenstein, Barry S.; Ten Haken, Randall K.

    2017-08-01

    Advances in patient-specific information and biotechnology have contributed to a new era of computational medicine. Radiogenomics has emerged as a new field that investigates the role of genetics in treatment response to radiation therapy. Radiation oncology is currently attempting to embrace these recent advances and add to its rich history by maintaining its prominent role as a quantitative leader in oncologic response modeling. Here, we provide an overview of radiogenomics starting with genotyping, data aggregation, and application of different modeling approaches based on modifying traditional radiobiological methods or application of advanced machine learning techniques. We highlight the current status and potential for this new field to reshape the landscape of outcome modeling in radiotherapy and drive future advances in computational oncology.

  7. Reconstruction of electrocardiogram using ionic current models for heart muscles.

    PubMed

    Yamanaka, A; Okazaki, K; Urushibara, S; Kawato, M; Suzuki, R

    1986-11-01

    A digital computer model is presented for simulating the electrocardiogram during ventricular activation and repolarization (QRS-T waves). Part of the ventricular septum and the left ventricular free wall of the heart are represented by a two-dimensional array of 730 homogeneous functional units. Ionic current models are used to determine the spatial distribution of the electrical activity of these units at each instant of time during the simulated cardiac cycle. To reconstruct the electrocardiogram, the model is expanded three-dimensionally under an equipotential assumption along the third axis, and the surface potentials are then calculated using the solid angle method. Our digital computer model can be used to improve understanding of the relationship between body surface potentials and intracellular electrical events.
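    The solid angle method used here rests on the fact that a uniform double layer contributes a potential proportional to the solid angle its surface subtends at the field point. A sketch of that geometric core, using the Van Oosterom-Strackee formula for the solid angle of a triangle, is given below; the mesh, layer strength, and function names are assumptions for illustration, not this paper's implementation.

    ```python
    # Solid angle of a triangle at an observation point (Van Oosterom-Strackee),
    # and the resulting double-layer potential.  Mesh/scaling are illustrative.
    import math

    def tri_solid_angle(p, a, b, c):
        """Solid angle subtended at point p by triangle (a, b, c)."""
        r = [[v[i] - p[i] for i in range(3)] for v in (a, b, c)]
        n = [math.sqrt(sum(x * x for x in v)) for v in r]
        cross = (r[1][1] * r[2][2] - r[1][2] * r[2][1],
                 r[1][2] * r[2][0] - r[1][0] * r[2][2],
                 r[1][0] * r[2][1] - r[1][1] * r[2][0])
        numer = sum(r[0][i] * cross[i] for i in range(3))   # triple product
        dot = lambda u, v: sum(u[i] * v[i] for i in range(3))
        denom = (n[0] * n[1] * n[2] + dot(r[0], r[1]) * n[2]
                 + dot(r[1], r[2]) * n[0] + dot(r[2], r[0]) * n[1])
        return 2.0 * math.atan2(numer, denom)

    def double_layer_potential(p, triangles, strength):
        """phi(p) = strength/(4*pi) * total solid angle of the activation front."""
        return strength / (4 * math.pi) * sum(
            tri_solid_angle(p, *t) for t in triangles)
    ```

    A useful sanity check is that a closed surface enclosing the observation point subtends a total solid angle of 4π.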

  8. Quantum-assisted biomolecular modelling.

    PubMed

    Harris, Sarah A; Kendon, Vivien M

    2010-08-13

    Our understanding of the physics of biological molecules, such as proteins and DNA, is limited because the approximations we usually apply to model inert materials are not, in general, applicable to soft, chemically inhomogeneous systems. The configurational complexity of biomolecules means the entropic contribution to the free energy is a significant factor in their behaviour, requiring detailed dynamical calculations to fully evaluate. Computer simulations capable of taking all interatomic interactions into account are therefore vital. However, even with the best current supercomputing facilities, we are unable to capture enough of the most interesting aspects of their behaviour to properly understand how they work. This limits our ability to design new molecules, to treat diseases, for example. Progress in biomolecular simulation depends crucially on increasing the computing power available. Faster classical computers are in the pipeline, but these provide only incremental improvements. Quantum computing offers the possibility of performing huge numbers of calculations in parallel, when it becomes available. We discuss the current open questions in biomolecular simulation, how these might be addressed using quantum computation and speculate on the future importance of quantum-assisted biomolecular modelling.

  9. Computational Modeling of Single Neuron Extracellular Electric Potentials and Network Local Field Potentials using LFPsim.

    PubMed

    Parasuram, Harilal; Nair, Bipin; D'Angelo, Egidio; Hines, Michael; Naldi, Giovanni; Diwakar, Shyam

    2016-01-01

    Local Field Potentials (LFPs) are population signals generated by complex spatiotemporal interaction of current sources and dipoles. Mathematical computations of LFPs allow the study of circuit functions and dysfunctions via simulations. This paper introduces LFPsim, a NEURON-based tool for computing population LFP activity and single neuron extracellular potentials. LFPsim was developed to be used on existing cable compartmental neuron and network models. Point source, line source, and RC based filter approximations can be used to compute extracellular activity. As a demonstration of efficient implementation, we showcase LFPs from mathematical models of electrotonically compact cerebellum granule neurons and morphologically complex neurons of the neocortical column. LFPsim reproduced neocortical LFP at 8, 32, and 56 Hz via current injection, in vitro post-synaptic N2a, N2b waves and in vivo T-C waves in cerebellum granular layer. LFPsim also includes a simulation of multi-electrode array of LFPs in network populations to aid computational inference between biophysical activity in neural networks and corresponding multi-unit activity resulting in extracellular and evoked LFP signals.
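    The point-source approximation named in this abstract treats each compartment's membrane current as a monopole in a homogeneous resistive medium, so the extracellular potential is a superposition of I/(4πσr) terms. The sketch below shows that idea only; the function names, the conductivity value, and the data layout are assumptions, not LFPsim's actual API.

    ```python
    # Point-source approximation for extracellular potential:
    # V = sum_k I_k / (4 * pi * sigma * r_k) over compartment currents.
    import math

    SIGMA = 0.3  # extracellular conductivity (S/m), a commonly assumed value

    def point_source_lfp(electrode, sources, sigma=SIGMA):
        """Sum V = I/(4*pi*sigma*r) over (position, current) sources."""
        v = 0.0
        for (x, y, z), i_m in sources:
            r = math.dist(electrode, (x, y, z))   # electrode-to-source distance
            v += i_m / (4 * math.pi * sigma * r)
        return v
    ```

    Because the model is linear, equal and opposite currents placed symmetrically about the electrode cancel exactly, which is why balanced sink/source (dipole-like) configurations produce the characteristic small far-field LFP.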

  10. Mathematical modeling and computational prediction of cancer drug resistance.

    PubMed

    Sun, Xiaoqiang; Hu, Bin

    2017-06-23

    Diverse forms of resistance to anticancer drugs can lead to the failure of chemotherapy. Drug resistance is one of the most intractable issues for successfully treating cancer in current clinical practice. Effective clinical approaches that could counter drug resistance by restoring the sensitivity of tumors to the targeted agents are urgently needed. As numerous experimental results on resistance mechanisms have been obtained and a mass of high-throughput data has been accumulated, mathematical modeling and computational predictions using systematic and quantitative approaches have become increasingly important, as they can potentially provide deeper insights into resistance mechanisms, generate novel hypotheses or suggest promising treatment strategies for future testing. In this review, we first briefly summarize the current progress of experimentally revealed resistance mechanisms of targeted therapy, including genetic mechanisms, epigenetic mechanisms, posttranslational mechanisms, cellular mechanisms, microenvironmental mechanisms and pharmacokinetic mechanisms. Subsequently, we list several currently available databases and Web-based tools related to drug sensitivity and resistance. Then, we focus primarily on introducing some state-of-the-art computational methods used in drug resistance studies, including mechanism-based mathematical modeling approaches (e.g. molecular dynamics simulation, kinetic model of molecular networks, ordinary differential equation model of cellular dynamics, stochastic model, partial differential equation model, agent-based model, pharmacokinetic-pharmacodynamic model, etc.) and data-driven prediction methods (e.g. omics data-based conventional screening approach for node biomarkers, static network approach for edge biomarkers and module biomarkers, dynamic network approach for dynamic network biomarkers and dynamic module network biomarkers, etc.). 
Finally, we discuss several further questions and future directions for the use of computational methods for studying drug resistance, including inferring drug-induced signaling networks, multiscale modeling, drug combinations and precision medicine. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  11. Investigation of the effects of external current systems on the MAGSAT data utilizing grid cell modeling techniques

    NASA Technical Reports Server (NTRS)

    Klumpar, D. M. (Principal Investigator)

    1982-01-01

    Efforts in support of the development of a model of the magnetic fields due to ionospheric and magnetospheric electrical currents are discussed. Specifically, progress made in reading MAGSAT tapes and plotting the deviation of the measured magnetic field components with respect to a spherical harmonic model of the main geomagnetic field is reported. Initial tests of the modeling procedure developed to compute the ionosphere/magnetosphere-induced fields at satellite orbit are also described. The modeling technique utilizes a linear current element representation of the large scale current system.

  12. Novel opportunities for computational biology and sociology in drug discovery

    PubMed Central

    Yao, Lixia

    2009-01-01

    Drug discovery today is impossible without sophisticated modeling and computation. In this review we touch on previous advances in computational biology and by tracing the steps involved in pharmaceutical development, we explore a range of novel, high value opportunities for computational innovation in modeling the biological process of disease and the social process of drug discovery. These opportunities include text mining for new drug leads, modeling molecular pathways and predicting the efficacy of drug cocktails, analyzing genetic overlap between diseases and predicting alternative drug use. Computation can also be used to model research teams and innovative regions and to estimate the value of academy-industry ties for scientific and human benefit. Attention to these opportunities could promise punctuated advance, and will complement the well-established computational work on which drug discovery currently relies. PMID:19674801

  13. Future Approach to tier-0 extension

    NASA Astrophysics Data System (ADS)

    Jones, B.; McCance, G.; Cordeiro, C.; Giordano, D.; Traylen, S.; Moreno García, D.

    2017-10-01

    The current tier-0 processing at CERN is done on two managed sites, the CERN computer centre and the Wigner computer centre. With the proliferation of public cloud resources at increasingly competitive prices, we have been investigating how to transparently increase our compute capacity to include these providers. The approach taken has been to integrate these resources using our existing deployment and computer management tools and to provide them in a way that exposes them to users as part of the same site. The paper will describe the architecture, the toolset and the current production experiences of this model.

  14. Toward A Simulation-Based Tool for the Treatment of Vocal Fold Paralysis

    PubMed Central

    Mittal, Rajat; Zheng, Xudong; Bhardwaj, Rajneesh; Seo, Jung Hee; Xue, Qian; Bielamowicz, Steven

    2011-01-01

    Advances in high-performance computing are enabling a new generation of software tools that employ computational modeling for surgical planning. Surgical management of laryngeal paralysis is one area where such computational tools could have a significant impact. The current paper describes a comprehensive effort to develop a software tool for planning medialization laryngoplasty where a prosthetic implant is inserted into the larynx in order to medialize the paralyzed vocal fold (VF). While this is one of the most common procedures used to restore voice in patients with VF paralysis, it has a relatively high revision rate, and the tool being developed is expected to improve surgical outcomes. This software tool models the biomechanics of airflow-induced vibration in the human larynx and incorporates sophisticated approaches for modeling the turbulent laryngeal flow, the complex dynamics of the VFs, as well as the production of voiced sound. The current paper describes the key elements of the modeling approach, presents computational results that demonstrate the utility of the approach and also describes some of the limitations and challenges. PMID:21556320

  15. Using a Large Scale Computational Model to Study the Effect of Longitudinal and Radial Electrical Coupling in the Cochlea

    NASA Astrophysics Data System (ADS)

    Mistrík, Pavel; Ashmore, Jonathan

    2009-02-01

    We describe a large scale computational model of electrical current flow in the cochlea which is constructed by a flexible Modified Nodal Analysis algorithm to incorporate electrical components representing hair cells and the intercellular radial and longitudinal current flow. The model is used as a laboratory to study the effects of changing longitudinal gap junctional coupling, and shows the way in which cochlear microphonic spreads and tuning is affected. The process for incorporating mechanical longitudinal coupling and feedback is described. We find a difference in tuning and attenuation depending on whether longitudinal or radial couplings are altered.
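    The nodal-analysis core of such a model can be sketched as a resistive ladder: nodes coupled by longitudinal conductances, each leaking radially to ground, driven by a current source. This is a simplified DC sketch under assumed parameter values; the paper's Modified Nodal Analysis additionally handles capacitive hair-cell elements and sources, which are omitted here.

    ```python
    # Nodal analysis of an N-node ladder: longitudinal conductance g_l between
    # neighbours, radial leak g_r to ground at each node, current i_in at node 0.
    # Assemble G*v = i and solve by Gaussian elimination (illustrative values).

    def solve_ladder(n, g_l, g_r, i_in):
        G = [[0.0] * n for _ in range(n)]
        for k in range(n):
            G[k][k] += g_r                    # radial leak to ground
            if k + 1 < n:                     # longitudinal coupling
                G[k][k] += g_l
                G[k + 1][k + 1] += g_l
                G[k][k + 1] -= g_l
                G[k + 1][k] -= g_l
        rhs = [0.0] * n
        rhs[0] = i_in
        # Gaussian elimination with partial pivoting
        for col in range(n):
            piv = max(range(col, n), key=lambda r: abs(G[r][col]))
            G[col], G[piv] = G[piv], G[col]
            rhs[col], rhs[piv] = rhs[piv], rhs[col]
            for r in range(col + 1, n):
                f = G[r][col] / G[col][col]
                rhs[r] -= f * rhs[col]
                for c in range(col, n):
                    G[r][c] -= f * G[col][c]
        v = [0.0] * n
        for r in range(n - 1, -1, -1):
            v[r] = (rhs[r] - sum(G[r][c] * v[c]
                                 for c in range(r + 1, n))) / G[r][r]
        return v
    ```

    Increasing g_l relative to g_r spreads the injected current further along the ladder before it leaks out, which is the longitudinal-coupling effect the cochlear model is used to study.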

  16. Systems, methods and computer-readable media to model kinetic performance of rechargeable electrochemical devices

    DOEpatents

    Gering, Kevin L.

    2013-01-01

    A system includes an electrochemical cell, monitoring hardware, and a computing system. The monitoring hardware samples performance characteristics of the electrochemical cell. The computing system determines cell information from the performance characteristics. The computing system also analyzes the cell information of the electrochemical cell with a Butler-Volmer (BV) expression modified to determine exchange current density of the electrochemical cell by including kinetic performance information related to pulse-time dependence, electrode surface availability, or a combination thereof. A set of sigmoid-based expressions may be included with the modified-BV expression to determine kinetic performance as a function of pulse time. The determined exchange current density may be used with the modified-BV expression, with or without the sigmoid expressions, to analyze other characteristics of the electrochemical cell. Model parameters can be defined in terms of cell aging, making the overall kinetics model amenable to predictive estimates of cell kinetic performance along the aging timeline.
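    A hedged sketch of the kind of modified Butler-Volmer (BV) expression described here is given below: the standard BV kinetics scaled by a sigmoid factor representing pulse-time-dependent electrode surface availability. The sigmoid form and every parameter value are assumptions for illustration; they are not the patented formulation.

    ```python
    # Butler-Volmer current density with an optional sigmoid pulse-time factor.
    # Sigmoid shape and all parameter values are illustrative assumptions.
    import math

    F, R = 96485.0, 8.314  # Faraday constant (C/mol), gas constant (J/mol/K)

    def sigmoid_availability(t_pulse, t_half=5.0, steepness=1.0):
        """Fraction of electrode surface effectively available at pulse time t."""
        return 1.0 / (1.0 + math.exp(-steepness * (t_pulse - t_half)))

    def bv_current(eta, i0, T=298.15, alpha_a=0.5, alpha_c=0.5, t_pulse=None):
        """BV current density at overpotential eta, exchange current density i0."""
        f = F / (R * T)
        i = i0 * (math.exp(alpha_a * f * eta) - math.exp(-alpha_c * f * eta))
        if t_pulse is not None:
            i *= sigmoid_availability(t_pulse)   # pulse-time dependence
        return i
    ```

    Fitting i0 from constant-current pulses at several currents bracketing the exchange current density, as the abstract describes, would then amount to regressing measured overpotentials against this expression.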

  17. Investigation of the effects of external current systems on the MAGSAT data utilizing grid cell modeling techniques

    NASA Technical Reports Server (NTRS)

    Klumpar, D. M. (Principal Investigator)

    1982-01-01

    Progress made in reducing MAGSAT data and displaying magnetic field perturbations caused primarily by external currents is reported. A periodic and repeatable perturbation pattern is described that arises from external current effects but appears as unique signatures associated with upper middle latitudes on the Earth's surface. Initial testing of the modeling procedure that was developed to compute the magnetic fields at satellite orbit due to current distributions in the ionosphere and magnetosphere is also discussed. The modeling technique utilizes a linear current element representation of the large scale space current system.

  18. Integrating Cloud-Computing-Specific Model into Aircraft Design

    NASA Astrophysics Data System (ADS)

    Zhimin, Tian; Qi, Lin; Guangwen, Yang

    Cloud computing is becoming increasingly relevant, as it will enable companies involved in spreading this technology to open the door to Web 3.0. The new categories of services it introduces will gradually replace many types of computational resources currently in use. In this perspective, grid computing, the basic element for the large-scale supply of cloud services, will play a fundamental role in defining how those services are provided. The paper attempts to integrate a cloud-computing-specific model into aircraft design. This work has achieved good results in sharing licenses for large-scale and expensive software, such as CFD (Computational Fluid Dynamics), UG, CATIA, and so on.

  19. Deep Neural Networks: A New Framework for Modeling Biological Vision and Brain Information Processing.

    PubMed

    Kriegeskorte, Nikolaus

    2015-11-24

    Recent advances in neural network modeling have enabled major strides in computer vision and other artificial intelligence applications. Human-level visual recognition abilities are coming within reach of artificial systems. Artificial neural networks are inspired by the brain, and their computations could be implemented in biological neurons. Convolutional feedforward networks, which now dominate computer vision, take further inspiration from the architecture of the primate visual hierarchy. However, the current models are designed with engineering goals, not to model brain computations. Nevertheless, initial studies comparing internal representations between these models and primate brains find surprisingly similar representational spaces. With human-level performance no longer out of reach, we are entering an exciting new era, in which we will be able to build biologically faithful feedforward and recurrent computational models of how biological brains perform high-level feats of intelligence, including vision.

  20. Tuition Elasticity of the Demand for Higher Education among Current Students: A Pricing Model.

    ERIC Educational Resources Information Center

    Bryan, Glenn A.; Whipple, Thomas W.

    1995-01-01

    A pricing model is offered, based on retention of current students, that colleges can use to determine appropriate tuition. A computer-based model that quantifies the relationship between tuition elasticity and projected net return to the college was developed and applied to determine an appropriate tuition rate for a small, private liberal arts…

  1. Fast-slow asymptotics for a Markov chain model of fast sodium current

    NASA Astrophysics Data System (ADS)

    Starý, Tomáš; Biktashev, Vadim N.

    2017-09-01

    We explore the feasibility of using fast-slow asymptotics to eliminate the computational stiffness of discrete-state, continuous-time deterministic Markov chain models of ionic channels underlying cardiac excitability. We focus on a Markov chain model of fast sodium current, and investigate its asymptotic behaviour with respect to small parameters identified in different ways.
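    The fast-slow idea in this abstract can be illustrated on the simplest possible case: when a gate's transition rates are fast, its ODE can be replaced by the instantaneous (quasi-steady-state) equilibrium, eliminating the stiffness. The two-state channel and all rate expressions below are illustrative assumptions, not the paper's Markov chain model.

    ```python
    # Two-state (closed/open) channel with fast voltage-dependent rates.
    # Compare a stiff explicit-Euler solution with its fast-slow (QSS) limit.
    # Rate expressions and constants are illustrative.
    import math

    def alpha(v): return 5.0 * math.exp(v / 20.0)    # opening rate
    def beta(v):  return 5.0 * math.exp(-v / 20.0)   # closing rate

    def open_prob_stiff(v, t_end=2.0, dt=1e-4, speed=100.0):
        """Explicit Euler on dp/dt = speed*(alpha*(1-p) - beta*p)."""
        p = 0.0
        for _ in range(int(t_end / dt)):
            p += dt * speed * (alpha(v) * (1.0 - p) - beta(v) * p)
        return p

    def open_prob_qss(v):
        """Fast-slow limit: p relaxes instantly to alpha/(alpha+beta)."""
        return alpha(v) / (alpha(v) + beta(v))
    ```

    The stiff integrator needs tens of thousands of small steps to stay stable, while the quasi-steady-state expression gives the same answer in closed form, which is the computational payoff the asymptotic reduction is after.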

  2. Module-based multiscale simulation of angiogenesis in skeletal muscle

    PubMed Central

    2011-01-01

    Background Mathematical modeling of angiogenesis has been gaining momentum as a means to shed new light on the biological complexity underlying blood vessel growth. A variety of computational models have been developed, each focusing on different aspects of the angiogenesis process and occurring at different biological scales, ranging from the molecular to the tissue levels. Integration of models at different scales is a challenging and currently unsolved problem. Results We present an object-oriented module-based computational integration strategy to build a multiscale model of angiogenesis that links currently available models. As an example case, we use this approach to integrate modules representing microvascular blood flow, oxygen transport, vascular endothelial growth factor transport and endothelial cell behavior (sensing, migration and proliferation). Modeling methodologies in these modules include algebraic equations, partial differential equations and agent-based models with complex logical rules. We apply this integrated model to simulate exercise-induced angiogenesis in skeletal muscle. The simulation results compare capillary growth patterns between different exercise conditions for a single bout of exercise. Results demonstrate how the computational infrastructure can effectively integrate multiple modules by coordinating their connectivity and data exchange. Model parameterization offers simulation flexibility and a platform for performing sensitivity analysis. Conclusions This systems biology strategy can be applied to larger scale integration of computational models of angiogenesis in skeletal muscle, or other complex processes in other tissues under physiological and pathological conditions. PMID:21463529

  3. Modeling of unit operating considerations in generating-capacity reliability evaluation. Volume 1. Mathematical models, computing methods, and results. Final report. [GENESIS, OPCON and OPPLAN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patton, A.D.; Ayoub, A.K.; Singh, C.

    1982-07-01

    Existing methods for generating-capacity reliability evaluation do not explicitly recognize a number of operating considerations which may have important effects on system reliability performance. Thus, current methods may yield estimates of system reliability which differ appreciably from actual observed reliability. Further, current methods offer no means of accurately studying or evaluating alternatives which may differ in one or more operating considerations. Operating considerations which are considered to be important in generating-capacity reliability evaluation include: unit duty cycles as influenced by load cycle shape, reliability performance of other units, unit commitment policy, and operating reserve policy; unit start-up failures distinct from unit running failures; unit start-up times; and unit outage postponability and the management of postponable outages. A detailed Monte Carlo simulation computer model called GENESIS and two analytical models called OPCON and OPPLAN have been developed which are capable of incorporating the effects of many operating considerations, including those noted above. These computer models have been used to study a variety of actual and synthetic systems and are available from EPRI. The new models are shown to produce system reliability indices which differ appreciably from index values computed using traditional models which do not recognize operating considerations.

  4. Human Inspired Self-developmental Model of Neural Network (HIM): Introducing Content/Form Computing

    NASA Astrophysics Data System (ADS)

    Krajíček, Jiří

    This paper presents cross-disciplinary research connecting medical/psychological evidence on human abilities with the need in informatics to update current models in computer science so as to support alternative methods of computation and communication. In [10] we have already proposed a hypothesis introducing the concept of the human information model (HIM) as a cooperative system. Here we continue with the HIM design in detail. In our design, we first introduce the Content/Form computing system, a new principle extending present methods in evolutionary computing (genetic algorithms, genetic programming). We then apply this system to the HIM (a type of artificial neural network) as a basic network self-developmental paradigm. The main inspiration for our natural/human design comes from the well-known concept of artificial neural networks, medical/psychological evidence, and Sheldrake's theory of "Nature as Alive" [22].

  5. Manual of phosphoric acid fuel cell stack three-dimensional model and computer program

    NASA Technical Reports Server (NTRS)

    Lu, C. Y.; Alkasab, K. A.

    1984-01-01

    A detailed distributed mathematical model of a phosphoric acid fuel cell stack has been developed, together with a FORTRAN computer program, for analyzing the temperature distribution in the stack and the associated current density distribution on the cell plates. Energy, mass, and electrochemical analyses in the stack were combined to develop the model. Several reasonable assumptions were made so that the mathematical model could be solved by the finite-difference numerical method.

  6. High Performance Computing Application: Solar Dynamo Model Project II, Corona and Heliosphere Component Initialization, Integration and Validation

    DTIC Science & Technology

    2015-06-24

    physically. While not distinct from IH models, they require inner boundary magnetic field and plasma property values, the latter not currently measured...initialization for the computational grid. Model integration continues until a physically consistent steady-state is attained. Because of the more... physical basis and greater likelihood of realistic solutions, only MHD-type coronal models were considered in the review. There are two major types of

  7. Beyond Computer Planning: Managing Educational Computer Innovations.

    ERIC Educational Resources Information Center

    Washington, Wenifort

    The vast underutilization of technology in educational environments suggests the need for more research to develop models to successfully adopt and diffuse computer systems in schools. Of 980 surveys mailed to various Ohio public schools, 529 were completed and returned to help determine current attitudes and perceptions of teachers and…

  8. Modeling Mendel's Laws on Inheritance in Computational Biology and Medical Sciences

    ERIC Educational Resources Information Center

    Singh, Gurmukh; Siddiqui, Khalid; Singh, Mankiran; Singh, Satpal

    2011-01-01

    The current research article is based on a simple and practical way of employing the computational power of widely available, versatile software MS Excel 2007 to perform interactive computer simulations for undergraduate/graduate students in biology, biochemistry, biophysics, microbiology, medicine in college and university classroom setting. To…

  9. Experimental and computational flow-field results for an all-body hypersonic aircraft

    NASA Technical Reports Server (NTRS)

    Cleary, Joseph W.

    1989-01-01

    A comprehensive test program is defined which is being implemented in the NASA/Ames 3.5 foot Hypersonic Wind Tunnel for obtaining data on a generic all-body hypersonic vehicle for computational fluid dynamics (CFD) code validation. Computational methods (approximate inviscid methods and an upwind parabolized Navier-Stokes code) currently being applied to the all-body model are outlined. Experimental and computational results on surface pressure distributions and Pitot-pressure surveys for the basic sharp-nose model (without control surfaces) at a free-stream Mach number of 7 are presented.

  10. Computing maximum-likelihood estimates for parameters of the National Descriptive Model of Mercury in Fish

    USGS Publications Warehouse

    Donato, David I.

    2012-01-01

    This report presents the mathematical expressions and the computational techniques required to compute maximum-likelihood estimates for the parameters of the National Descriptive Model of Mercury in Fish (NDMMF), a statistical model used to predict the concentration of methylmercury in fish tissue. The expressions and techniques reported here were prepared to support the development of custom software capable of computing NDMMF parameter estimates more quickly and using less computer memory than is currently possible with available general-purpose statistical software. Computation of maximum-likelihood estimates for the NDMMF by numerical solution of a system of simultaneous equations through repeated Newton-Raphson iterations is described. This report explains the derivation of the mathematical expressions required for computational parameter estimation in sufficient detail to facilitate future derivations for any revised versions of the NDMMF that may be developed.
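    The NDMMF expressions themselves are not reproduced here, but the report's computational strategy, maximum-likelihood estimation by repeated Newton-Raphson iterations on the score equation, can be sketched on a toy model whose closed-form MLE (the sample mean of a Poisson sample) lets the iteration be checked.

    ```python
    # Newton-Raphson maximum-likelihood estimation, illustrated on a toy
    # Poisson-rate model (NOT the NDMMF): solve score(lam) = 0 iteratively.

    def newton_mle_poisson(data, lam0=1.0, tol=1e-12, max_iter=50):
        """Newton-Raphson on the Poisson log-likelihood; MLE is the mean."""
        n, s = len(data), sum(data)
        lam = lam0
        for _ in range(max_iter):
            score = s / lam - n           # d(log-likelihood)/d(lam)
            hess = -s / lam ** 2          # second derivative
            step = score / hess
            lam -= step                   # Newton update
            if abs(step) < tol:
                break
        return lam
    ```

    For the NDMMF the scalar score and Hessian become a gradient vector and Hessian matrix over all model parameters, but the iteration has the same shape, which is what makes a custom implementation faster and leaner than general-purpose statistical software.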

  11. Analysis on trust influencing factors and trust model from multiple perspectives of online Auction

    NASA Astrophysics Data System (ADS)

    Yu, Wang

    2017-10-01

    Current reputation models lack research on online auction trading, so they cannot fully reflect the reputation status of users and may raise problems of operability. To evaluate user trust in online auctions correctly, a trust computing model based on multiple influencing factors is established. It aims to overcome the inefficiency of current trust computing methods and the limitations of traditional theoretical trust models. The improved model comprehensively considers the trust evaluation factors of three types of participants, according to the different participation modes of online auctioneers, to improve the accuracy, effectiveness, and robustness of the trust degree. The experiments test the efficiency and performance of our model under different scales of malicious users, in environments like eBay and the Sporas model. Analysis of the experimental results shows that the model proposed in this paper makes up for the deficiencies of existing models and has good feasibility.

  12. The use of analytical models in human-computer interface design

    NASA Technical Reports Server (NTRS)

    Gugerty, Leo

    1993-01-01

    Recently, a large number of human-computer interface (HCI) researchers have investigated building analytical models of the user, which are often implemented as computer models. These models simulate the cognitive processes and task knowledge of the user in ways that allow a researcher or designer to estimate various aspects of an interface's usability, such as when user errors are likely to occur. This information can lead to design improvements. Analytical models can supplement design guidelines by providing designers rigorous ways of analyzing the information-processing requirements of specific tasks (i.e., task analysis). These models offer the potential of improving early designs and replacing some of the early phases of usability testing, thus reducing the cost of interface design. This paper describes some of the many analytical models that are currently being developed and evaluates the usefulness of analytical models for human-computer interface design. This paper will focus on computational, analytical models, such as the GOMS model, rather than less formal, verbal models, because the more exact predictions and task descriptions of computational models may be useful to designers. The paper also discusses some of the practical requirements for using analytical models in complex design organizations such as NASA.

  13. A conceptual and computational model of moral decision making in human and artificial agents.

    PubMed

    Wallach, Wendell; Franklin, Stan; Allen, Colin

    2010-07-01

    Recently, there has been a resurgence of interest in general, comprehensive models of human cognition. Such models aim to explain higher-order cognitive faculties, such as deliberation and planning. Given a computational representation, the validity of these models can be tested in computer simulations such as software agents or embodied robots. The push to implement computational models of this kind has created the field of artificial general intelligence (AGI). Moral decision making is arguably one of the most challenging tasks for computational approaches to higher-order cognition. The need for increasingly autonomous artificial agents to factor moral considerations into their choices and actions has given rise to another new field of inquiry variously known as Machine Morality, Machine Ethics, Roboethics, or Friendly AI. In this study, we discuss how LIDA, an AGI model of human cognition, can be adapted to model both affective and rational features of moral decision making. Using the LIDA model, we will demonstrate how moral decisions can be made in many domains using the same mechanisms that enable general decision making. Comprehensive models of human cognition typically aim for compatibility with recent research in the cognitive and neural sciences. Global workspace theory, proposed by the neuropsychologist Bernard Baars (1988), is a highly regarded model of human cognition that is currently being computationally instantiated in several software implementations. LIDA (Franklin, Baars, Ramamurthy, & Ventura, 2005) is one such computational implementation. LIDA is both a set of computational tools and an underlying model of human cognition, which provides mechanisms that are capable of explaining how an agent's selection of its next action arises from bottom-up collection of sensory data and top-down processes for making sense of its current situation. 
We will describe how the LIDA model helps integrate emotions into the human decision-making process, and we will elucidate a process whereby an agent can work through an ethical problem to reach a solution that takes account of ethically relevant factors. Copyright © 2010 Cognitive Science Society, Inc.

  14. Computation of turbulent flows-state-of-the-art, 1970

    NASA Technical Reports Server (NTRS)

    Reynolds, W. C.

    1972-01-01

    The state-of-the-art of turbulent flow computation is surveyed. The formulations were generalized to increase the range of their applicability, and the excitement of current debate on equation models was brought into the review. Some new ideas on the modeling of the pressure-strain term in the Reynolds stress equations are also suggested.

  15. Current Status on the use of Parallel Computing in Turbulent Reacting Flow Computations Involving Sprays, Monte Carlo PDF and Unstructured Grids. Chapter 4

    NASA Technical Reports Server (NTRS)

    Raju, M. S.

    1998-01-01

The state of the art in multidimensional combustor modeling, as evidenced by the level of sophistication employed in terms of modeling and numerical accuracy considerations, is also dictated by the available computer memory and turnaround times afforded by present-day computers. With the aim of advancing the current multi-dimensional computational tools used in the design of advanced technology combustors, a solution procedure is developed that combines the novelty of coupled CFD/spray/scalar Monte Carlo PDF (Probability Density Function) computations on unstructured grids with the ability to run on parallel architectures. In this approach, the mean gas-phase velocity and turbulence fields are determined from a standard turbulence model, the joint composition of species and enthalpy from the solution of a modeled PDF transport equation, and a Lagrangian-based dilute spray model is used for the liquid-phase representation. Gas-turbine combustor flows are often characterized by a complex interaction between various physical processes, including the interaction between the liquid and gas phases, droplet vaporization, turbulent mixing, heat release associated with chemical kinetics, and radiative heat transfer associated with highly absorbing and radiating species. The rate-controlling processes often interact with each other at various disparate time and length scales. In particular, turbulence plays an important role in determining the rates of mass and heat transfer, chemical reactions, and liquid-phase evaporation in many practical combustion devices.

  16. A Research Roadmap for Computation-Based Human Reliability Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boring, Ronald; Mandelli, Diego; Joe, Jeffrey

    2015-08-01

The United States (U.S.) Department of Energy (DOE) is sponsoring research through the Light Water Reactor Sustainability (LWRS) program to extend the life of the currently operating fleet of commercial nuclear power plants. The Risk Informed Safety Margin Characterization (RISMC) research pathway within LWRS looks at ways to maintain and improve the safety margins of these plants. The RISMC pathway includes significant developments in the area of thermal-hydraulics code modeling and the development of tools to facilitate dynamic probabilistic risk assessment (PRA). PRA is primarily concerned with the risk of hardware systems at the plant; yet, hardware reliability is often secondary in overall risk significance to human errors that can trigger or compound undesirable events at the plant. This report highlights ongoing efforts to develop a computation-based approach to human reliability analysis (HRA). This computation-based approach differs from existing static and dynamic HRA approaches in that it: (i) interfaces with a dynamic computation engine that includes a full-scope plant model, and (ii) interfaces with a PRA software toolset. The computation-based HRA approach presented in this report is called the Human Unimodels for Nuclear Technology to Enhance Reliability (HUNTER) and incorporates in a hybrid fashion elements of existing HRA methods to interface with new computational tools developed under the RISMC pathway. The goal of this research effort is to model human performance more accurately than existing approaches, thereby minimizing the modeling uncertainty found in current plant risk models.

  17. Efficient non-hydrostatic modelling of 3D wave-induced currents using a subgrid approach

    NASA Astrophysics Data System (ADS)

    Rijnsdorp, Dirk P.; Smit, Pieter B.; Zijlema, Marcel; Reniers, Ad J. H. M.

    2017-08-01

Wave-induced currents are a ubiquitous feature in coastal waters that can spread material over the surf zone and the inner shelf. These currents are typically under-resolved in non-hydrostatic wave-flow models due to computational constraints. Specifically, the low vertical resolutions that are adequate to describe the wave dynamics - and required to feasibly compute at the scales of a field site - are too coarse to account for the relevant details of the three-dimensional (3D) flow field. To describe the relevant dynamics of both waves and currents, while retaining a model framework that can be applied at field scales, we propose a two-grid approach to solve the governing equations. With this approach, the vertical accelerations and non-hydrostatic pressures are resolved on a relatively coarse vertical grid (which is sufficient to accurately resolve the wave dynamics), whereas the horizontal velocities and turbulent stresses are resolved on a much finer subgrid (whose resolution is dictated by the vertical scale of the mean flows). This approach ensures that the discrete pressure Poisson equation - the solution of which dominates the computational effort - is evaluated at the coarse grid scale, thereby greatly improving efficiency, while providing a fine vertical resolution to resolve the vertical variation of the mean flow. This work presents the general methodology, and discusses the numerical implementation in the SWASH wave-flow model. Model predictions are compared with observations of three flume experiments to demonstrate that the subgrid approach captures both the nearshore evolution of the waves and the wave-induced flows, such as the undertow profile and longshore current. The accuracy of the subgrid predictions is comparable to fully resolved 3D simulations, but at much reduced computational cost. The findings of this work thereby demonstrate that the subgrid approach has the potential to make 3D non-hydrostatic simulations feasible at the scale of a realistic coastal region.
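
The two-grid idea in this record can be illustrated with a minimal sketch: pressure lives on a coarse vertical grid, velocities on a fine one, and the two are connected by prolongation and restriction operators. All layer counts and values below are assumptions for illustration; this is not the SWASH discretisation.

```python
import numpy as np

# Illustrative sketch (not the SWASH implementation): a fine vertical grid
# for velocities and a coarse grid for non-hydrostatic pressure.
n_fine = 20          # fine layers resolving the mean-flow profile
n_coarse = 4         # coarse layers sufficient for the wave dynamics
group = n_fine // n_coarse

# Hypothetical coarse-grid pressure values (one per coarse layer).
p_coarse = np.array([1.0, 0.7, 0.4, 0.1])

# Piecewise-constant prolongation of the coarse pressure onto the fine grid,
# so the fine-grid momentum equations can use it.
p_fine = np.repeat(p_coarse, group)

# Restriction (layer averaging) of a fine-grid field back to the coarse grid.
u_fine = np.linspace(0.0, 1.0, n_fine)
u_coarse = u_fine.reshape(n_coarse, group).mean(axis=1)

# The Poisson solve now involves n_coarse unknowns per column instead of
# n_fine, which is where the efficiency gain comes from.
print(p_fine.shape, u_coarse.shape)  # (20,) (4,)
```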

  18. Assessment in health care education - modelling and implementation of a computer supported scoring process.

    PubMed

    Alfredsson, Jayne; Plichart, Patrick; Zary, Nabil

    2012-01-01

Research on computer supported scoring of assessments in health care education has mainly focused on automated scoring. Little attention has been given to how informatics can support the currently predominant human-based grading approach. This paper reports steps taken to develop a model for a computer supported scoring process that focuses on optimizing a task that was previously undertaken without computer support. The model was also implemented in the open source assessment platform TAO in order to study its benefits. The ability to score test takers anonymously, analytics on grader reliability, and a more time-efficient process are examples of the observed benefits. Computer supported scoring will increase the quality of assessment results.

  19. Emergence of Landauer transport from quantum dynamics: A model Hamiltonian approach

    NASA Astrophysics Data System (ADS)

    Pal, Partha Pratim; Ramakrishna, S.; Seideman, Tamar

    2018-04-01

    The Landauer expression for computing current-voltage characteristics in nanoscale devices is efficient but not suited to transient phenomena and a time-dependent current because it is applicable only when the charge carriers transition into a steady flux after an external perturbation. In this article, we construct a very general expression for time-dependent current in an electrode-molecule-electrode arrangement. Utilizing a model Hamiltonian (consisting of the subsystem energy levels and their electronic coupling terms), we propagate the Schrödinger wave function equation to numerically compute the time-dependent population in the individual subsystems. The current in each electrode (defined in terms of the rate of change of the corresponding population) has two components, one due to the charges originating from the same electrode and the other due to the charges initially residing at the other electrode. We derive an analytical expression for the first component and illustrate that it agrees reasonably with its numerical counterpart at early times. Exploiting the unitary evolution of a wavefunction, we construct a more general Landauer style formula and illustrate the emergence of Landauer transport from our simulations without the assumption of time-independent charge flow. Our generalized Landauer formula is valid at all times for models beyond the wide-band limit, non-uniform electrode density of states and for time and energy-dependent electronic coupling between the subsystems. Subsequently, we investigate the ingredients in our model that regulate the onset time scale of this steady state. We compare the performance of our general current expression with the Landauer current for time-dependent electronic coupling. Finally, we comment on the applicability of the Landauer formula to compute hot-electron current arising upon plasmon decoherence.
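
The approach described in this record, propagating a wavefunction under a model Hamiltonian and obtaining the current from the rate of change of electrode populations, can be sketched with a toy tight-binding system. The chain lengths, couplings, and initial state below are illustrative assumptions, not the authors' model.

```python
import numpy as np

# Toy electrode-molecule-electrode sketch: each electrode is a short
# tight-binding chain coupled to a single molecular level.
n_lead = 8                      # sites per electrode (assumed)
eps_mol = 0.0                   # molecular level energy
t_lead, t_coup = 1.0, 0.3       # intra-lead hopping and lead-molecule coupling

dim = 2 * n_lead + 1
H = np.zeros((dim, dim))
for i in range(n_lead - 1):             # left lead (sites 0..n_lead-1)
    H[i, i + 1] = H[i + 1, i] = -t_lead
for i in range(n_lead + 1, dim - 1):    # right lead
    H[i, i + 1] = H[i + 1, i] = -t_lead
m = n_lead                              # molecule index
H[m, m] = eps_mol
H[m - 1, m] = H[m, m - 1] = -t_coup
H[m, m + 1] = H[m + 1, m] = -t_coup

# Initial state: a single carrier localised at the end of the left lead.
psi0 = np.zeros(dim, dtype=complex)
psi0[0] = 1.0

# Exact unitary propagation via the eigendecomposition of H.
w, v = np.linalg.eigh(H)
def evolve(t):
    return v @ (np.exp(-1j * w * t) * (v.conj().T @ psi0))

# Population of the left electrode; the left-electrode current is the rate
# of change -dN_L/dt, here approximated by a central finite difference.
def n_left(t):
    return float(np.sum(np.abs(evolve(t)[:n_lead]) ** 2))

dt, t = 0.01, 2.0
current = -(n_left(t + dt) - n_left(t - dt)) / (2 * dt)
print(current)
```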

  20. Emergence of Landauer transport from quantum dynamics: A model Hamiltonian approach.

    PubMed

    Pal, Partha Pratim; Ramakrishna, S; Seideman, Tamar

    2018-04-14

    The Landauer expression for computing current-voltage characteristics in nanoscale devices is efficient but not suited to transient phenomena and a time-dependent current because it is applicable only when the charge carriers transition into a steady flux after an external perturbation. In this article, we construct a very general expression for time-dependent current in an electrode-molecule-electrode arrangement. Utilizing a model Hamiltonian (consisting of the subsystem energy levels and their electronic coupling terms), we propagate the Schrödinger wave function equation to numerically compute the time-dependent population in the individual subsystems. The current in each electrode (defined in terms of the rate of change of the corresponding population) has two components, one due to the charges originating from the same electrode and the other due to the charges initially residing at the other electrode. We derive an analytical expression for the first component and illustrate that it agrees reasonably with its numerical counterpart at early times. Exploiting the unitary evolution of a wavefunction, we construct a more general Landauer style formula and illustrate the emergence of Landauer transport from our simulations without the assumption of time-independent charge flow. Our generalized Landauer formula is valid at all times for models beyond the wide-band limit, non-uniform electrode density of states and for time and energy-dependent electronic coupling between the subsystems. Subsequently, we investigate the ingredients in our model that regulate the onset time scale of this steady state. We compare the performance of our general current expression with the Landauer current for time-dependent electronic coupling. Finally, we comment on the applicability of the Landauer formula to compute hot-electron current arising upon plasmon decoherence.

  1. From mitochondrial ion channels to arrhythmias in the heart: computational techniques to bridge the spatio-temporal scales

    PubMed Central

    Plank, Gernot; Zhou, Lufang; Greenstein, Joseph L; Cortassa, Sonia; Winslow, Raimond L; O'Rourke, Brian; Trayanova, Natalia A

    2008-01-01

    Computer simulations of electrical behaviour in the whole ventricles have become commonplace during the last few years. The goals of this article are (i) to review the techniques that are currently employed to model cardiac electrical activity in the heart, discussing the strengths and weaknesses of the various approaches, and (ii) to implement a novel modelling approach, based on physiological reasoning, that lifts some of the restrictions imposed by current state-of-the-art ionic models. To illustrate the latter approach, the present study uses a recently developed ionic model of the ventricular myocyte that incorporates an excitation–contraction coupling and mitochondrial energetics model. A paradigm to bridge the vastly disparate spatial and temporal scales, from subcellular processes to the entire organ, and from sub-microseconds to minutes, is presented. Achieving sufficient computational efficiency is the key to success in the quest to develop multiscale realistic models that are expected to lead to better understanding of the mechanisms of arrhythmia induction following failure at the organelle level, and ultimately to the development of novel therapeutic applications. PMID:18603526

  2. Development of a numerical model for the electric current in burner-stabilised methane-air flames

    NASA Astrophysics Data System (ADS)

    Speelman, N.; de Goey, L. P. H.; van Oijen, J. A.

    2015-03-01

    This study presents a new model to simulate the electric behaviour of one-dimensional ionised flames and to predict the electric currents in these flames. The model utilises Poisson's equation to compute the electric potential. A multi-component diffusion model, including the influence of an electric field, is used to model the diffusion of neutral and charged species. The model is incorporated into the existing CHEM1D flame simulation software. A comparison between the computed electric currents and experimental values from the literature shows good qualitative agreement for the voltage-current characteristic. Physical phenomena, such as saturation and the diodic effect, are captured by the model. The dependence of the saturation current on the equivalence ratio is also captured well for equivalence ratios between 0.6 and 1.2. Simulations show a clear relation between the saturation current and the total number of charged particles created. The model shows that the potential at which the electric field saturates is strongly dependent on the recombination rate and the diffusivity of the charged particles. The onset of saturation occurs because most created charged particles are withdrawn from the flame and because the electric field effects start dominating over mass based diffusion. It is shown that this knowledge can be used to optimise ionisation chemistry mechanisms. It is shown numerically that the so-called diodic effect is caused primarily by the distance the heavier cations have to travel to the cathode.
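
The electrostatic core of such a model, Poisson's equation for the potential given a net charge density, can be sketched in one dimension as follows. The grid, charge profile, and boundary potentials are illustrative assumptions; this is not the CHEM1D implementation.

```python
import numpy as np

# Minimal 1D finite-difference Poisson solve, d^2 phi/dx^2 = -rho/eps0,
# with Dirichlet boundaries standing in for electrode potentials.
eps0 = 8.854e-12                 # vacuum permittivity, F/m
n = 101
L = 0.01                         # electrode gap, m (assumed)
x = np.linspace(0.0, L, n)
h = x[1] - x[0]

rho = np.zeros(n)                # net charge density; a thin "flame" sheet
rho[n // 2 - 2 : n // 2 + 3] = 1e-6

phi_left, phi_right = 0.0, 100.0  # applied potentials, V (assumed)

# Tridiagonal Laplacian for the interior nodes.
A = np.zeros((n - 2, n - 2))
np.fill_diagonal(A, -2.0)
np.fill_diagonal(A[1:], 1.0)      # subdiagonal
np.fill_diagonal(A[:, 1:], 1.0)   # superdiagonal
b = -rho[1:-1] / eps0 * h**2
b[0] -= phi_left                  # move boundary values to the RHS
b[-1] -= phi_right

phi = np.empty(n)
phi[0], phi[-1] = phi_left, phi_right
phi[1:-1] = np.linalg.solve(A, b)
```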

  3. Population of computational rabbit-specific ventricular action potential models for investigating sources of variability in cellular repolarisation.

    PubMed

    Gemmell, Philip; Burrage, Kevin; Rodriguez, Blanca; Quinn, T Alexander

    2014-01-01

    Variability is observed at all levels of cardiac electrophysiology. Yet, the underlying causes and importance of this variability are generally unknown, and difficult to investigate with current experimental techniques. The aim of the present study was to generate populations of computational ventricular action potential models that reproduce experimentally observed intercellular variability of repolarisation (represented by action potential duration) and to identify its potential causes. A systematic exploration of the effects of simultaneously varying the magnitude of six transmembrane current conductances (transient outward, rapid and slow delayed rectifier K(+), inward rectifying K(+), L-type Ca(2+), and Na(+)/K(+) pump currents) in two rabbit-specific ventricular action potential models (Shannon et al. and Mahajan et al.) at multiple cycle lengths (400, 600, 1,000 ms) was performed. This was accomplished with distributed computing software specialised for multi-dimensional parameter sweeps and grid execution. An initial population of 15,625 parameter sets was generated for both models at each cycle length. Action potential durations of these populations were compared to experimentally derived ranges for rabbit ventricular myocytes. 1,352 parameter sets for the Shannon model and 779 parameter sets for the Mahajan model yielded action potential duration within the experimental range, demonstrating that a wide array of ionic conductance values can be used to simulate a physiological rabbit ventricular action potential. Furthermore, by using clutter-based dimension reordering, a technique that allows visualisation of multi-dimensional spaces in two dimensions, the interaction of current conductances and their relative importance to the ventricular action potential at different cycle lengths were revealed. 
Overall, this work represents an important step towards a better understanding of the role that variability in current conductances may play in experimentally observed intercellular variability of rabbit ventricular action potential repolarisation.
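
The population-of-models workflow, sweeping conductance scaling factors and retaining parameter sets whose action potential duration (APD) falls within an experimental range, can be sketched as follows. The APD surrogate, scaling factors, and acceptance range are made-up stand-ins for the Shannon and Mahajan models and the experimental data.

```python
import itertools

# Sketch of the population-of-models workflow: sweep scaling factors for a
# few conductances, evaluate a cheap APD surrogate, and keep parameter sets
# whose APD falls in an assumed experimental range.
def apd_surrogate(g_to, g_kr, g_ks):
    # Hypothetical relationship: repolarising currents shorten APD (ms).
    # This is NOT a real myocyte model, just an illustrative stand-in.
    return 250.0 / (0.4 * g_to + 0.4 * g_kr + 0.2 * g_ks)

scales = [0.5, 1.0, 1.5, 2.0, 2.5]   # multipliers on baseline conductance
apd_range = (150.0, 300.0)           # assumed experimental APD range, ms

population = [
    p for p in itertools.product(scales, repeat=3)
    if apd_range[0] <= apd_surrogate(*p) <= apd_range[1]
]
print(len(population), "of", len(scales) ** 3, "parameter sets accepted")
```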

  4. Population of Computational Rabbit-Specific Ventricular Action Potential Models for Investigating Sources of Variability in Cellular Repolarisation

    PubMed Central

    Gemmell, Philip; Burrage, Kevin; Rodriguez, Blanca; Quinn, T. Alexander

    2014-01-01

    Variability is observed at all levels of cardiac electrophysiology. Yet, the underlying causes and importance of this variability are generally unknown, and difficult to investigate with current experimental techniques. The aim of the present study was to generate populations of computational ventricular action potential models that reproduce experimentally observed intercellular variability of repolarisation (represented by action potential duration) and to identify its potential causes. A systematic exploration of the effects of simultaneously varying the magnitude of six transmembrane current conductances (transient outward, rapid and slow delayed rectifier K+, inward rectifying K+, L-type Ca2+, and Na+/K+ pump currents) in two rabbit-specific ventricular action potential models (Shannon et al. and Mahajan et al.) at multiple cycle lengths (400, 600, 1,000 ms) was performed. This was accomplished with distributed computing software specialised for multi-dimensional parameter sweeps and grid execution. An initial population of 15,625 parameter sets was generated for both models at each cycle length. Action potential durations of these populations were compared to experimentally derived ranges for rabbit ventricular myocytes. 1,352 parameter sets for the Shannon model and 779 parameter sets for the Mahajan model yielded action potential duration within the experimental range, demonstrating that a wide array of ionic conductance values can be used to simulate a physiological rabbit ventricular action potential. Furthermore, by using clutter-based dimension reordering, a technique that allows visualisation of multi-dimensional spaces in two dimensions, the interaction of current conductances and their relative importance to the ventricular action potential at different cycle lengths were revealed. 
Overall, this work represents an important step towards a better understanding of the role that variability in current conductances may play in experimentally observed intercellular variability of rabbit ventricular action potential repolarisation. PMID:24587229

  5. Modelling of eddy currents related to large angle magnetic suspension test fixture

    NASA Technical Reports Server (NTRS)

    Britcher, Colin P.; Foster, Lucas E.

    1994-01-01

    This report presents a preliminary analysis of the mathematical modelling of eddy current effects in a large-gap magnetic suspension system. It is shown that eddy currents can significantly affect the dynamic behavior and control of these systems, but are amenable to measurement and modelling. A theoretical framework is presented, together with a comparison of computed and experimental data related to the Large Angle Magnetic Suspension Test Fixture at NASA Langley Research Center.

  6. Developing the Polynomial Expressions for Fields in the ITER Tokamak

    NASA Astrophysics Data System (ADS)

    Sharma, Stephen

    2017-10-01

    The two most important problems to be solved in the development of working nuclear fusion power plants are: sustained partial ignition and turbulence. These two phenomena are the subject of research and investigation through the development of analytic functions and computational models. Ansatz development through Gaussian wave-function approximations, dielectric quark models, field solutions using new elliptic functions, and better descriptions of the polynomials of the superconducting current loops are the critical theoretical developments that need to be improved. Euler-Lagrange equations of motion in addition to geodesic formulations generate the particle model which should correspond to the Dirac dispersive scattering coefficient calculations and the fluid plasma model. Feynman-Hellman formalism and Heaviside step functional forms are introduced to the fusion equations to produce simple expressions for the kinetic energy and loop currents. Conclusively, a polynomial description of the current loops, the Biot-Savart field, and the Lagrangian must be uncovered before there can be an adequate computational and iterative model of the thermonuclear plasma.

  7. Expressions for Fields in the ITER Tokamak

    NASA Astrophysics Data System (ADS)

    Sharma, Stephen

    2017-10-01

The two most important problems to be solved in the development of working nuclear fusion power plants are: sustained partial ignition and turbulence. These two phenomena are the subject of research and investigation through the development of analytic functions and computational models. Ansatz development through Gaussian wave-function approximations, dielectric quark models, field solutions using new elliptic functions, and better descriptions of the polynomials of the superconducting current loops are the critical theoretical developments that need to be improved. Euler-Lagrange equations of motion in addition to geodesic formulations generate the particle model which should correspond to the Dirac dispersive scattering coefficient calculations and the fluid plasma model. Feynman-Hellman formalism and Heaviside step functional forms are introduced to the fusion equations to produce simple expressions for the kinetic energy and loop currents. Conclusively, a polynomial description of the current loops, the Biot-Savart field, and the Lagrangian must be uncovered before there can be an adequate computational and iterative model of the thermonuclear plasma.

  8. A Parametric Computational Analysis into Galvanic Coupling Intrabody Communication.

    PubMed

    Callejon, M Amparo; Del Campo, P; Reina-Tosina, Javier; Roa, Laura M

    2017-08-02

Intrabody Communication (IBC) uses the human body tissues as transmission media for electrical signals to interconnect personal health devices in wireless body area networks. The main goal of this work is to conduct a computational analysis covering some bioelectric issues that still have not been fully explained, such as the modeling of the skin-electrode impedance, the differences associated with the use of constant-voltage or constant-current excitation modes, or the influence on attenuation of the subject's anthropometrical and bioelectric properties. With this aim, a computational finite element model has been developed, allowing the IBC channel attenuation, as well as the electric field and current density through arm tissues, to be computed as a function of these parameters. In conclusion, this parametric analysis has in turn provided insight into the causes and effects of the above-mentioned issues, explaining and complementing previous results reported in the literature.

  9. Multi-phase models for water and thermal management of proton exchange membrane fuel cell: A review

    NASA Astrophysics Data System (ADS)

    Zhang, Guobin; Jiao, Kui

    2018-07-01

The 3D (three-dimensional) multi-phase CFD (computational fluid dynamics) model is widely utilized in optimizing water and thermal management of PEM (proton exchange membrane) fuel cells. However, a satisfactory 3D multi-phase CFD model that can simulate the detailed gas-liquid two-phase flow in channels and precisely reflect its effect on performance has still not been developed, due to coupling difficulties and computational cost. Meanwhile, the agglomerate model of the CL (catalyst layer) should also be added to the 3D CFD model to better reflect the concentration loss and optimize the CL structure at the macroscopic scale. Besides, the effect of thermal management is perhaps underestimated in current 3D multi-phase CFD simulations due to the lack of coolant channels in the computational domain and the use of constant-temperature boundary conditions. Therefore, 3D CFD simulations at the cell and stack levels with convective boundary conditions are suggested to simulate water and thermal management more accurately. Nevertheless, with the rapid development of PEM fuel cells, current 3D CFD simulations are far from meeting practical demand, especially at high current density and low-to-zero humidity and for recently developed novel designs such as metal foam flow fields, 3D fine-mesh flow fields, and anode circulation.

  10. CASL VMA Milestone Report FY16 (L3:VMA.VUQ.P13.08): Westinghouse Mixing with STAR-CCM+

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gilkey, Lindsay Noelle

    2016-09-30

STAR-CCM+ (STAR) is a high-resolution computational fluid dynamics (CFD) code developed by CD-adapco. STAR includes validated physics models and a full suite of turbulence models, including ones from the k-ε and k-ω families. STAR is currently being developed to handle two-phase flows, but the current focus of the software is single-phase flow. STAR can use imported meshes or its built-in meshing software to create computational domains for CFD. Since the solvers generally require a fine mesh for good computational results, the meshes used with STAR tend to number in the millions of cells, with that number growing with simulation and geometry complexity. The time required to model the flow of a full 5x5 Mixing Vane Grid Assembly (5x5MVG) in the current STAR configuration is on the order of hours, and can be very computationally expensive. COBRA-TF (CTF) is a low-resolution subchannel code that can be trained using high fidelity data from STAR. CTF does not have turbulence models and instead uses a turbulent mixing coefficient β. With a properly calibrated β, CTF can be used as a low-computational-cost alternative to expensive full CFD calculations performed with STAR. During the Hi2Lo work with CTF and STAR, STAR-CCM+ will be used to calibrate β and to provide high-resolution results that can be used in place of and in addition to experimental results to reduce the uncertainty in the CTF results.
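
The Hi2Lo calibration idea, tuning a low-fidelity mixing coefficient β against high-resolution reference data, can be sketched as a least-squares fit. The linear mixing law and the data values below are illustrative assumptions, not CTF's actual closure or STAR-CCM+ output.

```python
import numpy as np

# Hi2Lo calibration sketch: fit the mixing coefficient beta so a simple
# low-fidelity mixing law best matches high-resolution reference data
# (standing in for STAR-CCM+ results).
mass_flux = np.array([100.0, 200.0, 300.0, 400.0])   # axial mass flux (assumed)
w_ref = np.array([0.52, 1.03, 1.49, 2.05])           # "CFD" crossflow data

# Assumed low-fidelity model: lateral mixing rate w' = beta * G.
# Closed-form least-squares solution for a one-parameter linear fit:
beta = float(mass_flux @ w_ref / (mass_flux @ mass_flux))
w_model = beta * mass_flux
rmse = float(np.sqrt(np.mean((w_model - w_ref) ** 2)))
print(f"beta = {beta:.4f}, rmse = {rmse:.3f}")
```

The same one-parameter fit generalises directly to many reference conditions; in practice one would weight the residuals by the uncertainty of the high-resolution data.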

  11. Integral equation methods for computing likelihoods and their derivatives in the stochastic integrate-and-fire model.

    PubMed

    Paninski, Liam; Haith, Adrian; Szirtes, Gabor

    2008-02-01

    We recently introduced likelihood-based methods for fitting stochastic integrate-and-fire models to spike train data. The key component of this method involves the likelihood that the model will emit a spike at a given time t. Computing this likelihood is equivalent to computing a Markov first passage time density (the probability that the model voltage crosses threshold for the first time at time t). Here we detail an improved method for computing this likelihood, based on solving a certain integral equation. This integral equation method has several advantages over the techniques discussed in our previous work: in particular, the new method has fewer free parameters and is easily differentiable (for gradient computations). The new method is also easily adaptable for the case in which the model conductance, not just the input current, is time-varying. Finally, we describe how to incorporate large deviations approximations to very small likelihoods.
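
The quantity at the heart of this method, the first-passage-time density of the model voltage, can be estimated for intuition by direct Monte Carlo simulation of a leaky integrate-and-fire neuron. The paper instead computes it by solving an integral equation; all parameter values below are illustrative.

```python
import numpy as np

# Monte Carlo estimate of the first-passage-time density: the probability
# that the model voltage first crosses threshold at time t.
rng = np.random.default_rng(0)

def first_passage_times(n_trials=20000, dt=1e-3, t_max=1.0,
                        tau=0.02, i_in=1.2, v_th=1.0, sigma=0.3):
    # Euler-Maruyama for dV = ((i_in - V)/tau) dt + sigma dW, V(0) = 0.
    n_steps = int(t_max / dt)
    v = np.zeros(n_trials)
    fpt = np.full(n_trials, np.inf)      # inf marks "never fired"
    alive = np.ones(n_trials, dtype=bool)
    for k in range(n_steps):
        noise = sigma * np.sqrt(dt) * rng.standard_normal(alive.sum())
        v[alive] += dt * (i_in - v[alive]) / tau + noise
        crossed = alive & (v >= v_th)
        fpt[crossed] = (k + 1) * dt      # record first threshold crossing
        alive &= ~crossed
    return fpt

fpt = first_passage_times()
# A histogram of the finite first-passage times approximates the density.
finite = fpt[np.isfinite(fpt)]
print(f"fraction fired: {finite.size / fpt.size:.2f}, "
      f"median FPT: {np.median(finite) * 1e3:.1f} ms")
```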

  12. Software Systems for High-performance Quantum Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Humble, Travis S; Britt, Keith A

Quantum computing promises new opportunities for solving hard computational problems, but harnessing this novelty requires breakthrough concepts in the design, operation, and application of computing systems. We define some of the challenges facing the development of quantum computing systems as well as software-based approaches that can be used to overcome these challenges. Following a brief overview of the state of the art, we present the quantum programming and execution models, the development of architectures for hybrid high-performance computing systems, and the realization of software stacks for quantum networking. This leads to a discussion of the role that conventional computing plays in the quantum paradigm and how some of the current challenges for exascale computing overlap with those facing quantum computing.

  13. A simplified solar cell array modelling program

    NASA Technical Reports Server (NTRS)

    Hughes, R. D.

    1982-01-01

    As part of the energy conversion/self sufficiency efforts of DSN engineering, it was necessary to have a simplified computer model of a solar photovoltaic (PV) system. This article describes the analysis and simplifications employed in the development of a PV cell array computer model. The analysis of the incident solar radiation, steady state cell temperature and the current-voltage characteristics of a cell array are discussed. A sample cell array was modelled and the results are presented.
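
The current-voltage characteristic of a cell can be sketched with the standard single-diode model, with series and shunt resistances neglected for simplicity. The parameter values are illustrative, not taken from the article's array model.

```python
import numpy as np

# Single-diode sketch of a PV cell's I-V characteristic:
#   I = I_L - I_0 * (exp(V / (n * Vt)) - 1)
q, k = 1.602e-19, 1.381e-23
T = 298.15                       # cell temperature, K (assumed)
Vt = k * T / q                   # thermal voltage, ~0.0257 V
I_L, I_0, n = 3.0, 1e-9, 1.2     # photocurrent (A), saturation current (A),
                                 # ideality factor -- all illustrative

def cell_current(v):
    return I_L - I_0 * (np.exp(v / (n * Vt)) - 1.0)

v = np.linspace(0.0, 0.7, 141)
i = cell_current(v)
p = v * i
v_mp = float(v[np.argmax(p)])    # voltage at the maximum power point
print(f"Isc = {cell_current(0.0):.2f} A, Vmp ~ {v_mp:.2f} V")
```

A full array model would sum such curves over series/parallel cell strings and make I_L a function of incident irradiance and cell temperature, as the article's analysis does.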

  14. Evaluation of a Computational Model of Situational Awareness

    NASA Technical Reports Server (NTRS)

    Burdick, Mark D.; Shively, R. Jay; Rutkewski, Michael (Technical Monitor)

    2000-01-01

Although the use of the psychological construct of situational awareness (SA) assists researchers in creating a flight environment that is safer and more predictable, its true potential remains untapped until a valid means of predicting SA a priori becomes available. Previous work proposed a computational model of SA (CSA) that sought to fill that void. The current line of research is aimed at validating that model. The results show that the model accurately predicted SA in a piloted simulation.

  15. Longitudinal train dynamics: an overview

    NASA Astrophysics Data System (ADS)

    Wu, Qing; Spiryagin, Maksym; Cole, Colin

    2016-12-01

    This paper discusses the evolution of longitudinal train dynamics (LTD) simulations, which covers numerical solvers, vehicle connection systems, air brake systems, wagon dumper systems and locomotives, resistance forces and gravitational components, vehicle in-train instabilities, and computing schemes. A number of potential research topics are suggested, such as modelling of friction, polymer, and transition characteristics for vehicle connection simulations, studies of wagon dumping operations, proper modelling of vehicle in-train instabilities, and computing schemes for LTD simulations. Evidence shows that LTD simulations have evolved with computing capabilities. Currently, advanced component models that directly describe the working principles of the operation of air brake systems, vehicle connection systems, and traction systems are available. Parallel computing is a good solution to combine and simulate all these advanced models. Parallel computing can also be used to conduct three-dimensional long train dynamics simulations.
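
A minimal LTD simulation in the spirit of this overview, a locomotive and trailing wagons joined by spring-damper couplers, can be sketched as follows. Real draft gears are strongly nonlinear; all masses, stiffnesses, and forces here are illustrative.

```python
import numpy as np

# Minimal longitudinal train dynamics sketch: 1 locomotive + 4 wagons
# connected by linear spring-damper couplers, semi-implicit Euler stepping.
n = 5
mass = np.full(n, 80e3)       # vehicle masses, kg (assumed)
k_c, c_c = 2e6, 1e5           # coupler stiffness (N/m) and damping (N s/m)
F_trac = 200e3                # locomotive traction force, N

dt, t_end = 1e-3, 10.0
x = np.zeros(n)               # positions along the track
v = np.zeros(n)               # velocities
for _ in range(int(t_end / dt)):
    f = np.zeros(n)
    f[0] += F_trac            # vehicle 0 is the locomotive
    # Coupler i joins vehicle i and i+1; positive stretch pulls them together.
    stretch = x[:-1] - x[1:]
    rate = v[:-1] - v[1:]
    f_coupler = k_c * stretch + c_c * rate
    f[:-1] -= f_coupler
    f[1:] += f_coupler
    v += dt * f / mass        # update velocities first (semi-implicit Euler)
    x += dt * v

# At steady acceleration a = F/(total mass), each coupler carries the force
# needed to accelerate all the vehicles behind it.
a = F_trac / mass.sum()
print(f"acceleration ~ {v[0] / t_end:.3f} m/s^2 (expected {a:.3f})")
```

Parallelising across vehicle groups, as the paper suggests, becomes attractive once nonlinear draft-gear, air-brake, and traction models replace the linear couplers used here.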

  16. Cloud Computing in the Curricula of Schools of Computer Science and Information Systems

    ERIC Educational Resources Information Center

    Lawler, James P.

    2011-01-01

The cloud continues to be a developing area of information systems. Evangelistic literature in the practitioner field indicates benefit for business firms but disruption for technology departments of the firms. Though the cloud is currently immature in methodology, this study defines a model program by which computer science and information…

  17. Parental Perceptions and Recommendations of Computing Majors: A Technology Acceptance Model Approach

    ERIC Educational Resources Information Center

    Powell, Loreen; Wimmer, Hayden

    2017-01-01

    Currently, there are more technology-related jobs than there are graduates to fill them. Understanding user acceptance of computing degrees is the first step toward increasing enrollment in computing fields. Additionally, valid measurement scales for predicting user acceptance of Information Technology degree programs are required. The majority…

  18. Can cloud computing benefit health services? - a SWOT analysis.

    PubMed

    Kuo, Mu-Hsing; Kushniruk, Andre; Borycki, Elizabeth

    2011-01-01

    In this paper, we discuss cloud computing, the current state of cloud computing in healthcare, and the challenges and opportunities of adopting cloud computing in healthcare. A Strengths, Weaknesses, Opportunities and Threats (SWOT) analysis was used to evaluate the feasibility of adopting this computing model in healthcare. The paper concludes that cloud computing could have huge benefits for healthcare but there are a number of issues that will need to be addressed before its widespread use in healthcare.

  19. Advanced computer techniques for inverse modeling of electric current in cardiac tissue

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hutchinson, S.A.; Romero, L.A.; Diegert, C.F.

    1996-08-01

    For many years, ECGs and vector cardiograms have been the tools of choice for non-invasive diagnosis of cardiac conduction problems, such as those found in reentrant tachycardia or Wolff-Parkinson-White (WPW) syndrome. Through skillful analysis of these skin-surface measurements of cardiac generated electric currents, a physician can deduce the general location of heart conduction irregularities. Using a combination of high-fidelity geometry modeling, advanced mathematical algorithms and massively parallel computing, Sandia's approach would provide much more accurate information and thus allow the physician to pinpoint the source of an arrhythmia or abnormal conduction pathway.

  20. Load Balancing Strategies for Multiphase Flows on Structured Grids

    NASA Astrophysics Data System (ADS)

    Olshefski, Kristopher; Owkes, Mark

    2017-11-01

    The computation time required to perform large simulations of complex systems is currently one of the leading bottlenecks of computational research. Parallelization allows multiple processing cores to perform calculations simultaneously and reduces computational times. However, load imbalances between processors waste computing resources as processors wait for others to complete imbalanced tasks. In multiphase flows, these imbalances arise due to the additional computational effort required at the gas-liquid interface. However, many current load balancing schemes are designed only for unstructured grid applications. The purpose of this research is to develop a load balancing strategy while maintaining the simplicity of a structured grid. Several approaches are investigated, including brute-force oversubscription, node oversubscription through Message Passing Interface (MPI) commands, and shared-memory load balancing using OpenMP. Each of these strategies is tested with a simple one-dimensional model prior to implementation into the three-dimensional NGA code. Current results show load balancing will reduce computational time by at least 30%.
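    The cost-weighted partitioning idea behind such load balancing can be sketched with a greedy prefix-sum cut; the per-cell costs and the 4x interface weighting below are illustrative assumptions, not parameters of the NGA code:

```python
def balanced_partition(weights, nprocs):
    """Split a 1-D list of per-cell costs into nprocs contiguous chunks
    whose cumulative work is as even as possible (greedy prefix-sum cut)."""
    total = sum(weights)
    target = total / nprocs
    cuts, acc, rank = [], 0.0, 1
    for i, w in enumerate(weights):
        acc += w
        if acc >= rank * target and rank < nprocs:
            cuts.append(i + 1)   # cut after cell i
            rank += 1
    starts = [0] + cuts
    ends = cuts + [len(weights)]
    return list(zip(starts, ends))

# 12 cells; cells 5-6 lie on the gas-liquid interface and cost 4x as much.
costs = [1, 1, 1, 1, 1, 4, 4, 1, 1, 1, 1, 1]
parts = balanced_partition(costs, 3)
loads = [sum(costs[a:b]) for a, b in parts]
print(parts, loads)  # → [(0, 6), (6, 7), (7, 12)] [9, 4, 5]
```

    A naive equal-count split of the same cells would give loads [4, 10, 4], so weighting cells by cost lowers the maximum per-processor work.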

  1. Quantitative, steady-state properties of Catania's computational model of the operant reserve.

    PubMed

    Berg, John P; McDowell, J J

    2011-05-01

    Catania (2005) found that a computational model of the operant reserve (Skinner, 1938) produced realistic behavior in initial, exploratory analyses. Although Catania's operant reserve computational model demonstrated potential to simulate varied behavioral phenomena, the model was not systematically tested. The current project replicated and extended the Catania model, clarified its capabilities through systematic testing, and determined the extent to which it produces behavior corresponding to matching theory. Significant departures from both classic and modern matching theory were found in behavior generated by the model across all conditions. The results suggest that a simple, dynamic operant model of the reflex reserve does not simulate realistic steady state behavior. Copyright © 2011 Elsevier B.V. All rights reserved.
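    As background for the matching-theory comparison, the generalized matching law against which such models are tested can be illustrated with a toy log-log fit; the ratios, sensitivity a, and bias b below are hypothetical values, not data from the study:

```python
import math

# Generalized matching law: B1/B2 = b * (r1/r2)**a, i.e. in log coordinates
# log(B1/B2) = a*log(r1/r2) + log(b). Strict matching is a = b = 1.
reinf_ratios = [0.25, 0.5, 1.0, 2.0, 4.0]   # hypothetical reinforcer ratios
a_true, b_true = 0.8, 1.1                   # undermatching with slight bias
beh_ratios = [b_true * r ** a_true for r in reinf_ratios]

# Least-squares fit in log-log coordinates recovers sensitivity and bias.
xs = [math.log(r) for r in reinf_ratios]
ys = [math.log(B) for B in beh_ratios]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
a_hat = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
b_hat = math.exp(my - a_hat * mx)
print(round(a_hat, 3), round(b_hat, 3))  # → 0.8 1.1
```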

  2. Computers for real time flight simulation: A market survey

    NASA Technical Reports Server (NTRS)

    Bekey, G. A.; Karplus, W. J.

    1977-01-01

    An extensive computer market survey was made to determine those available systems suitable for current and future flight simulation studies at Ames Research Center. The primary requirement is for the computation of relatively high frequency content (5 Hz) math models representing powered lift flight vehicles. The Rotor Systems Research Aircraft (RSRA) was used as a benchmark vehicle for computation comparison studies. The general nature of helicopter simulations and a description of the benchmark model are presented, and some of the sources of simulation difficulties are examined. A description of various applicable computer architectures is presented, along with detailed discussions of leading candidate systems and comparisons between them.

  3. The LHCb software and computing upgrade for Run 3: opportunities and challenges

    NASA Astrophysics Data System (ADS)

    Bozzi, C.; Roiser, S.; LHCb Collaboration

    2017-10-01

    The LHCb detector will be upgraded for the LHC Run 3 and will be read out at 30 MHz, corresponding to the full inelastic collision rate, with major implications for the full software trigger and offline computing. If the current computing model and software framework are kept, the data storage capacity and computing power required to process data at this rate, and to generate and reconstruct equivalent samples of simulated events, will exceed the current capacity by at least one order of magnitude. A redesign of the software framework, including scheduling, the event model, the detector description and the conditions database, is needed to fully exploit the computing power of multi- and many-core architectures and coprocessors. Data processing and the analysis model will also change towards an early streaming of different data types, in order to limit storage resources, with further implications for the data analysis workflows. Fast simulation options will make it possible to obtain a reasonable parameterization of the detector response in considerably less computing time. Finally, the upgrade of LHCb will be a good opportunity to review and implement changes in the domains of software design, testing and review, and analysis workflow and preservation. In this contribution, activities and recent results in all the above areas are presented.

  4. Crew appliance computer program manual, volume 1

    NASA Technical Reports Server (NTRS)

    Russell, D. J.

    1975-01-01

    Trade studies of numerous appliance concepts for advanced spacecraft galley, personal hygiene, housekeeping, and other areas were made to determine which best satisfy the space shuttle orbiter and modular space station mission requirements. Analytical models of selected appliance concepts not currently included in the G-189A Generalized Environmental/Thermal Control and Life Support Systems (ETCLSS) Computer Program subroutine library were developed. The new appliance subroutines are given along with complete analytical model descriptions, solution methods, user's input instructions, and validation run results. The appliance components modeled were integrated with G-189A ETCLSS models for shuttle orbiter and modular space station, and results from computer runs of these systems are presented.

  5. Aerothermodynamics of Blunt Body Entry Vehicles. Chapter 3

    NASA Technical Reports Server (NTRS)

    Hollis, Brian R.; Borrelli, Salvatore

    2011-01-01

    In this chapter, the aerothermodynamic phenomena of blunt body entry vehicles are discussed. Four topics will be considered that present challenges to current computational modeling techniques for blunt body environments: turbulent flow, non-equilibrium flow, rarefied flow, and radiation transport. Examples of comparisons between computational tools to ground and flight-test data will be presented in order to illustrate the challenges existing in the numerical modeling of each of these phenomena and to provide test cases for evaluation of Computational Fluid Dynamics (CFD) code predictions.

  6. Aerothermodynamics of blunt body entry vehicles

    NASA Astrophysics Data System (ADS)

    Hollis, Brian R.; Borrelli, Salvatore

    2012-01-01

    In this chapter, the aerothermodynamic phenomena of blunt body entry vehicles are discussed. Four topics will be considered that present challenges to current computational modeling techniques for blunt body environments: turbulent flow, non-equilibrium flow, rarefied flow, and radiation transport. Examples of comparisons between computational tools to ground and flight-test data will be presented in order to illustrate the challenges existing in the numerical modeling of each of these phenomena and to provide test cases for evaluation of computational fluid dynamics (CFD) code predictions.

  7. Creating an Electronic Reference and Information Database for Computer-aided ECM Design

    NASA Astrophysics Data System (ADS)

    Nekhoroshev, M. V.; Pronichev, N. D.; Smirnov, G. V.

    2018-01-01

    The paper presents a review on electrochemical shaping. An algorithm has been developed to implement a computer shaping model applicable to pulse electrochemical machining. For that purpose, the characteristics of pulse current occurring in electrochemical machining of aviation materials have been studied. Based on integrating the experimental results and comprehensive electrochemical machining process data modeling, a subsystem for computer-aided design of electrochemical machining for gas turbine engine blades has been developed; the subsystem was implemented in the Teamcenter PLM system.

  8. Research on Computer Aided Innovation Model of Weapon Equipment Requirement Demonstration

    NASA Astrophysics Data System (ADS)

    Li, Yong; Guo, Qisheng; Wang, Rui; Li, Liang

    First, in order to overcome the shortcomings of using either AD or TRIZ alone, and to solve the problems currently existing in weapon equipment requirement demonstration, this paper constructs a method system for weapon equipment requirement demonstration combining QFD, AD, TRIZ and FA. Then, we construct a CAI model framework for weapon equipment requirement demonstration, which includes a requirement decomposition model, a requirement mapping model and a requirement plan optimization model. Finally, we construct the computer-aided innovation model of weapon equipment requirement demonstration and develop CAI software for equipment requirement demonstration.

  9. The point spread function of the human head and its implications for transcranial current stimulation

    NASA Astrophysics Data System (ADS)

    Dmochowski, Jacek P.; Bikson, Marom; Parra, Lucas C.

    2012-10-01

    Rational development of transcranial current stimulation (tCS) requires solving the ‘forward problem’: the computation of the electric field distribution in the head resulting from the application of scalp currents. Derivation of forward models has represented a major effort in brain stimulation research, with model complexity ranging from spherical shells to individualized head models based on magnetic resonance imagery. Despite such effort, an easily accessible benchmark head model is greatly needed when individualized modeling is either undesired (to observe general population trends as opposed to individual differences) or unfeasible. Here, we derive a closed-form linear system which relates the applied current to the induced electric potential. It is shown that in the spherical harmonic (Fourier) domain, a simple scalar multiplication relates the current density on the scalp to the electric potential in the brain. Equivalently, the current density in the head follows as the spherical convolution between the scalp current distribution and the point spread function of the head, which we derive. Thus, if one knows the spherical harmonic representation of the scalp current (i.e. the electrode locations and current intensity to be employed), one can easily compute the resulting electric field at any point inside the head. Conversely, one may also readily determine the scalp current distribution required to generate an arbitrary electric field in the brain (the ‘backward problem’ in tCS). We demonstrate the simplicity and utility of the model with a series of characteristic curves which sweep across a variety of stimulation parameters: electrode size, depth of stimulation, head size and anode-cathode separation. Finally, theoretically optimal montages for targeting an infinitesimal point in the brain are shown.
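    The harmonic-domain relationship described above (one scalar gain per spherical-harmonic degree) can be sketched as follows; the transfer gains H_l are illustrative placeholders chosen to decay with degree, not the closed-form head-model kernel derived in the paper:

```python
import numpy as np

L_max = 8
rng = np.random.default_rng(0)

# Hypothetical spherical-harmonic coefficients of the scalp current density
# (orders m collapsed into a single index per degree l, for brevity).
c_l = rng.standard_normal(L_max + 1)

# Placeholder transfer gains: fine spatial detail (large l) is attenuated
# with depth, the qualitative behavior of a smoothing point spread function.
r_ratio = 0.85                          # assumed brain radius / scalp radius
H_l = r_ratio ** np.arange(L_max + 1)

# Forward problem: a scalar multiplication per degree gives the potential.
v_l = H_l * c_l

# Backward problem (targeting): divide by the gains. This is ill-posed in
# practice because H_l -> 0 for large l, so sharp targets demand huge currents.
c_recovered = v_l / H_l
print(np.allclose(c_recovered, c_l))    # → True
```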

  10. Item-Specific Adaptation and the Conflict-Monitoring Hypothesis: A Computational Model

    ERIC Educational Resources Information Center

    Blais, Chris; Robidoux, Serje; Risko, Evan F.; Besner, Derek

    2007-01-01

    Comments on articles by Botvinick et al. and Jacob et al. M. M. Botvinick, T. S. Braver, D. M. Barch, C. S. Carter, and J. D. Cohen (2001) implemented their conflict-monitoring hypothesis of cognitive control in a series of computational models. The authors of the current article first demonstrate that M. M. Botvinick et al.'s (2001)…

  11. An Object Oriented Extensible Architecture for Affordable Aerospace Propulsion Systems

    NASA Technical Reports Server (NTRS)

    Follen, Gregory J.

    2003-01-01

    Driven by a need to explore and develop propulsion systems that exceeded current computing capabilities, NASA Glenn embarked on a novel strategy leading to the development of an architecture that enables propulsion simulations never before thought possible. Full-engine, three-dimensional computational fluid dynamics propulsion system simulations were deemed impossible due to the impracticality of the hardware and software computing systems required. However, with a software paradigm shift and an embracing of parallel and distributed processing, an architecture was designed to meet the needs of future propulsion system modeling. The author suggests that the architecture designed at the NASA Glenn Research Center for propulsion system modeling has potential for impacting the direction of development of affordable weapons systems currently under consideration by the Applied Vehicle Technology Panel (AVT).

  12. A computational model for epidural electrical stimulation of spinal sensorimotor circuits.

    PubMed

    Capogrosso, Marco; Wenger, Nikolaus; Raspopovic, Stanisa; Musienko, Pavel; Beauparlant, Janine; Bassi Luciani, Lorenzo; Courtine, Grégoire; Micera, Silvestro

    2013-12-04

    Epidural electrical stimulation (EES) of lumbosacral segments can restore a range of movements after spinal cord injury. However, the mechanisms and neural structures through which EES facilitates movement execution remain unclear. Here, we designed a computational model and performed in vivo experiments to investigate the type of fibers, neurons, and circuits recruited in response to EES. We first developed a realistic finite element computer model of rat lumbosacral segments to identify the currents generated by EES. To evaluate the impact of these currents on sensorimotor circuits, we coupled this model with an anatomically realistic axon-cable model of motoneurons, interneurons, and myelinated afferent fibers for antagonistic ankle muscles. Comparisons between computer simulations and experiments revealed the ability of the model to predict EES-evoked motor responses over multiple intensities and locations. Analysis of the recruited neural structures revealed the lack of direct influence of EES on motoneurons and interneurons. Simulations and pharmacological experiments demonstrated that EES engages spinal circuits trans-synaptically through the recruitment of myelinated afferent fibers. The model also predicted the capacity of spatially distinct EES to modulate side-specific limb movements and, to a lesser extent, extension versus flexion. These predictions were confirmed during standing and walking enabled by EES in spinal rats. These combined results provide a mechanistic framework for the design of spinal neuroprosthetic systems to improve standing and walking after neurological disorders.

  13. From good intentions to healthy habits: towards integrated computational models of goal striving and habit formation.

    PubMed

    Pirolli, Peter

    2016-08-01

    Computational models were developed in the ACT-R neurocognitive architecture to address some aspects of the dynamics of behavior change. The simulations aim to address the day-to-day goal achievement data available from mobile health systems. The models refine current psychological theories of self-efficacy, intended effort, and habit formation, and provide an account for the mechanisms by which goal personalization, implementation intentions, and remindings work.

  14. Modelling DC responses of 3D complex fracture networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beskardes, Gungor Didem; Weiss, Chester Joseph

    Here, the determination of the geometrical properties of fractures plays a critical role in many engineering problems to assess the current hydrological and mechanical states of geological media and to predict their future states. However, numerical modeling of geoelectrical responses in realistic fractured media has been challenging due to the explosive computational cost imposed by the explicit discretizations of fractures at multiple length scales, which often brings about a tradeoff between computational efficiency and geologic realism. Here, we use the hierarchical finite element method to model electrostatic response of realistically complex 3D conductive fracture networks with minimal computational cost.

  15. Modelling DC responses of 3D complex fracture networks

    DOE PAGES

    Beskardes, Gungor Didem; Weiss, Chester Joseph

    2018-03-01

    Here, the determination of the geometrical properties of fractures plays a critical role in many engineering problems to assess the current hydrological and mechanical states of geological media and to predict their future states. However, numerical modeling of geoelectrical responses in realistic fractured media has been challenging due to the explosive computational cost imposed by the explicit discretizations of fractures at multiple length scales, which often brings about a tradeoff between computational efficiency and geologic realism. Here, we use the hierarchical finite element method to model electrostatic response of realistically complex 3D conductive fracture networks with minimal computational cost.

  16. Phytoplankton as Particles - A New Approach to Modeling Algal Blooms

    DTIC Science & Technology

    2013-07-01

    …Figure 69. Amplitudes of lunar semi-diurnal and diurnal harmonics of observed and computed… particle behavior when the trajectory takes a particle outside the model domain. The rules associated with the present particle-tracking algorithms are… landward, although occasional reversals occurred. Amplitude of the current fluctuations was ≈ 20 cm s⁻¹. Model residual currents for one year were…

  17. The value and cost of complexity in predictive modelling: role of tissue anisotropic conductivity and fibre tracts in neuromodulation

    NASA Astrophysics Data System (ADS)

    Salman Shahid, Syed; Bikson, Marom; Salman, Humaira; Wen, Peng; Ahfock, Tony

    2014-06-01

    Objectives. Computational methods are increasingly used to optimize transcranial direct current stimulation (tDCS) dose strategies and yet complexities of existing approaches limit their clinical access. Since predictive modelling indicates the relevance of subject/pathology based data and hence the need for subject specific modelling, the incremental clinical value of increasingly complex modelling methods must be balanced against the computational and clinical time and costs. For example, the incorporation of multiple tissue layers and measured diffusion tensor (DTI) based conductivity estimates increase model precision but at the cost of clinical and computational resources. Costs related to such complexities aggregate when considering individual optimization and the myriad of potential montages. Here, rather than considering if additional details change current-flow prediction, we consider when added complexities influence clinical decisions. Approach. Towards developing quantitative and qualitative metrics of value/cost associated with computational model complexity, we considered field distributions generated by two 4 × 1 high-definition montages (m1 = 4 × 1 HD montage with anode at C3 and m2 = 4 × 1 HD montage with anode at C1) and a single conventional (m3 = C3-Fp2) tDCS electrode montage. We evaluated statistical methods, including residual error (RE) and relative difference measure (RDM), to consider the clinical impact and utility of increased complexities, namely the influence of skull, muscle and brain anisotropic conductivities in a volume conductor model. Main results. Anisotropy modulated current-flow in a montage and region dependent manner. However, significant statistical changes, produced within montage by anisotropy, did not change qualitative peak and topographic comparisons across montages. Thus for the examples analysed, clinical decision on which dose to select would not be altered by the omission of anisotropic brain conductivity. 
Significance. The results illustrate the need to rationally balance the role of model complexity, such as anisotropy, in detailed current-flow analysis against its value in clinical dose design. However, when extending our analysis to include axonal polarization, the results provide presumably clinically meaningful information. Hence the importance of model complexity may be more relevant for cellular-level predictions of neuromodulation.

  18. Electronic field emission models beyond the Fowler-Nordheim one

    NASA Astrophysics Data System (ADS)

    Lepetit, Bruno

    2017-12-01

    We propose several quantum mechanical models to describe electronic field emission from first principles. These models allow us to correlate quantitatively the electronic emission current with the electrode surface details at the atomic scale. They all rely on electronic potential energy surfaces obtained from three-dimensional density functional theory calculations. They differ in the quantum mechanical methods (exact or perturbative, time-dependent or time-independent) used to describe tunneling through the electronic potential energy barrier. Comparing these models with one another and with the standard Fowler-Nordheim model in the context of one-dimensional tunneling allows us to assess how the approximations made in each model affect the accuracy of the computed current. Among these methods, the time-dependent perturbative one provides a well-balanced trade-off between accuracy and computational cost.
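    As a minimal one-dimensional illustration of the comparison described above, a numerically integrated WKB tunneling exponent for a triangular surface barrier can be checked against the analytic Fowler-Nordheim exponent; the work function and field strength are assumed example values, not taken from the paper:

```python
import numpy as np

m_e = 9.109e-31    # electron mass, kg
hbar = 1.0546e-34  # reduced Planck constant, J*s
q = 1.602e-19      # elementary charge, C
phi = 4.5 * q      # work function, J (typical metal value, assumed)
F = 5.0e9          # applied field, V/m (assumed)

# Triangular barrier U(x) = phi - q*F*x, which vanishes at x = phi / (q*F).
d = phi / (q * F)
x = np.linspace(0.0, d, 200_001)
kappa = np.sqrt(2.0 * m_e * (phi - q * F * x)) / hbar

# WKB: T ~ exp(-2 * integral of kappa dx). For a triangular barrier this
# reduces analytically to the Fowler-Nordheim exponent
# (4/3) * sqrt(2*m) * phi**1.5 / (hbar * q * F).
dx = x[1] - x[0]
wkb_exponent = 2.0 * np.sum(0.5 * (kappa[:-1] + kappa[1:])) * dx  # trapezoid rule
fn_exponent = (4.0 / 3.0) * np.sqrt(2.0 * m_e) * phi**1.5 / (hbar * q * F)
print(wkb_exponent, fn_exponent)   # both ≈ 13, agreeing closely
```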

  19. Multi-scale modeling in cell biology

    PubMed Central

    Meier-Schellersheim, Martin; Fraser, Iain D. C.; Klauschen, Frederick

    2009-01-01

    Biomedical research frequently involves performing experiments and developing hypotheses that link different scales of biological systems such as, for instance, the scales of intracellular molecular interactions to the scale of cellular behavior and beyond to the behavior of cell populations. Computational modeling efforts that aim at exploring such multi-scale systems quantitatively with the help of simulations have to incorporate several different simulation techniques due to the different time and space scales involved. Here, we provide a non-technical overview of how different scales of experimental research can be combined with the appropriate computational modeling techniques. We also show that current modeling software permits building and simulating multi-scale models without having to become involved with the underlying technical details of computational modeling. PMID:20448808

  20. A new assessment method of pHEMT models by comparing relative errors of drain current and its derivatives up to the third order

    NASA Astrophysics Data System (ADS)

    Dobeš, Josef; Grábner, Martin; Puričer, Pavel; Vejražka, František; Míchal, Jan; Popp, Jakub

    2017-05-01

    Nowadays, relatively precise pHEMT models are available for computer-aided design, and they are frequently compared to each other. However, such comparisons are mostly based on absolute errors of drain-current equations and their derivatives. In this paper, a novel method is suggested based on relative root-mean-square errors of both the drain current and its derivatives up to the third order. Moreover, the relative errors are subsequently normalized to the best model in each category to further clarify the obtained accuracies of both the drain current and its derivatives. Furthermore, one of our older models and two newly suggested models are also included in the comparison with the traditionally precise Ahmed, TOM-2 and Materka ones. The assessment is performed using measured characteristics of a pHEMT operating up to 110 GHz. Finally, the usability of the proposed models, including the higher-order derivatives, is illustrated using S-parameter analysis and measurement at several operating points, as well as computation and measurement of IP3 points of a low-noise amplifier of a multi-constellation satellite navigation receiver with an ATF-54143 pHEMT.
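    The relative RMS error criterion can be sketched as follows; the "measured" and "model" I_D(V_GS) curves below are synthetic stand-ins, with numerical derivatives supplied by np.gradient:

```python
import numpy as np

def relative_rms_error(model, measured):
    """RMS error of the model normalized by the RMS of the measurement."""
    return np.sqrt(np.mean((model - measured) ** 2)) / np.sqrt(np.mean(measured ** 2))

vgs = np.linspace(0.0, 1.0, 401)
measured_id = np.tanh(3.0 * vgs) ** 2   # stand-in measured I_D(V_GS)
model_id = np.tanh(2.9 * vgs) ** 2      # stand-in model with slight mismatch

# Relative error of the current and of its first three derivatives
# (transconductance and the higher-order terms that shape IP3 behavior).
errors = []
meas_d, model_d = measured_id, model_id
for order in range(4):
    errors.append(relative_rms_error(model_d, meas_d))
    meas_d = np.gradient(meas_d, vgs)
    model_d = np.gradient(model_d, vgs)

print([round(float(e), 4) for e in errors])
```

    In the paper's scheme, each model's error in each category would then be divided by the smallest error in that category to rank the models.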

  1. Tomography in Geology: 3D Modeling and Analysis of Structural Features of Rocks Using Computed MicroTomography

    NASA Astrophysics Data System (ADS)

    Ponomarev, A. A.; Mamadaliev, R. A.; Semenova, T. V.

    2016-10-01

    The article presents a brief overview of the current state of computed tomography in the sphere of oil and gas production in Russia and worldwide. The operation of the Skyscan 1172 computed microtomograph is also described, along with the authors' examples of its application in solving geological problems.

  2. The Development of the Non-hydrostatic Unified Model of the Atmosphere (NUMA)

    DTIC Science & Technology

    2011-09-19

    …capabilities: 1. Highly scalable on current and future computer architectures (exascale computing: this means CPUs and GPUs); 2. Flexibility to use a… From Terascale to Petascale/Exascale Computing: 10 of the Top 500 are already in the petascale range; 3 of the top 10 are GPU-based machines…

  3. Assessing the Purpose and Importance University Students Attribute to Current ICT Applications

    ERIC Educational Resources Information Center

    DiGiuseppe, Maurice; Partosoedarso, Elita

    2014-01-01

    In this study we surveyed students in a mid-sized university in Ontario, Canada to explore various aspects associated with their use of computer-based applications. For the purpose of analysis, the computer applications under study were categorized according to the Human-Computer-Human Interaction (HCHI) model of Desjardins (2005) in which…

  4. Safety parameter considerations of anodal transcranial Direct Current Stimulation in rats.

    PubMed

    Jackson, Mark P; Truong, Dennis; Brownlow, Milene L; Wagner, Jessica A; McKinley, R Andy; Bikson, Marom; Jankord, Ryan

    2017-08-01

    A commonly referenced transcranial Direct Current Stimulation (tDCS) safety threshold derives from tDCS lesion studies in the rat and relies on electrode current density (and related electrode charge density) to support clinical guidelines. Concerns about the role of polarity (e.g. anodal tDCS), sub-lesion threshold injury (e.g. neuroinflammatory processes), and the role of electrode montage across rodent and human studies support further investigation into animal models of tDCS safety. Thirty-two anesthetized rats received anodal tDCS between 0 and 5 mA for 60 min through one of three epicranial electrode montages. Tissue damage was evaluated using hematoxylin and eosin (H&E) staining, Iba-1 immunohistochemistry, and computational brain current density modeling. Brain lesion occurred after anodal tDCS at and above 0.5 mA using a 25.0 mm² electrode (electrode current density: 20.0 A/m²). Lesions initially occurred using smaller 10.6 mm² or 5.3 mm² electrodes at 0.25 mA (23.5 A/m²) and 0.5 mA (94.2 A/m²), respectively. Histological damage was correlated with computational brain current density predictions. Changes in microglial phenotype occurred in higher stimulation groups. Lesions were observed using anodal tDCS at an electrode current density of 20.0 A/m², which is below the previously reported safety threshold of 142.9 A/m² using cathodal tDCS. The lesion area is not simply predicted by electrode current density (and so not by charge density, as duration was fixed); rather, computational modeling suggests average brain current density as a better predictor for anodal tDCS. Nonetheless, under the assumption that rodent epicranial stimulation is a hypersensitive model, an electrode current density of 20.0 A/m² represents a conservative threshold for clinical tDCS, which typically uses an electrode current density of 2 A/m² when electrodes are placed on the skin (resulting in a lower brain current density). Copyright © 2017 Elsevier Inc. All rights reserved.
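    The electrode current densities quoted above follow from dividing the applied current by the electrode area, converting mA and mm² to SI units (small discrepancies against the reported 23.5 and 94.2 A/m² figures reflect rounding of the stated areas):

```python
def electrode_current_density(current_mA, area_mm2):
    """Average electrode current density in A/m^2 (current / area)."""
    return (current_mA * 1e-3) / (area_mm2 * 1e-6)

# 0.5 mA over a 25.0 mm^2 electrode -> the 20.0 A/m^2 lesion threshold above.
print(round(electrode_current_density(0.5, 25.0), 1))   # → 20.0
# A typical clinical pad, e.g. 1 mA over 5 cm^2 (500 mm^2; assumed example
# values), gives the 2 A/m^2 figure cited for human tDCS.
print(round(electrode_current_density(1.0, 500.0), 1))  # → 2.0
```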

  5. High Performance Parallel Computational Nanotechnology

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Craw, James M. (Technical Monitor)

    1995-01-01

    At a recent press conference, NASA Administrator Dan Goldin encouraged NASA Ames Research Center to take a lead role in promoting research and development of advanced, high-performance computer technology, including nanotechnology. Manufacturers of leading-edge microprocessors currently perform large-scale simulations in the design and verification of semiconductor devices and microprocessors. Recently, the need for this intensive simulation and modeling analysis has greatly increased, due in part to the ever-increasing complexity of these devices, as well as the lessons of experiences such as the Pentium fiasco. Simulation, modeling, testing, and validation will be even more important for designing molecular computers because of the complex specification of millions of atoms, thousands of assembly steps, as well as the simulation and modeling needed to ensure reliable, robust and efficient fabrication of the molecular devices. The software for this capacity does not exist today, but it can be extrapolated from the software currently used in molecular modeling for other applications: semi-empirical methods, ab initio methods, self-consistent field methods, Hartree-Fock methods, molecular mechanics, and simulation methods for diamondoid structures. Inasmuch as it seems clear that the application of such methods in nanotechnology will require highly powerful computing systems, this talk will discuss techniques and issues for performing these types of computations on parallel systems.
We will describe system design issues (memory, I/O, mass storage, operating system requirements, special user interface issues, interconnects, bandwidths, and programming languages) involved in parallel methods for scalable classical, semiclassical, quantum, molecular mechanics, and continuum models; molecular nanotechnology computer-aided designs (NanoCAD) techniques; visualization using virtual reality techniques of structural models and assembly sequences; software required to control mini robotic manipulators for positional control; scalable numerical algorithms for reliability, verifications and testability. There appears no fundamental obstacle to simulating molecular compilers and molecular computers on high performance parallel computers, just as the Boeing 777 was simulated on a computer before manufacturing it.

  6. Clinician accessible tools for GUI computational models of transcranial electrical stimulation: BONSAI and SPHERES.

    PubMed

    Truong, Dennis Q; Hüber, Mathias; Xie, Xihe; Datta, Abhishek; Rahman, Asif; Parra, Lucas C; Dmochowski, Jacek P; Bikson, Marom

    2014-01-01

    Computational models of brain current flow during transcranial electrical stimulation (tES), including transcranial direct current stimulation (tDCS) and transcranial alternating current stimulation (tACS), are increasingly used to understand and optimize clinical trials. We propose that broad dissemination requires simple graphical user interface (GUI) software that allows users to explore and design montages in real time, based on their own clinical/experimental experience and objectives. We introduce two complementary open-source platforms for this purpose: BONSAI and SPHERES. BONSAI is a web (cloud) based application (available at neuralengr.com/bonsai) that can be accessed through any flash-supported browser interface. SPHERES (available at neuralengr.com/spheres) is a stand-alone GUI application that allows consideration of arbitrary montages on a concentric-sphere model by leveraging an analytical solution. These open-source tES modeling platforms are designed to be upgraded and enhanced. Trade-offs between open-access approaches that balance ease of access, speed, and flexibility are discussed. Copyright © 2014 Elsevier Inc. All rights reserved.

  7. A local-circulation model for Darrieus vertical-axis wind turbines

    NASA Astrophysics Data System (ADS)

    Masse, B.

    1986-04-01

    A new computational model for the aerodynamics of the vertical-axis wind turbine is presented. Based on the local-circulation method generalized for curved blades, combined with a wake model for the vertical-axis wind turbine, it differs markedly from current models based on variations in the streamtube momentum and vortex models using the lifting-line theory. A computer code has been developed to calculate the loads and performance of the Darrieus vertical-axis wind turbine. The results show good agreement with experimental data and compare well with other methods.

  8. Computational modeling in melanoma for novel drug discovery.

    PubMed

    Pennisi, Marzio; Russo, Giulia; Di Salvatore, Valentina; Candido, Saverio; Libra, Massimo; Pappalardo, Francesco

    2016-06-01

    There is a growing body of evidence highlighting the applications of computational modeling in the field of biomedicine. It has recently been applied to the in silico analysis of cancer dynamics. In the era of precision medicine, this analysis may allow the discovery of new molecular targets useful for the design of novel therapies and for overcoming resistance to anticancer drugs. Given its molecular behavior, melanoma represents an interesting tumor model in which computational modeling can be applied. Melanoma is an aggressive tumor of the skin with a poor prognosis for patients with advanced disease, as it is resistant to current therapeutic approaches. This review discusses the basics of computational modeling in melanoma drug discovery and development. Discussion includes the in silico discovery of novel molecular drug targets, the optimization of immunotherapies, and personalized medicine trials. Mathematical and computational models are gradually being used to help understand biomedical data produced by high-throughput analysis. The use of advanced computer models allowing the simulation of complex biological processes provides hypotheses and supports experimental design. Research into fighting aggressive cancers such as melanoma is making great strides. Computational models represent the key component to complement these efforts. Due to the combinatorial complexity of new drug discovery, a systematic approach based only on experimentation is not possible. Computational and mathematical models are necessary for bringing cancer drug discovery into the era of omics, big data, and personalized medicine.

  9. A new software tool for computing Earth's atmospheric transmission of near- and far-infrared radiation

    NASA Technical Reports Server (NTRS)

    Lord, Steven D.

    1992-01-01

    This report describes a new software tool, ATRAN, which computes the transmittance of Earth's atmosphere at near- and far-infrared wavelengths. We compare the capabilities of this program with others currently available and demonstrate its utility for observational data calibration and reduction. The program employs current water-vapor and ozone models to produce fast and accurate transmittance spectra for wavelengths ranging from 0.8 microns to 10 mm.
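Per spectral bin, the transmittance computation at the heart of such a tool reduces to Beer-Lambert attenuation through the atmospheric column. A minimal sketch, with illustrative absorber optical depths rather than ATRAN's actual line data:

```python
import math

def transmittance(optical_depths, airmass=1.0):
    """Beer-Lambert transmittance for one spectral bin.

    optical_depths: zenith optical depth of each absorber in this bin
                    (hypothetical values here, not ATRAN's line data)
    airmass: slant-path factor (~1/cos(zenith angle) for small angles)
    """
    tau_total = sum(optical_depths) * airmass
    return math.exp(-tau_total)

# Hypothetical zenith optical depths for water vapor and ozone in one bin:
t = transmittance([0.30, 0.02], airmass=1.5)
```

A real tool evaluates this per wavelength over a line-by-line absorber database; the sketch shows only the per-bin combination step.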

  10. Study of Magnetospheric Currents and Resultant Surface Magnetic Variations.

    DTIC Science & Technology

    1980-04-17

    compressed as indicated by solar-wind data, automatically injected a ring current with a strength consistent with the observed Dst. The computed inner...Figure 2; bottom panel of Figure 5. Agreement is very acceptable. The model overestimated the maximum depression of BX, but by a factor that is well...Contributed Papers Presented at the Solar-Terrestrial Physics Symposium, Innsbruck, 1978. Harel, M., R. A. Wolf, P. H. Reiff and M. Smiddy, Computer

  11. Structure of High Latitude Currents in Magnetosphere-Ionosphere Models

    NASA Astrophysics Data System (ADS)

    Wiltberger, M.; Rigler, E. J.; Merkin, V.; Lyon, J. G.

    2017-03-01

    Using three resolutions of the Lyon-Fedder-Mobarry global magnetosphere-ionosphere model (LFM) and the Weimer 2005 empirical model, we examine the structure of the high-latitude field-aligned current patterns. Each resolution was run for the entire Whole Heliosphere Interval, which contained two high-speed solar wind streams and modest interplanetary magnetic field strengths. Average states of the field-aligned current (FAC) patterns for 8 interplanetary magnetic field clock angle directions are computed using data from these runs. Generally speaking, the patterns obtained agree well with results from the Weimer 2005 model computed using the solar wind and IMF conditions that correspond to each bin. As the simulation resolution increases, the currents become more intense and narrow. A machine learning analysis of the FAC patterns shows that the ratio of Region 1 (R1) to Region 2 (R2) currents decreases as the simulation resolution increases. This brings the simulation results into better agreement with observational predictions and the Weimer 2005 model results. The increase in R2 current strengths also results in the cross polar cap potential (CPCP) pattern being concentrated at higher latitudes. Current-voltage relationships between the R1 currents and the CPCP are quite similar at the higher resolutions, indicating the simulation is converging on a common solution. We conclude that LFM simulations are capable of reproducing the statistical features of FAC patterns.
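The binning by IMF clock angle described above can be sketched directly: the clock angle is conventionally theta = atan2(By, Bz) in the GSM y-z plane, and the 8-sector scheme below (sectors centered on 0, 45, ..., 315 degrees) is an assumption for illustration, not necessarily the study's exact binning:

```python
import math

def clock_angle_bin(by, bz, n_bins=8):
    """Return the clock-angle bin index for an IMF (By, Bz) sample.

    theta = atan2(By, Bz), measured from +Z (northward IMF) toward +Y,
    mapped to [0, 360) degrees and split into n_bins equal sectors
    centered on 0, 45, 90, ... degrees.
    """
    theta = math.degrees(math.atan2(by, bz)) % 360.0
    width = 360.0 / n_bins
    return int(((theta + width / 2) % 360.0) // width)

# Pure northward IMF (Bz > 0) falls in bin 0; pure southward in bin 4:
assert clock_angle_bin(0.0, 5.0) == 0
assert clock_angle_bin(0.0, -5.0) == 4
```

Averaging the FAC maps within each bin then yields one "average state" pattern per clock-angle sector.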

  12. Structure of high latitude currents in global magnetospheric-ionospheric models

    USGS Publications Warehouse

    Wiltberger, M; Rigler, E. J.; Merkin, V; Lyon, J. G

    2016-01-01

    Using three resolutions of the Lyon-Fedder-Mobarry global magnetosphere-ionosphere model (LFM) and the Weimer 2005 empirical model, we examine the structure of the high-latitude field-aligned current patterns. Each resolution was run for the entire Whole Heliosphere Interval, which contained two high-speed solar wind streams and modest interplanetary magnetic field strengths. Average states of the field-aligned current (FAC) patterns for 8 interplanetary magnetic field clock angle directions are computed using data from these runs. Generally speaking, the patterns obtained agree well with results from the Weimer 2005 model computed using the solar wind and IMF conditions that correspond to each bin. As the simulation resolution increases, the currents become more intense and narrow. A machine learning analysis of the FAC patterns shows that the ratio of Region 1 (R1) to Region 2 (R2) currents decreases as the simulation resolution increases. This brings the simulation results into better agreement with observational predictions and the Weimer 2005 model results. The increase in R2 current strengths also results in the cross polar cap potential (CPCP) pattern being concentrated at higher latitudes. Current-voltage relationships between the R1 currents and the CPCP are quite similar at the higher resolutions, indicating the simulation is converging on a common solution. We conclude that LFM simulations are capable of reproducing the statistical features of FAC patterns.

  13. Structure of high latitude currents in magnetosphere-ionosphere models

    NASA Astrophysics Data System (ADS)

    Wiltberger, M. J.; Lyon, J.; Merkin, V. G.; Rigler, E. J.

    2016-12-01

    Using three resolutions of the Lyon-Fedder-Mobarry global magnetosphere-ionosphere model (LFM) and the Weimer 2005 empirical model, the structure of the high-latitude field-aligned current patterns is examined. Each LFM resolution was run for the entire Whole Heliosphere Interval (WHI), which contained two high-speed solar wind streams and modest interplanetary magnetic field strengths. Average states of the field-aligned current (FAC) patterns for 8 interplanetary magnetic field clock angle directions are computed using data from these runs. Generally speaking, the patterns obtained agree well with results from the Weimer 2005 model computed using the solar wind and IMF conditions that correspond to each bin. As the simulation resolution increases, the currents become more intense and confined. A machine learning analysis of the FAC patterns shows that the ratio of Region 1 (R1) to Region 2 (R2) currents decreases as the simulation resolution increases. This brings the simulation results into better agreement with observational predictions and the Weimer 2005 model results. The increase in R2 current strengths in the model also results in better shielding of the mid- and low-latitude ionosphere from polar cap convection, also in agreement with observations. Current-voltage relationships between the R1 strength and the cross-polar cap potential (CPCP) are quite similar at the higher resolutions, indicating the simulation is converging on a common solution. We conclude that LFM simulations are capable of reproducing the statistical features of FAC patterns.

  14. Review of Railgun Modeling Techniques: The Computation of Railgun Force and Other Key Factors

    NASA Astrophysics Data System (ADS)

    Eckert, Nathan James

    Currently, railgun force modeling either uses the simple "railgun force equation" or finite element methods. It is proposed here that a middle ground exists that does not require the solution of partial differential equations, is more readily implemented than finite element methods, and is more accurate than the traditional force equation. To develop this method, it is necessary to examine the core railgun factors: power supply mechanisms, the distribution of current in the rails and in the projectile which slides between them (called the armature), the magnetic field created by the current flowing through these rails, the inductance gradient (a key factor in simplifying railgun analysis, referred to as L'), the resultant Lorentz force, and the heating which accompanies this action. Common power supply technologies are investigated, and the shape of their current pulses is modeled. The main causes of current concentration are described, and a rudimentary method for computing current distribution in solid rails and a rectangular armature is shown to have promising accuracy with respect to outside finite element results. The magnetic field is modeled with two methods using the Biot-Savart law, and generally good agreement is obtained with respect to finite element methods (5.8% error on average). To get this agreement, a factor of 2 is added to the original formulation after a consistent offset relative to FEM results was observed. Three inductance gradient calculations are assessed, and though all agree with FEM results, the Kerrisk method and a regression analysis method developed by Murugan et al. (referred to here as the LRM) perform best. Six railgun force computation methods are investigated, including the traditional railgun force equation, an equation produced by Waindok and Piekielny, and four methods inspired by the work of Xu et al. 
Overall, good agreement between the models and outside data is found, but each model's accuracy varies significantly between comparisons. Lastly, an approximation of the temperature profile in railgun rails originally presented by McCorkle and Bahder is replicated. In total, this work describes railgun technology and moderately complex railgun modeling methods, but is inconclusive about the presence of a middle-ground modeling method.
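For reference, the traditional "railgun force equation" mentioned above relates force to the inductance gradient L' and the drive current. A minimal sketch; the L' value below is an assumed illustrative figure, not one taken from this work:

```python
def railgun_force(L_prime, current):
    """Traditional railgun force equation: F = 0.5 * L' * I**2.

    L_prime: inductance gradient in H/m (assumed value in the example)
    current: rail current in A
    """
    return 0.5 * L_prime * current ** 2

# An assumed L' of 0.5 uH/m with a 1 MA current pulse gives F = 250 kN:
F = railgun_force(0.5e-6, 1.0e6)
```

The middle-ground methods the review examines refine the inputs to this relation (the current distribution and L') rather than replace its basic form.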

  15. Current capabilities and future directions in computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    1986-01-01

    A summary of significant findings is given, followed by specific recommendations for future directions of emphasis for computational fluid dynamics development. The discussion is organized into three application areas: external aerodynamics, hypersonics, and propulsion - and followed by a turbulence modeling synopsis.

  16. Development and Validation of a Computational Model for Androgen Receptor Activity

    EPA Science Inventory

    Testing thousands of chemicals to identify potential androgen receptor (AR) agonists or antagonists would cost millions of dollars and take decades to complete using current validated methods. High-throughput in vitro screening (HTS) and computational toxicology approaches can mo...

  17. Visualization, documentation, analysis, and communication of large scale gene regulatory networks

    PubMed Central

    Longabaugh, William J.R.; Davidson, Eric H.; Bolouri, Hamid

    2009-01-01

    Summary Genetic regulatory networks (GRNs) are complex, large-scale, and spatially and temporally distributed. These characteristics impose challenging demands on computational GRN modeling tools, and there is a need for custom modeling tools. In this paper, we report on our ongoing development of BioTapestry, an open source, freely available computational tool designed specifically for GRN modeling. We also outline our future development plans, and give some examples of current applications of BioTapestry. PMID:18757046

  18. Simulating pad-electrodes with high-definition arrays in transcranial electric stimulation

    NASA Astrophysics Data System (ADS)

    Kempe, René; Huang, Yu; Parra, Lucas C.

    2014-04-01

    Objective. Research studies on transcranial electric stimulation, including direct current, often use a computational model to provide guidance on the placing of sponge-electrode pads. However, the expertise and computational resources needed for finite element modeling (FEM) make modeling impractical in a clinical setting. Our objective is to make the exploration of different electrode configurations accessible to practitioners. We provide an efficient tool to estimate current distributions for arbitrary pad configurations while obviating the need for complex simulation software. Approach. To efficiently estimate current distributions for arbitrary pad configurations we propose to simulate pads with an array of high-definition (HD) electrodes and use an efficient linear superposition to then quickly evaluate different electrode configurations. Main results. Numerical results on ten different pad configurations on a normal individual show that electric field intensity simulated with the sampled array deviates from the solutions with pads by only 5% and the locations of peak magnitude fields have a 94% overlap when using a dense array of 336 electrodes. Significance. Computationally intensive FEM modeling of the HD array needs to be performed only once, perhaps on a set of standard heads that can be made available to multiple users. The present results confirm that by using these models one can now quickly and accurately explore and select pad-electrode montages to match a particular clinical need.
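The superposition step described above amounts to a single matrix-vector product: solve the FEM model once per HD electrode, store those unit-current fields as columns of a lead-field matrix, then weight the columns by the per-electrode currents that approximate a given pad. A sketch with made-up dimensions and random stand-in data (a real lead field comes from the one-time FEM solve):

```python
import numpy as np

rng = np.random.default_rng(0)

# Precomputed lead-field matrix: field value at n_nodes mesh nodes for a
# unit current at each of n_elec HD electrodes (random stand-in data here).
n_nodes, n_elec = 1000, 336
lead_field = rng.standard_normal((n_nodes, n_elec))

def pad_field(lead_field, pad_electrodes, total_current=1.0):
    """Approximate a sponge pad by spreading its total current uniformly
    over the HD electrodes that fall under the pad footprint."""
    weights = np.zeros(lead_field.shape[1])
    weights[pad_electrodes] = total_current / len(pad_electrodes)
    return lead_field @ weights

# Hypothetical pad covering electrodes 10..19, driving 2 mA:
field = pad_field(lead_field, list(range(10, 20)), total_current=2e-3)
```

Because the combination is linear, any pad shape or current can be evaluated in milliseconds once the lead field exists, which is what makes the interactive exploration described above practical.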

  19. A computational workflow for designing silicon donor qubits

    DOE PAGES

    Humble, Travis S.; Ericson, M. Nance; Jakowski, Jacek; ...

    2016-09-19

    Developing devices that can reliably and accurately demonstrate the principles of superposition and entanglement is an on-going challenge for the quantum computing community. Modeling and simulation offer attractive means of testing early device designs and establishing expectations for operational performance. However, the complex integrated material systems required by quantum device designs are not captured by any single existing computational modeling method. We examine the development and analysis of a multi-staged computational workflow that can be used to design and characterize silicon donor qubit systems with modeling and simulation. Our approach integrates quantum chemistry calculations with electrostatic field solvers to perform detailed simulations of a phosphorus dopant in silicon. We show how atomistic details can be synthesized into an operational model for the logical gates that define quantum computation in this particular technology. In conclusion, the resulting computational workflow realizes a design tool for silicon donor qubits that can help verify and validate current and near-term experimental devices.

  20. Probing Supersymmetry with Neutral Current Scattering Experiments

    NASA Astrophysics Data System (ADS)

    Kurylov, A.; Ramsey-Musolf, M. J.; Su, S.

    2004-02-01

    We compute the supersymmetric contributions to the weak charges of the electron (Q_W^e) and the proton (Q_W^p) in the framework of the Minimal Supersymmetric Standard Model. We also consider the ratios of neutral-current to charged-current cross sections, R_ν and R_ν̄, in ν(ν̄)-nucleus deep inelastic scattering, and compare the supersymmetric corrections with the deviations of these quantities from the Standard Model predictions implied by the recent NuTeV measurement.

  1. Clinical Pilot Study and Computational Modeling of Bitemporal Transcranial Direct Current Stimulation, and Safety of Repeated Courses of Treatment, in Major Depression.

    PubMed

    Ho, Kerrie-Anne; Bai, Siwei; Martin, Donel; Alonzo, Angelo; Dokos, Socrates; Loo, Colleen K

    2015-12-01

    This study aimed to examine a bitemporal (BT) transcranial direct current stimulation (tDCS) electrode montage for the treatment of depression through a clinical pilot study and computational modeling. The safety of repeated courses of stimulation was also examined. Four participants with depression who had previously received multiple courses of tDCS received a 4-week course of BT tDCS. Mood and neuropsychological function were assessed. The results were compared with previous courses of tDCS given to the same participants using different electrode montages. Computational modeling examined the electric field maps produced by the different montages. Three participants showed clinical improvement with BT tDCS (mean [SD] improvement, 49.6% [33.7%]). There were no adverse neuropsychological effects. Computational modeling showed that the BT montage activates the anterior cingulate cortices and brainstem, deep brain regions that are important in depression. However, a fronto-extracephalic montage stimulated these areas more effectively. No adverse effects were found in participants receiving up to 6 courses of tDCS. Bitemporal tDCS was safe and led to clinically meaningful efficacy in 3 of 4 participants. However, computational modeling suggests that the BT montage may not activate key brain regions in depression more effectively than another novel montage, fronto-extracephalic tDCS. There is also preliminary evidence to support the safety of up to 6 repeated courses of tDCS.

  2. Collaborative learning model inquiring based on digital game

    NASA Astrophysics Data System (ADS)

    Yuan, Jiugen; Xing, Ruonan

    2012-04-01

    With the development of educational computer software, digital educational games have become an important part of our lives, entertainment, and education. How to make full use of digital games' teaching functions and educate through entertainment has therefore become a focus of current research. This thesis connects educational games with collaborative learning, a currently popular teaching model, and proposes a digital game-based collaborative learning model grounded in teaching practice.

  3. Development of a new model for short period ocean tidal variations of Earth rotation

    NASA Astrophysics Data System (ADS)

    Schuh, Harald

    2015-08-01

    Within project SPOT (Short Period Ocean Tidal variations in Earth rotation) we develop a new high-frequency Earth rotation model based on empirical ocean tide models. The main purpose of the SPOT model is its application to space geodetic observations such as GNSS and VLBI. We consider an empirical ocean tide model, which does not require hydrodynamic ocean modeling to determine ocean tidal angular momentum. We use here the EOT11a model of Savcenko & Bosch (2012), which is extended for some additional minor tides (e.g. M1, J1, T2). As empirical tidal models do not provide ocean tidal currents, which are required for the computation of oceanic relative angular momentum, we implement an approach first published by Ray (2001) to estimate ocean tidal current velocities for all tides considered in the extended EOT11a model. The approach itself is tested by application to tidal heights from hydrodynamic ocean tide models, which also provide tidal current velocities. Based on the tidal heights and the associated current velocities, the oceanic tidal angular momentum (OTAM) is calculated. For the computation of the related short-period variations of Earth rotation, we have re-examined the Euler-Liouville equation for an elastic Earth model with a liquid core. The focus here is on the consistent calculation of the elastic Love numbers and associated Earth model parameters, which are considered in the Euler-Liouville equation for diurnal and sub-diurnal periods in the frequency domain.

  4. Computing the Local Field Potential (LFP) from Integrate-and-Fire Network Models.

    PubMed

    Mazzoni, Alberto; Lindén, Henrik; Cuntz, Hermann; Lansner, Anders; Panzeri, Stefano; Einevoll, Gaute T

    2015-12-01

    Leaky integrate-and-fire (LIF) network models are commonly used to study how the spiking dynamics of neural networks changes with stimuli, tasks or dynamic network states. However, neurophysiological studies in vivo often rather measure the mass activity of neuronal microcircuits with the local field potential (LFP). Given that LFPs are generated by spatially separated currents across the neuronal membrane, they cannot be computed directly from quantities defined in models of point-like LIF neurons. Here, we explore the best approximation for predicting the LFP based on standard output from point-neuron LIF networks. To search for this best "LFP proxy", we compared LFP predictions from candidate proxies based on LIF network output (e.g., firing rates, membrane potentials, synaptic currents) with "ground-truth" LFP obtained when the LIF network synaptic input currents were injected into an analogous three-dimensional (3D) network model of multi-compartmental neurons with realistic morphology, spatial distributions of somata and synapses. We found that a specific fixed linear combination of the LIF synaptic currents provided an accurate LFP proxy, accounting for most of the variance of the LFP time course observed in the 3D network for all recording locations. This proxy performed well over a broad set of conditions, including substantial variations of the neuronal morphologies. Our results provide a simple formula for estimating the time course of the LFP from LIF network simulations in cases where a single pyramidal population dominates the LFP generation, and thereby facilitate quantitative comparison between computational models and experimental LFP recordings in vivo.
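The "fixed linear combination" of synaptic currents can be sketched as a weighted, time-shifted sum of the population AMPA and GABA currents. The weight and lag below are illustrative placeholders, not the fitted values reported in the study:

```python
import numpy as np

def lfp_proxy(i_ampa, i_gaba, dt=0.1, alpha=1.65, delay_ms=6.0):
    """Weighted-sum LFP proxy from LIF population synaptic currents.

    i_ampa, i_gaba: summed excitatory / inhibitory synaptic currents
                    sampled every dt milliseconds (NumPy arrays)
    alpha, delay_ms: weight and lag applied to the GABA current
                     (illustrative placeholders, not the fitted values)
    """
    lag = int(round(delay_ms / dt))
    gaba_shifted = np.zeros_like(i_gaba)
    gaba_shifted[lag:] = i_gaba[:len(i_gaba) - lag]
    return np.abs(i_ampa) - alpha * np.abs(gaba_shifted)

t = np.arange(0, 100, 0.1)  # 100 ms of samples at dt = 0.1 ms
proxy = lfp_proxy(np.sin(t), 0.5 * np.sin(t))
```

In practice the coefficients are fitted once against ground-truth LFP from the multi-compartmental model; afterwards the proxy costs only an element-wise sum per time step.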

  5. Computing the Local Field Potential (LFP) from Integrate-and-Fire Network Models

    PubMed Central

    Cuntz, Hermann; Lansner, Anders; Panzeri, Stefano; Einevoll, Gaute T.

    2015-01-01

    Leaky integrate-and-fire (LIF) network models are commonly used to study how the spiking dynamics of neural networks changes with stimuli, tasks or dynamic network states. However, neurophysiological studies in vivo often rather measure the mass activity of neuronal microcircuits with the local field potential (LFP). Given that LFPs are generated by spatially separated currents across the neuronal membrane, they cannot be computed directly from quantities defined in models of point-like LIF neurons. Here, we explore the best approximation for predicting the LFP based on standard output from point-neuron LIF networks. To search for this best "LFP proxy", we compared LFP predictions from candidate proxies based on LIF network output (e.g., firing rates, membrane potentials, synaptic currents) with "ground-truth" LFP obtained when the LIF network synaptic input currents were injected into an analogous three-dimensional (3D) network model of multi-compartmental neurons with realistic morphology, spatial distributions of somata and synapses. We found that a specific fixed linear combination of the LIF synaptic currents provided an accurate LFP proxy, accounting for most of the variance of the LFP time course observed in the 3D network for all recording locations. This proxy performed well over a broad set of conditions, including substantial variations of the neuronal morphologies. Our results provide a simple formula for estimating the time course of the LFP from LIF network simulations in cases where a single pyramidal population dominates the LFP generation, and thereby facilitate quantitative comparison between computational models and experimental LFP recordings in vivo. PMID:26657024

  6. Cinema Fire Modelling by FDS

    NASA Astrophysics Data System (ADS)

    Glasa, J.; Valasek, L.; Weisenpacher, P.; Halada, L.

    2013-02-01

    Recent advances in computational fluid dynamics (CFD) and the rapid increase in computational power of current computers have led to the development of CFD models capable of describing fire in complex geometries and incorporating a wide variety of physical phenomena related to fire. In this paper, we demonstrate the use of the Fire Dynamics Simulator (FDS) for cinema fire modelling. FDS is an advanced CFD system intended for simulation of fire and smoke spread and prediction of thermal flows, toxic substance concentrations and other relevant parameters of fire. The course of a fire in a cinema hall is described, focusing on related safety risks. Fire properties of flammable materials used in the simulation were determined by laboratory measurements and validated by fire tests and computer simulations.

  7. Modeling and Simulation of Explosively Driven Electromechanical Devices

    NASA Astrophysics Data System (ADS)

    Demmie, Paul N.

    2002-07-01

    Components that store electrical energy in ferroelectric materials and produce currents when their permittivity is explosively reduced are used in a variety of applications. The modeling and simulation of such devices is a challenging problem since one has to represent the coupled physics of detonation, shock propagation, and electromagnetic field generation. The high fidelity modeling and simulation of complicated electromechanical devices was not feasible prior to having the Accelerated Strategic Computing Initiative (ASCI) computers and the ASCI developed codes at Sandia National Laboratories (SNL). The EMMA computer code is used to model such devices and simulate their operation. In this paper, I discuss the capabilities of the EMMA code for the modeling and simulation of one such electromechanical device, a slim-loop ferroelectric (SFE) firing set.

  8. Complex systems and health behavior change: insights from cognitive science.

    PubMed

    Orr, Mark G; Plaut, David C

    2014-05-01

    To provide proof of concept that health behavior change can be instantiated as a computational model informed by cognitive science and the Theory of Reasoned Action. We conducted a synthetic review of the intersection of health behavior change and cognitive science. We then ran simulations using a computational model of health behavior (a constraint satisfaction artificial neural network) and tested whether the model exhibited behavior consistent with the theory. The model exhibited clear signs of such behavior. Health behavior can be conceptualized as constraint satisfaction: a mitigation between the current behavioral state and the social contexts in which it operates. We outlined implications for moving forward with computational models of health behavior.

  9. A systematic review to identify areas of enhancements of pandemic simulation models for operational use at provincial and local levels

    PubMed Central

    2012-01-01

    Background In recent years, computer simulation models have supported the development of pandemic influenza preparedness policies. However, U.S. policymakers have raised several concerns about the practical use of these models. In this review paper, we examine the extent to which the current literature already addresses these concerns and identify means of enhancing the current models for higher operational use. Methods We surveyed PubMed and other sources for published research literature on simulation models for influenza pandemic preparedness. We identified 23 models published between 1990 and 2010 that consider single-region (e.g., country, province, city) outbreaks and multi-pronged mitigation strategies. We developed a plan for examination of the literature based on the concerns raised by the policymakers. Results While examining the concerns about the adequacy and validity of data, we found that though the epidemiological data supporting the models appear adequate, they should be validated through as many updates as possible during an outbreak. Interfaces for accessing and retrieving demographic data and translating them into model parameters must improve. Regarding the concern about the credibility and validity of modeling assumptions, we found that the models often simplify reality to reduce computational burden. Such simplifications may be permissible if they do not interfere with the performance assessment of the mitigation strategies. We also agreed with the concern that social behavior is inadequately represented in pandemic influenza models. Our review showed that the models consider only a few social-behavioral aspects, including contact rates, withdrawal from work or school due to symptom onset or to care for sick relatives, and compliance with social distancing, vaccination, and antiviral prophylaxis. 
The concern about the degree of accessibility of the models is palpable: we found three models that are currently accessible to the public, while others are seeking public accessibility. Policymakers would prefer models scalable to any population size that can be downloaded and operated on personal computers. But scaling models to larger populations would often require computational resources that personal computers and laptops cannot provide. As a limitation, we note that some existing models could not be included in our review due to their limited available documentation discussing the choice of relevant parameter values. Conclusions To adequately address the concerns of the policymakers, we need continuing model enhancements in critical areas, including: updating of epidemiological data during a pandemic, smooth handling of large demographic databases, incorporation of a broader spectrum of social-behavioral aspects, updating of information on contact patterns, adaptation of recent methodologies for collecting human mobility data, and improvement of computational efficiency and accessibility. PMID:22463370

  10. Mathematical and Computational Modeling in Complex Biological Systems

    PubMed Central

    Li, Wenyang; Zhu, Xiaoliang

    2017-01-01

    The biological processes and molecular functions involved in cancer progression remain difficult for biologists and clinicians to understand. Recent developments in high-throughput technologies push systems biology toward more precise models of complex diseases. Computational and mathematical models are increasingly used to help us understand the omics data produced by high-throughput experimental techniques. The use of computational models in systems biology allows us to explore the pathogenesis of complex diseases, improve our understanding of the latent molecular mechanisms, and promote treatment strategy optimization and new drug discovery. Currently, it is urgent to bridge the gap between the development of high-throughput technologies and systemic modeling of biological processes in cancer research. In this review, we first examine several typical mathematical modeling approaches for biological systems at different scales and analyze their characteristics, advantages, applications, and limitations. Next, three potential research directions in systems modeling are summarized. To conclude, this review provides an update on important solutions using computational modeling approaches in systems biology. PMID:28386558

  11. Mathematical and Computational Modeling in Complex Biological Systems.

    PubMed

    Ji, Zhiwei; Yan, Ke; Li, Wenyang; Hu, Haigen; Zhu, Xiaoliang

    2017-01-01

    The biological processes and molecular functions involved in cancer progression remain difficult for biologists and clinicians to understand. Recent developments in high-throughput technologies push systems biology toward more precise models of complex diseases. Computational and mathematical models are increasingly used to help us understand the omics data produced by high-throughput experimental techniques. The use of computational models in systems biology allows us to explore the pathogenesis of complex diseases, improve our understanding of the latent molecular mechanisms, and promote treatment strategy optimization and new drug discovery. Currently, it is urgent to bridge the gap between the development of high-throughput technologies and systemic modeling of biological processes in cancer research. In this review, we first examine several typical mathematical modeling approaches for biological systems at different scales and analyze their characteristics, advantages, applications, and limitations. Next, three potential research directions in systems modeling are summarized. To conclude, this review provides an update on important solutions using computational modeling approaches in systems biology.

  12. Light extraction in planar light-emitting diode with nonuniform current injection: model and simulation.

    PubMed

    Khmyrova, Irina; Watanabe, Norikazu; Kholopova, Julia; Kovalchuk, Anatoly; Shapoval, Sergei

    2014-07-20

    We develop an analytical and numerical model for simulating light extraction through the planar output interface of light-emitting diodes (LEDs) with nonuniform current injection. Spatial nonuniformity of the injected current is a peculiar feature of LEDs in which the top metal electrode is patterned as a mesh in order to enhance the output power of light extracted through the top surface. Basic features of the model are a bi-plane computation domain with related areas of numerical-grid (NG) cells in the two planes, representation of the light-generating layer by an ensemble of point light sources, numerical "collection" of photons from the area bounded by the acceptance circle, and adjustment of NG-cell areas in the computation procedure by an angle-tuned aperture function. The developed model and procedure are used to simulate spatial distributions of the output optical power, as well as the total output power, at different mesh pitches. The proposed model and simulation strategy can be very efficient in evaluating the output optical performance of LEDs with periodic or symmetric electrode configurations.
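
    The acceptance-circle construction described in the abstract can be illustrated with a short sketch. Everything below is a hedged illustration, not the paper's code: the refractive index, the source depth, and the assumption of a bare semiconductor/air interface are mine.

```python
import math

def acceptance_radius(depth, n_inside, n_outside=1.0):
    """Radius of the 'acceptance circle' on the planar output facet for a
    point light source buried at `depth` below the interface.

    Only rays that reach the surface within the critical angle
    theta_c = asin(n_outside / n_inside) can escape (Snell's law); they
    exit through a circle of radius depth * tan(theta_c) centered above
    the source. The result carries the units of `depth`."""
    theta_c = math.asin(n_outside / n_inside)
    return depth * math.tan(theta_c)

# Illustrative numbers: a source 2 um below a GaN-like surface (n ~ 2.4)
r = acceptance_radius(depth=2.0, n_inside=2.4)
```

    Summing point-source contributions over such circles, weighted by an aperture function of the exit angle, is the kind of numerical "collection" step the abstract describes.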

  13. Relationship of the interplanetary electric field to the high-latitude ionospheric electric field and currents Observations and model simulation

    NASA Technical Reports Server (NTRS)

    Clauer, C. R.; Banks, P. M.

    1986-01-01

    The electrical coupling between the solar wind, magnetosphere, and ionosphere is studied. The coupling is analyzed using observations of high-latitude ion convection measured by the Sondre Stromfjord radar in Greenland and a computer simulation. The computer simulation calculates the ionospheric electric potential distribution for a given configuration of field-aligned currents and conductivity distribution. The technique for measuring F-region ion velocities at high time resolution over a large range of latitudes is described. The effects of variations in the currents on ionospheric plasma convection are examined using a model of field-aligned currents linking the solar wind with the dayside, high-latitude ionosphere. The data reveal that high-latitude ionospheric convection patterns, electric fields, and field-aligned currents depend on IMF orientation; it is observed that the electric field, which drives the F-region plasma convection, responds within about 14 minutes to IMF variations at the magnetopause. Comparisons of the simulated plasma convection with the ion velocity measurements reveal good correlation with the data.

  14. Neural Network Optimization of Ligament Stiffnesses for the Enhanced Predictive Ability of a Patient-Specific, Computational Foot/Ankle Model.

    PubMed

    Chande, Ruchi D; Wayne, Jennifer S

    2017-09-01

    Computational models of diarthrodial joints serve to inform the biomechanical function of these structures, and as such, must be supplied appropriate inputs for performance that is representative of actual joint function. Inputs for these models are sourced from imaging modalities as well as from the literature. The latter is often the source of mechanical properties for soft tissues, like ligament stiffnesses; however, such data are not always available for all the soft tissues, nor are they known for patient-specific work. In the current research, a method to improve the ligament stiffness definition for a computational foot/ankle model was sought with the greater goal of improving the predictive ability of the computational model. Specifically, the stiffness values were optimized using artificial neural networks (ANNs); both feedforward and radial basis function networks (RBFNs) were considered. Optimal networks of each type were determined and subsequently used to predict stiffnesses for the foot/ankle model. Ultimately, the predicted stiffnesses were considered reasonable and resulted in enhanced performance of the computational model, suggesting that artificial neural networks can be used to optimize stiffness inputs.

  15. Predictive models in urology.

    PubMed

    Cestari, Andrea

    2013-01-01

    Predictive modeling is emerging as an important knowledge-based technology in healthcare. The interest in predictive modeling reflects advances on different fronts, such as the availability of health information from increasingly complex databases and electronic health records, a better understanding of causal or statistical predictors of health, disease processes and multifactorial models of ill-health, and developments in nonlinear computer models using artificial intelligence or neural networks. These new computer-based forms of modeling are increasingly able to establish technical credibility in clinical contexts. Our understanding of how this so-called 'machine intelligence' is likely to evolve is still young, and with it our understanding of how current, relatively sophisticated predictive models will evolve in response to improvements in technology, which is advancing along a wide front. Predictive models in urology are gaining progressive popularity, not only for academic and scientific purposes but also in clinical practice, with the introduction of several nomograms dealing with the main fields of onco-urology.

  16. Fractional Poisson-Nernst-Planck Model for Ion Channels I: Basic Formulations and Algorithms.

    PubMed

    Chen, Duan

    2017-11-01

    In this work, we propose a fractional Poisson-Nernst-Planck model to describe ion permeation in gated ion channels. Due to the intrinsic conformational changes, crowdedness in narrow channel pores, binding and trapping introduced by functioning units of channel proteins, ionic transport in the channel exhibits a power-law-like anomalous diffusion dynamics. We start from continuous-time random walk model for a single ion and use a long-tailed density distribution function for the particle jump waiting time, to derive the fractional Fokker-Planck equation. Then, it is generalized to the macroscopic fractional Poisson-Nernst-Planck model for ionic concentrations. Necessary computational algorithms are designed to implement numerical simulations for the proposed model, and the dynamics of gating current is investigated. Numerical simulations show that the fractional PNP model provides a more qualitatively reasonable match to the profile of gating currents from experimental observations. Meanwhile, the proposed model motivates new challenges in terms of mathematical modeling and computations.
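
    In generic notation (not necessarily the paper's), a fractional Nernst–Planck equation of the kind derived from a continuous-time random walk with a power-law waiting-time density typically takes the form

```latex
\frac{\partial c_i}{\partial t}
  = {}_{0}D_t^{1-\alpha}\,\nabla\cdot
    \left[ D_\alpha \left( \nabla c_i
      + \frac{z_i e}{k_B T}\, c_i \nabla\phi \right) \right],
  \qquad 0 < \alpha < 1,
```

    where ${}_{0}D_t^{1-\alpha}$ is a Riemann–Liouville fractional derivative, $c_i$ and $z_i$ are the concentration and valence of species $i$, and the electric potential $\phi$ is obtained from Poisson's equation $\nabla\cdot(\varepsilon\nabla\phi) = -\rho_f - \sum_i z_i e\, c_i$. The classical PNP system is recovered in the limit $\alpha \to 1$.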

  17. Human-computer interaction in multitask situations

    NASA Technical Reports Server (NTRS)

    Rouse, W. B.

    1977-01-01

    Human-computer interaction in multitask decisionmaking situations is considered, and it is proposed that humans and computers have overlapping responsibilities. Queueing theory is employed to model this dynamic approach to the allocation of responsibility between human and computer. Results of simulation experiments are used to illustrate the effects of several system variables including number of tasks, mean time between arrivals of action-evoking events, human-computer speed mismatch, probability of computer error, probability of human error, and the level of feedback between human and computer. Current experimental efforts are discussed and the practical issues involved in designing human-computer systems for multitask situations are considered.
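
    The queueing-theoretic allocation of responsibility described above can be sketched as a minimal discrete-event simulation. The routing rule and the arrival and service rates below are illustrative assumptions, not values from Rouse's experiments:

```python
import random

def simulate(n_events=10_000, arrival_rate=1.0,
             human_rate=0.8, computer_rate=1.5,
             p_assign_computer=0.5, seed=0):
    """Crude two-server queueing sketch: action-evoking events arrive as
    a Poisson stream and are routed to either the human or the computer,
    each modeled as a FIFO exponential server. Returns the mean
    time-in-system (waiting plus service) per server."""
    rng = random.Random(seed)
    t = 0.0
    free = {"human": 0.0, "computer": 0.0}   # time each server next idles
    rate = {"human": human_rate, "computer": computer_rate}
    totals = {"human": [0.0, 0], "computer": [0.0, 0]}
    for _ in range(n_events):
        t += rng.expovariate(arrival_rate)               # next event
        who = "computer" if rng.random() < p_assign_computer else "human"
        start = max(t, free[who])                        # wait if busy
        free[who] = start + rng.expovariate(rate[who])   # service done
        totals[who][0] += free[who] - t
        totals[who][1] += 1
    return {k: v[0] / v[1] for k, v in totals.items()}
```

    Varying the arrival rate, the routing probability, or the two service rates reproduces the kind of sensitivity study the abstract describes for number of tasks and human-computer speed mismatch.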

  18. A European Flagship Programme on Extreme Computing and Climate

    NASA Astrophysics Data System (ADS)

    Palmer, Tim

    2017-04-01

    In 2016, an outline proposal co-authored by a number of leading climate modelling scientists from around Europe for a (c. 1 billion euro) flagship project on exascale computing and high-resolution global climate modelling was sent to the EU via its Future and Emerging Technologies (FET) Flagship Programme. The project is formally entitled "A Flagship European Programme on Extreme Computing and Climate (EPECC)". In this talk I will outline the reasons why I believe such a project is needed and describe the current status of the project. I will leave time for some discussion.

  19. Application of technology developed for flight simulation at NASA. Langley Research Center

    NASA Technical Reports Server (NTRS)

    Cleveland, Jeff I., II

    1991-01-01

    In order to meet the stringent time-critical requirements for real-time man-in-the-loop flight simulation, computer processing operations including mathematical model computation and data input/output to the simulators must be deterministic and be completed in as short a time as possible. Personnel at NASA's Langley Research Center are currently developing the use of supercomputers for simulation mathematical model computation for real-time simulation. This, coupled with the use of an open systems software architecture, will advance the state-of-the-art in real-time flight simulation.

  20. Analysis of Compression Pad Cavities for the Orion Heatshield

    NASA Technical Reports Server (NTRS)

    Thompson, Richard A.; Lessard, Victor R.; Jentink, Thomas N.; Zoby, Ernest V.

    2009-01-01

    Current results of a program for analysis of the compression pad cavities on the Orion heatshield are reviewed. The program was supported by experimental tests, engineering modeling, and applied computations with an emphasis on the latter presented in this paper. The computational tools and approach are described along with calculated results for wind tunnel and flight conditions. Correlations of the computed results are shown which can produce a credible prediction of heating augmentation due to cavity disturbances. The models developed for use in preliminary design of the Orion heatshield are presented.

  1. An assessment and application of turbulence models for hypersonic flows

    NASA Technical Reports Server (NTRS)

    Coakley, T. J.; Viegas, J. R.; Huang, P. G.; Rubesin, M. W.

    1990-01-01

    The current approach to the accurate computation of complex high-speed flows is to solve the Reynolds-averaged Navier-Stokes equations using finite difference methods. An integral part of this approach is the development and application of mathematical turbulence models, which are necessary for predicting the aerothermodynamic loads on the vehicle and the performance of the propulsion plant. Computations of several high-speed turbulent flows using various turbulence models are described, and the models are evaluated by comparing computations with the results of experimental measurements. The cases investigated include flows over insulated and cooled flat plates with Mach numbers ranging from 2 to 8 and wall temperature ratios ranging from 0.2 to 1.0. The turbulence models investigated include zero-equation, two-equation, and Reynolds-stress transport models.

  2. Intervertebral reaction force prediction using an enhanced assembly of OpenSim models.

    PubMed

    Senteler, Marco; Weisse, Bernhard; Rothenfluh, Dominique A; Snedeker, Jess G

    2016-01-01

    OpenSim offers a valuable approach to investigating otherwise difficult to assess yet important biomechanical parameters such as joint reaction forces. Although the range of available models in the public repository is continually increasing, there currently exists no OpenSim model for the computation of intervertebral joint reactions during flexion and lifting tasks. The current work combines and improves elements of existing models to develop an enhanced model of the upper body and lumbar spine. Models of the upper body with extremities, neck and head were combined with an improved version of a lumbar spine from the model repository. Translational motion was enabled for each lumbar vertebra with six controllable degrees of freedom. Motion segment stiffness was implemented at lumbar levels and mass properties were assigned throughout the model. Moreover, body coordinate frames of the spine were modified to allow straightforward variation of sagittal alignment and to simplify interpretation of results. Evaluation of model predictions for levels L1-L2, L3-L4 and L4-L5 in various postures of forward flexion and moderate lifting (8 kg) revealed agreement within 10% of experimental studies and model-based computational analyses. However, in an extended posture or during lifting of heavier loads (20 kg), computed joint reactions differed substantially from in vivo measures reported using instrumented implants. We conclude that agreement between the model and available experimental data was good in view of limitations of both the model and the validation datasets. The presented model is useful in that it permits computation of realistic lumbar spine joint reaction forces during flexion and moderate lifting tasks. The model and corresponding documentation are now available in the online OpenSim repository.

  3. Modeling and Analysis of Power Processing Systems (MAPPS). Volume 1: Technical report

    NASA Technical Reports Server (NTRS)

    Lee, F. C.; Rahman, S.; Carter, R. A.; Wu, C. H.; Yu, Y.; Chang, R.

    1980-01-01

    Computer aided design and analysis techniques were applied to power processing equipment. Topics covered include: (1) discrete time domain analysis of switching regulators for performance analysis; (2) design optimization of power converters using augmented Lagrangian penalty function technique; (3) investigation of current-injected multiloop controlled switching regulators; and (4) application of optimization for Navy VSTOL energy power system. The generation of the mathematical models and the development and application of computer aided design techniques to solve the different mathematical models are discussed. Recommendations are made for future work that would enhance the application of the computer aided design techniques for power processing systems.

  4. Improving the Aircraft Design Process Using Web-Based Modeling and Simulation

    NASA Technical Reports Server (NTRS)

    Reed, John A.; Follen, Gregory J.; Afjeh, Abdollah A.; Follen, Gregory J. (Technical Monitor)

    2000-01-01

    Designing and developing new aircraft systems is time-consuming and expensive. Computational simulation is a promising means for reducing design cycle times, but requires a flexible software environment capable of integrating advanced multidisciplinary and multifidelity analysis methods, dynamically managing data across heterogeneous computing platforms, and distributing computationally complex tasks. Web-based simulation, with its emphasis on collaborative composition of simulation models, distributed heterogeneous execution, and dynamic multimedia documentation, has the potential to meet these requirements. This paper outlines the current aircraft design process, highlighting its problems and complexities, and presents our vision of an aircraft design process using Web-based modeling and simulation.

  5. Improving the Aircraft Design Process Using Web-based Modeling and Simulation

    NASA Technical Reports Server (NTRS)

    Reed, John A.; Follen, Gregory J.; Afjeh, Abdollah A.

    2003-01-01

    Designing and developing new aircraft systems is time-consuming and expensive. Computational simulation is a promising means for reducing design cycle times, but requires a flexible software environment capable of integrating advanced multidisciplinary and multifidelity analysis methods, dynamically managing data across heterogeneous computing platforms, and distributing computationally complex tasks. Web-based simulation, with its emphasis on collaborative composition of simulation models, distributed heterogeneous execution, and dynamic multimedia documentation, has the potential to meet these requirements. This paper outlines the current aircraft design process, highlighting its problems and complexities, and presents our vision of an aircraft design process using Web-based modeling and simulation.

  6. Experimental investigation and numerical modelling of positive corona discharge: ozone generation

    NASA Astrophysics Data System (ADS)

    Yanallah, K; Pontiga, F; Fernández-Rueda, A; Castellanos, A

    2009-03-01

    The spatial distribution of the species generated in a wire-cylinder positive corona discharge in pure oxygen has been computed using a plasma chemistry model that includes the most significant reactions between electrons, ions, atoms and molecules. The plasma chemistry model is included in the continuity equations of each species, which are coupled with Poisson's equation for the electric field and the energy conservation equation for the gas temperature. The current-voltage characteristic measured in the experiments has been used as an input data to the numerical simulation. The numerical model is able to reproduce the basic structure of the positive corona discharge and highlights the importance of Joule heating on ozone generation. The average ozone density has been computed as a function of current intensity and compared with the experimental measurements of ozone concentration determined by UV absorption spectroscopy.
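
    In schematic form (generic notation, not necessarily the paper's), the coupled system described above combines drift-diffusion continuity equations for each species $k$ with Poisson's equation and an energy balance:

```latex
\frac{\partial n_k}{\partial t}
  + \nabla\cdot\left( s_k\,\mu_k n_k \mathbf{E} - D_k \nabla n_k \right) = S_k,
\qquad
\nabla\cdot(\varepsilon_0 \mathbf{E}) = \sum_k s_k\, e\, n_k,
\qquad
\rho\, c_p \frac{\partial T}{\partial t}
  = \nabla\cdot(\kappa \nabla T) + \mathbf{j}\cdot\mathbf{E},
```

    where $s_k = \pm 1$ (or 0 for neutrals such as ozone) is the sign of the charge of species $k$, $S_k$ collects the plasma-chemistry source and sink terms, and the Joule heating term $\mathbf{j}\cdot\mathbf{E}$ is what couples the discharge to the gas temperature, and hence to ozone generation.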

  7. Design of an air traffic computer simulation system to support investigation of civil tiltrotor aircraft operations

    NASA Technical Reports Server (NTRS)

    Rogers, Ralph V.

    1992-01-01

    This research project addresses the need to provide an efficient and safe mechanism to investigate the effects and requirements of the tiltrotor aircraft's commercial operations on air transportation infrastructures, particularly air traffic control. The mechanism of choice is computer simulation. Unfortunately, the fundamental paradigms of the current air traffic control simulation models do not directly support the broad range of operational options and environments necessary to study tiltrotor operations. Modification of current air traffic simulation models to meet these requirements does not appear viable given the range and complexity of issues needing resolution. As a result, the investigation of systemic, infrastructure issues surrounding the effects of tiltrotor commercial operations requires new approaches to simulation modeling. These models should be based on perspectives and ideas closer to those associated with tiltrotor air traffic operations.

  8. The Mechanics of Embodiment: A Dialog on Embodiment and Computational Modeling

    PubMed Central

    Pezzulo, Giovanni; Barsalou, Lawrence W.; Cangelosi, Angelo; Fischer, Martin H.; McRae, Ken; Spivey, Michael J.

    2011-01-01

    Embodied theories are increasingly challenging traditional views of cognition by arguing that conceptual representations that constitute our knowledge are grounded in sensory and motor experiences, and processed at this sensorimotor level, rather than being represented and processed abstractly in an amodal conceptual system. Given the established empirical foundation, and the relatively underspecified theories to date, many researchers are extremely interested in embodied cognition but are clamoring for more mechanistic implementations. What is needed at this stage is a push toward explicit computational models that implement sensorimotor grounding as intrinsic to cognitive processes. In this article, six authors from varying backgrounds and approaches address issues concerning the construction of embodied computational models, and illustrate what they view as the critical current and next steps toward mechanistic theories of embodiment. The first part has the form of a dialog between two fictional characters: Ernest, the “experimenter,” and Mary, the “computational modeler.” The dialog consists of an interactive sequence of questions, requests for clarification, challenges, and (tentative) answers, and touches the most important aspects of grounded theories that should inform computational modeling and, conversely, the impact that computational modeling could have on embodied theories. The second part of the article discusses the most important open challenges for embodied computational modeling. PMID:21713184

  9. Impact of remote sensing upon the planning, management and development of water resources. Summary of computers and computer growth trends for hydrologic modeling and the input of ERTS image data processing load

    NASA Technical Reports Server (NTRS)

    Castruccio, P. A.; Loats, H. L., Jr.

    1975-01-01

    An analysis of current computer usage by major water resources users was made to determine the trends of usage and costs for the principal hydrologic users/models. The laws and empirical relationships governing the growth of the data processing loads were described and applied to project the future data loads. Data loads for ERTS CCT image processing were computed and projected through the 1985 era. The analysis showns significant impact due to the utilization and processing of ERTS CCT's data.

  10. Numerical simulation of dynamics of brushless dc motors for aerospace and other applications. Volume 2: User's guide to computer EMA model

    NASA Technical Reports Server (NTRS)

    Demerdash, N. A. O.; Nehl, T. W.

    1979-01-01

    A description and user's guide of the computer program developed to simulate the dynamics of an electromechanical actuator for aerospace applications are presented. The effects of the stator phase currents on the permanent magnets of the rotor are examined. The voltage and current waveforms present in the power conditioner network during the motoring, regenerative braking, and plugging modes of operation are presented and discussed.

  11. Model falsifiability and climate slow modes

    NASA Astrophysics Data System (ADS)

    Essex, Christopher; Tsonis, Anastasios A.

    2018-07-01

    The most advanced climate models are actually modified meteorological models attempting to capture climate in meteorological terms. This seems a straightforward matter of raw computing power applied to large enough sources of current data. Some believe that models have succeeded in capturing climate in this manner. But have they? This paper outlines difficulties with this picture that derive from the finite representation of our computers and from the fundamental unavailability of future data. It suggests that alternative windows onto multi-decadal timescales are necessary in order to overcome the issues raised for practical problems of prediction.

  12. Development of a Strain Rate Dependent Long Bone Injury Criterion for Use with the ATB Model.

    DTIC Science & Technology

    1982-01-12

    testing of this computer model and has applied it to the analysis of the response of pilots to ejection from jet aircraft. During these events the body is...acceleration profiles, restraint systems and other variables as to their injury-preventing potential. Currently these assessments must be made, in a very...fractures, it is of particular interest to estimate the likelihood of long bone fracture. (It should be noted that a separate computer model, the

  13. A methodology for achieving high-speed rates for artificial conductance injection in electrically excitable biological cells.

    PubMed

    Butera, R J; Wilson, C G; Delnegro, C A; Smith, J C

    2001-12-01

    We present a novel approach to implementing the dynamic-clamp protocol (Sharp et al., 1993), commonly used in neurophysiology and cardiac electrophysiology experiments. Our approach is based on real-time extensions to the Linux operating system. Conventional PC-based approaches have typically utilized single-cycle computational rates of 10 kHz or slower. In this paper, we demonstrate reliable cycle-to-cycle rates as fast as 50 kHz. Our system, which we call model reference current injection (MRCI; pronounced "merci"), is also capable of episodic logging of internal state variables and interactive manipulation of model parameters. The limiting factor in achieving high speeds was not processor speed or model complexity, but cycle jitter inherent in the CPU/motherboard performance. We demonstrate these high speeds and flexibility with two examples: 1) adding action-potential ionic currents to a mammalian neuron under whole-cell patch-clamp and 2) altering a cell's intrinsic dynamics via MRCI while simultaneously coupling it via artificial synapses to an internal computational model cell. These higher rates greatly extend the applicability of this technique to the study of fast electrophysiological currents, such as fast ionic currents and fast excitatory/inhibitory synapses.
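
    The core of any dynamic-clamp cycle, including MRCI-style systems, is read-compute-inject: sample the membrane potential, advance the artificial-conductance model one step, and command the corresponding current. The gating kinetics, units, and parameter values below are illustrative assumptions, not those of the published system:

```python
import math

def m_inf(v):
    """Steady-state activation: an illustrative Boltzmann curve (v in mV)."""
    return 1.0 / (1.0 + math.exp(-(v + 40.0) / 10.0))

def dynamic_clamp_step(v_measured, m, dt, g_max=10.0, e_rev=-77.0, tau=5.0):
    """One clamp cycle: advance the gating variable m by one Euler step,
    then return the current to inject for the artificial conductance
    g = g_max * m (nS * mV -> pA; dt and tau in ms)."""
    m = m + dt * (m_inf(v_measured) - m) / tau
    i_inject = g_max * m * (v_measured - e_rev)
    return i_inject, m

# At a 50 kHz cycle rate the available step is dt = 0.02 ms
i, m = dynamic_clamp_step(v_measured=-40.0, m=0.0, dt=0.02)
```

    The cycle-to-cycle rates discussed in the abstract bound how small dt can be; the faster the loop, the faster the artificial currents that can be rendered without distortion.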

  14. Tertiary structure-based analysis of microRNA–target interactions

    PubMed Central

    Gan, Hin Hark; Gunsalus, Kristin C.

    2013-01-01

    Current computational analysis of microRNA interactions is based largely on primary and secondary structure analysis. Computationally efficient tertiary structure-based methods are needed to enable more realistic modeling of the molecular interactions underlying miRNA-mediated translational repression. We incorporate algorithms for predicting duplex RNA structures, ionic strength effects, duplex entropy and free energy, and docking of duplex–Argonaute protein complexes into a pipeline to model and predict miRNA–target duplex binding energies. To ensure modeling accuracy and computational efficiency, we use an all-atom description of RNA and a continuum description of ionic interactions using the Poisson–Boltzmann equation. Our method predicts the conformations of two constructs of Caenorhabditis elegans let-7 miRNA–target duplexes to an accuracy of ∼3.8 Å root mean square distance of their NMR structures. We also show that the computed duplex formation enthalpies, entropies, and free energies for eight miRNA–target duplexes agree with titration calorimetry data. Analysis of duplex–Argonaute docking shows that structural distortions arising from single-base-pair mismatches in the seed region influence the activity of the complex by destabilizing both duplex hybridization and its association with Argonaute. Collectively, these results demonstrate that tertiary structure-based modeling of miRNA interactions can reveal structural mechanisms not accessible with current secondary structure-based methods. PMID:23417009
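
    The continuum treatment of ionic interactions referred to above rests on the nonlinear Poisson–Boltzmann equation, shown here in its standard form (generic notation):

```latex
\nabla\cdot\left[ \varepsilon(\mathbf{r})\, \nabla\phi(\mathbf{r}) \right]
  = -\rho_{\text{solute}}(\mathbf{r})
    - \sum_i q_i\, c_i^{\infty}\,
      \exp\!\left( -\frac{q_i\, \phi(\mathbf{r})}{k_B T} \right),
```

    where $\varepsilon(\mathbf{r})$ switches between the low-dielectric solute interior and the high-dielectric solvent, $\rho_{\text{solute}}$ is the fixed charge distribution of the all-atom duplex, and the sum runs over mobile ion species with bulk concentrations $c_i^{\infty}$. Ionic-strength effects enter through the $c_i^{\infty}$, consistent with the abstract's continuum description of ionic interactions.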

  15. How Do Tides and Tsunamis Interact in a Highly Energetic Channel? The Case of Canal Chacao, Chile

    NASA Astrophysics Data System (ADS)

    Winckler, Patricio; Sepúlveda, Ignacio; Aron, Felipe; Contreras-López, Manuel

    2017-12-01

    This study aims at understanding the role of tidal level, speed, and direction in tsunami propagation in highly energetic tidal channels. The main goal is to comprehend whether tide-tsunami interactions enhance/reduce elevation, currents speeds, and arrival times, when compared to pure tsunami models and to simulations in which tides and tsunamis are linearly superimposed. We designed various numerical experiments to compute the tsunami propagation along Canal Chacao, a highly energetic channel in the Chilean Patagonia lying on a subduction margin prone to megathrust earthquakes. Three modeling approaches were implemented under the same seismic scenario: a tsunami model with a constant tide level, a series of six composite models in which independent tide and tsunami simulations are linearly superimposed, and a series of six tide-tsunami nonlinear interaction models (full models). We found that hydrodynamic patterns differ significantly among approaches, with the composite and full models being sensitive to both the tidal phase at which the tsunami is triggered and the local depth of the channel. When compared to full models, composite models adequately predicted the maximum surface elevation, but largely overestimated currents. The amplitude and arrival time of the tsunami-leading wave computed with the full model was found to be strongly dependent on the direction of the tidal current and less responsive to the tide level and the tidal current speed. These outcomes emphasize the importance of addressing more carefully the interactions of tides and tsunamis on hazard assessment studies.

  16. The possibility of coexistence and co-development in language competition: ecology-society computational model and simulation.

    PubMed

    Yun, Jian; Shang, Song-Chao; Wei, Xiao-Dan; Liu, Shuang; Li, Zhi-Jie

    2016-01-01

    Language is characterized by both ecological properties and social properties, and competition is the basic form of language evolution. The rise and decline of one language is a result of competition between languages. Moreover, this rise and decline directly influences the diversity of human culture. Mathematical and computational modeling of language competition has been a popular topic in the fields of linguistics, mathematics, computer science, ecology, and other disciplines. Currently, there are several problems in the research on language competition modeling. First, comprehensive mathematical analysis is absent in most studies of language competition models. Next, most language competition models are based on the assumption that one language in the model is stronger than the other. These studies tend to ignore cases where there is a balance of power in the competition. The competition between two well-matched languages is more practical, because it can facilitate the co-development of two languages. A third issue is that many studies reach an evolution result in which the weaker language inevitably goes extinct. From the integrated point of view of ecology and sociology, this paper improves the Lotka-Volterra model and basic reaction-diffusion model to propose an "ecology-society" computational model for describing language competition. Furthermore, a strict and comprehensive mathematical analysis was made for the stability of the equilibria. Two languages in competition may be either well-matched or greatly different in strength, which was reflected in the experimental design. The results revealed that language coexistence, and even co-development, are likely to occur during language competition.
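
    The coexistence outcome discussed above can already be seen in the classical competitive Lotka-Volterra core that the paper builds on. The sketch below is only that core with illustrative coefficients (it omits the paper's reaction-diffusion and social terms); when both interaction coefficients are below 1, the two "languages" settle at a coexistence equilibrium rather than one driving the other extinct.

```python
def simulate_competition(x0=0.1, y0=0.1, r1=0.5, r2=0.5,
                         a12=0.4, a21=0.6, dt=0.01, steps=40_000):
    """Forward-Euler integration of competitive Lotka-Volterra dynamics,
    with each language's speaker density scaled by its carrying capacity:
        dx/dt = r1 * x * (1 - x - a12 * y)
        dy/dt = r2 * y * (1 - y - a21 * x)
    With a12 < 1 and a21 < 1 the system converges to the coexistence
    equilibrium x* = (1 - a12) / (1 - a12 * a21),
                y* = (1 - a21) / (1 - a12 * a21)."""
    x, y = x0, y0
    for _ in range(steps):
        dx = r1 * x * (1.0 - x - a12 * y)
        dy = r2 * y * (1.0 - y - a21 * x)
        x += dt * dx
        y += dt * dy
    return x, y
```

    Setting either interaction coefficient above 1 instead reproduces the extinction outcome that the paper criticizes as the near-universal result of earlier models.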

  17. PROTO-PLASM: parallel language for adaptive and scalable modelling of biosystems.

    PubMed

    Bajaj, Chandrajit; DiCarlo, Antonio; Paoluzzi, Alberto

    2008-09-13

    This paper discusses the design goals and the first developments of PROTO-PLASM, a novel computational environment to produce libraries of executable, combinable and customizable computer models of natural and synthetic biosystems, aiming to provide a supporting framework for predictive understanding of structure and behaviour through multiscale geometric modelling and multiphysics simulations. Admittedly, the PROTO-PLASM platform is still in its infancy. Its computational framework--language, model library, integrated development environment and parallel engine--intends to provide patient-specific computational modelling and simulation of organs and biosystems, exploiting novel functionalities resulting from the symbolic combination of parametrized models of parts at various scales. PROTO-PLASM may define the model equations, but it is currently focused on the symbolic description of model geometry and on the parallel support of simulations. Conversely, CellML and SBML could be viewed as defining the behavioural functions (the model equations) to be used within a PROTO-PLASM program. Here we exemplify the basic functionalities of PROTO-PLASM by constructing a schematic heart model. We also discuss multiscale issues with reference to the geometric and physical modelling of neuromuscular junctions.

  18. Proto-Plasm: parallel language for adaptive and scalable modelling of biosystems

    PubMed Central

    Bajaj, Chandrajit; DiCarlo, Antonio; Paoluzzi, Alberto

    2008-01-01

    This paper discusses the design goals and the first developments of Proto-Plasm, a novel computational environment to produce libraries of executable, combinable and customizable computer models of natural and synthetic biosystems, aiming to provide a supporting framework for predictive understanding of structure and behaviour through multiscale geometric modelling and multiphysics simulations. Admittedly, the Proto-Plasm platform is still in its infancy. Its computational framework—language, model library, integrated development environment and parallel engine—intends to provide patient-specific computational modelling and simulation of organs and biosystems, exploiting novel functionalities resulting from the symbolic combination of parametrized models of parts at various scales. Proto-Plasm may define the model equations, but it is currently focused on the symbolic description of model geometry and on the parallel support of simulations. Conversely, CellML and SBML could be viewed as defining the behavioural functions (the model equations) to be used within a Proto-Plasm program. Here we exemplify the basic functionalities of Proto-Plasm by constructing a schematic heart model. We also discuss multiscale issues with reference to the geometric and physical modelling of neuromuscular junctions. PMID:18559320

  19. Current Grid Generation Strategies and Future Requirements in Hypersonic Vehicle Design, Analysis and Testing

    NASA Technical Reports Server (NTRS)

    Papadopoulos, Periklis; Venkatapathy, Ethiraj; Prabhu, Dinesh; Loomis, Mark P.; Olynick, Dave; Arnold, James O. (Technical Monitor)

    1998-01-01

    Recent advances in computational power enable computational fluid dynamics modeling of increasingly complex configurations. A review of grid generation methodologies implemented in support of the computational work performed for the X-38 and X-33 is presented. In strategizing topological constructs and blocking structures, the factors considered are the geometric configuration, optimal grid size, numerical algorithms, accuracy requirements, physics of the problem at hand, computational expense, and the available computer hardware. Also addressed are grid refinement strategies, the effects of wall spacing, and convergence. The significance of the grid is demonstrated through a comparison of computational and experimental results for the aeroheating environment experienced by the X-38 vehicle. Special grid generation strategies to model control surface deflections and material mapping are also addressed.

  20. Reciprocity in computer-human interaction: source-based, norm-based, and affect-based explanations.

    PubMed

    Lee, Seungcheol Austin; Liang, Yuhua Jake

    2015-04-01

    Individuals often apply social rules when they interact with computers, and this is known as the Computers Are Social Actors (CASA) effect. Following previous work, one approach to understanding the mechanism responsible for CASA is to utilize computer agents and have the agents attempt to gain human compliance (e.g., completing a pattern recognition task). The current study focuses on three key factors frequently cited to influence traditional notions of compliance: evaluations toward the source (competence and warmth), normative influence (reciprocity), and affective influence (mood). Structural equation modeling assessed the effects of these factors on human compliance with a computer agent's request. The final model shows that norm-based influence (reciprocity) increased the likelihood of compliance, while evaluations toward the computer agent did not significantly influence compliance.

  1. Emission of neutron–proton and proton–proton pairs in neutrino scattering

    DOE PAGES

    Ruiz Simo, I.; Amaro, J. E.; Barbaro, M. B.; ...

    2016-11-10

    For this paper, we use a recently developed model of relativistic meson-exchange currents to compute the neutron–proton and proton–proton yields in (νμ, μ-) scattering from 12C in the 2p–2h channel. We compute the response functions and cross sections with the relativistic Fermi gas model for different kinematics from intermediate to high momentum transfers. We find a large contribution of neutron–proton configurations in the initial state, as compared to proton–proton pairs. In the case of charge-changing neutrino scattering, the 2p–2h cross section for proton–proton emission (i.e., np in the initial state) is much larger than for neutron–proton emission (i.e., two neutrons in the initial state) by a (ω, q)-dependent factor. The different emission probabilities of distinct species of nucleon pairs are produced in our model only by meson-exchange currents, mainly by the Δ isobar current. We also analyze other effects including exchange contributions and the effect of the axial and vector currents.

  2. Electrode Position and Current Amplitude Modulate Impulsivity after Subthalamic Stimulation in Parkinson's Disease—A Computational Study

    PubMed Central

    Mandali, Alekhya; Chakravarthy, V. Srinivasa; Rajan, Roopa; Sarma, Sankara; Kishore, Asha

    2016-01-01

    Background: Subthalamic Nucleus Deep Brain Stimulation (STN-DBS) is highly effective in alleviating motor symptoms of Parkinson's disease (PD) which are not optimally controlled by dopamine replacement therapy. Clinical studies and reports suggest that STN-DBS may result in increased impulsivity and de novo impulse control disorders (ICD). Objective/Hypothesis: We aimed to compare performance on a decision making task, the Iowa Gambling Task (IGT), in healthy conditions (HC), untreated and medically-treated PD conditions with and without STN stimulation. We hypothesized that the position of electrode and stimulation current modulate impulsivity after STN-DBS. Methods: We built a computational spiking network model of basal ganglia (BG) and compared the model's STN output with STN activity in PD. Reinforcement learning methodology was applied to simulate IGT performance under various conditions of dopaminergic and STN stimulation where IGT total and bin scores were compared among various conditions. Results: The computational model reproduced neural activity observed in normal and PD conditions. Untreated and medically-treated PD conditions had lower total IGT scores (higher impulsivity) compared to HC (P < 0.0001). The electrode position that happens to selectively stimulate the part of the STN corresponding to an advantageous panel on IGT resulted in de-selection of that panel and worsening of performance (P < 0.0001). Supratherapeutic stimulation amplitudes also worsened IGT performance (P < 0.001). Conclusion(s): In our computational model, STN stimulation led to impulsive decision making in IGT in PD condition. Electrode position and stimulation current influenced impulsivity which may explain the variable effects of STN-DBS reported in patients. PMID:27965590

  3. Computational Approach for Improving Three-Dimensional Sub-Surface Earth Structure for Regional Earthquake Hazard Simulations in the San Francisco Bay Area

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rodgers, A. J.

    In our Exascale Computing Project (ECP) we seek to simulate earthquake ground motions at much higher frequency than is currently possible. Previous simulations in the SFBA were limited to 0.5-1 Hz or lower (Aagaard et al. 2008, 2010), while we have recently simulated the response to 5 Hz. In order to improve confidence in simulated ground motions, we must accurately represent the three-dimensional (3D) sub-surface material properties that govern seismic wave propagation over a broad region. We are currently focusing on the San Francisco Bay Area (SFBA) with a Cartesian domain of size 120 x 80 x 35 km, but this area will be expanded to cover a larger domain. Currently, the United States Geological Survey (USGS) has a 3D model of the SFBA for seismic simulations. However, this model suffers from two serious shortcomings relative to our application: 1) it does not fit most of the available low-frequency (< 1 Hz) seismic waveforms from moderate (magnitude M 3.5-5.0) earthquakes; and 2) it is represented with much lower resolution than necessary for the high-frequency simulations (> 5 Hz) we seek to perform. The current model will serve as a starting model for full waveform tomography based on 3D sensitivity kernels. This report serves as the deliverable for our ECP FY2017 Quarter 4 milestone to FY 2018 “Computational approach to developing model updates”. We summarize the current state of 3D seismic simulations in the SFBA and demonstrate the performance of the USGS 3D model for a few selected paths. We show the available open-source waveform data sets for model updates, based on moderate earthquakes recorded in the region. We present a plan for improving the 3D model utilizing the available data and further development of our SW4 application. We project how the model could be improved and present options for further improvements focused on the shallow geotechnical layers using dense passive recordings of ambient and human-induced noise.

  4. Current status of computational methods for transonic unsteady aerodynamics and aeroelastic applications

    NASA Technical Reports Server (NTRS)

    Edwards, John W.; Malone, John B.

    1992-01-01

    The current status of computational methods for unsteady aerodynamics and aeroelasticity is reviewed. The key features of challenging aeroelastic applications are discussed in terms of the flowfield state: low-angle high speed flows and high-angle vortex-dominated flows. The critical role played by viscous effects in determining aeroelastic stability for conditions of incipient flow separation is stressed. The need for a variety of flow modeling tools, from linear formulations to implementations of the Navier-Stokes equations, is emphasized. Estimates of computer run times for flutter calculations using several computational methods are given. Applications of these methods for unsteady aerodynamic and transonic flutter calculations for airfoils, wings, and configurations are summarized. Finally, recommendations are made concerning future research directions.

  5. A Distributed Web-based Solution for Ionospheric Model Real-time Management, Monitoring, and Short-term Prediction

    NASA Astrophysics Data System (ADS)

    Kulchitsky, A.; Maurits, S.; Watkins, B.

    2006-12-01

    With the widespread availability of the Internet today, many people can monitor various scientific research activities. It is important to accommodate this interest by providing on-line access to dynamic and illustrative Web resources that demonstrate different aspects of ongoing research. It is especially important to explain these research activities to high school and undergraduate students, thereby providing more information for decisions concerning their future studies. Such Web resources are also important for clarifying scientific research for the general public, in order to achieve better awareness of research progress in various fields. Particularly rewarding is dissemination of information about ongoing projects within universities and research centers to their local communities. The benefits of this type of scientific outreach are mutual, since development of Web-based automatic systems is a prerequisite for many research projects targeting real-time monitoring and/or modeling of natural conditions. Continuous operation of such systems also provides ongoing research opportunities for statistically massive validation of the models. We have developed a Web-based system to run the University of Alaska Fairbanks Polar Ionospheric Model in real time. This model makes use of networking and computational resources at the Arctic Region Supercomputing Center. The system was designed to be portable among various operating systems and computational resources. Its components can be installed across different computers, separating Web servers and computational engines. The core of the system is a Real-Time Management module (RMM) written in Python, which coordinates remote input data transfers, the ionospheric model runs, MySQL database filling, and the PHP scripts for Web-page preparation. The RMM downloads current geophysical inputs as soon as they become available at different on-line depositories. 
This information is processed to provide inputs for the next ionospheric model time step and then stored in a MySQL database as the first part of the time-specific record. The RMM then performs synchronization of the input times with the current model time, prepares a decision on initialization of the next model time step, and monitors its execution. As soon as the model completes computations for the next time step, the RMM visualizes the current model output into various short-term (about 1-2 hours) forecasting products and compares prior results with available ionospheric measurements. The RMM places prepared images into the MySQL database, which can be located on a different computer node, and then proceeds to the next time interval, continuing the time loop. The upper-level interface of this real-time system is a PHP-based Web site (http://www.arsc.edu/SpaceWeather/new). This site provides general information about the Earth's polar and adjacent mid-latitude ionosphere, allows for monitoring of current developments and short-term forecasts, and facilitates access to the comparisons archive stored in the database.
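    The acquire-step-store-visualize cycle described in this record can be sketched as a minimal polling loop. Every function body below is a hypothetical placeholder (no real data feeds, ionospheric model, or MySQL access), intended only to show the shape of such a real-time management module:

```python
import time

# Minimal skeleton of a real-time management loop like the RMM described
# above: fetch inputs, advance the model one step, store the result.
# All function bodies are hypothetical placeholders, not ARSC code.

def fetch_inputs(t):
    """Stand-in for downloading the current geophysical inputs."""
    return {"time": t, "kp_index": 2.0}   # hypothetical input values

def model_step(state, inputs):
    """Stand-in for one ionospheric model time step."""
    return {"time": inputs["time"], "value": state.get("value", 0.0) + 1.0}

def run(steps=3, dt=0.0):
    state, archive = {}, []               # the archive plays the MySQL role
    for t in range(steps):
        inputs = fetch_inputs(t)          # 1. acquire inputs for this step
        state = model_step(state, inputs) # 2. advance the model one step
        archive.append(state)             # 3. store results for the Web tier
        time.sleep(dt)                    # 4. wait for the next model time
    return archive

archive = run()
```

    A production loop would replace the placeholders with real downloads, model launches, and database writes, and would add the input-time synchronization and failure monitoring the record describes.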

  6. Preliminary skyshine calculations for the Poloidal Diverter Tokamak Experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nigg, D.W.; Wheeler, F.J.

    1981-01-01

    The Poloidal Diverter Experiment (PDX) facility at Princeton University is the first operating tokamak to require substantial radiation shielding. A calculational model has been developed to estimate the radiation dose in the PDX control room and at the site boundary due to the skyshine effect. An efficient one-dimensional method is used to compute the neutron and capture-gamma leakage currents at the top surface of the PDX roof shield. This method employs an S_n calculation in slab geometry and, for the PDX, is superior to spherical models found in the literature. If certain conditions are met, the slab model provides the exact probability of leakage out the top surface of the roof for fusion source neutrons and for capture gamma rays produced in the PDX floor and roof shield. The model also provides the correct neutron and capture-gamma leakage current spectra and angular distributions, averaged over the top roof shield surface. For the PDX, this method is nearly as accurate as multidimensional techniques for computing the roof leakage and is much less costly. The actual neutron skyshine dose is computed using a Monte Carlo model with the neutron source at the roof surface obtained from the slab S_n calculation. The capture-gamma dose is computed using a simple point-kernel single-scatter method.
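    The point-kernel idea underlying the gamma dose estimate can be illustrated with the uncollided-flux kernel alone (the record's method adds a single-scatter term on top of this). The source strength and air attenuation coefficient below are illustrative assumptions, not PDX data:

```python
import math

def point_kernel(source_strength, mu, r):
    """Uncollided point-kernel flux: S * exp(-mu * r) / (4 * pi * r^2)."""
    return source_strength * math.exp(-mu * r) / (4.0 * math.pi * r * r)

# Illustrative numbers (assumptions): a 1e12 photon/s point source,
# mu ~ 7e-5 per cm for ~1 MeV photons in air, detector 50 m (5000 cm) away.
flux = point_kernel(1.0e12, 7.0e-5, 5000.0)   # photons per cm^2 per s
```

    A dose rate follows by folding this flux with a flux-to-dose response function; the record's single-scatter term and Monte Carlo skyshine transport then account for radiation that reaches the detector indirectly.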

  7. General Information: Chapman Conference on Magnetospheric Current Systems

    NASA Technical Reports Server (NTRS)

    Spicer, Daniel S.; Curtis, Steven

    1999-01-01

    The goal of this conference is to address recent achievements of observational, computational, theoretical, and modeling studies, and to foster communication among people working with different approaches. Electric current systems play an important role in the energetics of the magnetosphere. This conference will target outstanding issues related to magnetospheric current systems, placing its emphasis on interregional processes and driving mechanisms of current systems.

  8. Finite Element Methods for real-time Haptic Feedback of Soft-Tissue Models in Virtual Reality Simulators

    NASA Technical Reports Server (NTRS)

    Frank, Andreas O.; Twombly, I. Alexander; Barth, Timothy J.; Smith, Jeffrey D.; Dalton, Bonnie P. (Technical Monitor)

    2001-01-01

    We have applied the linear elastic finite element method to compute haptic force feedback and domain deformations of soft tissue models for use in virtual reality simulators. Our results show that, for virtual object models of high-resolution 3D data (>10,000 nodes), haptic real time computations (>500 Hz) are not currently possible using traditional methods. Current research efforts are focused in the following areas: 1) efficient implementation of fully adaptive multi-resolution methods and 2) multi-resolution methods with specialized basis functions to capture the singularity at the haptic interface (point loading). To achieve real time computations, we propose parallel processing of a Jacobi preconditioned conjugate gradient method applied to a reduced system of equations resulting from surface domain decomposition. This can effectively be achieved using reconfigurable computing systems such as field programmable gate arrays (FPGA), thereby providing a flexible solution that allows for new FPGA implementations as improved algorithms become available. The resulting soft tissue simulation system would meet NASA Virtual Glovebox requirements and, at the same time, provide a generalized simulation engine for any immersive environment application, such as biomedical/surgical procedures or interactive scientific applications.
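    The Jacobi-preconditioned conjugate gradient scheme proposed in this record can be sketched in a few lines. The small dense system below is a hypothetical stand-in for the reduced surface-domain equations, not a finite element model or an FPGA implementation:

```python
# Hedged sketch of the Jacobi-preconditioned conjugate gradient method:
# solve A x = b for symmetric positive-definite A, preconditioned by the
# diagonal of A. Pure Python, purely illustrative.

def jacobi_pcg(A, b, tol=1e-10, max_iter=200):
    n = len(b)
    x = [0.0] * n
    r = b[:]                                  # residual r = b - A x (x = 0)
    z = [r[i] / A[i][i] for i in range(n)]    # apply M^-1 (Jacobi step)
    p = z[:]
    rz = sum(r[i] * z[i] for i in range(n))
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rz / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        if sum(ri * ri for ri in r) ** 0.5 < tol:
            break
        z = [r[i] / A[i][i] for i in range(n)]
        rz_new = sum(r[i] * z[i] for i in range(n))
        p = [z[i] + (rz_new / rz) * p[i] for i in range(n)]
        rz = rz_new
    return x

# Small SPD test system (diagonally dominant, like an assembled stiffness matrix)
A = [[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]]
b = [1.0, 2.0, 3.0]
x = jacobi_pcg(A, b)
```

    In the parallel setting the record envisions, the matrix-vector product and the diagonal preconditioner apply independently per row, which is what makes the method attractive for hardware such as FPGAs.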

  9. High sensitivity of spontaneous spike frequency to sodium leak current in a Lymnaea pacemaker neuron.

    PubMed

    Lu, T Z; Kostelecki, W; Sun, C L F; Dong, N; Pérez Velázquez, J L; Feng, Z-P

    2016-12-01

    The spontaneous rhythmic firing of action potentials in pacemaker neurons depends on the biophysical properties of voltage-gated ion channels and background leak currents. The background leak current includes a large K+ and a small Na+ component. We previously reported that a Na+ leak current via U-type channels is required to generate spontaneous action potential firing in the identified respiratory pacemaker neuron, RPeD1, in the freshwater pond snail Lymnaea stagnalis. We further investigated the functional significance of the background Na+ current in rhythmic spiking of RPeD1 neurons. Whole-cell patch-clamp recording and computational modeling approaches were carried out in isolated RPeD1 neurons. The whole-cell currents of the major ion channel components in RPeD1 neurons were characterized, and a conductance-based computational model of the rhythmic pacemaker activity was simulated with the experimental measurements. We found that the spiking rate is more sensitive to changes in the Na+ leak current than to the K+ leak current, suggesting a robust function of the Na+ leak current in regulating spontaneous neuronal firing activity. Our study provides new insight into the current understanding of the role of the Na+ leak current in the intrinsic properties of pacemaker neurons. © 2016 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
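    The reported asymmetry between the two leak components can be illustrated with a toy leak-driven integrate-and-fire cell. This is not the authors' conductance-based RPeD1 model; every parameter below (reversal potentials, conductances, threshold, reset) is hypothetical:

```python
import math

def firing_rate(g_na, g_k, e_na=55.0, e_k=-90.0, c=1.0,
                v_th=-40.0, v_reset=-60.0):
    """Interspike rate of a toy leak-driven integrate-and-fire pacemaker.

    Between spikes, V relaxes exponentially toward the combined leak
    reversal v_inf = (g_k*e_k + g_na*e_na)/(g_k + g_na) with time constant
    tau = c/(g_k + g_na); a spike fires when V crosses v_th.
    """
    g = g_k + g_na
    v_inf = (g_k * e_k + g_na * e_na) / g
    if v_inf <= v_th:
        return 0.0                      # leak balance never reaches threshold
    tau = c / g
    isi = tau * math.log((v_inf - v_reset) / (v_inf - v_th))
    return 1.0 / isi

base = firing_rate(0.6, 1.0)
d_na = abs(firing_rate(0.6 * 1.05, 1.0) - base)   # +5% Na+ leak conductance
d_k = abs(firing_rate(0.6, 1.0 * 1.05) - base)    # +5% K+ leak conductance
```

    With these hypothetical parameters the same 5% perturbation shifts the rate more when applied to the depolarizing Na+ leak, because the small Na+ component sets how far the membrane settles above threshold, echoing (in caricature) the sensitivity the authors report.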

  10. A multimedia adult literacy program: Combining NASA technology, instructional design theory, and authentic literacy concepts

    NASA Technical Reports Server (NTRS)

    Willis, Jerry W.

    1993-01-01

    For a number of years, the Software Technology Branch of the Information Systems Directorate has been involved in the application of cutting edge hardware and software technologies to instructional tasks related to NASA projects. The branch has developed intelligent computer aided training shells, instructional applications of virtual reality and multimedia, and computer-based instructional packages that use fuzzy logic for both instructional and diagnostic decision making. One outcome of the work on space-related technology-supported instruction has been the creation of a significant pool of human talent in the branch with current expertise on the cutting edges of instructional technologies. When the human talent is combined with advanced technologies for graphics, sound, video, CD-ROM, and high speed computing, the result is a powerful research and development group that both contributes to the applied foundations of instructional technology and creates effective instructional packages that take advantage of a range of advanced technologies. Several branch projects are currently underway that combine NASA-developed expertise to significant instructional problems in public education. The branch, for example, has developed intelligent computer aided software to help high school students learn physics and staff are currently working on a project to produce educational software for young children with language deficits. This report deals with another project, the adult literacy tutor. Unfortunately, while there are a number of computer-based instructional packages available for adult literacy instruction, most of them are based on the same instructional models that failed these students when they were in school. 
The teacher-centered, discrete-skill, drill-oriented instructional strategies that form the foundation of most computer-based literacy packages currently on the market may not be the most effective or desirable way to use computer technology in literacy programs, even when they are supported by color computer graphics and animation. This project is developing a series of instructional packages based on a different instructional model: authentic instruction. The instructional development model used to create these packages is also different. Instead of using the traditional five-stage linear, sequential model based on behavioral learning theory, the project uses the recursive, reflective design and development model (R2D2), which is based on cognitive learning theory, particularly the social constructivism of Vygotsky, and an epistemology based on critical theory. Using these alternative instructional and instructional development theories, the result of the summer faculty fellowship is LiteraCity, a multimedia adult literacy instructional package that is a simulation of finding and applying for a job. The program, which is about 120 megabytes, is distributed on CD-ROM.

  11. Computer-Aided Drug Design in Epigenetics

    NASA Astrophysics Data System (ADS)

    Lu, Wenchao; Zhang, Rukang; Jiang, Hao; Zhang, Huimin; Luo, Cheng

    2018-03-01

    Epigenetic dysfunction has been widely implicated in several diseases, especially cancers, highlighting the therapeutic potential of chemical interventions in this field. With the rapid development of computational methodologies and high-performance computational resources, computer-aided drug design has emerged as a promising strategy to speed up epigenetic drug discovery. Herein, we provide a brief overview of the major computational methods reported in the literature, including druggability prediction, virtual screening, homology modeling, scaffold hopping, pharmacophore modeling, molecular dynamics simulations, quantum chemistry calculations, and 3D quantitative structure-activity relationship modeling, that have been successfully applied in the design and discovery of epi-drugs and epi-probes. Finally, we discuss the major limitations of current virtual drug design strategies in epigenetic drug discovery and future directions in this field.

  12. Computer-Aided Drug Design in Epigenetics

    PubMed Central

    Lu, Wenchao; Zhang, Rukang; Jiang, Hao; Zhang, Huimin; Luo, Cheng

    2018-01-01

    Epigenetic dysfunction has been widely implicated in several diseases, especially cancers, highlighting the therapeutic potential of chemical interventions in this field. With the rapid development of computational methodologies and high-performance computational resources, computer-aided drug design has emerged as a promising strategy to speed up epigenetic drug discovery. Herein, we provide a brief overview of the major computational methods reported in the literature, including druggability prediction, virtual screening, homology modeling, scaffold hopping, pharmacophore modeling, molecular dynamics simulations, quantum chemistry calculations, and 3D quantitative structure-activity relationship modeling, that have been successfully applied in the design and discovery of epi-drugs and epi-probes. Finally, we discuss the major limitations of current virtual drug design strategies in epigenetic drug discovery and future directions in this field. PMID:29594101

  13. Facing the challenges of multiscale modelling of bacterial and fungal pathogen–host interactions

    PubMed Central

    Schleicher, Jana; Conrad, Theresia; Gustafsson, Mika; Cedersund, Gunnar; Guthke, Reinhard

    2017-01-01

    Abstract Recent and rapidly evolving progress on high-throughput measurement techniques and computational performance has led to the emergence of new disciplines, such as systems medicine and translational systems biology. At the core of these disciplines lies the desire to produce multiscale models: mathematical models that integrate multiple scales of biological organization, ranging from molecular, cellular and tissue models to organ, whole-organism and population scale models. Using such models, hypotheses can systematically be tested. In this review, we present state-of-the-art multiscale modelling of bacterial and fungal infections, considering both the pathogen and host as well as their interaction. Multiscale modelling of the interactions of bacteria, especially Mycobacterium tuberculosis, with the human host is quite advanced. In contrast, models for fungal infections are still in their infancy, in particular regarding infections with the most important human pathogenic fungi, Candida albicans and Aspergillus fumigatus. We reflect on the current availability of computational approaches for multiscale modelling of host–pathogen interactions and point out current challenges. Finally, we provide an outlook for future requirements of multiscale modelling. PMID:26857943

  14. Research in nonlinear structural and solid mechanics

    NASA Technical Reports Server (NTRS)

    Mccomb, H. G., Jr. (Compiler); Noor, A. K. (Compiler)

    1981-01-01

    Recent and projected advances in applied mechanics, numerical analysis, computer hardware and engineering software, and their impact on modeling and solution techniques in nonlinear structural and solid mechanics are discussed. The fields covered are rapidly changing and are strongly impacted by current and projected advances in computer hardware. To foster effective development of the technology, perceptions on computing systems and nonlinear analysis software systems are presented.

  15. Architectural Implications of Cloud Computing

    DTIC Science & Technology

    2011-10-24

    Cloud computing types: Public Cloud, Infrastructure-as-a-Service (IaaS), Software-as-a-Service (SaaS), Platform-as-a-Service (PaaS), based on type of... © 2011 Carnegie Mellon University. Software-as-a-Service (SaaS): model of software deployment in which a third-party... and System Solutions (RTSS) Program. Her current interests and projects are in service-oriented architecture (SOA), cloud computing, and context

  16. Novel 3-D Computer Model Can Help Predict Pathogens’ Roles in Cancer | Poster

    Cancer.gov

    To understand how bacterial and viral infections contribute to human cancers, four NCI at Frederick scientists turned not to the lab bench, but to a computer. The team has created the world’s first—and currently only—3-D computational approach for studying interactions between pathogen proteins and human proteins based on a molecular adaptation known as interface mimicry.

  17. An interactive program for pharmacokinetic modeling.

    PubMed

    Lu, D R; Mao, F

    1993-05-01

    A computer program, PharmK, was developed for pharmacokinetic modeling of experimental data. The program was written in the C computer language based on the high-level user interface of the Macintosh operating system. The intention was to provide a user-friendly tool for users of Macintosh computers. An interactive algorithm based on the exponential stripping method is used for the initial parameter estimation. Nonlinear pharmacokinetic model fitting is based on the maximum likelihood estimation method and is performed by the Levenberg-Marquardt method based on the chi-square (χ²) criterion. Several methods are available to aid the evaluation of the fitting results. Pharmacokinetic data sets have been examined with the PharmK program, and the results are comparable with those obtained with other programs currently available for IBM PC-compatible and other types of computers.
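    The exponential stripping (curve-peeling) initialization mentioned here can be sketched for a biexponential concentration curve: fit the terminal phase on a log scale, subtract it, and fit the early residual. This generic routine and its synthetic noise-free data are illustrative, not PharmK's implementation:

```python
import math

def linfit(xs, ys):
    """Ordinary least-squares line y = a + b*x (returns intercept, slope)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

def strip_biexponential(times, conc, n_terminal=4, n_early=8):
    """Initial estimates (A, alpha, B, beta) for C(t) = A e^-at + B e^-bt, a > b."""
    # 1. Log-linear fit of the last points isolates the slow terminal phase.
    a0, b0 = linfit(times[-n_terminal:],
                    [math.log(c) for c in conc[-n_terminal:]])
    B, beta = math.exp(a0), -b0
    # 2. Strip the terminal phase from the early points and fit the residual,
    #    where the fast phase dominates, to recover the fast component.
    rt = times[:n_early]
    rc = [math.log(c - B * math.exp(-beta * t))
          for t, c in zip(rt, conc[:n_early])]
    a1, b1 = linfit(rt, rc)
    return math.exp(a1), -b1, B, beta

# Synthetic noise-free data: C(t) = 8 e^{-1.2 t} + 2 e^{-0.1 t}
ts = [0.25 * i for i in range(1, 33)]
cs = [8.0 * math.exp(-1.2 * t) + 2.0 * math.exp(-0.1 * t) for t in ts]
A, alpha, B, beta = strip_biexponential(ts, cs)
```

    Estimates such as these are then refined by a nonlinear fit, which is the role the Levenberg-Marquardt step plays in the program described above.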

  18. Facial Animations: Future Research Directions & Challenges

    NASA Astrophysics Data System (ADS)

    Alkawaz, Mohammed Hazim; Mohamad, Dzulkifli; Rehman, Amjad; Basori, Ahmad Hoirul

    2014-06-01

    Nowadays, computer facial animation is used across a multitude of fields, from computer games and films to interactive multimedia. Authoring computer facial animation with complex and subtle expressions is challenging and fraught with problems. As a result, the universal computer animation techniques most commonly used for authoring often limit the quality and quantity of facial animation produced. Emerging face-centric methods, supported by growing computer power, facial understanding, and software sophistication, are still immature. This paper therefore surveys and categorizes current and emerging facial animation techniques to define the recent state of the field, observed bottlenecks, and developing methods. It further presents a real-time simulation model of human worry and howling, with detailed discussion of astonishment, sorrow, annoyance, and panic perception.

  19. Computational Modeling of Arc-Slag Interaction in DC Furnaces

    NASA Astrophysics Data System (ADS)

    Reynolds, Quinn G.

    2017-02-01

    The plasma arc is central to the operation of the direct-current arc furnace, a unit operation commonly used in high-temperature processing of both primary ores and recycled metals. The arc is a high-velocity, high-temperature jet of ionized gas created and sustained by interactions among the thermal, momentum, and electromagnetic fields resulting from the passage of electric current. In addition to being the primary source of thermal energy, the arc jet also couples mechanically with the bath of molten process material within the furnace, causing substantial splashing and stirring in the region in which it impinges. The arc's interaction with the molten bath inside the furnace is studied through use of a multiphase, multiphysics computational magnetohydrodynamic model developed in the OpenFOAM® framework. Results from the computational solver are compared with empirical correlations that account for arc-slag interaction effects.

  20. Supercomputer requirements for selected disciplines important to aerospace

    NASA Technical Reports Server (NTRS)

    Peterson, Victor L.; Kim, John; Holst, Terry L.; Deiwert, George S.; Cooper, David M.; Watson, Andrew B.; Bailey, F. Ron

    1989-01-01

    Speed and memory requirements placed on supercomputers by five different disciplines important to aerospace are discussed and compared with the capabilities of various existing computers and those projected to be available before the end of this century. The disciplines chosen for consideration are turbulence physics, aerodynamics, aerothermodynamics, chemistry, and human vision modeling. Example results for problems illustrative of those currently being solved in each of the disciplines are presented and discussed. Limitations imposed on physical modeling and geometrical complexity by the need to obtain solutions in practical amounts of time are identified. Computational challenges for the future, for which either some or all of the current limitations are removed, are described. Meeting some of the challenges will require computer speeds in excess of exaflop/s (10 to the 18th flop/s) and memories in excess of petawords (10 to the 15th words).
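The scale of these requirements is easy to check with back-of-envelope arithmetic; the workload size below is an assumed figure for illustration only.

```python
# Back-of-envelope check of the quoted scale: a problem requiring
# 1e21 floating-point operations (an assumed workload) still takes
# on the order of a thousand seconds at a sustained exaflop/s.
ops = 1.0e21          # total operations (illustrative assumption)
speed = 1.0e18        # sustained speed, flop/s (one exaflop/s)
seconds = ops / speed
print(seconds)        # ~1000 seconds of wall-clock time
```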

  1. A theoretical analysis of the electromagnetic environment of the AS330 super Puma helicopter external and internal coupling

    NASA Technical Reports Server (NTRS)

    Flourens, F.; Morel, T.; Gauthier, D.; Serafin, D.

    1991-01-01

Numerical techniques such as Finite Difference Time Domain (FDTD) computer programs, which were first developed to analyze the external electromagnetic environment of an aircraft during a wave illumination, a lightning event, or any kind of current injection, are now very powerful investigative tools. The program, called GORFF-VE, was extended to compute the inner electromagnetic fields that are generated by the penetration of the outer fields through large apertures made in the all-metal body. The internal fields can then drive the electrical response of a cable network. The coupling between the inside and the outside of the helicopter is implemented using Huygens' principle. Moreover, the spectacular increase in computer resources, in both calculation speed and memory capacity, now allows structures as complex as those of helicopters to be modeled accurately. This numerical model was exploited, first, to analyze the electromagnetic environment of an in-flight helicopter for several injection configurations, and second, to design a coaxial return path to simulate the lightning-aircraft interaction with a strong current injection. The E-field and current mappings are the result of these calculations.
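The core of any FDTD code of this kind is a leapfrog update of staggered electric and magnetic fields. The one-dimensional, free-space sketch below is only a schematic of that update (grid size, step count, and source are arbitrary illustrative choices, not values from the GORFF-VE study).

```python
import numpy as np

# 1D FDTD leapfrog update in normalized units at the "magic" time step.
N, STEPS = 200, 100
ez = np.zeros(N)        # electric field samples
hy = np.zeros(N - 1)    # magnetic field, staggered half a cell

for t in range(STEPS):
    hy += ez[1:] - ez[:-1]                          # Faraday's law update
    ez[1:-1] += hy[1:] - hy[:-1]                    # Ampere's law update
    ez[N // 2] += np.exp(-((t - 30) ** 2) / 100.0)  # soft Gaussian source

print(float(np.max(np.abs(ez))))                    # peak field on the grid
```

Three-dimensional codes apply the same pattern with six field components and curl stencils, which is why memory capacity governs how complex a structure can be resolved.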

  2. Case Study of a Computer Based Examination System

    ERIC Educational Resources Information Center

    Fluck, Andrew; Pullen, Darren; Harper, Colleen

    2009-01-01

Electronically supported assessment, or e-Assessment, is a field of growing importance, but it has yet to make a significant impact in the Australian higher education sector (Byrnes & Ellis, 2006). Current computer based assessment models focus on the assessment of knowledge rather than deeper understandings, using multiple choice type questions,…

  3. Analysis of multigrid methods on massively parallel computers: Architectural implications

    NASA Technical Reports Server (NTRS)

    Matheson, Lesley R.; Tarjan, Robert E.

    1993-01-01

We study the potential performance of multigrid algorithms running on massively parallel computers, with the intent of discovering whether presently envisioned machines will provide an efficient platform for such algorithms. We consider the domain-parallel version of the standard V-cycle algorithm on model problems, discretized using finite difference techniques in two and three dimensions on block structured grids of size 10(exp 6) and 10(exp 9), respectively. Our models of parallel computation were developed to reflect the computing characteristics of the current generation of massively parallel multicomputers. These models are based on an interconnection network of 256 to 16,384 message-passing, 'workstation size' processors executing in an SPMD mode. The first model accomplishes interprocessor communication through a multistage permutation network; its communication cost is a logarithmic function similar to the costs in a variety of different topologies. The second model allows single-stage communication only. Both models were designed with information provided by machine developers and use implementation-derived parameters. With the medium-grain parallelism of the current generation and the high fixed cost of an interprocessor communication, our analysis suggests that an efficient implementation requires the machine to support the efficient transmission of long messages (up to 1000 words), or that the high initiation cost of a communication be significantly reduced through an alternative optimization technique. Furthermore, with variable-length message capability, our analysis suggests that low-diameter multistage networks provide little or no advantage over a simple single-stage communication network.
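The two communication-cost models can be sketched as simple functions; the startup and per-word parameters below are assumed placeholders, not the implementation-derived values used in the paper.

```python
import math

STARTUP = 100.0    # fixed cost to initiate a communication (assumed, cycles)
PER_WORD = 1.0     # incremental cost per word transmitted (assumed, cycles)

def multistage_cost(words, n_procs):
    """Multistage permutation network: cost grows with log2 of machine size."""
    return (STARTUP + PER_WORD * words) * math.log2(n_procs)

def single_stage_cost(words, hops):
    """Single-stage network: the full message cost is paid at every hop."""
    return (STARTUP + PER_WORD * words) * hops

# With long messages the fixed startup cost is amortized over many words:
short = multistage_cost(10, 4096)    # startup dominates
long_ = multistage_cost(1000, 4096)  # per-word cost dominates
print(short, long_)
```

Comparing `short` and `long_` shows why the analysis favors machines that transmit long messages efficiently: the per-message startup term is the bottleneck for fine-grained communication.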

  4. Spatial Variation of Pressure in the Lyophilization Product Chamber Part 1: Computational Modeling.

    PubMed

    Ganguly, Arnab; Varma, Nikhil; Sane, Pooja; Bogner, Robin; Pikal, Michael; Alexeenko, Alina

    2017-04-01

The flow physics in the product chamber of a freeze dryer involves coupled heat and mass transfer at different length and time scales. The low-pressure environment and the relatively small flow velocities make it difficult to quantify the flow structure experimentally. The current work presents three-dimensional computational fluid dynamics (CFD) modeling of vapor flow in a laboratory-scale freeze dryer, validated with experimental data and theory. The model accounts for the presence of a non-condensable gas such as nitrogen or air using a continuum multi-species model. The flow structure at different sublimation rates, chamber pressures, and shelf-gaps is systematically investigated. Emphasis has been placed on accurately predicting the pressure variation across the subliming front. At a chamber set pressure of 115 mtorr and a sublimation rate of 1.3 kg/h/m^2, the pressure variation reaches about 9 mtorr. The pressure variation increased linearly with sublimation rate in the range of 0.5 to 1.3 kg/h/m^2. The dependence of pressure variation on the shelf-gap was also studied both computationally and experimentally. The CFD modeling results are found to agree within 10% with the experimental measurements. The computational model was also compared to an analytical solution valid for small shelf-gaps. Thus, the current work presents a validation study motivating broader use of CFD in optimizing freeze-drying processes and equipment design.
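The reported linear dependence of pressure variation on sublimation rate can be sketched as a one-parameter relation; anchoring the slope on the single quoted data point (about 9 mtorr at 1.3 kg/h/m^2) is an illustration, not the paper's fit.

```python
# Linear relation dP = k * m_dot, with the slope fixed from the quoted
# data point (9 mtorr at 1.3 kg/h/m^2). Illustrative only, not a fit.
k = 9.0 / 1.3                      # mtorr per (kg/h/m^2)

def pressure_variation(m_dot):
    """Estimated pressure variation (mtorr) at sublimation rate m_dot."""
    return k * m_dot

# Lower end of the studied range, 0.5 kg/h/m^2:
print(pressure_variation(0.5))
```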

  5. Approximate Optimal Control as a Model for Motor Learning

    ERIC Educational Resources Information Center

    Berthier, Neil E.; Rosenstein, Michael T.; Barto, Andrew G.

    2005-01-01

    Current models of psychological development rely heavily on connectionist models that use supervised learning. These models adapt network weights when the network output does not match the target outputs computed by some agent. The authors present a model of motor learning in which the child uses exploration to discover appropriate ways of…

  6. NASA Iced Aerodynamics and Controls Current Research

    NASA Technical Reports Server (NTRS)

    Addy, Gene

    2009-01-01

This slide presentation reviews the state of current research in the area of aerodynamics and aircraft control in icing conditions by the Aviation Safety Program, part of the Integrated Resilient Aircraft Controls Project (IRAC). Included in the presentation is an overview of the modeling efforts. The objective of the modeling is to develop experimental and computational methods to model and predict aircraft response during adverse flight conditions, including icing. The aircraft icing modeling effort includes Ice-Contaminated Aerodynamics Modeling, which examines the effects of ice contamination on aircraft aerodynamics and includes CFD modeling of ice-contaminated aircraft aerodynamics, and Advanced Ice Accretion Process Modeling, which examines the physics of ice accretion and works on computational modeling of ice accretions. The IRAC testbed, a Generic Transport Model (GTM), and its use in the investigation of the effects of icing on its aerodynamics are also reviewed. This work has led to a more thorough understanding of icing physics and ice accretion for airframes, to theoretical and empirical models, to advanced 3D ice accretion prediction codes, to CFD methods for iced aerodynamics, and to a better understanding of aircraft iced aerodynamics and its effects on control surface effectiveness.

  7. Fundamental Algorithms of the Goddard Battery Model

    NASA Technical Reports Server (NTRS)

    Jagielski, J. M.

    1985-01-01

The Goddard Space Flight Center (GSFC) is currently producing a computer model to predict Nickel Cadmium (NiCd) performance in a Low Earth Orbit (LEO) cycling regime. The model proper is still in development, but its inherent, fundamental algorithms (or methodologies) are defined. At present, the model is closely dependent on empirical data, and the data base currently used is of questionable accuracy. Even so, very good correlations have been determined between model predictions and actual cycling data. A more accurate and encompassing data base has been generated to serve two functions: to show the limitations of the current data base, and to be embedded in the model proper for more accurate predictions. The fundamental algorithms of the model and the present data base and its limitations are described, and a brief preliminary analysis of the new data base and its verification of the model's methodology is presented.

  8. Development of an Aeroelastic Modeling Capability for Transient Nozzle Side Load Analysis

    NASA Technical Reports Server (NTRS)

    Wang, Ten-See; Zhao, Xiang; Zhang, Sijun; Chen, Yen-Sen

    2013-01-01

Lateral nozzle forces are known to cause severe structural damage to any new rocket engine in development. Currently there is no fully coupled computational tool to analyze this fluid/structure interaction process. The objective of this study was to develop a fully coupled aeroelastic modeling capability to describe the fluid/structure interaction process during transient nozzle operations. The aeroelastic model is composed of three components: a computational fluid dynamics component based on an unstructured-grid, pressure-based formulation; a computational structural dynamics component developed in the framework of modal analysis; and a fluid-structural interface component. The developed aeroelastic model was applied to the transient nozzle startup process of the Space Shuttle Main Engine at sea level. The computed nozzle side loads and axial nozzle wall pressure profiles from the aeroelastic nozzle are compared with the published rigid-nozzle results, and the impact of the fluid/structure interaction on nozzle side loads is interrogated and presented.
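The three-component architecture described above amounts to a partitioned coupling loop; the sketch below uses trivial stand-in physics (all functions and coefficients are hypothetical) to show only the data flow among the CFD, CSD, and interface components.

```python
# Partitioned fluid/structure coupling loop. All "physics" here is a
# hypothetical stand-in chosen for clarity, not the paper's solver.
def advance_fluid(wall_shape, dt):
    """CFD stand-in: wall pressure responds to the current nozzle shape."""
    return [1.0e5 * (1.0 + 0.01 * s) for s in wall_shape]

def advance_structure(wall_pressure, dt):
    """Modal CSD stand-in: wall deflection responds to the pressure load."""
    return [1.0e-6 * p for p in wall_pressure]

def transfer(field):
    """Interface stand-in: map data between the two grids (identity here)."""
    return field

shape = [0.0] * 8                  # initial (undeformed) nozzle wall
for step in range(3):              # a few coupled time steps
    pressure = advance_fluid(transfer(shape), dt=1.0e-4)
    shape = advance_structure(transfer(pressure), dt=1.0e-4)
print(shape[0])
```

The essential point is the alternation: each component advances with the other's latest output, exchanged through the interface component at every step.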

  9. Supersonic reacting internal flowfields

    NASA Astrophysics Data System (ADS)

    Drummond, J. P.

The national program to develop a trans-atmospheric vehicle has kindled a renewed interest in the modeling of supersonic reacting flows. A supersonic combustion ramjet, or scramjet, has been proposed to provide the propulsion system for this vehicle. The development of computational techniques for modeling supersonic reacting flowfields, and the application of these techniques to an increasingly difficult set of combustion problems, are studied. Since the scramjet problem has been largely responsible for motivating this computational work, a brief history is given of hypersonic vehicles and their propulsion systems. A discussion is also given of some early modeling efforts applied to high-speed reacting flows. Current activities to develop accurate and efficient algorithms and improved physical models for supersonic combustion are then discussed. Some new problems to which computer codes based on these algorithms and models are being applied are described.

  10. Supersonic reacting internal flow fields

    NASA Technical Reports Server (NTRS)

    Drummond, J. Philip

    1989-01-01

The national program to develop a trans-atmospheric vehicle has kindled a renewed interest in the modeling of supersonic reacting flows. A supersonic combustion ramjet, or scramjet, has been proposed to provide the propulsion system for this vehicle. The development of computational techniques for modeling supersonic reacting flow fields, and the application of these techniques to an increasingly difficult set of combustion problems, are studied. Since the scramjet problem has been largely responsible for motivating this computational work, a brief history is given of hypersonic vehicles and their propulsion systems. A discussion is also given of some early modeling efforts applied to high-speed reacting flows. Current activities to develop accurate and efficient algorithms and improved physical models for supersonic combustion are then discussed. Some new problems to which computer codes based on these algorithms and models are being applied are described.

  11. Approaches for scalable modeling and emulation of cyber systems : LDRD final report.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mayo, Jackson R.; Minnich, Ronald G.; Armstrong, Robert C.

    2009-09-01

The goal of this research was to combine theoretical and computational approaches to better understand the potential emergent behaviors of large-scale cyber systems, such as networks of ~10^6 computers. The scale and sophistication of modern computer software, hardware, and deployed networked systems have significantly exceeded the computational research community's ability to understand, model, and predict current and future behaviors. This predictive understanding, however, is critical to the development of new approaches for proactively designing new systems or enhancing existing systems with robustness to current and future cyber threats, including distributed malware such as botnets. We have developed preliminary theoretical and modeling capabilities that can ultimately answer questions such as: How would we reboot the Internet if it were taken down? Can we change network protocols to make them more secure without disrupting existing Internet connectivity and traffic flow? We have begun to address these issues by developing new capabilities for understanding and modeling Internet systems at scale. Specifically, we have addressed the need for scalable network simulation by carrying out emulations of a network with ~10^6 virtualized operating system instances on a high-performance computing cluster - a 'virtual Internet'. We have also explored mappings between previously studied emergent behaviors of complex systems and their potential cyber counterparts. Our results provide foundational capabilities for further research toward understanding the effects of complexity in cyber systems, to allow anticipating and thwarting hackers.

  12. Carbon monoxide screen for signalized intersections COSIM, version 3.0 : technical documentation.

    DOT National Transportation Integrated Search

    2008-07-01

    The Illinois Department of Transportation (IDOT) currently uses the computer screening model Illinois : CO Screen for Intersection Modeling (COSIM) to estimate worst-case CO concentrations for proposed roadway : projects affecting signalized intersec...

  13. Calibration of controlling input models for pavement management system.

    DOT National Transportation Integrated Search

    2013-07-01

    The Oklahoma Department of Transportation (ODOT) is currently using the Deighton Total Infrastructure Management System (dTIMS) software for pavement management. This system is based on several input models which are computational backbones to dev...

  14. COMPILATION OF GROUND WATER MODELS

    EPA Science Inventory

    The full report presents an overview of currently available computer-based simulation models for ground-water flow, solute and heat transport, and hydrogeochemistry in both porous media and fractured rock. Separate sections address multiphase flow and related chemical species tra...

  15. National Mobile Inventory Model (NMIM)

    EPA Pesticide Factsheets

The National Mobile Inventory Model (NMIM) is a free, desktop computer application developed by EPA to help you develop estimates of current and future emission inventories for on-road motor vehicles and nonroad equipment.

  16. Improved models of cable-to-post attachments for high-tension cable barriers.

    DOT National Transportation Integrated Search

    2012-05-01

    Computer simulation models were developed to analyze and evaluate a new cable-to-post attachment for high-tension cable : barriers. The models replicated the performance of a keyway bolt currently used in the design of a high-tension cable : median b...

  17. Use of the Maximum Likelihood Method in the Analysis of Chamber Air Dives

    DTIC Science & Technology

    1988-01-01

the total gas pressure in compartment i, P0 is the current ambient pressure, and A and B are constants (0.0026 min^-1 ATA^-1 and 8.31 ATA...computer model (4), the Kidd-Stubbs 1971 decompression tables (11), and the current Defence and Civil Institute of Environmental Medicine (DCIEM...it could be applied. Since the models are not suitable for this test, then within these current limits of statistical theory, the results can

  18. An Infrared Camera Simulation for Estimating Spatial Temperature Profiles and Signal-to-Noise Ratios of an Airborne Laser-Illuminated Target

    DTIC Science & Technology

    2007-06-01

of SNR, she incorporated the effects that an InGaAs photovoltaic detector has in producing the signal, along with the photon, Johnson, and shot noises...the photovoltaic FPA detector modeled? • What detector noise sources limit the computed signal? 3.1 Modeling Methodology Two aspects in the IR camera...Another shot noise source in photovoltaic detectors is dark current. This current represents the current flowing in the detector when no optical radiation

  19. Exemplar for simulation challenges: Large-deformation micromechanics of Sylgard 184/glass microballoon syntactic foams.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Judith Alice; Long, Kevin Nicholas

    2018-05-01

Sylgard® 184/Glass Microballoon (GMB) potting material is currently used in many NW systems. Analysts need a macroscale constitutive model that can predict material behavior under complex loading and damage evolution. To address this need, ongoing modeling and experimental efforts have focused on study of damage evolution in these materials. Micromechanical finite element simulations that resolve individual GMB and matrix components promote discovery and better understanding of the material behavior. With these simulations, we can study the role of the GMB volume fraction, time-dependent damage, behavior under confined vs. unconfined compression, and the effects of partial damage. These simulations are challenging and push the boundaries of capability even with the high performance computing tools available at Sandia. We summarize the major challenges and the current state of this modeling effort, as an exemplar of micromechanical modeling needs that can motivate advances in future computing efforts.

  20. An Assessment of the Dyna-Metric Inventory Model during Initial Provisioning.

    DTIC Science & Technology

    1986-09-01

model for computing initial spares levels. Currently, Air Force policy for the provisioning of initial spares and repair parts requires that "all...Force inventory. The key to an effective inventory policy, and a credible defense posture in times of a constrained budget, is to maximize the repair... policy and procedures for deciding which items qualify for stockage, and for computing new requirements for all types of initially provisioned items

  1. Pre-launch Optical Characteristics of the Oculus-ASR Nanosatellite for Attitude and Shape Recognition Experiments

    DTIC Science & Technology

    2011-12-02

construction and validation of predictive computer models such as those used in Time-domain Analysis Simulation for Advanced Tracking (TASAT), a...characterization data, successful construction and validation of predictive computer models was accomplished. And an investigation in pose determination from...

  2. A New Formulation for Hybrid LES-RANS Computations

    NASA Technical Reports Server (NTRS)

    Woodruff, Stephen L.

    2013-01-01

    Ideally, a hybrid LES-RANS computation would employ LES only where necessary to make up for the failure of the RANS model to provide sufficient accuracy or to provide time-dependent information. Current approaches are fairly restrictive in the placement of LES and RANS regions; an LES-RANS transition in a boundary layer, for example, yields an unphysical log-layer shift. A hybrid computation is formulated here to allow greater control over the placement of LES and RANS regions and the transitions between them. The concept of model invariance is introduced, which provides a basis for interpreting hybrid results within an LES-RANS transition zone. Consequences of imposing model invariance include the addition of terms to the governing equations that compensate for unphysical gradients created as the model changes between RANS and LES. Computational results illustrate the increased accuracy of the approach and its insensitivity to the location of the transition and to the blending function employed.
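A hybrid formulation of this kind typically blends the two model contributions smoothly across the transition zone; the tanh blending function and all parameter values below are assumed generic choices for illustration, not the specific formulation of the paper.

```python
import math

def blend(y, y_switch, width):
    """0 in the RANS region, 1 in the LES region, smooth in between.
    The tanh form is an assumed generic choice."""
    return 0.5 * (1.0 + math.tanh((y - y_switch) / width))

def hybrid_viscosity(nu_rans, nu_les, y, y_switch=0.1, width=0.02):
    """Blend the two model viscosities across the transition zone."""
    b = blend(y, y_switch, width)
    return (1.0 - b) * nu_rans + b * nu_les

# Deep in the RANS region the RANS value dominates:
print(hybrid_viscosity(1.0e-3, 1.0e-5, y=0.0))
```

In the transition zone both models contribute, which is exactly where compensating terms of the kind the paper introduces are needed to avoid unphysical gradients as the model changes.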

  3. Task allocation model for minimization of completion time in distributed computer systems

    NASA Astrophysics Data System (ADS)

    Wang, Jai-Ping; Steidley, Carl W.

    1993-08-01

    A task in a distributed computing system consists of a set of related modules. Each of the modules will execute on one of the processors of the system and communicate with some other modules. In addition, precedence relationships may exist among the modules. Task allocation is an essential activity in distributed-software design. This activity is of importance to all phases of the development of a distributed system. This paper establishes task completion-time models and task allocation models for minimizing task completion time. Current work in this area is either at the experimental level or without the consideration of precedence relationships among modules. The development of mathematical models for the computation of task completion time and task allocation will benefit many real-time computer applications such as radar systems, navigation systems, industrial process control systems, image processing systems, and artificial intelligence oriented systems.
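A toy version of the allocation problem makes the objective concrete: assign each module to a processor so that the completion time, taken here as the maximum processor load including penalties for remote inter-module communication, is minimized. All costs below are illustrative assumptions.

```python
from itertools import product

exec_cost = [4, 3, 2]               # execution time of modules 0..2 (assumed)
comm_cost = {(0, 1): 2, (1, 2): 1}  # penalty if the pair is split across processors

def completion_time(assign, n_procs=2):
    """Max processor load for a given module-to-processor assignment."""
    loads = [0] * n_procs
    for module, proc in enumerate(assign):
        loads[proc] += exec_cost[module]
    # remote communication burdens both endpoints' processors
    for (a, b), c in comm_cost.items():
        if assign[a] != assign[b]:
            loads[assign[a]] += c
            loads[assign[b]] += c
    return max(loads)

# Exhaustive search is feasible for this toy size:
best = min(product(range(2), repeat=3), key=completion_time)
print(best, completion_time(best))
```

Real allocators replace the exhaustive search with heuristics or mathematical programming, and add precedence constraints among modules, but the objective has this same structure.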

  4. Allen Newell's Program of Research: The Video-Game Test.

    PubMed

    Gobet, Fernand

    2017-04-01

    Newell (1973) argued that progress in psychology was slow because research focused on experiments trying to answer binary questions, such as serial versus parallel processing. In addition, not enough attention was paid to the strategies used by participants, and there was a lack of theories implemented as computer models offering sufficient precision for being tested rigorously. He proposed a three-headed research program: to develop computational models able to carry out the task they aimed to explain; to study one complex task in detail, such as chess; and to build computational models that can account for multiple tasks. This article assesses the extent to which the papers in this issue advance Newell's program. While half of the papers devote much attention to strategies, several papers still average across them, a capital sin according to Newell. The three courses of action he proposed were not popular in these papers: Only two papers used computational models, with no model being both able to carry out the task and to account for human data; there was no systematic analysis of a specific video game; and no paper proposed a computational model accounting for human data in several tasks. It is concluded that, while they use sophisticated methods of analysis and discuss interesting results, overall these papers contribute only little to Newell's program of research. In this respect, they reflect the current state of psychology and cognitive science. This is a shame, as Newell's ideas might help address the current crisis of lack of replication and fraud in psychology. Copyright © 2017 The Author. Topics in Cognitive Science published by Wiley Periodicals, Inc. on behalf of Cognitive Science Society.

  5. Plasma Science and Innovation Center at Washington, Wisconsin, and Utah State: Final Scientific Report for the University of Wisconsin-Madison

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sovinec, Carl R.

The University of Wisconsin-Madison component of the Plasma Science and Innovation Center (PSI Center) contributed to modeling capabilities and algorithmic efficiency of the Non-Ideal Magnetohydrodynamics with Rotation (NIMROD) Code, which is widely used to model macroscopic dynamics of magnetically confined plasma. It also contributed to the understanding of direct-current (DC) injection of electrical current for initiating and sustaining plasma in three spherical torus experiments: the Helicity Injected Torus-II (HIT-II), the Pegasus Toroidal Experiment, and the National Spherical Torus Experiment (NSTX). The effort was funded through the PSI Center's cooperative agreement with the University of Washington and Utah State University over the period of March 1, 2005 - August 31, 2016. In addition to the computational and physics accomplishments, the Wisconsin effort contributed to the professional education of four graduate students and two postdoctoral research associates. The modeling for HIT-II and Pegasus was directly supported by the cooperative agreement, and contributions to the NSTX modeling were in support of work by Dr. Bickford Hooper, who was funded through a separate grant. Our primary contribution to model development is the implementation of detailed closure relations for collisional plasma. Postdoctoral associate Adam Bayliss implemented the temperature-dependent effects of Braginskii's parallel collisional ion viscosity. As a graduate student, John O'Bryan added runtime options for Braginskii's models and Ji's K2 models of thermal conduction with magnetization effects and thermal equilibration. As a postdoctoral associate, O'Bryan added the magnetization effects for ion viscosity. Another area of model development completed through the PSI-Center is the implementation of Chodura's phenomenological resistivity model.
Finally, we investigated and tested linear electron parallel viscosity, leveraged by support from the Center for Extended Magnetohydrodynamic Modeling (CEMM). Work on algorithmic efficiency improved NIMROD's element-based computations. We reordered arrays and eliminated a level of looping for computations over the data points that are used for numerical integration over elements. Moreover, the reordering allows fewer and larger communication calls when using distributed-memory parallel computation, thereby avoiding a data starvation problem that limited parallel scaling over NIMROD's Fourier components for the periodic coordinate. Together with improved parallel preconditioning, work that was supported by CEMM, these developments allowed NIMROD's first scaling to over 10,000 processor cores. Another algorithm improvement supported by the PSI Center is nonlinear numerical diffusivities for implicit advection. We also developed the Stitch code to enhance the flexibility of NIMROD's preprocessing. Our simulations of HIT-II considered conditions with and without fluctuation-induced amplification of poloidal flux, but our validation efforts focused on conditions without amplification. A significant finding is that NIMROD reproduces the dependence of net plasma current as the imposed poloidal flux is varied. The modeling of Pegasus startup from localized DC injectors predicted that development of a tokamak-like configuration occurs through a sequence of current-filament merger events. Comparison of experimentally measured and numerically computed cross-power spectra enhances confidence in NIMROD's simulation of magnetic fluctuations; however, energy confinement remains an open area for further research. Our contributions to the NSTX study include adaptation of the helicity-injection boundary conditions from the HIT-II simulations and support for linear analysis and computation of 3D current-driven instabilities.

  6. Development of a computer model to predict platform station keeping requirements in the Gulf of Mexico using remote sensing data

    NASA Technical Reports Server (NTRS)

    Barber, Bryan; Kahn, Laura; Wong, David

    1990-01-01

    Offshore operations such as oil drilling and radar monitoring require semisubmersible platforms to remain stationary at specific locations in the Gulf of Mexico. Ocean currents, wind, and waves in the Gulf of Mexico tend to move platforms away from their desired locations. A computer model was created to predict the station keeping requirements of a platform. The computer simulation uses remote sensing data from satellites and buoys as input. A background of the project, alternate approaches to the project, and the details of the simulation are presented.
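The force balance underlying such a station-keeping prediction can be sketched with simple quadratic drag laws; the areas, drag coefficients, and speeds below are made-up placeholders, not values from the model.

```python
# Station-keeping thrust must balance the environmental forces on the
# platform. Quadratic drag laws with assumed coefficients and areas.
RHO_WATER, RHO_AIR = 1025.0, 1.225   # fluid densities, kg/m^3

def drag_force(rho, cd, area, speed):
    """Quadratic drag law: F = 0.5 * rho * Cd * A * U^2 (newtons)."""
    return 0.5 * rho * cd * area * speed**2

current = drag_force(RHO_WATER, 1.0, 500.0, 1.0)   # 1 m/s surface current
wind = drag_force(RHO_AIR, 1.2, 800.0, 15.0)       # 15 m/s wind
required_thrust = current + wind                   # wave drift omitted here
print(required_thrust)
```

In the actual simulation, the current and wind speeds would be supplied by the remote sensing inputs (satellite and buoy data) rather than fixed as they are here.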

  7. Computational Modeling in Liver Surgery

    PubMed Central

    Christ, Bruno; Dahmen, Uta; Herrmann, Karl-Heinz; König, Matthias; Reichenbach, Jürgen R.; Ricken, Tim; Schleicher, Jana; Ole Schwen, Lars; Vlaic, Sebastian; Waschinsky, Navina

    2017-01-01

The need for extended liver resection is increasing due to the growing incidence of liver tumors in aging societies. Individualized surgical planning is the key to identifying the optimal resection strategy and minimizing the risk of postoperative liver failure and tumor recurrence. Current computational tools provide virtual planning of liver resection by taking into account the spatial relationship between the tumor and the hepatic vascular trees, as well as the size of the future liver remnant. However, size and function of the liver are not necessarily equivalent. Hence, determining the future liver volume might misestimate the future liver function, especially in cases of hepatic comorbidities such as hepatic steatosis. A systems medicine approach could be applied, including biological, medical, and surgical aspects, by integrating all available anatomical and functional information of the individual patient. Such an approach holds promise for better prediction of postoperative liver function and hence improved risk assessment. This review provides an overview of mathematical models related to the liver and its function and explores their potential relevance for computational liver surgery. We first summarize key facts of hepatic anatomy, physiology, and pathology relevant for hepatic surgery, followed by a description of the computational tools currently used in liver surgical planning. Then we present selected state-of-the-art computational liver models potentially useful to support liver surgery. Finally, we discuss the main challenges that will need to be addressed when developing advanced computational planning tools in the context of liver surgery. PMID:29249974

  8. The application of cloud computing to scientific workflows: a study of cost and performance.

    PubMed

    Berriman, G Bruce; Deelman, Ewa; Juve, Gideon; Rynge, Mats; Vöckler, Jens-S

    2013-01-28

    The current model of transferring data from data centres to desktops for analysis will soon be rendered impractical by the accelerating growth in the volume of science datasets. Processing will instead often take place on high-performance servers co-located with data. Evaluations of how new technologies such as cloud computing would support such a new distributed computing model are urgently needed. Cloud computing is a new way of purchasing computing and storage resources on demand through virtualization technologies. We report here the results of investigations of the applicability of commercial cloud computing to scientific computing, with an emphasis on astronomy, including investigations of what types of applications can be run cheaply and efficiently on the cloud, and an example of an application well suited to the cloud: processing a large dataset to create a new science product.

  9. Computational Combustion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Westbrook, C K; Mizobuchi, Y; Poinsot, T J

    2004-08-26

    Progress in the field of computational combustion over the past 50 years is reviewed. Particular attention is given to those classes of models that are common to most system modeling efforts, including fluid dynamics, chemical kinetics, liquid sprays, and turbulent flame models. The developments in combustion modeling are placed into the time-dependent context of the accompanying exponential growth in computer capabilities and Moore's Law. Superimposed on this steady growth, the occasional sudden advances in modeling capabilities are identified and their impacts are discussed. Integration of submodels into system models for spark ignition, diesel, and homogeneous charge compression ignition engines; surface and catalytic combustion; pulse combustion; and detonations is described. Finally, the current state of combustion modeling is illustrated by descriptions of a direct numerical simulation of a very large jet-lifted 3D turbulent hydrogen flame and 3D large eddy simulations of practical gas burner combustion devices.

  10. Adaptive Wavelet Modeling of Geophysical Data

    NASA Astrophysics Data System (ADS)

    Plattner, A.; Maurer, H.; Dahmen, W.; Vorloeper, J.

    2009-12-01

    Despite the ever-increasing power of modern computers, realistic modeling of complex three-dimensional Earth models is still a challenging task and requires substantial computing resources. The overwhelming majority of current geophysical modeling approaches includes either finite difference or non-adaptive finite element algorithms, and variants thereof. These numerical methods usually require the subsurface to be discretized with a fine mesh to accurately capture the behavior of the physical fields. However, this may result in excessive memory consumption and computing times. A common feature of most of these algorithms is that the modeled data discretizations are independent of the model complexity, which may be wasteful when there are only minor to moderate spatial variations in the subsurface parameters. Recent developments in the theory of adaptive numerical solvers have the potential to overcome this problem. Here, we consider an adaptive wavelet based approach that is applicable to a large scope of problems, also including nonlinear problems. To the best of our knowledge such algorithms have not yet been applied in geophysics. Adaptive wavelet algorithms offer several attractive features: (i) for a given subsurface model, they allow the forward modeling domain to be discretized with a quasi minimal number of degrees of freedom, (ii) sparsity of the associated system matrices is guaranteed, which makes the algorithm memory efficient, and (iii) the modeling accuracy scales linearly with computing time. We have implemented the adaptive wavelet algorithm for solving three-dimensional geoelectric problems. To test its performance, numerical experiments were conducted with a series of conductivity models exhibiting varying degrees of structural complexity. Results were compared with a non-adaptive finite element algorithm, which incorporates an unstructured mesh to best fit subsurface boundaries. 
Such algorithms represent the current state-of-the-art in geoelectrical modeling. An analysis of the numerical accuracy as a function of the number of degrees of freedom revealed that the adaptive wavelet algorithm outperforms the finite element solver for simple and moderately complex models, whereas the results become comparable for models with spatially highly variable electrical conductivities. The linear dependency between the modeling error and the computing time proved to be model-independent. This feature will allow very efficient computations using large-scale models as soon as our experimental code is optimized in terms of its implementation.
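
The mechanism behind feature (i), retaining only the coefficients that matter, can be illustrated with a minimal 1-D sketch: an orthonormal Haar transform with hard thresholding applied to a hypothetical piecewise-smooth "conductivity profile". This illustrates the adaptivity principle only; it is not the authors' 3-D code.

```python
import numpy as np

def haar_decompose(signal, levels):
    """Orthonormal 1-D Haar wavelet decomposition."""
    coeffs, a = [], signal.astype(float)
    for _ in range(levels):
        approx = (a[0::2] + a[1::2]) / np.sqrt(2.0)
        detail = (a[0::2] - a[1::2]) / np.sqrt(2.0)
        coeffs.append(detail)
        a = approx
    coeffs.append(a)  # coarsest approximation
    return coeffs

def kept_dof(coeffs, tol):
    """Degrees of freedom surviving hard thresholding at tolerance tol:
    smooth regions contribute tiny detail coefficients, so only sharp
    features cost significant storage."""
    return sum(int(np.count_nonzero(np.abs(c) >= tol)) for c in coeffs)

# Hypothetical conductivity profile: smooth background plus one sharp contrast.
x = np.linspace(0.0, 1.0, 256)
model = np.sin(2 * np.pi * x) + (x > 0.7)

coeffs = haar_decompose(model, levels=6)
kept = kept_dof(coeffs, tol=0.05)
print(f"{kept} of {model.size} coefficients retained")
```

Smooth regions contribute almost nothing after thresholding, so the retained degrees of freedom track model complexity rather than grid size, which is the sense in which the discretization becomes quasi-minimal.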

  11. Parallel computing method for simulating hydrological processes of large rivers under climate change

    NASA Astrophysics Data System (ADS)

    Wang, H.; Chen, Y.

    2016-12-01

    Climate change is one of the most prominent global environmental problems, and it has altered the spatial and temporal distribution of watershed hydrological processes, especially in the world's large rivers. Physically based distributed hydrological models can simulate these processes better than lumped models, but the simulations involve an enormous amount of computation, especially for large rivers, and therefore demand computing resources that may not be steadily available to researchers, or only at high expense; this has seriously restricted research and application. Current parallel methods mostly parallelize over the space and time dimensions: they compute the natural features of the distributed hydrological model in order, grid by grid (unit by unit, basin by basin), from upstream to downstream. This article proposes a high-performance computing method for hydrological process simulation with a high speedup ratio and high parallel efficiency. It combines the temporal and spatial runoff characteristics of the distributed hydrological model with distributed data storage, an in-memory database, distributed computing, and parallel computing based on units of computing power. The method is highly adaptable and extensible: it makes full use of the available computing and storage resources even when those resources are limited, and its computing efficiency improves linearly as resources are added. It can satisfy the parallel computing requirements of hydrological process simulation in small, medium, and large rivers.
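
The upstream-to-downstream scheduling described above can be sketched as a topological level schedule, in which all sub-basins within a level are mutually independent and can run concurrently. This is an illustrative toy (a hypothetical five-basin network and a stand-in `simulate` function), not the authors' implementation:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical five-basin river network: each sub-basin lists its upstream
# neighbours. A and B drain into C; C and D drain into E (the outlet).
upstream = {
    "A": [], "B": [], "C": ["A", "B"],
    "D": [], "E": ["C", "D"],
}

def topological_levels(upstream):
    """Group sub-basins into levels; basins within one level are mutually
    independent, so a whole level can be simulated concurrently."""
    levels, done = [], set()
    while len(done) < len(upstream):
        ready = [b for b in upstream
                 if b not in done and all(u in done for u in upstream[b])]
        levels.append(ready)
        done.update(ready)
    return levels

def simulate(basin, inflow):
    """Stand-in for the per-basin distributed-model computation:
    one unit of local runoff plus whatever is routed in from upstream."""
    return 1.0 + inflow

outflow = {}
with ThreadPoolExecutor() as pool:
    for level in topological_levels(upstream):
        inflows = [sum(outflow[u] for u in upstream[b]) for b in level]
        for basin, q in zip(level, pool.map(simulate, level, inflows)):
            outflow[basin] = q

print(outflow["E"])  # runoff routed to the outlet; prints 5.0
```

Because each level only depends on levels already computed, adding workers speeds up the wide levels without changing the routed result at the outlet.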

  12. SiGN: large-scale gene network estimation environment for high performance computing.

    PubMed

    Tamada, Yoshinori; Shimamura, Teppei; Yamaguchi, Rui; Imoto, Seiya; Nagasaki, Masao; Miyano, Satoru

    2011-01-01

    Our research group is currently developing software for estimating large-scale gene networks from gene expression data. The software, called SiGN, is specifically designed for the Japanese flagship supercomputer "K computer" which is planned to achieve 10 petaflops in 2012, and other high performance computing environments including Human Genome Center (HGC) supercomputer system. SiGN is a collection of gene network estimation software with three different sub-programs: SiGN-BN, SiGN-SSM and SiGN-L1. In these three programs, five different models are available: static and dynamic nonparametric Bayesian networks, state space models, graphical Gaussian models, and vector autoregressive models. All these models require a huge amount of computational resources for estimating large-scale gene networks and therefore are designed to be able to exploit the speed of 10 petaflops. The software will be available freely for "K computer" and HGC supercomputer system users. The estimated networks can be viewed and analyzed by Cell Illustrator Online and SBiP (Systems Biology integrative Pipeline). The software project web site is available at http://sign.hgc.jp/ .

  13. An overview of computer viruses in a research environment

    NASA Technical Reports Server (NTRS)

    Bishop, Matt

    1991-01-01

    The threat of attack by computer viruses is in reality a very small part of a much more general threat, specifically threats aimed at subverting computer security. Here, computer viruses are examined as a form of malicious logic in a research and development environment. A relation is drawn between the viruses and various models of security and integrity. Current research techniques aimed at controlling the threats posed to computer systems by computer viruses in particular, and malicious logic in general, are examined. Finally, a brief examination of the vulnerabilities of research and development systems that malicious logic and computer viruses may exploit is undertaken.

  14. Computational neurorehabilitation: modeling plasticity and learning to predict recovery.

    PubMed

    Reinkensmeyer, David J; Burdet, Etienne; Casadio, Maura; Krakauer, John W; Kwakkel, Gert; Lang, Catherine E; Swinnen, Stephan P; Ward, Nick S; Schweighofer, Nicolas

    2016-04-30

    Despite progress in using computational approaches to inform medicine and neuroscience in the last 30 years, there have been few attempts to model the mechanisms underlying sensorimotor rehabilitation. We argue that a fundamental understanding of neurologic recovery, and as a result accurate predictions at the individual level, will be facilitated by developing computational models of the salient neural processes, including plasticity and learning systems of the brain, and integrating them into a context specific to rehabilitation. Here, we therefore discuss Computational Neurorehabilitation, a newly emerging field aimed at modeling plasticity and motor learning to understand and improve movement recovery of individuals with neurologic impairment. We first explain how the emergence of robotics and wearable sensors for rehabilitation is providing data that make development and testing of such models increasingly feasible. We then review key aspects of plasticity and motor learning that such models will incorporate. We proceed by discussing how computational neurorehabilitation models relate to the current benchmark in rehabilitation modeling - regression-based, prognostic modeling. We then critically discuss the first computational neurorehabilitation models, which have primarily focused on modeling rehabilitation of the upper extremity after stroke, and show how even simple models have produced novel ideas for future investigation. Finally, we conclude with key directions for future research, anticipating that soon we will see the emergence of mechanistic models of motor recovery that are informed by clinical imaging results and driven by the actual movement content of rehabilitation therapy as well as wearable sensor-based records of daily activity.

  15. Wave Propagation in Non-Stationary Statistical Mantle Models at the Global Scale

    NASA Astrophysics Data System (ADS)

    Meschede, M.; Romanowicz, B. A.

    2014-12-01

    We study the effect of statistically distributed heterogeneities that are smaller than the resolution of current tomographic models on seismic waves that propagate through the Earth's mantle at teleseismic distances. Current global tomographic models are missing small-scale structure as evidenced by the failure of even accurate numerical synthetics to explain enhanced coda in observed body and surface waveforms. One way to characterize small scale heterogeneity is to construct random models and confront observed coda waveforms with predictions from these models. Statistical studies of the coda typically rely on models with simplified isotropic and stationary correlation functions in Cartesian geometries. We show how to construct more complex random models for the mantle that can account for arbitrary non-stationary and anisotropic correlation functions as well as for complex geometries. Although this method is computationally heavy, model characteristics such as translational, cylindrical or spherical symmetries can be used to greatly reduce the complexity such that this method becomes practical. With this approach, we can create 3D models of the full spherical Earth that can be radially anisotropic, i.e. with different horizontal and radial correlation functions, and radially non-stationary, i.e. with radially varying model power and correlation functions. Both of these features are crucial for a statistical description of the mantle in which structure depends to first order on the spherical geometry of the Earth. We combine different random model realizations of S velocity with current global tomographic models that are robust at long wavelengths (e.g. Meschede and Romanowicz, 2014, GJI submitted), and compute the effects of these hybrid models on the wavefield with a spectral element code (SPECFEM3D_GLOBE). We finally analyze the resulting coda waves for our model selection and compare our computations with observations. 
Based on these observations, we make predictions about the strength of unresolved small-scale structure and extrinsic attenuation.
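
One common way to build such random models, sketched here in 1-D for illustration (a generic spectral-filtering construction, not necessarily the authors' method), is to filter white noise so that it acquires a prescribed correlation length, then blend fields with different correlation lengths to obtain radial non-stationarity:

```python
import numpy as np

rng = np.random.default_rng(42)

def correlated_noise(n, corr_len, rng):
    """White noise filtered in the spectral domain so the field acquires a
    Gaussian correlation of (approximately) corr_len samples."""
    k = 2 * np.pi * np.fft.rfftfreq(n)
    amp = np.exp(-(k * corr_len) ** 2 / 4.0)  # sqrt of Gaussian power spectrum
    field = np.fft.irfft(amp * np.fft.rfft(rng.standard_normal(n)), n)
    return field / field.std()  # normalise to unit model power

# Radially non-stationary 1-D 'mantle profile': short correlation length near
# the surface, long correlation length at depth, blended by a depth weight.
n = 512
depth = np.linspace(0.0, 1.0, n)
shallow = correlated_noise(n, corr_len=4, rng=rng)
deep = correlated_noise(n, corr_len=40, rng=rng)
model = (1 - depth) * shallow + depth * deep
```

The same blending idea extends to anisotropic correlation (different filters along radial and horizontal directions) and to spherical geometry, at correspondingly higher computational cost.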

  16. Initial Computations of Vertical Displacement Events with NIMROD

    NASA Astrophysics Data System (ADS)

    Bunkers, Kyle; Sovinec, C. R.

    2014-10-01

    Disruptions associated with vertical displacement events (VDEs) have potential for causing considerable physical damage to ITER and other tokamak experiments. We report on initial computations of generic axisymmetric VDEs using the NIMROD code [Sovinec et al., JCP 195, 355 (2004)]. An implicit thin-wall computation has been implemented to couple separate internal and external regions without numerical stability limitations. A simple rectangular cross-section domain generated with the NIMEQ code [Howell and Sovinec, CPC (2014)] modified to use a symmetry condition at the midplane is used to test linear and nonlinear axisymmetric VDE computation. As current in simulated external coils for large-R/a cases is varied, there is a clear n = 0 stability threshold, which lies below the decay-index criterion of the current-loop tokamak model for VDEs [Mukhovatov and Shafranov, Nucl. Fusion 11, 605 (1971)]; a scan of wall distance indicates the offset is due to the influence of the conducting wall. Results with a vacuum region surrounding a resistive wall will also be presented. Initial nonlinear computations show large vertical displacement of an intact simulated tokamak. This effort is supported by U.S. Department of Energy Grant DE-FG02-06ER54850.

  17. Integrating aerodynamic surface modeling for computational fluid dynamics with computer aided structural analysis, design, and manufacturing

    NASA Technical Reports Server (NTRS)

    Thorp, Scott A.

    1992-01-01

    This presentation will discuss the development of a NASA Geometry Exchange Specification for transferring aerodynamic surface geometry between LeRC systems and grid generation software used for computational fluid dynamics research. The proposed specification is based on a subset of the Initial Graphics Exchange Specification (IGES). The presentation will include discussion of how the NASA-IGES standard will accommodate improved computer aided design inspection methods and reverse engineering techniques currently being developed. The presentation is in viewgraph format.

  18. Computer modeling of heat pipe performance

    NASA Technical Reports Server (NTRS)

    Peterson, G. P.

    1983-01-01

    A parametric study of the defining equations which govern the steady state operational characteristics of the Grumman monogroove dual passage heat pipe is presented. These defining equations are combined to develop a mathematical model which describes and predicts the operational and performance capabilities of a specific heat pipe given the necessary physical characteristics and working fluid. Included is a brief review of the current literature, a discussion of the governing equations, and a description of both the mathematical and computer model. Final results of preliminary test runs of the model are presented and compared with experimental tests on actual prototypes.

  19. The importance of vertical resolution in the free troposphere for modeling intercontinental plumes

    NASA Astrophysics Data System (ADS)

    Zhuang, Jiawei; Jacob, Daniel J.; Eastham, Sebastian D.

    2018-05-01

    Chemical plumes in the free troposphere can preserve their identity for more than a week as they are transported on intercontinental scales. Current global models cannot reproduce this transport. The plumes dilute far too rapidly due to numerical diffusion in sheared flow. We show how model accuracy can be limited by either horizontal resolution (Δx) or vertical resolution (Δz). Balancing horizontal and vertical numerical diffusion, and weighing computational cost, implies an optimal grid resolution ratio (Δx/Δz)_opt ∼ 1000 for simulating the plumes. This is considerably higher than current global models (Δx/Δz ∼ 20) and explains the rapid plume dilution in the models as caused by insufficient vertical resolution. Plume simulations with the Geophysical Fluid Dynamics Laboratory Finite-Volume Cubed-Sphere Dynamical Core (GFDL-FV3) over a range of horizontal and vertical grid resolutions confirm this limiting behavior. Our highest-resolution simulation (Δx ≈ 25 km, Δz ≈ 80 m) preserves the maximum mixing ratio in the plume to within 35 % after 8 days in strongly sheared flow, a drastic improvement over current models. Adding free tropospheric vertical levels in global models is computationally inexpensive and would also improve the simulation of water vapor.

  20. Estimating wildfire behavior and effects

    Treesearch

    Frank A. Albini

    1976-01-01

    This paper presents a brief survey of the research literature on wildfire behavior and effects and assembles formulae and graphical computation aids based on selected theoretical and empirical models. The uses of mathematical fire behavior models are discussed, and the general capabilities and limitations of currently available models are outlined.

  1. Three-dimensional geoelectric modelling with optimal work/accuracy rate using an adaptive wavelet algorithm

    NASA Astrophysics Data System (ADS)

    Plattner, A.; Maurer, H. R.; Vorloeper, J.; Dahmen, W.

    2010-08-01

    Despite the ever-increasing power of modern computers, realistic modelling of complex 3-D earth models is still a challenging task and requires substantial computing resources. The overwhelming majority of current geophysical modelling approaches includes either finite difference or non-adaptive finite element algorithms and variants thereof. These numerical methods usually require the subsurface to be discretized with a fine mesh to accurately capture the behaviour of the physical fields. However, this may result in excessive memory consumption and computing times. A common feature of most of these algorithms is that the modelled data discretizations are independent of the model complexity, which may be wasteful when there are only minor to moderate spatial variations in the subsurface parameters. Recent developments in the theory of adaptive numerical solvers have the potential to overcome this problem. Here, we consider an adaptive wavelet-based approach that is applicable to a large range of problems, also including nonlinear problems. In comparison with earlier applications of adaptive solvers to geophysical problems we employ here a new adaptive scheme whose core ingredients arose from a rigorous analysis of the overall asymptotically optimal computational complexity, including in particular, an optimal work/accuracy rate. Our adaptive wavelet algorithm offers several attractive features: (i) for a given subsurface model, it allows the forward modelling domain to be discretized with a quasi minimal number of degrees of freedom, (ii) sparsity of the associated system matrices is guaranteed, which makes the algorithm memory efficient and (iii) the modelling accuracy scales linearly with computing time. We have implemented the adaptive wavelet algorithm for solving 3-D geoelectric problems. To test its performance, numerical experiments were conducted with a series of conductivity models exhibiting varying degrees of structural complexity. 
Results were compared with a non-adaptive finite element algorithm, which incorporates an unstructured mesh to best fit subsurface boundaries. Such algorithms represent the current state-of-the-art in geoelectric modelling. An analysis of the numerical accuracy as a function of the number of degrees of freedom revealed that the adaptive wavelet algorithm outperforms the finite element solver for simple and moderately complex models, whereas the results become comparable for models with high spatial variability of electrical conductivities. The linear dependence between the modelling error and the computing time proved to be model-independent. This feature will allow very efficient computations using large-scale models as soon as our experimental code is optimized in terms of its implementation.

  2. Large scale cardiac modeling on the Blue Gene supercomputer.

    PubMed

    Reumann, Matthias; Fitch, Blake G; Rayshubskiy, Aleksandr; Keller, David U; Weiss, Daniel L; Seemann, Gunnar; Dössel, Olaf; Pitman, Michael C; Rice, John J

    2008-01-01

    Multi-scale, multi-physical heart models have not yet been able to include a high degree of accuracy and resolution with respect to model detail and spatial resolution due to computational limitations of current systems. We propose a framework to compute large scale cardiac models. Decomposition of anatomical data into segments to be distributed on a parallel computer is carried out by optimal recursive bisection (ORB). The algorithm takes into account a computational load parameter which has to be adjusted according to the cell models used. The diffusion term is realized by the monodomain equations. The anatomical data-set was given by both ventricles of the Visible Female data-set in a 0.2 mm resolution. Heterogeneous anisotropy was included in the computation. Model weights as input for the decomposition and load balancing were set to (a) 1 for tissue and 0 for non-tissue elements; (b) 10 for tissue and 1 for non-tissue elements. Scaling results for 512, 1024, 2048, 4096 and 8192 computational nodes were obtained for 10 ms simulation time. The simulations were carried out on an IBM Blue Gene/L parallel computer. A 1 s simulation was then carried out on 2048 nodes for the optimal model load. Load balances did not differ significantly across computational nodes even if the number of data elements distributed to each node differed greatly. Since the ORB algorithm did not take into account computational load due to communication cycles, the speedup is close to optimal for the computation time but not optimal overall due to the communication overhead. However, the simulation times were reduced from 87 minutes on 512 to 11 minutes on 8192 nodes. This work demonstrates that it is possible to run simulations of the presented detailed cardiac model within hours for the simulation of a heartbeat.
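
The cut-selection idea of ORB can be sketched in 1-D using the abstract's weighting scheme (b), tissue = 10 and non-tissue = 1. This toy splits a single column of elements, whereas the real algorithm bisects a 3-D anatomical volume, but the load-balancing logic is the same in spirit:

```python
def recursive_bisect(weights, n_parts):
    """Optimal recursive bisection along one dimension: cut where the
    cumulative load is closest to the target split, then recurse."""
    if n_parts == 1:
        return [weights]
    total = sum(weights)
    target = total * (n_parts // 2) / n_parts
    cum, cut = 0.0, 0
    for i, w in enumerate(weights):
        if cum + w > target and i > 0:
            break
        cum += w
        cut = i + 1
    left = recursive_bisect(weights[:cut], n_parts // 2)
    right = recursive_bisect(weights[cut:], n_parts - n_parts // 2)
    return left + right

# Tissue elements weighted 10, non-tissue elements weighted 1.
column = [1, 1, 10, 10, 10, 10, 1, 1]
parts = recursive_bisect(column, 2)
print([sum(p) for p in parts])  # prints [22, 22]
```

Note that the two partitions carry equal load (22 each) despite holding different numbers of elements (4 vs 4 here, but generally unequal), which mirrors the abstract's observation that load balance held even when element counts per node differed greatly.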

  3. Visual Computing Environment Workshop

    NASA Technical Reports Server (NTRS)

    Lawrence, Charles (Compiler)

    1998-01-01

    The Visual Computing Environment (VCE) is a framework for intercomponent and multidisciplinary computational simulations. Many current engineering analysis codes simulate various aspects of aircraft engine operation. For example, existing computational fluid dynamics (CFD) codes can model the airflow through individual engine components such as the inlet, compressor, combustor, turbine, or nozzle. Currently, these codes are run in isolation, making intercomponent and complete system simulations very difficult to perform. In addition, management and utilization of these engineering codes for coupled component simulations is a complex, laborious task, requiring substantial experience and effort. To facilitate multicomponent aircraft engine analysis, the CFD Research Corporation (CFDRC) is developing the VCE system. This system, which is part of NASA's Numerical Propulsion Simulation System (NPSS) program, can couple various engineering disciplines, such as CFD, structural analysis, and thermal analysis.

  4. Multi-temperature state-dependent equivalent circuit discharge model for lithium-sulfur batteries

    NASA Astrophysics Data System (ADS)

    Propp, Karsten; Marinescu, Monica; Auger, Daniel J.; O'Neill, Laura; Fotouhi, Abbas; Somasundaram, Karthik; Offer, Gregory J.; Minton, Geraint; Longo, Stefano; Wild, Mark; Knap, Vaclav

    2016-10-01

    Lithium-sulfur (Li-S) batteries are described extensively in the literature, but existing computational models aimed at scientific understanding are too complex for use in applications such as battery management. Computationally simple models are vital for exploitation. This paper proposes a non-linear state-of-charge dependent Li-S equivalent circuit network (ECN) model for a Li-S cell under discharge. Li-S batteries are fundamentally different to Li-ion batteries, and require chemistry-specific models. A new Li-S model is obtained using a 'behavioural' interpretation of the ECN model; as Li-S exhibits a 'steep' open-circuit voltage (OCV) profile at high states-of-charge, identification methods are designed to take into account OCV changes during current pulses. The prediction-error minimization technique is used. The model is parameterized from laboratory experiments using a mixed-size current pulse profile at four temperatures from 10 °C to 50 °C, giving linearized ECN parameters for a range of states-of-charge, currents and temperatures. These are used to create a nonlinear polynomial-based battery model suitable for use in a battery management system. When the model is used to predict the behaviour of a validation data set representing an automotive NEDC driving cycle, the terminal voltage predictions are judged accurate with a root mean square error of 32 mV.
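
As a rough illustration of what an ECN discharge model computes, here is a zeroth-order sketch. The OCV and resistance functions are invented placeholders shaped like the "steep then flat" Li-S profile described above; they are not the parameters identified in the paper:

```python
import numpy as np

# Invented placeholder curves, merely shaped like a Li-S discharge profile:
# a steep OCV region above ~75% SoC, then a long flat plateau.
def ocv(soc):
    return 2.15 + 0.25 * float(np.clip((soc - 0.75) / 0.25, 0.0, 1.0))

def r_int(soc):
    return 0.03 + 0.02 * (1.0 - soc)   # internal resistance in ohms (assumed)

def simulate_discharge(capacity_ah, current_a, dt_s, steps):
    """Zeroth-order ECN under constant current: v = OCV(soc) - i*R(soc),
    with state-of-charge tracked by coulomb counting."""
    soc, log = 1.0, []
    for _ in range(steps):
        log.append((soc, ocv(soc) - current_a * r_int(soc)))
        soc -= current_a * dt_s / (capacity_ah * 3600.0)
    return log

log = simulate_discharge(capacity_ah=3.4, current_a=1.7, dt_s=10.0, steps=100)
```

A full ECN of the kind the paper describes would add one or more RC pairs for transient response and make all parameters functions of current and temperature as well as state-of-charge; the steep OCV region is precisely why the identification procedure must account for OCV changes during current pulses.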

  5. Influence of computational domain size on the pattern formation of the phase field crystals

    NASA Astrophysics Data System (ADS)

    Starodumov, Ilya; Galenko, Peter; Alexandrov, Dmitri; Kropotin, Nikolai

    2017-04-01

    Modeling of the crystallization process by the phase field crystal (PFC) method represents one of the important directions of modern computational materials science. This method makes it possible to study the formation of stable or metastable crystal structures. In this paper, we study the effect of computational domain size on the crystal pattern formation obtained as a result of computer simulation by the PFC method. In the current report, we show that if the size of the computational domain is changed, the result of modeling may be a structure in a metastable phase instead of the pure stable state. The authors present a possible theoretical justification for the observed effect and provide explanations of a possible modification of the PFC method to account for this phenomenon.

  6. Combined Numerical/Analytical Perturbation Solutions of the Navier-Stokes Equations for Aerodynamic Ejector/Mixer Nozzle Flows

    NASA Technical Reports Server (NTRS)

    DeChant, Lawrence Justin

    1998-01-01

    In spite of rapid advances in both scalar and parallel computational tools, the large number of variables involved in both design and inverse problems makes the use of sophisticated fluid flow models impractical. With this restriction, it is concluded that an important family of methods for mathematical/computational development are reduced or approximate fluid flow models. In this study a combined perturbation/numerical modeling methodology is developed which provides a rigorously derived family of solutions. The mathematical model is computationally more efficient than classical boundary-layer methods but provides important two-dimensional information not available using quasi-1-d approaches. An additional strength of the current methodology is its ability to locally predict static pressure fields in a manner analogous to more sophisticated parabolized Navier-Stokes (PNS) formulations. To resolve singular behavior, the model utilizes classical analytical solution techniques. Hence, analytical methods have been combined with efficient numerical methods to yield an efficient hybrid fluid flow model. In particular, the main objective of this research has been to develop a system of analytical and numerical ejector/mixer nozzle models, which require minimal empirical input. A computer code, DREA (Differential Reduced Ejector/mixer Analysis), has been developed with the ability to run sufficiently fast so that it may be used either as a subroutine or called by a design optimization routine. Models are of direct use to the High Speed Civil Transport Program (a joint government/industry project seeking to develop an economically viable U.S. commercial supersonic transport vehicle) and are currently being adopted by both NASA and industry. Experimental validation of these models is provided by comparison to results obtained from open literature and Limited Exclusive Right Distribution (LERD) sources, as well as dedicated experiments performed at Texas A&M.
These experiments have been performed using a hydraulic/gas flow analog. Comparisons of DREA computations with experimental data, which include entrainment, thrust, and local profile information, show overall good agreement. Computational time studies indicate that DREA provides considerably more information at a lower computational cost than contemporary ejector nozzle design models. Finally, physical limitations of the method, deviations from experimental data, potential improvements and alternative formulations are described. This report represents closure to the NASA Graduate Researchers Program. Versions of the DREA code and a user's guide may be obtained from the NASA Lewis Research Center.

  7. Prospects for improving the representation of coastal and shelf seas in global ocean models

    NASA Astrophysics Data System (ADS)

    Holt, Jason; Hyder, Patrick; Ashworth, Mike; Harle, James; Hewitt, Helene T.; Liu, Hedong; New, Adrian L.; Pickles, Stephen; Porter, Andrew; Popova, Ekaterina; Icarus Allen, J.; Siddorn, John; Wood, Richard

    2017-02-01

    Accurately representing coastal and shelf seas in global ocean models represents one of the grand challenges of Earth system science. They are regions of immense societal importance through the goods and services they provide, hazards they pose and their role in global-scale processes and cycles, e.g. carbon fluxes and dense water formation. However, they are poorly represented in the current generation of global ocean models. In this contribution, we aim to briefly characterise the problem, and then to identify the important physical processes, and their scales, needed to address this issue in the context of the options available to resolve these scales globally and the evolving computational landscape. We find barotropic and topographic scales are well resolved by the current state-of-the-art model resolutions, e.g. nominal 1/12°, and still reasonably well resolved at 1/4°; here, the focus is on process representation. We identify tides, vertical coordinates, river inflows and mixing schemes as four areas where modelling approaches can readily be transferred from regional to global modelling with substantial benefit. In terms of finer-scale processes, we find that a 1/12° global model resolves the first baroclinic Rossby radius for only ˜ 8 % of regions < 500 m deep, but this increases to ˜ 70 % for a 1/72° model, so resolving scales globally requires substantially finer resolution than the current state of the art. We quantify the benefit of improved resolution and process representation using 1/12° global- and basin-scale northern North Atlantic Nucleus for European Modelling of the Ocean (NEMO) simulations; the latter includes tides and a k-ɛ vertical mixing scheme. These are compared with global stratification observations and 19 models from CMIP5. In terms of correlation and basin-wide rms error, the high-resolution models outperform all these CMIP5 models.
The model with tides shows improved seasonal cycles compared to the high-resolution model without tides. The benefits of resolution are particularly apparent in eastern boundary upwelling zones. To explore the balance between the size of a globally refined model and that of multiscale modelling options (e.g. finite element, finite volume or a two-way nesting approach), we consider a simple scale analysis and a conceptual grid refining approach. We put this analysis in the context of evolving computer systems, discussing model turnaround time, scalability and resource costs. Using a simple cost model compared to a reference configuration (taken to be a 1/4° global model in 2011) and the increasing performance of the UK Research Councils' computer facility, we estimate an unstructured mesh multiscale approach, resolving process scales down to 1.5 km, would use a comparable share of the computer resource by 2021, the two-way nested multiscale approach by 2022, and a 1/72° global model by 2026. However, we also note that a 1/12° global model would not have a comparable computational cost to a 1° global model in 2017 until 2027. Hence, we conclude that for computationally expensive models (e.g. for oceanographic research or operational oceanography), resolving scales to ˜ 1.5 km would be routinely practical in about a decade given substantial effort on numerical and computational development. For complex Earth system models, this extends to about 2 decades, suggesting the focus here needs to be on improved process parameterisation to meet these challenges.
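    The cost scaling behind these estimates can be sketched with a toy model. Assuming, purely as illustration rather than the paper's actual cost model, that compute cost grows with the cube of horizontal refinement (two horizontal dimensions, plus a CFL-limited time step) and that facility capacity grows by roughly 1.8× per year, the abstract's 2026 estimate for a 1/72° model against the 1/4° 2011 reference is approximately reproduced:

```python
import math

def relative_cost(refinement):
    # Refining the horizontal grid by a factor r multiplies cost by ~r**3:
    # r**2 for the two horizontal dimensions, and another factor of r
    # because the CFL condition shortens the time step proportionally.
    return refinement ** 3

def year_affordable(refinement, ref_year=2011, annual_growth=1.8):
    # Year by which machine capacity has grown enough that the refined
    # model uses the same share of the facility as the reference did.
    return ref_year + math.log(relative_cost(refinement)) / math.log(annual_growth)

# 1/72° is an 18-fold refinement of the 1/4° reference configuration.
print(round(year_affordable(18)))  # → 2026
```

    Real cost models are more involved (vertical resolution, memory, and I/O also scale), but this cube-law sketch shows why an 18-fold refinement implies a wait of well over a decade.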

  8. Toward an in-situ analytics and diagnostics framework for earth system models

    NASA Astrophysics Data System (ADS)

    Anantharaj, Valentine; Wolf, Matthew; Rasch, Philip; Klasky, Scott; Williams, Dean; Jacob, Rob; Ma, Po-Lun; Kuo, Kwo-Sen

    2017-04-01

    The development roadmaps for many earth system models (ESMs) aim for a globally cloud-resolving model targeting the pre-exascale and exascale systems of the future. The ESMs will also incorporate more complex physics, chemistry and biology, thereby vastly increasing the fidelity of the information content simulated by the model. We will then be faced with an unprecedented volume of simulation output that would need to be processed and analyzed concurrently in order to derive the valuable scientific results. We are already at this threshold with our current generation of ESMs at higher resolution simulations. Currently, the nominal I/O throughput in the Community Earth System Model (CESM) via the Parallel IO (PIO) library is around 100 MB/s. The high-frequency I/O requirements would add a further 1 GB per simulated hour, translating to roughly 4 min of wallclock time per simulated day, 24.33 wallclock hours per simulated model year, and 1,752,000 core-hours of charge per simulated model year on the Titan supercomputer at the Oak Ridge Leadership Computing Facility. There is also a pending need for a 3X greater volume of simulation output. Meanwhile, many ESMs use instrument simulators to run forward models to compare model simulations against satellite and ground-based instruments, such as radars and radiometers. The CFMIP Observation Simulator Package (COSP) is used in CESM as well as the Accelerated Climate Model for Energy (ACME), one of the ESMs specifically targeting current and emerging leadership-class computing platforms. These simulators can be computationally expensive, accounting for as much as 30% of the computational cost. Hence the data are often written to output files that are then used for offline calculations. Again, the I/O bottleneck becomes a limitation. 
Detection and attribution studies also use large volumes of data for pattern recognition and feature extraction to analyze weather and climate phenomena such as tropical cyclones, atmospheric rivers, blizzards, etc. It is evident that ESMs need an in-situ framework to decouple the diagnostics and analytics from the prognostics and physics computations of the models so that the diagnostic computations could be performed concurrently without limiting model throughput. We are designing a science-driven online analytics framework for earth system models. Our approach is to adopt several data workflow technologies, such as the Adaptable IO System (ADIOS), being developed under the U.S. Exascale Computing Project (ECP) and integrate these to allow for extreme-performance IO, in situ workflow integration, and science-driven analytics and visualization, all in an easy-to-use computational framework. This will allow science teams to write data 100-1000 times faster and seamlessly move from post-processing the output for validation and verification purposes to performing these calculations in situ. We can readily envision a near-term future where earth system models like ACME and CESM will have to address not only the challenges of the volume of data but also the velocity of the data. Earth system models of the future in the exascale era, as they incorporate more complex physics at higher resolutions, will be able to analyze more simulation content without having to compromise targeted model throughput.
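    The wallclock arithmetic quoted above can be reproduced directly. Note that the ~72,000-core job size used below is inferred from the abstract's own numbers (1,752,000 core-hours over 24.33 hours), not stated explicitly:

```python
def io_overhead(gb_per_sim_hour=1.0, io_mb_per_s=100.0, cores=72_000):
    # Time spent writing 1 GB of high-frequency output per simulated hour
    # at the nominal 100 MB/s PIO throughput (1 GB taken as 1000 MB here).
    secs_per_sim_hour = gb_per_sim_hour * 1000 / io_mb_per_s   # 10 s
    mins_per_sim_day = secs_per_sim_hour * 24 / 60             # 4 min
    hours_per_sim_year = mins_per_sim_day * 365 / 60           # ~24.33 h
    core_hours = hours_per_sim_year * cores                    # ~1.75M
    return mins_per_sim_day, hours_per_sim_year, core_hours

mins, hours, charge = io_overhead()
print(mins, round(hours, 2), round(charge))  # → 4.0 24.33 1752000
```

    The point of the arithmetic is that I/O stalls are charged across every core of the allocation, which is why a 10-second write per simulated hour becomes millions of core-hours per simulated year.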

  9. Perspectives for computational modeling of cell replacement for neurological disorders

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aimone, James B.; Weick, Jason P.

    Mathematical modeling of anatomically-constrained neural networks has provided significant insights regarding the response of networks to neurological disorders or injury. Furthermore, a logical extension of these models is to incorporate treatment regimens to investigate network responses to intervention. The addition of nascent neurons from stem cell precursors into damaged or diseased tissue has been used as a successful therapeutic tool in recent decades. Interestingly, models have been developed to examine the incorporation of new neurons into intact adult structures, particularly the dentate granule neurons of the hippocampus. These studies suggest that the unique properties of maturing neurons can impact circuit behavior in unanticipated ways. In this perspective, we review the current status of models used to examine damaged CNS structures with particular focus on cortical damage due to stroke. Secondly, we suggest that computational modeling of cell replacement therapies can be made feasible by implementing approaches taken by current models of adult neurogenesis. The development of these models is critical for generating hypotheses regarding transplant therapies and improving outcomes by tailoring transplants to desired effects.

  10. Perspectives for computational modeling of cell replacement for neurological disorders

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aimone, James B.; Weick, Jason P.

    Mathematical modeling of anatomically-constrained neural networks has provided significant insights regarding the response of networks to neurological disorders or injury. A logical extension of these models is to incorporate treatment regimens to investigate network responses to intervention. The addition of nascent neurons from stem cell precursors into damaged or diseased tissue has been used as a successful therapeutic tool in recent decades. Interestingly, models have been developed to examine the incorporation of new neurons into intact adult structures, particularly the dentate granule neurons of the hippocampus. These studies suggest that the unique properties of maturing neurons can impact circuit behavior in unanticipated ways. In this perspective, we review the current status of models used to examine damaged CNS structures with particular focus on cortical damage due to stroke. Secondly, we suggest that computational modeling of cell replacement therapies can be made feasible by implementing approaches taken by current models of adult neurogenesis. The development of these models is critical for generating hypotheses regarding transplant therapies and improving outcomes by tailoring transplants to desired effects.

  11. Validation of Computational Models in Biomechanics

    PubMed Central

    Henninger, Heath B.; Reese, Shawn P.; Anderson, Andrew E.; Weiss, Jeffrey A.

    2010-01-01

    The topics of verification and validation (V&V) have increasingly been discussed in the field of computational biomechanics, and many recent articles have applied these concepts in an attempt to build credibility for models of complex biological systems. V&V are evolving techniques that, if used improperly, can lead to false conclusions about a system under study. In basic science these erroneous conclusions may lead to failure of a subsequent hypothesis, but they can have more profound effects if the model is designed to predict patient outcomes. While several authors have reviewed V&V as they pertain to traditional solid and fluid mechanics, it is the intent of this manuscript to present them in the context of computational biomechanics. Specifically, the task of model validation will be discussed with a focus on current techniques. It is hoped that this review will encourage investigators to engage and adopt the V&V process in an effort to increase peer acceptance of computational biomechanics models. PMID:20839648

  12. Basic study on a lower-energy defibrillation method using computer simulation and cultured myocardial cell models.

    PubMed

    Yaguchi, A; Nagase, K; Ishikawa, M; Iwasaka, T; Odagaki, M; Hosaka, H

    2006-01-01

    Computer simulation and myocardial cell models were used to evaluate a low-energy defibrillation technique. A generated spiral wave, considered to be a mechanism of fibrillation, and fibrillation itself were investigated using two myocardial sheet models: a two-dimensional computer simulation model and a two-dimensional experimental model. A new defibrillation technique is desired that has few side effects on cardiac muscle, such as those induced by the current passing through the patient's body. The purpose of the present study is to conduct a basic investigation into an efficient defibrillation method. In order to evaluate the defibrillation method, the propagation of excitation in the myocardial sheet is measured during the normal state and during fibrillation, respectively. The advantages of the low-energy defibrillation technique are then discussed based on the stimulation timing.
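    The abstract does not give the model equations, but spiral waves of the kind described are commonly studied with simple excitable-media cellular automata. A minimal Greenberg-Hastings sketch of a two-dimensional myocardial sheet (an illustration of the general technique, not the authors' model):

```python
def gh_step(grid, n_states=3):
    """One Greenberg-Hastings update on a 2D list-of-lists grid.

    States: 0 = resting, 1 = excited, 2..n_states-1 = refractory.
    A resting cell fires if any von Neumann neighbor is excited;
    excited/refractory cells advance and eventually return to rest.
    Periodic boundaries keep the sketch short.
    """
    h, w = len(grid), len(grid[0])
    new = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            s = grid[i][j]
            if s > 0:                       # excited/refractory: advance state
                new[i][j] = (s + 1) % n_states
            else:                           # resting: fire if a neighbor fires
                nbrs = [grid[(i - 1) % h][j], grid[(i + 1) % h][j],
                        grid[i][(j - 1) % w], grid[i][(j + 1) % w]]
                new[i][j] = 1 if 1 in nbrs else 0
    return new
```

    Seeding a broken wavefront in such a grid produces a rotating spiral, and a defibrillation stimulus can then be modeled by forcing a block of cells into the excited or refractory state at a chosen timing, which is the kind of question the stimulation-timing study above addresses.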

  13. An Object Oriented Extensible Architecture for Affordable Aerospace Propulsion Systems

    NASA Technical Reports Server (NTRS)

    Follen, Gregory J.; Lytle, John K. (Technical Monitor)

    2002-01-01

    Driven by a need to explore and develop propulsion systems that exceeded current computing capabilities, NASA Glenn embarked on a novel strategy leading to the development of an architecture that enables propulsion simulations never thought possible before. Full-engine, three-dimensional computational fluid dynamics (CFD) propulsion system simulations were deemed impossible due to the impracticality of the hardware and software computing systems required. However, with a software paradigm shift and an embracing of parallel and distributed processing, an architecture was designed to meet the needs of future propulsion system modeling. The author suggests that the architecture designed at the NASA Glenn Research Center for propulsion system modeling has potential for impacting the direction of development of affordable weapons systems currently under consideration by the Applied Vehicle Technology Panel (AVT). This paper discusses the salient features of the NPSS Architecture, including its interface layer, object layer, implementation for accessing legacy codes, numerical zooming infrastructure and its computing layer. The computing layer focuses on the use and deployment of these propulsion simulations on parallel and distributed computing platforms, which has been the focus of NASA Ames. Additional features of the object-oriented architecture that support MultiDisciplinary (MD) Coupling, computer-aided design (CAD) access and MD coupling objects will be discussed. Included will be a discussion of the successes, challenges and benefits of implementing this architecture.

  14. Specialized computer architectures for computational aerodynamics

    NASA Technical Reports Server (NTRS)

    Stevenson, D. K.

    1978-01-01

    In recent years, computational fluid dynamics has made significant progress in modelling aerodynamic phenomena. Currently, one of the major barriers to future development lies in the compute-intensive nature of the numerical formulations and the relatively high cost of performing these computations on commercially available general purpose computers, a cost that is high in terms of dollar expenditure and/or elapsed time. Today's computing technology will support a program designed to create specialized computing facilities to be dedicated to the important problems of computational aerodynamics. One of the still unresolved questions is the organization of the computing components in such a facility. The characteristics of fluid dynamic problems which will have significant impact on the choice of computer architecture for a specialized facility are reviewed.

  15. The Development of a Dynamic Geomagnetic Cutoff Rigidity Model for the International Space Station

    NASA Technical Reports Server (NTRS)

    Smart, D. F.; Shea, M. A.

    1999-01-01

    We have developed a computer model of geomagnetic vertical cutoffs applicable to the orbit of the International Space Station. This model accounts for the change in geomagnetic cutoff rigidity as a function of geomagnetic activity level. This model was delivered to NASA Johnson Space Center in July 1999 and tested on the Space Radiation Analysis Group DEC-Alpha computer system to ensure that it will properly interface with other software currently used at NASA JSC. The software was designed for ease of being upgraded as other improved models of geomagnetic cutoff as a function of magnetic activity are developed.

  16. Computer Technology-Integrated Projects Should Not Supplant Craft Projects in Science Education

    ERIC Educational Resources Information Center

    Klopp, Tabatha J.; Rule, Audrey C.; Schneider, Jean Suchsland; Boody, Robert M.

    2014-01-01

    The current emphasis on computer technology integration and narrowing of the curriculum has displaced arts and crafts. However, the hands-on, concrete nature of craft work in science modeling enables students to understand difficult concepts and to be engaged and motivated while learning spatial, logical, and sequential thinking skills. Analogy…

  17. Computer Simulation Modeling: A Method for Predicting the Utilities of Alternative Computer-Aided Threat Evaluation Algorithms

    DTIC Science & Technology

    1990-09-01

    1988). Current versions of the ADATS have CATE systems installed, but the software is still under development by the radar manufacturer, Contraves Italiana, a subcontractor to Martin Marietta (USA). Contraves Italiana will deliver the final version of the software to Martin Marietta in 1991. Until then
  18. International Futures (IFs): A Global Issues Simulation for Teaching and Research.

    ERIC Educational Resources Information Center

    Hughes, Barry B.

    This paper describes the International Futures (IFs) computer-assisted simulation game for use with undergraduates. Written in Standard Fortran IV, the model currently runs on mainframe or minicomputers, but has not been adapted for micros. It has been successfully installed on Harris, Burroughs, Telefunken, CDC, Univac, IBM, and Prime machines.…

  19. Creation and Development of an Integrated Model of New Technologies and ESP

    ERIC Educational Resources Information Center

    Garcia Laborda, Jesus

    2004-01-01

    It seems irrefutable that the world is progressing in concert with computer science. Educational applications and projects for first and second language acquisition have not been left behind. However, currently it seems that the reputation of completely computer-based language learning courses has taken a nosedive, and, consequently there has been…

  20. Computer Software: Does It Support a New View of Reading?

    ERIC Educational Resources Information Center

    Case, Carolyn J.

    A study examined commercially available computer software to ascertain its degree of congruency with current methods of reading instruction (the Interactive model) at the first and second grade levels. A survey was conducted of public school educators in Connecticut and experts in the field to determine their level of satisfaction with available…

  1. Associations among Teachers' Attitudes towards Computer-Assisted Education and TPACK Competencies

    ERIC Educational Resources Information Center

    Baturay, Meltem Huri; Gökçearslan, Sahin; Sahin, Semsettin

    2017-01-01

    The current study investigates the attitudes of teachers towards Computer-Assisted Education (CAE) and their knowledge of technology, pedagogy and content via TPACK model that assesses the competencies for developing and implementing successful teaching. There were 280 participants in the study. The results of the study indicate that teachers'…

  2. Progress on the Fabric for Frontier Experiments Project at Fermilab

    NASA Astrophysics Data System (ADS)

    Box, Dennis; Boyd, Joseph; Dykstra, Dave; Garzoglio, Gabriele; Herner, Kenneth; Kirby, Michael; Kreymer, Arthur; Levshina, Tanya; Mhashilkar, Parag; Sharma, Neha

    2015-12-01

    The FabrIc for Frontier Experiments (FIFE) project is an ambitious, major-impact initiative within the Fermilab Scientific Computing Division designed to lead the computing model for Fermilab experiments. FIFE is a collaborative effort between experimenters and computing professionals to design and develop integrated computing models for experiments of varying needs and infrastructure. The major focus of the FIFE project is the development, deployment, and integration of Open Science Grid solutions for high throughput computing, data management, database access and collaboration within experiments. To accomplish this goal, FIFE has developed workflows that utilize Open Science Grid sites along with dedicated and commercial cloud resources. The FIFE project has made significant progress integrating several services into experiment computing operations, including new job submission services, software and reference data distribution through CVMFS repositories, a flexible data transfer client, and access to opportunistic resources on the Open Science Grid. The progress with current experiments and plans for expansion with additional projects will be discussed. FIFE has taken a leading role in the definition of the computing model for Fermilab experiments, aided in the design of computing for experiments beyond Fermilab, and will continue to define the future direction of high throughput computing for future physics experiments worldwide.

  3. Using Cellular Automata for Parking Recommendations in Smart Environments

    PubMed Central

    Horng, Gwo-Jiun

    2014-01-01

    In this work, we propose an innovative adaptive recommendation mechanism for smart parking. The cognitive RF module will transmit the vehicle location information and the parking space requirements to the parking congestion computing center (PCCC) when the driver must find a parking space. Moreover, for the parking spaces, we use a cellular automata (CA) model mechanism that can adjust to full and not full parking lot situations. Here, the PCCC can compute the nearest parking lot, the parking lot status and the current or opposite driving direction with the vehicle location information. By considering the driving direction, we can determine when the vehicles must turn around and thus reduce road congestion and speed up finding a parking space. The recommendation will be sent to the drivers through a wireless communication cognitive radio (CR) model after the computation and analysis by the PCCC. The current study evaluates the performance of this approach by conducting computer simulations. The simulation results show the strengths of the proposed smart parking mechanism in terms of avoiding increased congestion and decreasing the time to find a parking space. PMID:25153671
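    The PCCC's recommendation step can be illustrated with a small sketch. The scoring rule, field names and turnaround penalty below are hypothetical illustrations of the idea described above, not the paper's algorithm:

```python
import math

def recommend_lot(vehicle, heading_deg, lots):
    # Hypothetical PCCC-style scoring: the nearest lot with free spaces
    # wins, but lots that would force the driver to turn around (more
    # than 90 degrees off the current heading) are penalized, reflecting
    # the paper's use of driving direction to avoid turnarounds.
    best_name, best_score = None, float("inf")
    for lot in lots:
        if lot["occupied"] >= lot["capacity"]:   # CA model reports a full lot
            continue
        dx = lot["pos"][0] - vehicle[0]
        dy = lot["pos"][1] - vehicle[1]
        dist = math.hypot(dx, dy)
        bearing = math.degrees(math.atan2(dy, dx))            # direction to lot
        off = abs((bearing - heading_deg + 180) % 360 - 180)  # smallest angle
        score = dist * (1.5 if off > 90 else 1.0)
        if score < best_score:
            best_name, best_score = lot["name"], score
    return best_name

lots = [
    {"name": "A", "pos": (3, 0), "capacity": 10, "occupied": 10},   # full
    {"name": "B", "pos": (6, 0), "capacity": 10, "occupied": 4},    # ahead
    {"name": "C", "pos": (-5, 0), "capacity": 10, "occupied": 0},   # behind
]
print(recommend_lot((0, 0), 0, lots))  # → B
```

    Lot C is nearer than B, but reaching it requires a turnaround, so the penalized score steers the driver to B; this mirrors the mechanism's goal of reducing congestion caused by vehicles turning around.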

  4. Thrombosis in Cerebral Aneurysms and the Computational Modeling Thereof: A Review

    PubMed Central

    Ngoepe, Malebogo N.; Frangi, Alejandro F.; Byrne, James V.; Ventikos, Yiannis

    2018-01-01

    Thrombosis is a condition closely related to cerebral aneurysms and controlled thrombosis is the main purpose of endovascular embolization treatment. The mechanisms governing thrombus initiation and evolution in cerebral aneurysms have not been fully elucidated and this presents challenges for interventional planning. Significant effort has been directed towards developing computational methods aimed at streamlining the interventional planning process for unruptured cerebral aneurysm treatment. Included in these methods are computational models of thrombus development following endovascular device placement. The main challenge with developing computational models for thrombosis in disease cases is that there exists a wide body of literature that addresses various aspects of the clotting process, but it may not be obvious what information is of direct consequence for what modeling purpose (e.g., for understanding the effect of endovascular therapies). The aim of this review is to present the information so it will be of benefit to the community attempting to model cerebral aneurysm thrombosis for interventional planning purposes, in a simplified yet appropriate manner. The paper begins by explaining current understanding of physiological coagulation and highlights the documented distinctions between the physiological process and cerebral aneurysm thrombosis. Clinical observations of thrombosis following endovascular device placement are then presented. This is followed by a section detailing the demands placed on computational models developed for interventional planning. Finally, existing computational models of thrombosis are presented. This last section begins with description and discussion of physiological computational clotting models, as they are of immense value in understanding how to construct a general computational model of clotting. This is then followed by a review of computational models of clotting in cerebral aneurysms, specifically. 
Even though some progress has been made towards computational predictions of thrombosis following device placement in cerebral aneurysms, many gaps still remain. Answering the key questions will require the combined efforts of the clinical, experimental and computational communities. PMID:29670533

  5. Thrombosis in Cerebral Aneurysms and the Computational Modeling Thereof: A Review.

    PubMed

    Ngoepe, Malebogo N; Frangi, Alejandro F; Byrne, James V; Ventikos, Yiannis

    2018-01-01

    Thrombosis is a condition closely related to cerebral aneurysms and controlled thrombosis is the main purpose of endovascular embolization treatment. The mechanisms governing thrombus initiation and evolution in cerebral aneurysms have not been fully elucidated and this presents challenges for interventional planning. Significant effort has been directed towards developing computational methods aimed at streamlining the interventional planning process for unruptured cerebral aneurysm treatment. Included in these methods are computational models of thrombus development following endovascular device placement. The main challenge with developing computational models for thrombosis in disease cases is that there exists a wide body of literature that addresses various aspects of the clotting process, but it may not be obvious what information is of direct consequence for what modeling purpose (e.g., for understanding the effect of endovascular therapies). The aim of this review is to present the information so it will be of benefit to the community attempting to model cerebral aneurysm thrombosis for interventional planning purposes, in a simplified yet appropriate manner. The paper begins by explaining current understanding of physiological coagulation and highlights the documented distinctions between the physiological process and cerebral aneurysm thrombosis. Clinical observations of thrombosis following endovascular device placement are then presented. This is followed by a section detailing the demands placed on computational models developed for interventional planning. Finally, existing computational models of thrombosis are presented. This last section begins with description and discussion of physiological computational clotting models, as they are of immense value in understanding how to construct a general computational model of clotting. This is then followed by a review of computational models of clotting in cerebral aneurysms, specifically. 
Even though some progress has been made towards computational predictions of thrombosis following device placement in cerebral aneurysms, many gaps still remain. Answering the key questions will require the combined efforts of the clinical, experimental and computational communities.

  6. GeoBrain Computational Cyber-laboratory for Earth Science Studies

    NASA Astrophysics Data System (ADS)

    Deng, M.; di, L.

    2009-12-01

    Computational approaches (e.g., computer-based data visualization, analysis and modeling) are critical for conducting increasingly data-intensive Earth science (ES) studies to understand functions and changes of the Earth system. However, Earth scientists, educators, and students currently face two major barriers that prevent them from effectively using computational approaches in their learning, research and application activities. The two barriers are: 1) difficulties in finding, obtaining, and using multi-source ES data; and 2) lack of analytic functions and computing resources (e.g., analysis software, computing models, and high performance computing systems) to analyze the data. Taking advantage of recent advances in cyberinfrastructure, Web service, and geospatial interoperability technologies, GeoBrain, a project funded by NASA, has developed a prototype computational cyber-laboratory to effectively remove the two barriers. The cyber-laboratory makes ES data and computational resources at large organizations in distributed locations available to and easily usable by the Earth science community through 1) enabling seamless discovery, access and retrieval of distributed data, 2) federating and enhancing data discovery with a catalogue federation service and a semantically-augmented catalogue service, 3) customizing data access and retrieval at user request with interoperable, personalized, and on-demand data access and services, 4) automating or semi-automating multi-source geospatial data integration, 5) developing a large number of analytic functions as value-added, interoperable, and dynamically chainable geospatial Web services and deploying them in high-performance computing facilities, 6) enabling online geospatial process modeling and execution, and 7) building a user-friendly extensible web portal for users to access the cyber-laboratory resources. 
Users can interactively discover the needed data and perform on-demand data analysis and modeling through the web portal. The GeoBrain cyber-laboratory provides solutions to meet common needs of ES research and education, such as distributed data access and analysis services, easy access to and use of ES data, and enhanced geoprocessing and geospatial modeling capability. It greatly facilitates ES research, education, and applications. The development of the cyber-laboratory provides insights, lessons learned, and technology readiness to build more capable computing infrastructure for ES studies, which can meet the wide-ranging needs of current and future generations of scientists, researchers, educators, and students for their formal or informal educational training, research projects, career development, and lifelong learning.

  7. Scaling a Human Body Finite Element Model with Radial Basis Function Interpolation

    DTIC Science & Technology

    Human body models are currently used to evaluate the body’s response to a variety of threats to the Soldier. The ability to adjust the size of human body models is currently limited because of the complex shape changes that are required. Here, a radial basis function interpolation method is used to morph the shape on an existing finite element mesh. Tools are developed and integrated into the Blender computer graphics software to assist with

  8. Eddy current modeling in linear and nonlinear multifilamentary composite materials

    NASA Astrophysics Data System (ADS)

    Menana, Hocine; Farhat, Mohamad; Hinaje, Melika; Berger, Kevin; Douine, Bruno; Lévêque, Jean

    2018-04-01

    In this work, a numerical model is developed for a rapid computation of eddy currents in composite materials, adaptable for both carbon fiber reinforced polymers (CFRPs) for NDT applications and multifilamentary high temperature superconductive (HTS) tapes for AC loss evaluation. The proposed model is based on an integro-differential formulation in terms of the electric vector potential in the frequency domain. The high anisotropy and the nonlinearity of the considered materials are easily handled in the frequency domain.
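    The abstract omits the equations. A generic frequency-domain electric vector potential (T) formulation of the kind described, with the eddy current density written as $\mathbf{J} = \nabla \times \mathbf{T}$, couples a local curl-curl term to a Biot-Savart integral over the conducting region $\Omega$, which is what makes the system integro-differential (this is the standard form of such formulations, not necessarily the authors' exact discretization):

\[
\nabla \times \left( \rho\, \nabla \times \mathbf{T} \right) = -\,j\omega \mathbf{B},
\qquad
\mathbf{B}(\mathbf{r}) = \mathbf{B}_s(\mathbf{r}) + \frac{\mu_0}{4\pi} \int_{\Omega} \frac{\left(\nabla' \times \mathbf{T}(\mathbf{r}')\right) \times (\mathbf{r} - \mathbf{r}')}{\left|\mathbf{r} - \mathbf{r}'\right|^{3}}\, d\Omega'
\]

    where $\rho$ is the resistivity tensor (strongly anisotropic for CFRPs, nonlinear for HTS tapes) and $\mathbf{B}_s$ is the applied source field. Working in the frequency domain lets the nonlinearity and anisotropy enter only through $\rho$, consistent with the abstract's claim that both are easily handled.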

  9. Applications of computational fluid dynamics (CFD) in the modelling and design of ventilation systems in the agricultural industry: a review.

    PubMed

    Norton, Tomás; Sun, Da-Wen; Grant, Jim; Fallon, Richard; Dodd, Vincent

    2007-09-01

    The application of computational fluid dynamics (CFD) in the agricultural industry is becoming ever more important. Over the years, the versatility, accuracy and user-friendliness offered by CFD has led to its increased take-up by the agricultural engineering community. Now CFD is regularly employed to solve environmental problems of greenhouses and animal production facilities. Moreover, due to a combination of increased computing power and advanced numerical techniques, the realism of these simulations has been further enhanced in recent years. This study provides a state-of-the-art review of CFD, its current applications in the design of ventilation systems for agricultural production systems, and the outstanding challenging issues that confront CFD modellers. The current status of greenhouse CFD modelling was found to be at a higher standard than that of animal housing, owing to the incorporation of user-defined routines that simulate crop biological responses as a function of local environmental conditions. Nevertheless, the most recent animal housing simulations have addressed this issue and in turn have become more physically realistic.

  10. Computer Analysis of Spectrum Anomaly in 32-GHz Traveling-Wave Tube for Cassini Mission

    NASA Technical Reports Server (NTRS)

    Dayton, James A., Jr.; Wilson, Jeffrey D.; Kory, Carol L.

    1999-01-01

    Computer modeling of the 32-GHz traveling-wave tube (TWT) for the Cassini Mission was conducted to explain the anomaly observed in the spectrum analysis of one of the flight-model tubes. The analysis indicated that the effect, manifested as a weak signal in the neighborhood of 35 GHz, was an intermodulation product of the 32-GHz drive signal with a 66.9-GHz oscillation induced by coupling to the second-harmonic signal. The oscillation occurred only at low radiofrequency (RF) drive power levels that are not expected during the Cassini Mission. The conclusion was that the anomaly was caused by a generic defect inadvertently incorporated in the geometric design of the slow-wave circuit and that it would not change as the TWT aged. The most probable effect of aging on tube performance would be a reduction in the electron beam current. The computer modeling indicated that although not likely to occur within the mission lifetime, a reduction in beam current would reduce or eliminate the anomaly but would do so at the cost of reduced RF output power.

  11. Goal-Directed Behavior and Instrumental Devaluation: A Neural System-Level Computational Model

    PubMed Central

    Mannella, Francesco; Mirolli, Marco; Baldassarre, Gianluca

    2016-01-01

    Devaluation is the key experimental paradigm used to demonstrate the presence of instrumental behaviors guided by goals in mammals. We propose a neural system-level computational model to address the question of which brain mechanisms allow the current value of rewards to control instrumental actions. The model pivots on, and shows the computational soundness of, the hypothesis that the internal representations of instrumental manipulanda (e.g., levers) activate the representations of rewards (or “action-outcomes”, e.g., foods) while attributing to them a value which depends on the current internal state of the animal (e.g., satiation for some but not all foods). The model also proposes an initial hypothesis of the integrated system of key brain components supporting this process and allowing the recalled outcomes to bias action selection: (a) the sub-system formed by the basolateral amygdala and insular cortex, acquiring the manipulanda-outcomes associations and attributing the current value to the outcomes; (b) three basal ganglia-cortical loops, selecting respectively goals, associative sensory representations, and actions; (c) the cortico-cortical and striato-nigro-striatal neural pathways supporting the selection, and selection learning, of actions based on habits and goals. The model reproduces and explains the results of several devaluation experiments carried out with control rats and rats with pre- and post-training lesions of the basolateral amygdala, the nucleus accumbens core, the prelimbic cortex, and the dorso-medial striatum. The results support the soundness of the hypotheses of the model and show its capacity to integrate, at the system level, the operations of the key brain structures underlying devaluation. Based on its hypotheses and predictions, the model also represents an operational framework to support the design and analysis of new experiments on the motivational aspects of goal-directed behavior. PMID:27803652

  12. Three-dimensional computational fluid dynamics modelling and experimental validation of the Jülich Mark-F solid oxide fuel cell stack

    NASA Astrophysics Data System (ADS)

    Nishida, R. T.; Beale, S. B.; Pharoah, J. G.; de Haart, L. G. J.; Blum, L.

    2018-01-01

    This work is among the first in which the results of an extensive experimental research programme are compared to performance calculations of a comprehensive computational fluid dynamics model for a solid oxide fuel cell stack. The model, which combines electrochemical reactions with momentum, heat, and mass transport, is used to obtain results for an established industrial-scale fuel cell stack design with complex manifolds. To validate the model, comparisons with experimentally gathered voltage and temperature data are made for the Jülich Mark-F, 18-cell stack operating in a test furnace. Good agreement is obtained between the model and experimental results for cell voltages and temperature distributions, confirming the validity of the computational methodology for stack design. Transient effects during the current ramp-up in the experiment may explain why the measured average voltage is lower than the model predicts for the power curve.

  13. Adaptive Testing without IRT.

    ERIC Educational Resources Information Center

    Yan, Duanli; Lewis, Charles; Stocking, Martha

    It is unrealistic to suppose that standard item response theory (IRT) models will be appropriate for all new and currently considered computer-based tests. In addition to developing new models, researchers will need to give some attention to the possibility of constructing and analyzing new tests without the aid of strong models. Computerized…

  14. Examination of the low frequency limit for helicopter noise data in the Federal Aviation Administration's Aviation Environmental Design Tool and Integrated Noise Model

    DOT National Transportation Integrated Search

    2010-04-19

    The Federal Aviation Administration (FAA) aircraft noise modeling tools, the Aviation Environmental Design Tool (AEDT) and the Integrated Noise Model (INM), do not currently consider noise below 50 Hz in their computations. This paper describes a preliminary ...

  15. System for assessing Aviation's Global Emissions (SAGE), part 1 : model description and inventory results

    DOT National Transportation Integrated Search

    2007-07-01

    In early 2001, the US Federal Aviation Administration embarked on a multi-year effort to develop a new computer model, the System for assessing Aviation's Global Emissions (SAGE). Currently at Version 1.5, the basic use of the model has centered on t...

  16. Eddy Viscosity for Variable Density Coflowing Streams,

    DTIC Science & Technology

    EDDY CURRENTS, *JET MIXING FLOW, *VISCOSITY, *AIR FLOW, MATHEMATICAL MODELS, INCOMPRESSIBLE FLOW, AXISYMMETRIC FLOW, MATHEMATICAL PREDICTION, THRUST AUGMENTATION, EJECTORS, COMPUTER PROGRAMMING, SECONDARY FLOW, DENSITY, MODIFICATION.

  17. User assessment of smoke-dispersion models for wildland biomass burning.

    Treesearch

    Steve Breyfogle; Sue A. Ferguson

    1996-01-01

    Several smoke-dispersion models, which currently are available for modeling smoke from biomass burns, were evaluated for ease of use, availability of input data, and output data format. The input and output components of all models are listed, and differences in model physics are discussed. Each model was installed and run on a personal computer with a simple-case...

  18. Off-Gas Adsorption Model Capabilities and Recommendations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lyon, Kevin L.; Welty, Amy K.; Law, Jack

    2016-03-01

    Off-gas treatment is required to reduce emissions from aqueous fuel reprocessing. Evaluating the products of innovative gas adsorption research requires increased computational simulation capability to more effectively transition from fundamental research to operational design. Early modeling efforts produced the Off-Gas SeParation and REcoverY (OSPREY) model that, while efficient in terms of computation time, was of limited value for complex systems. However, the computational and programming lessons learned in development of the initial model were used to develop Discontinuous Galerkin OSPREY (DGOSPREY), a more effective model. Initial comparisons between OSPREY and DGOSPREY show that, while OSPREY captures the initial breakthrough time reasonably well, it displays far too much numerical dispersion to accurately capture the real shape of the breakthrough curves. DGOSPREY is a much better tool, as it utilizes a more stable set of numerical methods. In addition, DGOSPREY has shown the capability to capture complex, multispecies adsorption behavior, while OSPREY currently only works for a single adsorbing species. This capability makes DGOSPREY ultimately a more practical tool for real-world simulations involving many different gas species. While DGOSPREY has initially performed very well, there is still need for improvement. The current state of DGOSPREY does not include any micro-scale adsorption kinetics and therefore assumes instantaneous adsorption. This is a major source of error in predicting water vapor breakthrough because the kinetics of that adsorption mechanism are particularly slow. However, this deficiency can be remedied by building kinetic kernels into DGOSPREY. Another source of error in DGOSPREY stems from data gaps in single-species isotherms, such as those for Kr and Xe. Since isotherm data for each gas are currently available at only a single temperature, the model is unable to predict adsorption at temperatures outside the currently available data set. Thus, in order to improve the predictive capabilities of the model, there is a need for more single-species adsorption isotherms at different temperatures, in addition to extending the model to include adsorption kinetics. This report provides background information about the modeling process and a path forward for further model improvement in terms of accuracy and user interface.

  19. Instrumentation and telemetry systems for free-flight drop model testing

    NASA Technical Reports Server (NTRS)

    Hyde, Charles R.; Massie, Jeffrey J.

    1993-01-01

    This paper presents instrumentation and telemetry system techniques used in free-flight research drop model testing at the NASA Langley Research Center. The free-flight drop model test technique is used to conduct flight dynamics research of high performance aircraft using dynamically scaled models. The free-flight drop model flight testing supplements research using computer analysis and wind tunnel testing. The drop models are scaled to approximately 20 percent of the size of the actual aircraft. This paper presents an introduction to the Free-Flight Drop Model Program which is followed by a description of the current instrumentation and telemetry systems used at the NASA Langley Research Center, Plum Tree Test Site. The paper describes three telemetry downlinks used to acquire the data, video, and radar tracking information from the model. Also described are two telemetry uplinks, one used to fly the model employing a ground-based flight control computer and a second to activate commands for visual tracking and parachute recovery of the model. The paper concludes with a discussion of free-flight drop model instrumentation and telemetry system development currently in progress for future drop model projects at the NASA Langley Research Center.

  20. A performance comparison of scalar, vector, and concurrent vector computers including supercomputers for modeling transport of reactive contaminants in groundwater

    NASA Astrophysics Data System (ADS)

    Tripathi, Vijay S.; Yeh, G. T.

    1993-06-01

    Sophisticated and highly computation-intensive models of transport of reactive contaminants in groundwater have been developed in recent years. Application of such models to real-world contaminant transport problems, e.g., simulation of groundwater transport of 10-15 chemically reactive elements (e.g., toxic metals) and relevant complexes and minerals in two and three dimensions over a distance of several hundred meters, requires high-performance computers including supercomputers. Although not widely recognized as such, the computational complexity and demand of these models compare with well-known computation-intensive applications including weather forecasting and quantum chemical calculations. A survey of the performance of a variety of available hardware, as measured by the run times for a reactive transport model HYDROGEOCHEM, showed that while supercomputers provide the fastest execution times for such problems, relatively low-cost reduced instruction set computer (RISC) based scalar computers provide the best performance-to-price ratio. Because supercomputers like the Cray X-MP are inherently multiuser resources, the RISC computers often also provide much better turnaround times. Furthermore, RISC-based workstations provide the best platforms for "visualization" of groundwater flow and contaminant plumes. The most notable result, however, is that current workstations costing less than $10,000 provide performance within a factor of 5 of a Cray X-MP.

  1. Computational Fluid Dynamics of Choanoflagellate Filter-Feeding

    NASA Astrophysics Data System (ADS)

    Asadzadeh, Seyed Saeed; Walther, Jens; Nielsen, Lasse Tore; Kiørboe, Thomas; Dölger, Julia; Andersen, Anders

    2017-11-01

    Choanoflagellates are unicellular aquatic organisms with a single flagellum that drives a feeding current through a funnel-shaped collar filter on which bacteria-sized prey are caught. Using computational fluid dynamics (CFD) we model the beating flagellum and the complex filter flow of the choanoflagellate Diaphanoeca grandis. Our CFD simulations based on the current understanding of the morphology underestimate the experimentally observed clearance rate by more than an order of magnitude: The beating flagellum is simply unable to draw enough water through the fine filter. Our observations motivate us to suggest a radically different filtration mechanism that requires a flagellar vane (sheet), and addition of a wide vane in our CFD model allows us to correctly predict the observed clearance rate.

  2. A systematic investigation of computation models for predicting Adverse Drug Reactions (ADRs).

    PubMed

    Kuang, Qifan; Wang, MinQi; Li, Rong; Dong, YongCheng; Li, Yizhou; Li, Menglong

    2014-01-01

    Early and accurate identification of adverse drug reactions (ADRs) is critically important for drug development and clinical safety. Computer-aided prediction of ADRs has attracted increasing attention in recent years, and many computational models have been proposed. However, because of the lack of systematic analysis and comparison of the different computational models, there remain limitations in designing more effective algorithms and selecting more useful features. There is therefore an urgent need to review and analyze previous computational models to obtain general conclusions that can provide useful guidance to construct more effective computational models to predict ADRs. In the current study, the main work is to compare and analyze the performance of existing computational methods to predict ADRs, by implementing and evaluating additional algorithms previously used for predicting drug targets. Our results indicated that topological and intrinsic features were complementary to an extent and that the Jaccard coefficient had an important and general effect on the prediction of drug-ADR associations. By comparing the structure of each algorithm, we found that their final formulas could all be converted to a linear form; based on this finding, we propose a new algorithm, the general weighted profile method, which yielded the best overall performance among the algorithms investigated in this paper. Several meaningful conclusions and useful findings regarding the prediction of ADRs are provided for selecting optimal features and algorithms.
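    As a concrete illustration of the kind of neighbourhood scoring the abstract describes, the sketch below rates a candidate drug-ADR pair with the Jaccard coefficient. The drug names, ADR sets, and the simple averaging scheme are hypothetical, not the authors' exact formulation:

```python
def jaccard(a, b):
    """Jaccard coefficient between two sets: |A ∩ B| / |A ∪ B|."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def score_drug_adr(drug, adr, known_adrs):
    """Score a candidate (drug, adr) pair by averaging the Jaccard
    similarity between the query drug's ADR profile and the profiles
    of drugs already known to cause that ADR (a simple linear,
    weighted-profile-style score; illustrative only)."""
    neighbours = [d for d, adrs in known_adrs.items()
                  if adr in adrs and d != drug]
    if not neighbours:
        return 0.0
    sims = [jaccard(known_adrs[drug], known_adrs[d]) for d in neighbours]
    return sum(sims) / len(sims)

# Toy knowledge base mapping drug -> known ADR profile (illustrative).
known_adrs = {
    "drugA": {"nausea", "headache", "rash"},
    "drugB": {"nausea", "headache"},
    "drugC": {"rash", "dizziness"},
}

print(round(score_drug_adr("drugB", "rash", known_adrs), 3))  # 0.333
```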

  3. A Computational Model of Reasoning from the Clinical Literature

    PubMed Central

    Rennels, Glenn D.

    1986-01-01

    This paper explores the premise that a formalized representation of empirical studies can play a central role in computer-based decision support. The specific motivations underlying this research include the following propositions: 1. Reasoning from experimental evidence contained in the clinical literature is central to the decisions physicians make in patient care. 2. A computational model, based upon a declarative representation for published reports of clinical studies, can drive a computer program that selectively tailors knowledge of the clinical literature as it is applied to a particular case. 3. The development of such a computational model is an important first step toward filling a void in computer-based decision support systems. Furthermore, the model may help us better understand the general principles of reasoning from experimental evidence both in medicine and other domains. Roundsman is a developmental computer system which draws upon structured representations of the clinical literature in order to critique plans for the management of primary breast cancer. Roundsman is able to produce patient-specific analyses of breast cancer management options based on the 24 clinical studies currently encoded in its knowledge base. The Roundsman system is a first step in exploring how the computer can help to bring a critical analysis of the relevant literature to the physician, structured around a particular patient and treatment decision.

  4. Development of a voltage-dependent current noise algorithm for conductance-based stochastic modelling of auditory nerve fibres.

    PubMed

    Badenhorst, Werner; Hanekom, Tania; Hanekom, Johan J

    2016-12-01

    This study presents the development of an alternative noise current term and novel voltage-dependent current noise algorithm for conductance-based stochastic auditory nerve fibre (ANF) models. ANFs are known to have significant variance in threshold stimulus, which affects temporal characteristics such as latency. This variance is primarily caused by the stochastic behaviour, or microscopic fluctuations, of the node of Ranvier's voltage-dependent sodium channels, whose fluctuation intensity is a function of membrane voltage. Though easy to implement and low in computational cost, existing current noise models have two deficiencies: they are independent of membrane voltage, and they cannot inherently determine the noise intensity required to produce in vivo measured discharge probability functions. The proposed algorithm overcomes these deficiencies while maintaining low computational cost and ease of implementation compared to other conductance- and Markovian-based stochastic models. The algorithm is applied to a Hodgkin-Huxley-based compartmental cat ANF model and validated via comparison of the threshold probability and latency distributions to measured cat ANF data. Simulation results show the algorithm's adherence to in vivo stochastic fibre characteristics, such as an exponential relationship between the membrane noise and transmembrane voltage, a negative linear relationship between the log of the relative spread of the discharge probability and the log of the fibre diameter, and a decrease in latency with an increase in stimulus intensity.
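    A minimal sketch of the general idea, not the authors' algorithm: a Gaussian noise current whose intensity grows exponentially with depolarisation from rest, echoing the exponential membrane-noise/voltage relationship the abstract reports. All parameter values and the specific functional form are illustrative assumptions:

```python
import math
import random

def noise_current(v_m, sigma0=0.05, k=0.02, v_rest=-70.0, rng=random):
    """Hypothetical voltage-dependent noise current (arbitrary units):
    Gaussian noise whose standard deviation grows exponentially with
    depolarisation from the resting potential. sigma0, k, and v_rest
    are illustrative, not fitted values."""
    sigma = sigma0 * math.exp(k * (v_m - v_rest))
    return rng.gauss(0.0, sigma)

# Noise intensity is larger at a depolarised membrane voltage than at rest:
print(noise_current(-70.0), noise_current(-30.0))
```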

  5. Confirmation of a realistic reactor model for BNCT dosimetry at the TRIGA Mainz

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ziegner, Markus, E-mail: Markus.Ziegner.fl@ait.ac.at; Schmitz, Tobias; Hampel, Gabriele

    2014-11-01

    Purpose: In order to build up a reliable dose monitoring system for boron neutron capture therapy (BNCT) applications at the TRIGA reactor in Mainz, a computer model for the entire reactor was established, simulating the radiation field by means of the Monte Carlo method. The impact of different source definition techniques was compared and the model was validated by experimental fluence and dose determinations. Methods: The depletion calculation code ORIGEN2 was used to compute the burn-up and relevant material composition of each burned fuel element from the day of first reactor operation to its current core. The material composition of the current core was used in a MCNP5 model of the initial core developed earlier. To perform calculations for the region outside the reactor core, the model was expanded to include the thermal column and compared with the previously established ATTILA model. Subsequently, the computational model was simplified in order to reduce the calculation time. Both simulation models were validated by experiments with different setups using alanine dosimetry and gold activation measurements with two different types of phantoms. Results: The MCNP5-simulated neutron spectrum and source strength are found to be in good agreement with the previous ATTILA model, whereas the photon production is much lower. Both MCNP5 simulation models predict all experimental dose values with an accuracy of about 5%. The simulations reveal that a Teflon environment favorably reduces the gamma dose component as compared to a polymethyl methacrylate phantom. Conclusions: A computer model for BNCT dosimetry was established, allowing the prediction of dosimetric quantities without further calibration and within a reasonable computation time for clinical applications. The good agreement between the MCNP5 simulations and experiments demonstrates that the ATTILA model overestimates the gamma dose contribution. The detailed model can be used for the planning of structural modifications in the thermal column irradiation channel or the use of irradiation sites other than the thermal column, e.g., the beam tubes.

  6. Transcranial direct current stimulation in obsessive-compulsive disorder: emerging clinical evidence and considerations for optimal montage of electrodes.

    PubMed

    Senço, Natasha M; Huang, Yu; D'Urso, Giordano; Parra, Lucas C; Bikson, Marom; Mantovani, Antonio; Shavitt, Roseli G; Hoexter, Marcelo Q; Miguel, Eurípedes C; Brunoni, André R

    2015-07-01

    Neuromodulation techniques for obsessive-compulsive disorder (OCD) treatment have expanded with greater understanding of the brain circuits involved. Transcranial direct current stimulation (tDCS) might be a potential new treatment for OCD, although the optimal montage is unclear. To perform a systematic review on meta-analyses of repetitive transcranial magnetic stimulation (rTMS) and deep brain stimulation (DBS) trials for OCD, aiming to identify brain stimulation targets for future tDCS trials and to support the empirical evidence with computer head modeling analysis. Systematic reviews of rTMS and DBS trials on OCD in Pubmed/MEDLINE were searched. For the tDCS computational analysis, we employed head models with the goal of optimally targeting current delivery to structures of interest. Only three references matched our eligibility criteria. We simulated four different electrode montages and analyzed current direction and intensity. Although DBS, rTMS and tDCS are not directly comparable, and our theoretical model, based on DBS and rTMS targets, needs empirical validation, we found that the tDCS montage with the cathode over the pre-supplementary motor area and an extra-cephalic anode seems to activate most of the areas related to OCD.

  7. The development and validation of a numerical integration method for non-linear viscoelastic modeling

    PubMed Central

    Ramo, Nicole L.; Puttlitz, Christian M.

    2018-01-01

    Compelling evidence that many biological soft tissues display both strain- and time-dependent behavior has led to the development of fully non-linear viscoelastic modeling techniques to represent the tissue’s mechanical response under dynamic conditions. Since the current stress state of a viscoelastic material is dependent on all previous loading events, numerical analyses are complicated by the requirement of computing and storing the stress at each step throughout the load history. This requirement quickly becomes computationally expensive, and in some cases intractable, for finite element models. Therefore, we have developed a strain-dependent numerical integration approach for capturing non-linear viscoelasticity that enables calculation of the current stress from a strain-dependent history state variable stored from the preceding time step only, which improves both fitting efficiency and computational tractability. This methodology was validated based on its ability to recover non-linear viscoelastic coefficients from simulated stress-relaxation (six strain levels) and dynamic cyclic (three frequencies) experimental stress-strain data. The model successfully fit each data set with average errors in recovered coefficients of 0.3% for stress-relaxation fits and 0.1% for cyclic. The results support the use of the presented methodology to develop linear or non-linear viscoelastic models from stress-relaxation or cyclic experimental data of biological soft tissues. PMID:29293558
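    The single-history-variable idea can be illustrated with the classic recursive update for a linear Maxwell arm, where the new internal state depends only on the previous state and the current strain increment, so the full load history need not be stored. The paper's contribution is a strain-dependent generalisation of this kind of scheme; the linear version and the coefficient values below are illustrative only:

```python
import math

def relax_step(h, d_strain, dt, g, tau):
    """One recursive internal-variable update for a single Maxwell arm:
    the previous history state h decays by exp(-dt/tau) and the current
    strain increment is folded in at the interval midpoint. Only h from
    the preceding step is needed, not the whole load history."""
    e = math.exp(-dt / tau)
    return e * h + g * math.exp(-dt / (2.0 * tau)) * d_strain

def stress_relaxation(strain, dt, e_inf=1.0, g=0.5, tau=1.0, n_steps=50):
    """Apply a step strain at t = 0, hold it, and record stress over time
    (illustrative equilibrium modulus e_inf and Prony pair g, tau)."""
    h = 0.0
    stresses = []
    for i in range(n_steps):
        d_eps = strain if i == 0 else 0.0  # step strain applied at t = 0
        h = relax_step(h, d_eps, dt, g, tau)
        stresses.append(e_inf * strain + h)
    return stresses

s = stress_relaxation(strain=0.1, dt=0.1)
print(s[0] > s[-1])  # True: stress relaxes toward the equilibrium value
```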

  8. Data needs for X-ray astronomy satellites

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kallman, T.

    I review the current status of atomic data for X-ray astronomy satellites. This includes some of the astrophysical issues which can be addressed, current modeling and analysis techniques, computational tools, the limitations imposed by currently available atomic data, and the validity of standard assumptions. I also discuss the future: challenges associated with future missions and goals for atomic data collection.

  9. Computational Aerothermodynamic Design Issues for Hypersonic Vehicles

    NASA Technical Reports Server (NTRS)

    Gnoffo, Peter A.; Weilmuenster, K. James; Hamilton, H. Harris, II; Olynick, David R.; Venkatapathy, Ethiraj

    1997-01-01

    A brief review of the evolutionary progress in computational aerothermodynamics is presented. The current status of computational aerothermodynamics is then discussed, with emphasis on its capabilities and limitations for contributions to the design process of hypersonic vehicles. Some topics to be highlighted include: (1) aerodynamic coefficient predictions with emphasis on high temperature gas effects; (2) surface heating and temperature predictions for thermal protection system (TPS) design in a high temperature, thermochemical nonequilibrium environment; (3) methods for extracting and extending computational fluid dynamic (CFD) solutions for efficient utilization by all members of a multidisciplinary design team; (4) physical models; (5) validation process and error estimation; and (6) gridding and solution generation strategies. Recent experiences in the design of X-33 will be featured. Computational aerothermodynamic contributions to Mars Pathfinder, METEOR, and Stardust (Comet Sample return) will also provide context for this discussion. Some of the barriers that currently limit computational aerothermodynamics to a predominantly reactive mode in the design process will also be discussed, with the goal of providing focus for future research.

  10. Computational Aerothermodynamic Design Issues for Hypersonic Vehicles

    NASA Technical Reports Server (NTRS)

    Gnoffo, Peter A.; Weilmuenster, K. James; Hamilton, H. Harris, II; Olynick, David R.; Venkatapathy, Ethiraj

    2005-01-01

    A brief review of the evolutionary progress in computational aerothermodynamics is presented. The current status of computational aerothermodynamics is then discussed, with emphasis on its capabilities and limitations for contributions to the design process of hypersonic vehicles. Some topics to be highlighted include: (1) aerodynamic coefficient predictions with emphasis on high temperature gas effects; (2) surface heating and temperature predictions for thermal protection system (TPS) design in a high temperature, thermochemical nonequilibrium environment; (3) methods for extracting and extending computational fluid dynamic (CFD) solutions for efficient utilization by all members of a multidisciplinary design team; (4) physical models; (5) validation process and error estimation; and (6) gridding and solution generation strategies. Recent experiences in the design of X-33 will be featured. Computational aerothermodynamic contributions to Mars Pathfinder, METEOR, and Stardust (Comet Sample return) will also provide context for this discussion. Some of the barriers that currently limit computational aerothermodynamics to a predominantly reactive mode in the design process will also be discussed, with the goal of providing focus for future research.

  11. Computational Aerothermodynamic Design Issues for Hypersonic Vehicles

    NASA Technical Reports Server (NTRS)

    Olynick, David R.; Venkatapathy, Ethiraj

    2004-01-01

    A brief review of the evolutionary progress in computational aerothermodynamics is presented. The current status of computational aerothermodynamics is then discussed, with emphasis on its capabilities and limitations for contributions to the design process of hypersonic vehicles. Some topics to be highlighted include: (1) aerodynamic coefficient predictions with emphasis on high temperature gas effects; (2) surface heating and temperature predictions for thermal protection system (TPS) design in a high temperature, thermochemical nonequilibrium environment; (3) methods for extracting and extending computational fluid dynamic (CFD) solutions for efficient utilization by all members of a multidisciplinary design team; (4) physical models; (5) validation process and error estimation; and (6) gridding and solution generation strategies. Recent experiences in the design of X-33 will be featured. Computational aerothermodynamic contributions to Mars Pathfinder, METEOR, and Stardust (Comet Sample return) will also provide context for this discussion. Some of the barriers that currently limit computational aerothermodynamics to a predominantly reactive mode in the design process will also be discussed, with the goal of providing focus for future research.

  12. Boolean and brain-inspired computing using spin-transfer torque devices

    NASA Astrophysics Data System (ADS)

    Fan, Deliang

    Several completely new approaches (such as spintronics, carbon nanotubes, graphene, TFETs, etc.) to information processing and data storage technologies are emerging to address the time frame beyond the current Complementary Metal-Oxide-Semiconductor (CMOS) roadmap. The high-speed magnetization switching of a nano-magnet due to current-induced spin-transfer torque (STT) has been demonstrated in recent experiments. Such STT devices can be explored in compact, low-power memory and logic design. In order to truly leverage STT-device-based computing, researchers must rethink circuits, architectures, and computing models, since STT devices are unlikely to be drop-in replacements for CMOS. The potential of STT-device-based computing will be best realized by considering new computing models that are inherently suited to the characteristics of STT devices, and new applications that are enabled by their unique capabilities, thereby attaining performance that CMOS cannot achieve. The goal of this research is to conduct synergistic exploration at the architecture, circuit, and device levels for Boolean and brain-inspired computing using nanoscale STT devices. Specifically, we first show that the non-volatile STT devices can be used in designing configurable Boolean logic blocks. We propose a spin-memristor threshold logic (SMTL) gate design, where a memristive crossbar array performs current-mode summation of binary inputs and a low-power current-mode spintronic threshold device carries out the energy-efficient threshold operation. Next, for brain-inspired computing, we have exploited different spin-transfer torque device structures that can implement the hard-limiting and soft-limiting artificial neuron transfer functions, respectively. We apply such STT-based neurons (or "spin-neurons") in various neural network architectures, such as hierarchical temporal memory and feed-forward neural networks, for performing "human-like" cognitive computing, which shows more than two orders of magnitude lower energy consumption compared to state-of-the-art CMOS implementations. Finally, we show that the dynamics of an injection-locked Spin Hall Effect Spin-Torque Oscillator (SHE-STO) cluster can be exploited as a robust multi-dimensional distance metric for associative computing, image/video analysis, etc. Our simulation results show that the proposed system architecture with injection-locked SHE-STOs and the associated CMOS interface circuits can be suitable for robust and energy-efficient associative computing and pattern matching.
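    The threshold operation at the heart of the SMTL gate can be sketched behaviorally: a weighted sum of binary inputs compared against a threshold, with different thresholds yielding different Boolean functions. The weights and thresholds below are illustrative; the actual SMTL gate is a mixed memristor/spintronic circuit, not software:

```python
def smtl_gate(inputs, weights, threshold):
    """Behavioral sketch of a spin-memristor threshold logic (SMTL) gate:
    the memristive crossbar sums weighted binary input currents, and the
    spintronic threshold device fires when the sum reaches the threshold."""
    total = sum(w for x, w in zip(inputs, weights) if x)
    return 1 if total >= threshold else 0

# With unit weights, the threshold alone selects the Boolean function:
maj = lambda a, b, c: smtl_gate([a, b, c], [1, 1, 1], 2)  # 3-input majority
and3 = lambda a, b, c: smtl_gate([a, b, c], [1, 1, 1], 3)  # 3-input AND
or3 = lambda a, b, c: smtl_gate([a, b, c], [1, 1, 1], 1)   # 3-input OR

print(maj(1, 1, 0), maj(1, 0, 0))  # 1 0
```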

  13. Measurement of Three-dimensional Density Distributions by Holographic Interferometry and Computer Tomography

    NASA Technical Reports Server (NTRS)

    Vest, C. M.

    1982-01-01

    The use of holographic interferometry to measure two- and three-dimensional flows and the interpretation of multiple-view interferograms with computer tomography are discussed. Computational techniques developed for tomography are reviewed. Current research topics are outlined, including the development of an automated fringe readout system, optimum reconstruction procedures for cases in which an opaque test model is present in the field, and interferometry and tomography with strongly refracting fields and shocks.

  14. ORA User’s Guide 2007

    DTIC Science & Technology

    2007-07-01

    July 2007 CMU-ISRI-07-115 Institute for Software Research School of Computer Science Carnegie Mellon University Pittsburgh, PA 15213...ORA uses a Java interface for ease of use, and a C++ computational backend. The current version, ORA 1.2, is available on the CASOS website...06-1-0104, N00014-06-1-0921, the AFOSR for “Computational Modeling of Cultural Dimensions in Adversary Organization (MURI)”, the ARL for Assessing C2

  15. FY17 Status Report on the Computing Systems for the Yucca Mountain Project TSPA-LA Models.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Appel, Gordon John; Hadgu, Teklu

    Sandia National Laboratories (SNL) continued evaluation of total system performance assessment (TSPA) computing systems for the previously considered Yucca Mountain Project (YMP). This was done to maintain the operational readiness of the computing infrastructure (computer hardware and software) and the knowledge capability for TSPA-type analysis, as directed by the National Nuclear Security Administration (NNSA), DOE 2010. This work is a continuation of the ongoing readiness evaluation reported in Lee and Hadgu (2014), Hadgu et al. (2015) and Hadgu and Appel (2016). The TSPA computing hardware (CL2014) and storage system described in Hadgu et al. (2015) were used for the current analysis. One floating license of GoldSim with Versions 9.60.300, 10.5, 11.1 and 12.0 was installed on the cluster head node, and its distributed processing capability was mapped onto the cluster processors. Other supporting software was tested and installed to support TSPA-type analysis on the server cluster. The current tasks included a preliminary upgrade of the TSPA-LA from Version 9.60.300 to the latest Version 12.0 and addressing DLL-related issues observed in the FY16 work. The model upgrade task successfully converted the Nominal Modeling case to GoldSim Versions 11.1/12.0. Conversion of the rest of the TSPA models was also attempted, but program and operational difficulties precluded this. Upgrade of the remaining modeling cases and distributed processing tasks is expected to continue. The 2014 server cluster and supporting software systems are fully operational to support TSPA-LA type analysis.

  16. Computational dosimetry for grounded and ungrounded human models due to contact current

    NASA Astrophysics Data System (ADS)

    Chan, Kwok Hung; Hattori, Junya; Laakso, Ilkka; Hirata, Akimasa; Taki, Masao

    2013-08-01

    This study presents the computational dosimetry of contact currents for grounded and ungrounded human models. The uncertainty of the quasi-static (QS) approximation of the in situ electric field induced in a grounded/ungrounded human body by the contact current is first estimated. Different scenarios of cylindrical and anatomical human body models are considered, and the results are compared with the full-wave analysis. In the QS analysis, the induced field in the grounded cylindrical model is calculated by the QS finite-difference time-domain (QS-FDTD) method and compared with the analytical solution. Because no analytical solution is available for the grounded/ungrounded anatomical human body model, the results of the QS-FDTD method are then compared with those of the conventional FDTD method. The upper frequency limit for the QS approximation in contact current dosimetry is found to be 3 MHz, with a relative local error of less than 10%. The error increases above this frequency, which can be attributed to the neglect of the displacement current. The QS or conventional FDTD method is used for the dosimetry of the induced electric field and/or specific absorption rate (SAR) for a contact current injected into the index finger of a human body model in the frequency range from 10 Hz to 100 MHz. The in situ electric fields or SAR are compared with the basic restrictions in the international guidelines/standards. The maximum electric field or the 99th-percentile value of the electric fields appears not only in the fat and muscle tissues of the finger, but also around the wrist, forearm, and upper arm. Some discrepancies are observed between the basic restrictions for the electric field and SAR and the reference levels for the contact current, especially in the extremities. These discrepancies are shown by an equation that relates the current density, tissue conductivity, and induced electric field in the finger with a cross-sectional area of 1 cm^2.
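    Under the quasi-static assumption, the relation mentioned at the end reduces to E = J/σ with J = I/A. A minimal sketch, using assumed illustrative tissue values rather than the paper's data:

```python
def induced_field(current_a, area_m2, conductivity_s_per_m):
    """In situ electric field E = J / sigma, with J = I / A (quasi-static)."""
    j = current_a / area_m2          # current density, A/m^2
    return j / conductivity_s_per_m  # V/m

# Illustrative values (assumed, not from the paper): a 0.5 mA contact
# current through a 1 cm^2 finger cross-section with a muscle-like
# conductivity of 0.35 S/m.
e = induced_field(0.5e-3, 1e-4, 0.35)
print(f"{e:.1f} V/m")
```

    This is the kind of back-of-the-envelope link between a contact-current reference level and an in situ field restriction that the abstract's closing equation formalizes.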

  17. Comparative simulation of switching regimes of magnetic explosion generators by copper and aluminum magnetodynamic current breakers taking into account elastoplastic properties of materials

    NASA Astrophysics Data System (ADS)

    Bazanov, A. A.; Ivanovskii, A. V.; Panov, A. I.; Samodolov, A. V.; Sokolov, S. S.; Shaidullin, V. Sh.

    2017-06-01

    We report the results of computer simulations of the operation of magnetodynamic break switches used as the second stage of current pulse formation in magnetic explosion generators. The simulations were carried out under conditions in which the magnetic field energy density on the surface of the switching conductor, as a function of the current through it, was close to but did not exceed the critical value typical of the onset of electric explosion. In the computational model, we used the parameters of an experimentally tested sample of a coil magnetic explosion generator that can store up to 2.7 MJ in the inductive storage circuit and is equipped with a primary explosion stage of current pulse formation. It is shown that the choice of the switching conductor material, as well as its elastoplastic properties, considerably affects the breaker speed. Comparative results of computer simulations for copper and aluminum are presented.

  18. Magnetic force microscopy/current contrast imaging: A new technique for internal current probing of ICs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Campbell, A.N.; Cole, E.I. Jr.; Dodd, B.A.

    This invited paper describes recently reported work on the application of magnetic force microscopy (MFM) to image currents in IC conductors [1]. A computer model for MFM imaging of IC currents and experimental results demonstrating the ability to determine current direction and magnitude with a resolution of ~1 mA dc and ~1 µA ac are presented. The physics of MFM signal generation and applications to current imaging and measurement are described.

  19. Robust tuning of robot control systems

    NASA Technical Reports Server (NTRS)

    Minis, I.; Uebel, M.

    1992-01-01

    The computed torque control problem is examined for a robot arm with flexible, geared joint drive systems, which are typical in many industrial robots. The standard computed torque algorithm is not directly applicable to this class of manipulators because of the dynamics introduced by the joint drive system. The proposed approach combines a computed torque algorithm with a torque controller at each joint. Three such control schemes are proposed. The first uses the joint torque control system currently implemented on the robot arm and a novel form of the computed torque algorithm. The other two use the standard computed torque algorithm and a novel torque control system based on model-following techniques. Standard tasks and performance indices are used to evaluate the performance of the controllers, using both numerical simulations and experiments. The study shows that all three proposed systems lead to improved tracking performance over a conventional PD controller.
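    In scalar form, the computed-torque idea these schemes build on can be sketched as follows; the single-joint dynamics and the gain values are hypothetical stand-ins for the full manipulator matrices:

```python
def computed_torque(q, qd, q_des, qd_des, qdd_des,
                    inertia, damping, gravity, kp, kv):
    """Single-joint computed-torque law:
    tau = M*(qdd_des + kv*e_dot + kp*e) + C*qd + g
    (scalar stand-in for the inertia matrix M(q), Coriolis term C(q,qd),
    and gravity vector g(q) of a full manipulator model)."""
    e = q_des - q
    e_dot = qd_des - qd
    return inertia * (qdd_des + kv * e_dot + kp * e) + damping * qd + gravity

# One evaluation: regulate a joint to 0.1 rad from rest.
tau = computed_torque(q=0.0, qd=0.0, q_des=0.1, qd_des=0.0, qdd_des=0.0,
                      inertia=2.0, damping=0.5, gravity=9.8, kp=100.0, kv=20.0)
print(tau)  # 2.0 * (100 * 0.1) + 9.8 = 29.8
```

    The abstract's point is that with geared, flexible drives the torque actually delivered to the link differs from `tau`, so an inner joint-torque loop is wrapped around this law.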

  20. ARACHNE: A neural-neuroglial network builder with remotely controlled parallel computing

    PubMed Central

    Rusakov, Dmitri A.; Savtchenko, Leonid P.

    2017-01-01

    Creating and running realistic models of neural networks has hitherto been a task for computing professionals rather than experimental neuroscientists, mainly because such networks usually engage substantial computational resources, the handling of which requires specific programming skills. Here we put forward the newly developed simulation environment ARACHNE: it enables an investigator to build and explore cellular networks of arbitrary biophysical and architectural complexity using the logic of NEURON and a simple interface on a local computer or a mobile device. The interface can control, through the internet, an optimized computational kernel installed on a remote computer cluster. ARACHNE can combine neuronal (wired) and astroglial (extracellular volume-transmission driven) network types and adopt realistic cell models from the NEURON library. The program and documentation (current version) are available at the GitHub repository https://github.com/LeonidSavtchenko/Arachne under the MIT License (MIT). PMID:28362877

  1. CAPTIONALS: A computer aided testing environment for the verification and validation of communication protocols

    NASA Technical Reports Server (NTRS)

    Feng, C.; Sun, X.; Shen, Y. N.; Lombardi, Fabrizio

    1992-01-01

    This paper covers verification and validation of protocols for distributed computer and communication systems using a computer-aided testing approach. Validation and verification make up the so-called process of conformance testing. Protocol applications that pass conformance testing are then checked to see whether they can operate together; this is referred to as interoperability testing. A new comprehensive approach to protocol testing is presented which addresses: (1) modeling of inter-layer representations for compatibility between conformance and interoperability testing; (2) computational improvement over current testing methods by using the proposed model, including the formulation of new qualitative and quantitative measures and time-dependent behavior; and (3) analysis and evaluation of protocol behavior for interactive testing without extensive simulation.

  2. Performance of the Heavy Flavor Tracker (HFT) detector in the STAR experiment at RHIC

    NASA Astrophysics Data System (ADS)

    Alruwaili, Manal

    With the growing technology, the number of processors is becoming massive, and current supercomputer processing will be available on desktops in the next decade. For mass-scale application software development on the massively parallel computing available on desktops, existing popular languages with large libraries have to be augmented with new constructs and paradigms that exploit massively parallel computing and distributed memory models while retaining user-friendliness. Currently available object-oriented languages for massively parallel computing, such as Chapel, X10 and UPC++, exploit distributed computing, data-parallel computing and thread-parallelism at the process level in the PGAS (Partitioned Global Address Space) memory model. However, they do not incorporate: 1) any extension for object distribution to exploit the PGAS model; 2) the flexibility of migrating or cloning an object between places to exploit load balancing; or 3) the programming paradigms that result from integrating data- and thread-level parallelism with object distribution. In the proposed thesis, I compare different languages in the PGAS model; propose new constructs that extend C++ with object distribution, object migration and object cloning; and integrate PGAS-based process constructs with these extensions on distributed objects. A new paradigm, MIDD (Multiple Invocation Distributed Data), is also presented, in which different copies of the same class can be invoked to work on different elements of distributed data concurrently using remote method invocations. I present the new constructs, their grammar and their behavior. The new constructs are illustrated using simple programs.

  3. Rapid architecture alternative modeling (RAAM): A framework for capability-based analysis of system of systems architectures

    NASA Astrophysics Data System (ADS)

    Iacobucci, Joseph V.

    The research objective for this manuscript is to develop a Rapid Architecture Alternative Modeling (RAAM) methodology to enable traceable Pre-Milestone A decision making during the conceptual phase of design of a system of systems. Rather than following current trends that place an emphasis on adding more analysis, which tends to increase the complexity of the decision-making problem, RAAM improves on current methods by reducing both runtime and model creation complexity. RAAM draws upon principles from computer science, system architecting, and domain-specific languages to enable the automatic generation and evaluation of architecture alternatives. For example, both mission-dependent and mission-independent metrics are considered. Mission-dependent metrics are determined by the performance of systems accomplishing a task, such as Probability of Success. In contrast, mission-independent metrics, such as acquisition cost, are solely determined and influenced by the other systems in the portfolio. RAAM also leverages advances in parallel computing to significantly reduce runtime by defining executable models that are readily amenable to parallelization. This allows the use of cloud computing infrastructures such as Amazon's Elastic Compute Cloud and the PASTEC cluster operated by the Georgia Institute of Technology Research Institute (GTRI). Also, the amount of data that can be generated when fully exploring the design space can quickly exceed the typical capacity of computational resources at the analyst's disposal. To counter this, specific algorithms and techniques are employed: streaming algorithms and recursive architecture alternative evaluation algorithms are used to reduce computer memory requirements. Lastly, a domain-specific language is created to reduce the computational time of executing the system of systems models.
A domain specific language is a small, usually declarative language that offers expressive power focused on a particular problem domain by establishing an effective means to communicate the semantics from the RAAM framework. These techniques make it possible to include diverse multi-metric models within the RAAM framework in addition to system and operational level trades. A canonical example was used to explore the uses of the methodology. The canonical example contains all of the features of a full system of systems architecture analysis study but uses fewer tasks and systems. Using RAAM with the canonical example it was possible to consider both system and operational level trades in the same analysis. Once the methodology had been tested with the canonical example, a Suppression of Enemy Air Defenses (SEAD) capability model was developed. Due to the sensitive nature of analyses on that subject, notional data was developed. The notional data has similar trends and properties to realistic Suppression of Enemy Air Defenses data. RAAM was shown to be traceable and provided a mechanism for a unified treatment of a variety of metrics. The SEAD capability model demonstrated lower computer runtimes and reduced model creation complexity as compared to methods currently in use. To determine the usefulness of the implementation of the methodology on current computing hardware, RAAM was tested with system of system architecture studies of different sizes. This was necessary since system of systems may be called upon to accomplish thousands of tasks. It has been clearly demonstrated that RAAM is able to enumerate and evaluate the types of large, complex design spaces usually encountered in capability based design, oftentimes providing the ability to efficiently search the entire decision space. The core algorithms for generation and evaluation of alternatives scale linearly with expected problem sizes. 
The SEAD capability model outputs prompted the discovery of a new issue: the data storage and manipulation requirements of an analysis. Two strategies were developed to counter large data sizes: the use of portfolio views and top 'n' analysis. This proved the usefulness of the RAAM framework and methodology during Pre-Milestone A capability-based analysis. (Abstract shortened by UMI.)

  4. Model-Averaged ℓ1 Regularization using Markov Chain Monte Carlo Model Composition

    PubMed Central

    Fraley, Chris; Percival, Daniel

    2014-01-01

    Bayesian Model Averaging (BMA) is an effective technique for addressing model uncertainty in variable selection problems. However, current BMA approaches have computational difficulty dealing with data in which there are many more measurements (variables) than samples. This paper presents a method for combining ℓ1 regularization and Markov chain Monte Carlo model composition techniques for BMA. By treating the ℓ1 regularization path as a model space, we propose a method to resolve the model uncertainty issues arising in model averaging from solution path point selection. We show that this method is computationally and empirically effective for regression and classification in high-dimensional datasets. We apply our technique in simulations, as well as to some applications that arise in genomics. PMID:25642001
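    The model-averaging step can be sketched by treating each support selected along a hypothetical ℓ1 path as a candidate model and weighting it by an approximate posterior probability derived from BIC. The supports and residual sums of squares below are made up for illustration, and BIC weighting is only a simple proxy for the paper's MC3 machinery:

```python
import math

def bic(rss, n, k):
    """Bayesian information criterion for a Gaussian linear model."""
    return n * math.log(rss / n) + k * math.log(n)

def bma_weights(models, n):
    """Approximate posterior model probabilities from BIC scores.
    `models` maps a model name to (rss, n_params); values are illustrative."""
    scores = {m: bic(rss, n, k) for m, (rss, k) in models.items()}
    best = min(scores.values())
    raw = {m: math.exp(-0.5 * (s - best)) for m, s in scores.items()}
    z = sum(raw.values())
    return {m: w / z for m, w in raw.items()}

# Three nested supports selected along a hypothetical l1 path (made-up fits):
weights = bma_weights({"x1": (52.0, 1), "x1,x2": (40.0, 2),
                       "x1,x2,x3": (39.5, 3)}, n=100)
print(max(weights, key=weights.get))
```

    Averaging predictions with these weights, rather than committing to the single best point on the path, is what addresses the solution-path point-selection uncertainty the abstract describes.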

  5. Megawatt Electromagnetic Plasma Propulsion

    NASA Technical Reports Server (NTRS)

    Gilland, James; Lapointe, Michael; Mikellides, Pavlos

    2003-01-01

    The NASA Glenn Research Center program in megawatt-level electric propulsion is centered on electromagnetic acceleration of quasi-neutral plasmas. Specific concepts currently being examined are the Magnetoplasmadynamic (MPD) thruster and the Pulsed Inductive Thruster (PIT). In the case of the MPD thruster, a multifaceted approach of experiments, computational modeling, and systems-level modeling of self-field MPD thrusters is underway. The MPD thruster experimental research consists of a 1-10 MWe, 2 ms pulse-forming network, a vacuum chamber with two 32 diffusion pumps, and voltage, current, mass flow rate, and thrust-stand diagnostics. The current focus is on obtaining repeatable thrust measurements of a Princeton-benchmark-type self-field thruster operating at 0.5-1 g/s of argon. Operation with hydrogen is the ultimate goal, to realize the increased efficiency anticipated with the lighter gas. Computational modeling is done using the MACH2 MHD code, which can include real-gas effects for propellants of interest for MPD operation. The MACH2 code has been benchmarked against other MPD thruster data and has been used to create a point design for an MPD thruster with a specific impulse (Isp) of 3000 seconds. This design is awaiting testing in the experimental facility. For the PIT, a computational investigation using MACH2 has been initiated, with experiments awaiting further funding. Although the calculated results have been found to be sensitive to the initial ionization assumptions, recent results have agreed well with experimental data. Finally, a systems-level self-field MPD thruster model has been developed that allows a mission planner or system designer to input Isp and power level into the model equations and obtain values for efficiency, mass flow rate, and input current and voltage. This model emphasizes algebraic simplicity to allow its incorporation into larger trajectory or system optimization codes.
The systems level approach will be extended to the pulsed inductive thruster and other electrodeless thrusters at a future date.
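    The algebraic character of such a systems-level model can be illustrated with the ideal one-dimensional relations among power, specific impulse, thrust, and mass flow rate. The numbers below (1 MW, Isp = 3000 s, 40% thrust efficiency) are assumed for illustration and this is a generic rocket-equation sketch, not the NASA model:

```python
G0 = 9.80665  # standard gravity, m/s^2

def mpd_point(power_w, isp_s, efficiency):
    """Ideal algebraic relations for an electric thruster operating point:
    exhaust velocity ve = g0*Isp, thrust T = 2*eta*P/ve, mdot = T/ve."""
    ve = G0 * isp_s
    thrust = 2.0 * efficiency * power_w / ve
    mdot = thrust / ve
    return thrust, mdot

# 1 MW of input power at Isp = 3000 s and an assumed 40% thrust efficiency:
t, m = mpd_point(1.0e6, 3000.0, 0.40)
print(f"thrust = {t:.1f} N, mdot = {m * 1000:.2f} g/s")
```

    Closed-form relations of this kind are what make the model cheap enough to embed inside trajectory or system optimization codes.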

  6. Fundamentals, current state of the development of, and prospects for further improvement of the new-generation thermal-hydraulic computational HYDRA-IBRAE/LM code for simulation of fast reactor systems

    NASA Astrophysics Data System (ADS)

    Alipchenkov, V. M.; Anfimov, A. M.; Afremov, D. A.; Gorbunov, V. S.; Zeigarnik, Yu. A.; Kudryavtsev, A. V.; Osipov, S. L.; Mosunova, N. A.; Strizhov, V. F.; Usov, E. V.

    2016-02-01

    The conceptual fundamentals of the development of the new-generation system thermal-hydraulic computational HYDRA-IBRAE/LM code are presented. The code is intended to simulate the thermal-hydraulic processes that take place in the loops and heat-exchange equipment of liquid-metal-cooled fast reactor systems under normal operation, during anticipated operational occurrences, and during accidents. The paper provides a brief overview of Russian and foreign system thermal-hydraulic codes for modeling liquid-metal coolants and justifies the need to develop a new-generation HYDRA-IBRAE/LM code. Considering the specific engineering features of the nuclear power plants (NPPs) equipped with the BN-1200 and the BREST-OD-300 reactors, the processes and phenomena are singled out that require detailed analysis and model development in order to be correctly described by the system thermal-hydraulic code in question. Information on the functionality of the computational code is provided, viz., the thermal-hydraulic two-phase model, the properties of the sodium and lead coolants, the closure equations for simulation of heat-mass exchange processes, the models describing the processes that take place during a steam-generator tube rupture, etc. The article gives a brief overview of the usability of the computational code, including a description of the support documentation and the supply package, as well as the possibilities of taking advantage of modern computer technologies, such as parallel computations. The paper shows the current state of verification and validation of the computational code; it also presents information on the principles of constructing and populating the verification matrices for the BREST-OD-300 and BN-1200 reactor systems. The prospects are outlined for further development of the HYDRA-IBRAE/LM code, the introduction of new models into it, and the enhancement of its usability.
It is shown that the program of development and practical application of the code will make it possible in the near future to carry out computations analyzing the safety of prospective NPP projects at a qualitatively higher level.

  7. Design synthesis and optimization of permanent magnet synchronous machines based on computationally-efficient finite element analysis

    NASA Astrophysics Data System (ADS)

    Sizov, Gennadi Y.

    In this dissertation, a model-based multi-objective optimal design of permanent magnet ac machines, supplied by sine-wave current-regulated drives, is developed and implemented. The design procedure uses an efficient electromagnetic finite-element-based solver to accurately model the nonlinear material properties and complex geometric shapes associated with magnetic circuit design, which allows accurate computation of intricate performance parameters and characteristics. The first contribution of this dissertation is the development of a rapid computational method that allows accurate and efficient exploration of large multi-dimensional design spaces in search of optimum design(s). The computationally efficient finite-element-based approach developed in this work provides a framework of tools that allow rapid analysis of synchronous electric machines operating under steady-state conditions. In the developed modeling approach, major steady-state performance parameters, such as winding flux linkages and voltages, average, cogging and ripple torques, stator core flux densities, core losses, efficiencies and saturated machine winding inductances, are calculated with minimum computational effort. In addition, the method includes means for rapid estimation of distributed stator forces and the three-dimensional effects of stator and/or rotor skew on the performance of the machine. The second contribution of this dissertation is the development of a design synthesis and optimization method based on a differential evolution algorithm. The approach relies on the developed finite-element-based modeling method for electromagnetic analysis and is able to tackle large-scale multi-objective design problems using modest computational resources. Overall, computational time savings of up to two orders of magnitude are achievable when compared with current, prevalent state-of-the-art methods.
These computational savings allow one to expand the optimization problem to achieve more complex and comprehensive design objectives. The method is used in the design process of several interior permanent magnet industrial motors. The presented case studies demonstrate that the developed finite element-based approach practically eliminates the need for using less accurate analytical and lumped parameter equivalent circuit models for electric machine design optimization. The design process and experimental validation of the case-study machines are detailed in the dissertation.

  8. A Machine Learning Framework to Forecast Wave Conditions

    NASA Astrophysics Data System (ADS)

    Zhang, Y.; James, S. C.; O'Donncha, F.

    2017-12-01

    Recently, significant effort has been undertaken to quantify and extract wave energy because it is renewable, environmentally friendly, abundant, and often close to population centers. However, a major challenge is the ability to accurately and quickly predict energy production, especially across a 48-hour cycle. Accurate forecasting of wave conditions is a challenging undertaking that typically involves solving the spectral action-balance equation on a discretized grid with high spatial resolution, and the nature of the computations typically demands high-performance computing infrastructure. Using a case-study site at Monterey Bay, California, a machine learning framework was trained to replicate numerically simulated wave conditions at a fraction of the typical computational cost. Specifically, the physics-based Simulating WAves Nearshore (SWAN) model, driven by measured wave conditions, nowcast ocean currents, and wind data, was used to generate training data for machine learning algorithms. The model was run between April 1, 2013 and May 31, 2017, generating forecasts at three-hour intervals and yielding 11,078 distinct model outputs. SWAN-generated fields of 3,104 wave heights and a characteristic period could be replicated through simple matrix multiplications using the mapping matrices from the machine learning algorithms. In fact, wave-height RMSEs from the machine learning algorithms (9 cm) were less than those for the SWAN model-verification exercise, in which the simulations were compared with buoy wave data within the model domain (>40 cm). The validated machine learning approach, which acts as an accurate surrogate for the SWAN model, can now be used to perform real-time forecasts of wave conditions for the next 48 hours using available forecasted boundary wave conditions, ocean currents, and winds.
This solution has obvious applications to wave-energy generation as accurate wave conditions can be forecasted with over a three-order-of-magnitude reduction in computational expense. The low computational cost (and by association low computer-power requirement) means that the machine learning algorithms could be installed on a wave-energy converter as a form of "edge computing" where a device could forecast its own 48-hour energy production.
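    The surrogate idea (learn a mapping from forcing features to wave-height fields, then forecast by a single matrix multiplication) can be sketched with synthetic data; the array sizes and the plain least-squares fit below are illustrative assumptions, not the trained system:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the SWAN training set: inputs are boundary/forcing
# features, outputs are wave heights at grid points (all synthetic).
X_train = rng.normal(size=(200, 5))   # 200 model runs, 5 forcing features
W_true = rng.normal(size=(5, 8))      # hidden linear response of the "model"
Y_train = X_train @ W_true            # 8 "grid point" wave heights per run

# Fit the mapping matrix by least squares; replicating a forecast is
# then a single matrix multiplication, as described in the abstract.
W, *_ = np.linalg.lstsq(X_train, Y_train, rcond=None)
x_new = rng.normal(size=(1, 5))       # new forcing conditions
y_hat = x_new @ W                     # surrogate forecast of the field

print(np.allclose(y_hat, x_new @ W_true))
```

    With the mapping matrix in hand, a 48-hour forecast costs one matrix multiply per output time, which is why the surrogate can run on very modest hardware ("edge computing" on a device).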

  9. Computational mechanics - Advances and trends; Proceedings of the Session - Future directions of Computational Mechanics of the ASME Winter Annual Meeting, Anaheim, CA, Dec. 7-12, 1986

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K. (Editor)

    1986-01-01

    The papers contained in this volume provide an overview of the advances made in a number of aspects of computational mechanics, identify some of the anticipated industry needs in this area, discuss the opportunities provided by new hardware and parallel algorithms, and outline some of the current government programs in computational mechanics. Papers are included on advances and trends in parallel algorithms, supercomputers for engineering analysis, material modeling in nonlinear finite-element analysis, the Navier-Stokes computer, and future finite-element software systems.

  10. The interplay of seven subthreshold conductances controls the resting membrane potential and the oscillatory behavior of thalamocortical neurons

    PubMed Central

    Zagha, Edward; Mato, German; Rudy, Bernardo; Nadal, Marcela S.

    2014-01-01

    The signaling properties of thalamocortical (TC) neurons depend on the diversity of ion conductance mechanisms that underlie their rich membrane behavior at subthreshold potentials. Using patch-clamp recordings of TC neurons in brain slices from mice and a realistic conductance-based computational model, we characterized seven subthreshold ion currents of TC neurons and quantified their individual contributions to the total steady-state conductance at levels below tonic firing threshold. We then used the TC neuron model to show that the resting membrane potential results from the interplay of several inward and outward currents over a background provided by the potassium and sodium leak currents. The steady-state conductances of depolarizing Ih (hyperpolarization-activated cationic current), IT (low-threshold calcium current), and INaP (persistent sodium current) move the membrane potential away from the reversal potential of the leak conductances. This depolarization is counteracted in turn by the hyperpolarizing steady-state current of IA (fast transient A-type potassium current) and IKir (inwardly rectifying potassium current). Using the computational model, we have shown that single parameter variations compatible with physiological or pathological modulation promote burst firing periodicity. The balance between three amplifying variables (activation of IT, activation of INaP, and activation of IKir) and three recovering variables (inactivation of IT, activation of IA, and activation of Ih) determines the propensity, or lack thereof, of repetitive burst firing of TC neurons. We also have determined the specific roles that each of these variables have during the intrinsic oscillation. PMID:24760784
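    The steady-state balance described here can be illustrated with the chord-conductance relation V = Σ g_i E_i / Σ g_i; the conductance and reversal-potential values below are placeholders chosen for illustration, not the fitted parameters of the published TC-neuron model:

```python
def resting_potential(channels):
    """Chord-conductance estimate of the resting potential:
    V = sum(g_i * E_i) / sum(g_i).
    Conductances in nS, reversal potentials in mV (illustrative values)."""
    g_total = sum(g for g, _ in channels)
    return sum(g * e for g, e in channels) / g_total

channels = [
    (10.0, -95.0),  # K+ leak (hyperpolarizing anchor)
    (2.0,   10.0),  # Na+ / cationic leak (depolarizing)
    (1.5,  -43.0),  # Ih, mixed cationic (depolarizing from rest)
    (0.5,  120.0),  # steady-state Ca2+ window component (depolarizing)
]
print(f"{resting_potential(channels):.1f} mV")
```

    Shifting any single conductance in this sum moves the operating point, which is the sense in which the abstract's single-parameter variations can push the cell into or out of the burst-firing regime.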

  11. Shuttle cryogenics supply system optimization study. Volume 5, B-3, part 2: Appendix to programmers manual for math model

    NASA Technical Reports Server (NTRS)

    1973-01-01

    A programmer's manual is presented for a digital computer program that permits rapid and accurate parametric analysis of current and advanced attitude control propulsion systems. The concept is a cold-helium-pressurized, subcritical-cryogen-supplied, bipropellant gas-fed attitude control propulsion system, in which the cryogen fluids are stored as liquids under low pressure and temperature conditions. The mathematical model provides a generalized form for the procedural technique employed in setting up the analysis program.

  12. Dayton Aircraft Cabin Fire Model, Version 3, Volume I. Physical Description.

    DTIC Science & Technology

    1982-06-01

    contact to any surface directly above a burning element, provided that the current flame length makes contact possible. For fires originating on the...no extension of the flames horizontally beneath the surface is considered. The equation for computing the flame length is presented in Section 5. For...high as 0.3. The values chosen for DACFIR3 are 0.15 for Ec and 0.10 for Ep. The Steward model is also used to compute flame length, hf, for the fire

  13. High-efficiency AlGaAs-GaAs Cassegrainian concentrator cells

    NASA Technical Reports Server (NTRS)

    Werthen, J. G.; Hamaker, H. C.; Virshup, G. F.; Lewis, C. R.; Ford, C. W.

    1985-01-01

    AlGaAs-GaAs heteroface space concentrator solar cells have been fabricated by metalorganic chemical vapor deposition. AM0 efficiencies as high as 21.1% have been observed for both p-n and n-p structures under concentration (90 to 100X) at 25 C. Both cell structures are characterized by high quantum efficiencies, and their performance is close to that predicted by a realistic computer model. In agreement with the computer model, the n-p cell exhibits a higher short-circuit current density.

  14. Developmental Changes in Learning: Computational Mechanisms and Social Influences

    PubMed Central

    Bolenz, Florian; Reiter, Andrea M. F.; Eppinger, Ben

    2017-01-01

    Our ability to learn from the outcomes of our actions and to adapt our decisions accordingly changes over the course of the human lifespan. In recent years, there has been an increasing interest in using computational models to understand developmental changes in learning and decision-making. Moreover, extensions of these models are currently applied to study socio-emotional influences on learning in different age groups, a topic that is of great relevance for applications in education and health psychology. In this article, we aim to provide an introduction to basic ideas underlying computational models of reinforcement learning and focus on parameters and model variants that might be of interest to developmental scientists. We then highlight recent attempts to use reinforcement learning models to study the influence of social information on learning across development. The aim of this review is to illustrate how computational models can be applied in developmental science, what they can add to our understanding of developmental mechanisms and how they can be used to bridge the gap between psychological and neurobiological theories of development. PMID:29250006
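    The core of the reinforcement learning models discussed here is a delta-rule value update together with a softmax choice rule; a minimal generic sketch follows (not the specific models reviewed in the article; the learning rate alpha and inverse temperature beta are exactly the kind of parameters studied developmentally):

```python
import math

def delta_rule_update(q, reward, alpha):
    """Rescorla-Wagner / delta-rule: move the value estimate q toward
    the received reward by a fraction alpha (the learning rate)."""
    return q + alpha * (reward - q)

def softmax(q_values, beta):
    """Softmax choice rule: beta (inverse temperature) controls how
    deterministically the higher-valued option is chosen."""
    exps = [math.exp(beta * q) for q in q_values]
    total = sum(exps)
    return [e / total for e in exps]

# A learner with alpha = 0.5 closes half the prediction error each trial:
q = 0.0
for _ in range(5):
    q = delta_rule_update(q, reward=1.0, alpha=0.5)
# q has now converged most of the way to 1.0; a smaller alpha (as often
# estimated in children or older adults) would approach it more slowly.
```

    Developmental comparisons in this literature typically amount to fitting alpha and beta per age group and asking which parameters differ.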

  15. Computational Challenges in the Analysis of Petrophysics Using Microtomography and Upscaling

    NASA Astrophysics Data System (ADS)

    Liu, J.; Pereira, G.; Freij-Ayoub, R.; Regenauer-Lieb, K.

    2014-12-01

    Microtomography provides detailed 3D internal structures of rocks at micro- to tens-of-nanometer resolution and is quickly turning into a new technology for studying petrophysical properties of materials. An important step is the upscaling of these properties, as micron or sub-micron resolution can only be achieved on samples of millimeter scale or less. We present here a recently developed computational workflow for the analysis of microstructures, including the upscaling of material properties. Computations of properties are first performed using conventional material science simulations at the micro- to nano-scale. The subsequent upscaling of these properties is done by a novel renormalization procedure based on percolation theory. We have tested the workflow using different rock samples and biological and food science materials. We have also applied the technique to high-resolution time-lapse synchrotron CT scans. In this contribution we focus on the computational challenges that arise from the big-data problem of analyzing petrophysical properties and their subsequent upscaling. We discuss the following challenges: 1) Characterization of microtomography for extremely large data sets - our current capability. 2) Computational fluid dynamics simulations at the pore scale for permeability estimation - methods, computing cost and accuracy. 3) Solid mechanical computations at the pore scale for estimating elasto-plastic properties - computational stability, cost, and efficiency. 4) Extracting critical exponents from derivative models for scaling laws - models, finite element meshing, and accuracy. Significant progress in each of these challenges is necessary to transform microtomography from the current research problem into a robust computational big-data tool for multi-scale scientific and engineering problems.
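    The flavour of percolation-based renormalization can be illustrated with a toy 2-D coarse-graining step (a textbook majority-rule scheme on a binary pore map, purely illustrative and not the authors' procedure): each 2x2 block of voxels is replaced by a single coarse cell, and iterating the step collapses a large segmented volume to successively smaller effective maps.

```python
def coarse_grain(grid):
    """One renormalization step: replace each 2x2 block of a binary
    pore map (1 = pore, 0 = solid) by one coarse cell, using a simple
    majority rule (pore if at least 2 of the 4 cells are pore).
    Grid dimensions are assumed even."""
    n = len(grid) // 2
    out = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            block_sum = (grid[2*i][2*j] + grid[2*i][2*j+1] +
                         grid[2*i+1][2*j] + grid[2*i+1][2*j+1])
            out[i][j] = 1 if block_sum >= 2 else 0
    return out

# Repeated application shrinks a 2^k x 2^k map toward a single cell,
# giving a crude indication of whether pore space persists across scales.
```

    Real upscaling rules are chosen so that the renormalized occupation probability preserves the percolation behaviour of the fine grid; the majority rule here is only the simplest stand-in.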

  16. Flow and Turbulence Modeling and Computation of Shock Buffet Onset for Conventional and Supercritical Airfoils

    NASA Technical Reports Server (NTRS)

    Bartels, Robert E.

    1998-01-01

    Flow and turbulence models applied to the problem of shock buffet onset are studied. The accuracy of the interactive boundary layer and the thin-layer Navier-Stokes equations solved with recent upwind techniques using similar transport field equation turbulence models is assessed for standard steady test cases, including conditions having significant shock separation. The two methods are found to compare well in the shock buffet onset region of a supercritical airfoil that involves strong trailing-edge separation. A computational analysis using the interactive boundary layer has revealed a Reynolds scaling effect in the shock buffet onset of the supercritical airfoil, which compares well with experiment. The methods are next applied to a conventional airfoil. Steady shock-separated computations of the conventional airfoil with the two methods compare well with experiment. Although the interactive boundary layer computations in the shock buffet region compare well with experiment for the conventional airfoil, the thin-layer Navier-Stokes computations do not. These findings are discussed in connection with possible mechanisms important in the onset of shock buffet and the constraints imposed by current numerical modeling techniques.

  17. Comparison of computer based instruction to behavior skills training for teaching staff implementation of discrete-trial instruction with an adult with autism.

    PubMed

    Nosik, Melissa R; Williams, W Larry; Garrido, Natalia; Lee, Sarah

    2013-01-01

    In the current study, behavior skills training (BST) is compared to a computer based training package for teaching staff to implement discrete-trial instruction with an adult with autism. The computer based training package consisted of instructions, video modeling and feedback. BST consisted of instructions, modeling, rehearsal and feedback. Following training, participants were evaluated on their accuracy in completing critical skills for running a discrete-trial program. Six participants completed training; three received behavior skills training and three received the computer based training. Participants in the BST group performed better overall after training and during six-week probes than those in the computer based training group. There were differences across both groups between research-assistant and natural-environment competency levels.

  18. Impact design methods for ceramic components in gas turbine engines

    NASA Technical Reports Server (NTRS)

    Song, J.; Cuccio, J.; Kington, H.

    1991-01-01

    Methods currently under development to design ceramic turbine components with improved impact resistance are presented. Two different modes of impact damage are identified and characterized, i.e., structural damage and local damage. The entire computation is incorporated into the EPIC computer code. Model capability is demonstrated by simulating instrumented plate impact and particle impact tests.

  19. Mechanisms of Reference Frame Selection in Spatial Term Use: Computational and Empirical Studies

    ERIC Educational Resources Information Center

    Schultheis, Holger; Carlson, Laura A.

    2017-01-01

    Previous studies have shown that multiple reference frames are available and compete for selection during the use of spatial terms such as "above." However, the mechanisms that underlie the selection process are poorly understood. In the current paper we present two experiments and a comparison of three computational models of selection…

  20. Efficient and Extensible Quasi-Explicit Modular Nonlinear Multiscale Battery Model: GH-MSMD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Gi-Heon; Smith, Kandler; Lawrence-Simon, Jake

    Complex physics and long computation time hinder the adoption of computer aided engineering models in the design of large-format battery cells and systems. A modular, efficient battery simulation model -- the multiscale multidomain (MSMD) model -- was previously introduced to aid the scale-up of Li-ion material and electrode designs to complete cell and pack designs, capturing electrochemical interplay with 3-D electronic current pathways and thermal response. Here, this paper enhances the computational efficiency of the MSMD model using a separation of time-scales principle to decompose model field variables. The decomposition provides a quasi-explicit linkage between the multiple length-scale domains and thus reduces time-consuming nested iteration when solving model equations across multiple domains. In addition to particle-, electrode- and cell-length scales treated in the previous work, the present formulation extends to bus bar- and multi-cell module-length scales. We provide example simulations for several variants of GH electrode-domain models.

  1. Efficient and Extensible Quasi-Explicit Modular Nonlinear Multiscale Battery Model: GH-MSMD

    DOE PAGES

    Kim, Gi-Heon; Smith, Kandler; Lawrence-Simon, Jake; ...

    2017-03-24

    Complex physics and long computation time hinder the adoption of computer aided engineering models in the design of large-format battery cells and systems. A modular, efficient battery simulation model -- the multiscale multidomain (MSMD) model -- was previously introduced to aid the scale-up of Li-ion material and electrode designs to complete cell and pack designs, capturing electrochemical interplay with 3-D electronic current pathways and thermal response. Here, this paper enhances the computational efficiency of the MSMD model using a separation of time-scales principle to decompose model field variables. The decomposition provides a quasi-explicit linkage between the multiple length-scale domains and thus reduces time-consuming nested iteration when solving model equations across multiple domains. In addition to particle-, electrode- and cell-length scales treated in the previous work, the present formulation extends to bus bar- and multi-cell module-length scales. We provide example simulations for several variants of GH electrode-domain models.
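    The separation-of-time-scales idea can be illustrated with a toy two-variable system (purely illustrative, not the GH-MSMD equations): a fast "particle-scale" state x relaxes toward a slow "cell-scale" state y. Instead of iterating the two domains to convergence at every step, the fast variable is sub-stepped with the slow variable frozen, and the slow variable is then advanced in a single explicit step, giving a quasi-explicit linkage between scales.

```python
def step_quasi_explicit(x, y, target, k_fast=50.0, k_slow=1.0,
                        dt_slow=0.01, n_sub=20):
    """One quasi-explicit step: sub-step the fast ODE
    dx/dt = -k_fast * (x - y) with y frozen, then advance the slow ODE
    dy/dt = -k_slow * (y - target) in one explicit step."""
    dt_fast = dt_slow / n_sub
    for _ in range(n_sub):
        x += dt_fast * (-k_fast * (x - y))
    y += dt_slow * (-k_slow * (y - target))
    return x, y

# Both variables relax toward the target without any nested iteration
# between the two "domains":
x, y = 0.0, 0.0
for _ in range(1000):
    x, y = step_quasi_explicit(x, y, target=1.0)
```

    The payoff, as in the abstract, is that each scale is solved with a step size matched to its own dynamics rather than the stiffest one.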

  2. Computation of load performance and other parameters of extra high speed modified Lundell alternators from 3D-FE magnetic field solutions

    NASA Technical Reports Server (NTRS)

    Wang, R.; Demerdash, N. A.

    1992-01-01

    The combined magnetic vector potential - magnetic scalar potential method of computation of 3D magnetic fields by finite elements, introduced in a companion paper, in combination with state modeling in the abc-frame of reference, are used for global 3D magnetic field analysis and machine performance computation under rated load and overload condition in an example 14.3 kVA modified Lundell alternator. The results vividly demonstrate the 3D nature of the magnetic field in such machines, and show how this model can be used as an excellent tool for computation of flux density distributions, armature current and voltage waveform profiles and harmonic contents, as well as computation of torque profiles and ripples. Use of the model in gaining insight into locations of regions in the magnetic circuit with heavy degrees of saturation is demonstrated. Experimental results which correlate well with the simulations of the load case are given.

  3. Mutual coupling effects in antenna arrays, volume 1

    NASA Technical Reports Server (NTRS)

    Collin, R. E.

    1986-01-01

    Mutual coupling between rectangular apertures in a finite antenna array, in an infinite ground plane, is analyzed using the vector potential approach. The method of moments is used to solve the equations that result from setting the tangential magnetic fields across each aperture equal. The approximation uses a set of vector potential mode functions to solve for the equivalent magnetic currents. A computer program was written to carry out this analysis, and the resulting currents were used to determine the co- and cross-polarized far-zone radiation patterns. Numerical results for various arrays using several modes in the approximation are presented. Results for one- and two-aperture arrays are compared against published data to check the agreement of this model with previous work. Computer-derived results are also compared against experimental results to test the accuracy of the model. These tests showed that the program yields valid data.

  4. Development of a computational model for astronaut reorientation.

    PubMed

    Stirling, Leia; Willcox, Karen; Newman, Dava

    2010-08-26

    The ability to model astronaut reorientations computationally provides a simple way to develop and study human motion control strategies. Since the cost of experimenting in microgravity is high, and underwater training can lead to motions inappropriate for microgravity, these techniques allow for motions to be developed and well-understood prior to any microgravity exposure. By including a model of the current space suit, we have the ability to study both intravehicular and extravehicular activities. We present several techniques for rotating about the axes of the body and show that motions performed by the legs create a greater net rotation than those performed by the arms. Adding a space suit to the motions was seen to increase the resistance torque and limit the available range of motion. While rotations about the body axes can be performed in the current space suit, the resulting motions generated a reduced rotation when compared to the unsuited configuration.

  5. Issues associated with modelling of proton exchange membrane fuel cell by computational fluid dynamics

    NASA Astrophysics Data System (ADS)

    Bednarek, Tomasz; Tsotridis, Georgios

    2017-03-01

    The objective of the current study is to highlight possible limitations and difficulties associated with Computational Fluid Dynamics in PEM single fuel cell modelling. It is shown that an appropriate convergence methodology should be applied for steady-state solutions, due to inherent numerical instabilities. A single-channel fuel cell model has been taken as a numerical example. Results are evaluated from quantitative as well as qualitative points of view. The contribution to the polarization curve of the different fuel cell components, such as bi-polar plates, gas diffusion layers, catalyst layers and membrane, was investigated via their effects on the overpotentials. Furthermore, the potential losses corresponding to reaction kinetics, ohmic resistance and mass transport limitations, as well as the effects of the exchange current density and open circuit voltage, were also investigated. It is highlighted that the lack of reliable and robust input data is one of the issues for obtaining accurate results.

  6. Current Sheets in Pulsar Magnetospheres and Winds: Particle Acceleration and Pulsed Gamma Ray Emission

    NASA Astrophysics Data System (ADS)

    Arons, Jonathan

    The research proposed addresses understanding of the origin of non-thermal energy in the Universe, a subject that began with the discovery of Cosmic Rays and continues with the study of relativistic compact objects - neutron stars and black holes. Observed Rotation Powered Pulsars (RPPs) have rotational energy loss implying they have TeraGauss magnetic fields and electric potentials as large as 40 PetaVolts. The rotational energy lost is reprocessed into particles which manifest themselves in high energy gamma ray photon emission (GeV to TeV). Observations of pulsars from the FERMI Gamma Ray Observatory, launched into orbit in 2008, have revealed 130 of these stars (and still counting), thus demonstrating the presence of efficient cosmic accelerators within the strongly magnetized regions surrounding the rotating neutron stars. Understanding the physics of these and other Cosmic Accelerators is a major goal of astrophysical research. A new model for particle acceleration in the current sheets separating the closed and open field line regions of pulsars' magnetospheres, and separating regions of opposite magnetization in the relativistic winds emerging from those magnetospheres, will be developed. The currents established in recent global models of the magnetosphere will be used as input to a magnetic field aligned acceleration model that takes account of the current carrying particles' inertia, generalizing models of the terrestrial aurora to the relativistic regime. The results will be applied to the spectacular new results from the FERMI gamma ray observatory on gamma ray pulsars, to probe the physics of the generation of the relativistic wind that carries rotational energy away from the compact stars, illuminating the whole problem of how compact objects can energize their surroundings.
The work to be performed if this proposal is funded involves extending and developing concepts from plasma physics on the dissipation of magnetic energy in thin sheets of electric current that separate regions of differing magnetization, into the domain of highly relativistic magnetic fields - those with energy density large compared to the rest mass energy of the charged particles - the plasma - caught in that field. The investigators will create theoretical and computational models of the magnetic dissipation - a form of viscous flow in the thin sheets of electric current that form in the magnetized regions around the rotating stars - using Particle-in-Cell plasma simulations. These simulations use a large computer to solve the equations of motion of many charged particles - millions to billions in the research that will be pursued - to unravel the dissipation of those fields and the acceleration of beams of particles in the thin sheets. The results will be incorporated into macroscopic MHD models of the magnetic structures around the stars, which determine the location and strength of the current sheets, so as to model and analyze the pulsed gamma ray emission seen from hundreds of Rotation Powered Pulsars. The computational models will be assisted by "pencil and paper" theoretical modeling designed to motivate and interpret the computer simulations, and connect them to the observations.

  7. Bulk refrigeration of fruits and vegetables. Part 2: Computer algorithm for heat loads and moisture loss

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Becker, B.; Misra, A.; Fricke, B.A.

    1997-12-31

    A computer algorithm was developed that estimates the latent and sensible heat loads due to the bulk refrigeration of fruits and vegetables. The algorithm also predicts the commodity moisture loss and temperature distribution which occurs during refrigeration. Part 1 focused upon the thermophysical properties of commodities and the flowfield parameters which govern the heat and mass transfer from fresh fruits and vegetables. This paper, Part 2, discusses the modeling methodology utilized in the current computer algorithm and describes the development of the heat and mass transfer models. Part 2 also compares the results of the computer algorithm to experimental data taken from the literature and describes a parametric study which was performed with the algorithm. In addition, this paper also reviews existing numerical models for determining the heat and mass transfer in bulk loads of fruits and vegetables.
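    The heat-load bookkeeping at the heart of such an algorithm reduces to two textbook relations, sketched below with illustrative values (generic formulas, not the coefficients or transpiration model of the paper): sensible heat from cooling the commodity mass, and latent heat carried by evaporative moisture loss.

```python
def sensible_heat(mass_kg, cp_j_per_kg_k, t_initial_c, t_final_c):
    """Sensible heat removed when cooling a commodity: Q = m * cp * dT (J)."""
    return mass_kg * cp_j_per_kg_k * (t_initial_c - t_final_c)

def latent_heat(moisture_loss_kg, h_fg_j_per_kg=2.5e6):
    """Latent heat associated with evaporative moisture loss:
    Q = m_water * h_fg, using a nominal latent heat of vaporization."""
    return moisture_loss_kg * h_fg_j_per_kg

# 1000 kg of produce (cp ~ 3900 J/kg.K) cooled from 25 C to 2 C,
# losing 1.5 kg of water to transpiration/evaporation:
q_sens = sensible_heat(1000, 3900, 25, 2)
q_lat = latent_heat(1.5)
```

    The full algorithm of the paper additionally resolves the temperature distribution through the bulk load and couples moisture loss to local airflow conditions.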

  8. OpenFOAM: Open source CFD in research and industry

    NASA Astrophysics Data System (ADS)

    Jasak, Hrvoje

    2009-12-01

    The current focus of development in industrial Computational Fluid Dynamics (CFD) is the integration of CFD into Computer-Aided product development, geometrical optimisation, robust design and similar. On the other hand, CFD research aims to extend the boundaries of practical engineering use into "non-traditional" areas. The requirements of computational flexibility and code integration are contradictory: a change of coding paradigm, with object orientation, library components and equation mimicking, is proposed as a way forward. This paper describes OpenFOAM, a C++ object-oriented library for Computational Continuum Mechanics (CCM) developed by the author. Efficient and flexible implementation of complex physical models is achieved by mimicking the form of the partial differential equation in software, with code functionality provided in library form. The Open Source deployment and development model allows the user to achieve desired versatility in physical modelling without the sacrifice of complex geometry support and execution efficiency.

  9. Towards agile large-scale predictive modelling in drug discovery with flow-based programming design principles.

    PubMed

    Lampa, Samuel; Alvarsson, Jonathan; Spjuth, Ola

    2016-01-01

    Predictive modelling in drug discovery is challenging to automate as it often contains multiple analysis steps and might involve cross-validation and parameter tuning that create complex dependencies between tasks. With large-scale data or when using computationally demanding modelling methods, e-infrastructures such as high-performance or cloud computing are required, adding to the existing challenges of fault-tolerant automation. Workflow management systems can aid in many of these challenges, but the currently available systems are lacking in the functionality needed to enable agile and flexible predictive modelling. We here present an approach inspired by elements of the flow-based programming paradigm, implemented as an extension of the Luigi system which we name SciLuigi. We also discuss the experiences from using the approach when modelling a large set of biochemical interactions using a shared computer cluster.

  10. The potential value of Clostridium difficile vaccine: an economic computer simulation model.

    PubMed

    Lee, Bruce Y; Popovich, Michael J; Tian, Ye; Bailey, Rachel R; Ufberg, Paul J; Wiringa, Ann E; Muder, Robert R

    2010-07-19

    Efforts are currently underway to develop a vaccine against Clostridium difficile infection (CDI). We developed two decision analytic Monte Carlo computer simulation models: (1) an Initial Prevention Model depicting the decision whether to administer C. difficile vaccine to patients at-risk for CDI and (2) a Recurrence Prevention Model depicting the decision whether to administer C. difficile vaccine to prevent CDI recurrence. Our results suggest that a C. difficile vaccine could be cost-effective over a wide range of C. difficile risk, vaccine costs, and vaccine efficacies, especially when used post-CDI treatment to prevent recurrent disease.
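    The structure of such a decision-analytic Monte Carlo model can be sketched in a few lines. All parameter values and distributions below are illustrative placeholders, not those of the published model: each trial draws whether a patient develops CDI under a given strategy, and mean costs under the two strategies are compared.

```python
import random
import statistics

def simulate_cost(vaccinate, n_trials=20000, rng=None,
                  cdi_risk=0.05, vaccine_efficacy=0.7,
                  vaccine_cost=100.0, cdi_treatment_cost=10000.0):
    """Expected cost per patient under one strategy, estimated by
    Monte Carlo draws of whether the patient develops CDI."""
    rng = rng or random.Random(0)
    risk = cdi_risk * (1 - vaccine_efficacy) if vaccinate else cdi_risk
    costs = []
    for _ in range(n_trials):
        cost = vaccine_cost if vaccinate else 0.0
        if rng.random() < risk:
            cost += cdi_treatment_cost
        costs.append(cost)
    return statistics.mean(costs)

cost_vax = simulate_cost(vaccinate=True)
cost_novax = simulate_cost(vaccinate=False)
# Sweeping cdi_risk, vaccine_efficacy and vaccine_cost over plausible
# ranges maps out the region where vaccination is cost-effective.
```

    A full model like the one in the paper would also track health outcomes (e.g. QALYs) so that an incremental cost-effectiveness ratio, not just cost, can be reported.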

  11. The Potential Value of Clostridium difficile Vaccine: An Economic Computer Simulation Model

    PubMed Central

    Lee, Bruce Y.; Popovich, Michael J.; Tian, Ye; Bailey, Rachel R.; Ufberg, Paul J.; Wiringa, Ann E.; Muder, Robert R.

    2010-01-01

    Efforts are currently underway to develop a vaccine against Clostridium difficile infection (CDI). We developed two decision analytic Monte Carlo computer simulation models: (1) an Initial Prevention Model depicting the decision whether to administer C. difficile vaccine to patients at-risk for CDI and (2) a Recurrence Prevention Model depicting the decision whether to administer C. difficile vaccine to prevent CDI recurrence. Our results suggest that a C. difficile vaccine could be cost-effective over a wide range of C. difficile risk, vaccine costs, and vaccine efficacies especially when being used post-CDI treatment to prevent recurrent disease. PMID:20541582

  12. Short-Term Forecasting of Radiation Belt and Ring Current

    NASA Technical Reports Server (NTRS)

    Fok, Mei-Ching

    2007-01-01

    A computer program implements a mathematical model of the radiation-belt and ring-current plasmas resulting from interactions between the solar wind and the Earth's magnetic field, for the purpose of predicting fluxes of energetic electrons (10 keV to 5 MeV) and protons (10 keV to 1 MeV), which are hazardous to humans and spacecraft. Given solar-wind and interplanetary-magnetic-field data as inputs, the program solves the convection-diffusion equations of plasma distribution functions in the range of 2 to 10 Earth radii. Phenomena represented in the model include particle drifts resulting from the gradient and curvature of the magnetic field; electric fields associated with the rotation of the Earth, convection, and temporal variation of the magnetic field; and losses along particle-drift paths. The model can readily accommodate new magnetic- and electric-field submodels and new information regarding physical processes that drive the radiation-belt and ring-current plasmas. Despite the complexity of the model, the program can be run in real time on ordinary computers. At present, the program can calculate present electron and proton fluxes; after further development, it should be able to predict the fluxes 24 hours in advance.
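    The numerical core of such a model is the solution of convection-diffusion equations for the plasma distribution function. A minimal 1-D explicit finite-difference sketch is shown below; it is illustrative only and far simpler than the model's multi-dimensional solver (upwind convection, central diffusion, fixed boundary values):

```python
def step(f, v, d, dx, dt):
    """One explicit time step of f_t + v * f_x = d * f_xx
    (first-order upwind convection for v >= 0, central diffusion),
    with the boundary values held fixed."""
    n = len(f)
    new = f[:]
    for i in range(1, n - 1):
        conv = -v * (f[i] - f[i - 1]) / dx
        diff = d * (f[i + 1] - 2 * f[i] + f[i - 1]) / dx ** 2
        new[i] = f[i] + dt * (conv + diff)
    return new

# Pure diffusion of an initial spike: the peak spreads and decays
# (dt * d / dx^2 = 0.2 satisfies the explicit stability limit of 0.5).
f = [0.0] * 21
f[10] = 1.0
for _ in range(50):
    f = step(f, v=0.0, d=1.0, dx=1.0, dt=0.2)
```

    In the real model the diffusion coefficients and drift (convection) speeds are driven by the solar-wind and magnetic-field inputs rather than held constant.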

  13. Theoretical basal Ca II fluxes for late-type stars: results from magnetic wave models with time-dependent ionization and multi-level radiation treatments

    NASA Astrophysics Data System (ADS)

    Fawzy, Diaa E.; Stȩpień, K.

    2018-03-01

    In the current study we present ab initio numerical computations of the generation and propagation of longitudinal waves in magnetic flux tubes embedded in the atmospheres of late-type stars. The interaction between convective turbulence and the magnetic structure is computed and the obtained longitudinal wave energy flux is used in a self-consistent manner to excite the small-scale magnetic flux tubes. In the current study we reduce the number of assumptions made in our previous studies by considering the full magnetic wave energy fluxes and spectra as well as time-dependent ionization (TDI) of hydrogen, employing multi-level Ca II atomic models, and taking into account departures from local thermodynamic equilibrium. Our models employ the recently confirmed value of the mixing-length parameter α=1.8. Regions with strong magnetic fields (magnetic filling factors of up to 50%) are also considered in the current study. The computed Ca II emission fluxes show a strong dependence on the magnetic filling factors, and the effect of time-dependent ionization (TDI) turns out to be very important in the atmospheres of late-type stars heated by acoustic and magnetic waves. The emitted Ca II fluxes with TDI included into the model are decreased by factors that range from 1.4 to 5.5 for G0V and M0V stars, respectively, compared to models that do not consider TDI. The results of our computations are compared with observations. Excellent agreement between the observed and predicted basal flux is obtained. The predicted trend of Ca II emission flux with magnetic filling factor and stellar surface temperature also agrees well with the observations but the calculated maximum fluxes for stars of different spectral types are about two times lower than observations. Though the longitudinal MHD waves considered here are important for chromosphere heating in high activity stars, additional heating mechanism(s) are apparently present.

  14. Discrete element weld model, phase 2

    NASA Technical Reports Server (NTRS)

    Prakash, C.; Samonds, M.; Singhal, A. K.

    1987-01-01

    A numerical method was developed for analyzing the tungsten inert gas (TIG) welding process. The phenomena being modeled include melting under the arc and the flow in the melt under the action of buoyancy, surface tension, and electromagnetic forces. The latter entails the calculation of the electric potential and the computation of electric current and magnetic field therefrom. Melting may occur at a single temperature or over a temperature range, and the electrical and thermal conductivities can be a function of temperature. Results of sample calculations are presented and discussed at length. A major research contribution has been the development of numerical methodology for the calculation of phase change problems in a fixed grid framework. The model has been implemented on CHAM's general purpose computer code PHOENICS. The inputs to the computer model include: geometric parameters, material properties, and weld process parameters.

  15. GRAVTool, a Package to Compute Geoid Model by Remove-Compute-Restore Technique

    NASA Astrophysics Data System (ADS)

    Marotta, G. S.; Blitzkow, D.; Vidotti, R. M.

    2015-12-01

    Currently, there are several methods to determine geoid models. They can be based on terrestrial gravity data, geopotential coefficients, astro-geodetic data or a combination of them. Among the techniques to compute a precise geoid model, the Remove-Compute-Restore (RCR) technique has been widely applied. It considers short, medium and long wavelengths derived from altitude data provided by Digital Terrain Models (DTM), terrestrial gravity data and global geopotential coefficients, respectively. In order to apply this technique, it is necessary to create procedures that compute gravity anomalies and geoid models by the integration of different wavelengths, and that adjust these models to one local vertical datum. This research presents a developed package called GRAVTool, based on MATLAB software, to compute local geoid models by the RCR technique, and its application in a study area. The study area comprises the Federal District of Brazil, with ~6000 km², wavy relief, heights varying from 600 m to 1340 m, located between the coordinates 48.25ºW, 15.45ºS and 47.33ºW, 16.06ºS. The results of the numerical example on the study area show the local geoid model computed by the GRAVTool package (Figure), using 1377 terrestrial gravity data, SRTM data with 3 arc seconds of resolution, and geopotential coefficients of the EIGEN-6C4 model to degree 360. The accuracy of the computed model (σ = ± 0.071 m, RMS = 0.069 m, maximum = 0.178 m and minimum = -0.123 m) matches the uncertainty (σ = ± 0.073 m) of 21 randomly spaced points where the geoid was computed by the geometrical leveling technique supported by GNSS positioning. The results were also better than those achieved by the Brazilian official regional geoid model (σ = ± 0.099 m, RMS = 0.208 m, maximum = 0.419 m and minimum = -0.040 m).
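    The remove-compute-restore logic itself is simple bookkeeping, sketched schematically below (the actual compute step in packages like GRAVTool involves Stokes's integration over the residual anomalies; the numeric values here are illustrative only):

```python
def remove(delta_g_obs, delta_g_ggm, delta_g_rtm):
    """Remove step: subtract the long-wavelength (global geopotential
    model) and short-wavelength (terrain) contributions from the
    observed gravity anomaly, leaving the residual anomaly (mGal)."""
    return delta_g_obs - delta_g_ggm - delta_g_rtm

def restore(n_res, n_ggm, n_rtm):
    """Restore step: add the geoid contributions of the global model
    and the terrain back to the residual geoid height (metres)."""
    return n_res + n_ggm + n_rtm

# Schematic pipeline for one point: the residual anomaly feeds the
# compute step (e.g. Stokes integration), whose output n_res is then
# restored to the full geoid height.
dg_res = remove(delta_g_obs=25.3, delta_g_ggm=20.1, delta_g_rtm=3.0)
n_full = restore(n_res=0.12, n_ggm=-15.40, n_rtm=0.05)
```

    The decomposition works because each wavelength band is modelled by the data source that resolves it best: geopotential coefficients (long), gravity observations (medium) and the DTM (short).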

  16. USE OF BIOLOGICALLY BASED COMPUTATIONAL MODELING IN MODE OF ACTION-BASED RISK ASSESSMENT – AN EXAMPLE OF CHLOROFORM

    EPA Science Inventory

    The objective of current work is to develop a new cancer dose-response assessment for chloroform using a physiologically based pharmacokinetic/pharmacodynamic (PBPK/PD) model. The PBPK/PD model is based on a mode of action in which the cytolethality of chloroform occurs when the ...

  17. Computational Fluid Dynamics Uncertainty Analysis Applied to Heat Transfer over a Flat Plate

    NASA Technical Reports Server (NTRS)

    Groves, Curtis Edward; Ilie, Marcel; Schallhorn, Paul A.

    2013-01-01

    There have been few discussions on using Computational Fluid Dynamics (CFD) without experimental validation. Pairing experimental data, uncertainty analysis, and analytical predictions provides a comprehensive approach to verification and is the current state of the art. With pressed budgets, collecting experimental data is rare or non-existent. This paper investigates and proposes a method to perform CFD uncertainty analysis only from computational data. The method uses current CFD uncertainty techniques coupled with the Student-t distribution to predict the heat transfer coefficient over a flat plate. The inputs to the CFD model are varied from a specified tolerance or bias error, and the differences in the results are used to estimate the uncertainty. The variation in each input is ranked from least to greatest to determine the order of importance. The results are compared to heat transfer correlations and conclusions drawn about the feasibility of using CFD without experimental data. The results provide a tactic to analytically estimate the uncertainty in a CFD model when experimental data is unavailable.
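    The statistical step can be sketched directly: treat the CFD results from input-perturbed runs as a small sample and form a Student-t confidence interval on the mean. This is a generic small-sample interval with two-sided 95% critical values hard-coded for small degrees of freedom, not NASA's full procedure; the sample values are illustrative.

```python
import math
import statistics

# Two-sided 95% Student-t critical values for small degrees of freedom.
T_95 = {1: 12.706, 2: 4.303, 3: 3.182, 4: 2.776, 5: 2.571,
        6: 2.447, 7: 2.365, 8: 2.306, 9: 2.262, 10: 2.228}

def t_uncertainty(samples):
    """Mean and 95% half-width for a small set of CFD results obtained
    by perturbing the inputs within their tolerance/bias bounds."""
    n = len(samples)
    mean = statistics.mean(samples)
    s = statistics.stdev(samples)            # sample standard deviation
    half_width = T_95[n - 1] * s / math.sqrt(n)
    return mean, half_width

# Heat-transfer coefficients (W/m^2.K) from runs with perturbed inputs:
h_runs = [101.2, 98.7, 103.5, 99.9, 100.8, 102.1]
h_mean, h_unc = t_uncertainty(h_runs)        # report h = h_mean +/- h_unc
```

    Ranking the inputs by the spread each perturbation induces, as the paper describes, then identifies which tolerances dominate the reported uncertainty.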

  18. Systematic Applications of Metabolomics in Metabolic Engineering

    PubMed Central

    Dromms, Robert A.; Styczynski, Mark P.

    2012-01-01

    The goals of metabolic engineering are well-served by the biological information provided by metabolomics: information on how the cell is currently using its biochemical resources is perhaps one of the best ways to inform strategies to engineer a cell to produce a target compound. Using the analysis of extracellular or intracellular levels of the target compound (or a few closely related molecules) to drive metabolic engineering is quite common. However, there is surprisingly little systematic use of metabolomics datasets, which simultaneously measure hundreds of metabolites rather than just a few, for that same purpose. Here, we review the most common systematic approaches to integrating metabolite data with metabolic engineering, with emphasis on existing efforts to use whole-metabolome datasets. We then review some of the most common approaches for computational modeling of cell-wide metabolism, including constraint-based models, and discuss current computational approaches that explicitly use metabolomics data. We conclude with discussion of the broader potential of computational approaches that systematically use metabolomics data to drive metabolic engineering. PMID:24957776
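    The constraint-based idea mentioned above can be shown in a deliberately tiny form (a toy linear pathway, not a genome-scale model or a real FBA solver): at steady state, every internal metabolite's production must equal its consumption, so in a linear chain all reaction fluxes are equal and the maximal attainable flux is simply the tightest capacity bound along the pathway. Metabolomics data enter such models precisely by tightening or relaxing these bounds.

```python
def max_chain_flux(upper_bounds):
    """Maximal steady-state flux through a linear pathway
    A -> B -> ... -> product: the mass-balance constraints force all
    reaction fluxes to be equal, so the pathway runs at the tightest
    capacity bound."""
    return min(upper_bounds)

# Uptake, two enzymatic steps, and export, with illustrative capacities
# (mmol/gDW/h); the second enzymatic step is limiting here:
flux = max_chain_flux([10.0, 8.5, 12.0, 9.0])
```

    Genome-scale flux balance analysis generalizes this to branched networks by solving a linear program over the full stoichiometric matrix, but the limiting-step intuition carries over.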

  19. Lexical is as lexical does: computational approaches to lexical representation

    PubMed Central

    Woollams, Anna M.

    2015-01-01

    In much of neuroimaging and neuropsychology, regions of the brain have been associated with ‘lexical representation’, with little consideration as to what this cognitive construct actually denotes. Within current computational models of word recognition, there are a number of different approaches to the representation of lexical knowledge. Structural lexical representations, found in original theories of word recognition, have been instantiated in modern localist models. However, such a representational scheme lacks neural plausibility in terms of economy and flexibility. Connectionist models have therefore adopted distributed representations of form and meaning. Semantic representations in connectionist models necessarily encode lexical knowledge. Yet when equipped with recurrent connections, connectionist models can also develop attractors for familiar forms that function as lexical representations. Current behavioural, neuropsychological and neuroimaging evidence shows a clear role for semantic information, but also suggests some modality- and task-specific lexical representations. A variety of connectionist architectures could implement these distributed functional representations, and further experimental and simulation work is required to discriminate between these alternatives. Future conceptualisations of lexical representations will therefore emerge from a synergy between modelling and neuroscience. PMID:25893204

  20. Nature as a network of morphological infocomputational processes for cognitive agents

    NASA Astrophysics Data System (ADS)

    Dodig-Crnkovic, Gordana

    2017-01-01

    This paper presents a view of nature as a network of infocomputational agents organized in a dynamical hierarchy of levels. It provides a framework for the unification of currently disparate understandings of natural, formal, technical, behavioral and social phenomena based on information as a structure (differences in one system that cause differences in another system) and computation as its dynamics, i.e. the physical process of morphological change in the informational structure. We address some of the frequent misunderstandings regarding natural/morphological computational models and their relationships to physical systems, especially cognitive systems such as living beings. Natural morphological infocomputation as a conceptual framework necessitates generalizing models of computation beyond the traditional symbol-manipulating Turing machine model, and requires agent-based, concurrent, resource-sensitive models of computation in order to cover the whole range of phenomena from physics to cognition. The central role of agency, particularly material vs. cognitive agency, is highlighted.

  1. Computational modeling for prediction of the shear stress of three-dimensional isotropic and aligned fiber networks.

    PubMed

    Park, Seungman

    2017-09-01

    Interstitial flow (IF) is a creeping flow through the interstitial space of the extracellular matrix (ECM). IF plays a key role in diverse biological functions, such as tissue homeostasis, cell function and behavior. Currently, most studies that have characterized IF have focused on the permeability of the ECM or the shear stress distribution on the cells, but less is known about the prediction of shear stress on individual fibers or fiber networks, despite its significance in the alignment of matrix fibers and cells observed in fibrotic or wound tissues. In this study, I developed a computational model to predict shear stress for differently structured fibrous networks. To generate isotropic models, a random growth algorithm and a second-order orientation tensor were employed. Then, a three-dimensional (3D) solid model was created using computer-aided design (CAD) software for the aligned models (i.e., parallel, perpendicular and cubic models). Subsequently, a tetrahedral unstructured mesh was generated and flow solutions were calculated by solving equations for mass and momentum conservation for all models. From the flow solutions, I estimated permeability using Darcy's law. Average shear stress (ASS) on the fibers was calculated by averaging the wall shear stress of the fibers. By using nonlinear surface fitting of permeability, viscosity, velocity, porosity and ASS, I devised new computational models. Overall, the developed models showed that higher porosity induced higher permeability, as previous empirical and theoretical models have shown. In a comparison of permeability, the present computational models matched previous models well, which justifies the computational approach. ASS tended to increase linearly with respect to inlet velocity and dynamic viscosity, whereas permeability remained almost the same. Finally, the developed model nicely predicted the ASS values that had been directly estimated from computational fluid dynamics (CFD). The present computational models will provide new tools for predicting accurate functional properties and designing fibrous porous materials, thereby significantly advancing tissue engineering. Copyright © 2017 Elsevier B.V. All rights reserved.
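    The two quantities extracted from each flow solution can be sketched as follows; the SI values are illustrative, not taken from the study:

```python
def darcy_permeability(flow_rate, viscosity, length, area, delta_p):
    """Darcy's law rearranged for permeability: k = Q * mu * L / (A * dP)."""
    return flow_rate * viscosity * length / (area * delta_p)

def average_shear_stress(wall_stresses, wall_areas):
    """Average shear stress (ASS): area-weighted mean of the wall shear
    stress computed on each fiber surface."""
    return (sum(t * a for t, a in zip(wall_stresses, wall_areas))
            / sum(wall_areas))

# Illustrative values (water-like fluid through a small fibrous sample).
k = darcy_permeability(flow_rate=1e-9, viscosity=1e-3, length=1e-4,
                       area=1e-6, delta_p=100.0)             # m^2
ass = average_shear_stress([0.5, 1.0, 1.5], [1.0, 2.0, 1.0])  # Pa
```

The nonlinear surface fit in the paper would then relate `k`, viscosity, velocity, porosity, and `ass` across many such solutions.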

  2. PDF methods for turbulent reactive flows

    NASA Technical Reports Server (NTRS)

    Hsu, Andrew T.

    1995-01-01

    Viewgraphs are presented on computation of turbulent combustion, governing equations, closure problem, PDF modeling of turbulent reactive flows, validation cases, current projects, and collaboration with industry and technology transfer.

  3. Computational Models of Cognitive Control

    PubMed Central

    O’Reilly, Randall C.; Herd, Seth A.; Pauli, Wolfgang M.

    2010-01-01

    Cognitive control refers to the ability to perform task-relevant processing in the face of distractions or other forms of interference, in the absence of strong environmental support. It depends on the integrity of the prefrontal cortex and associated biological structures (e.g., the basal ganglia). Computational models have played an influential role in developing our understanding of this system, and we review current developments in three major areas: dynamic gating of prefrontal representations, hierarchies in the prefrontal cortex, and reward, motivation, and goal-related processing in prefrontal cortex. Models in these and other areas continue to advance the field. PMID:20185294

  4. NREL Software Aids Offshore Wind Turbine Designs (Fact Sheet)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    2013-10-01

    NREL researchers are supporting offshore wind power development with computer models that allow detailed analyses of both fixed and floating offshore wind turbines. While existing computer-aided engineering (CAE) models can simulate the conditions and stresses that a land-based wind turbine experiences over its lifetime, offshore turbines require the additional considerations of variations in water depth, soil type, and wind and wave severity, which also necessitate the use of a variety of support-structure types. NREL's core wind CAE tool, FAST, models the additional effects of incident waves, sea currents, and the foundation dynamics of the support structures.

  5. Overview of Threats and Failure Models for Safety-Relevant Computer-Based Systems

    NASA Technical Reports Server (NTRS)

    Torres-Pomales, Wilfredo

    2015-01-01

    This document presents a high-level overview of the threats to safety-relevant computer-based systems, including (1) a description of the introduction and activation of physical and logical faults; (2) the propagation of their effects; and (3) function-level and component-level error and failure mode models. These models can be used in the definition of fault hypotheses (i.e., assumptions) for threat-risk mitigation strategies. This document is a contribution to a guide currently under development that is intended to provide a general technical foundation for designers and evaluators of safety-relevant systems.

  6. Comparison of measured ozone in southeastern Virginia with computer predictions from a photochemical model

    NASA Technical Reports Server (NTRS)

    Wakelyn, N. T.; Gregory, G. L.

    1980-01-01

    Data for one day of the 1977 southeastern Virginia urban plume study are compared with computer predictions from a traveling air parcel model using a contemporary photochemical mechanism with a minimal description of nonmethane hydrocarbon (NMHC) constitution and chemistry. With measured initial NOx and O3 concentrations and a current separate estimate of urban source loading input to the model, and for a variation of initial NMHC over a reasonable range, an ozone increase over the day is predicted from the photochemical simulation which is consistent with the flight path averaged airborne data.

  7. Computation of marginal distributions of peak-heights in electropherograms for analysing single source and mixture STR DNA samples.

    PubMed

    Cowell, Robert G

    2018-05-04

    Current models for single source and mixture samples, and probabilistic genotyping software based on them used for analysing STR electropherogram data, assume simple probability distributions, such as the gamma distribution, to model the allelic peak height variability given the initial amount of DNA prior to PCR amplification. Here we illustrate how amplicon number distributions, for a model of the process of sample DNA collection and PCR amplification, may be efficiently computed by evaluating probability generating functions using discrete Fourier transforms. Copyright © 2018 Elsevier B.V. All rights reserved.
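    The computational idea can be sketched generically: evaluate a probability generating function (PGF) at the N-th roots of unity and invert with a discrete Fourier transform to recover the probability masses. A binomial PGF stands in here for the paper's amplification-model PGF:

```python
import cmath
from math import comb

def pmf_from_pgf(G, N):
    """Recover p_0..p_{N-1} from a probability generating function G:
    evaluate G at the N-th roots of unity, then inverse-DFT. Exact
    (up to round-off) when the support lies in [0, N-1]."""
    vals = [G(cmath.exp(2j * cmath.pi * m / N)) for m in range(N)]
    return [abs(sum(vals[m] * cmath.exp(-2j * cmath.pi * m * k / N)
                    for m in range(N)) / N)
            for k in range(N)]

# Stand-in PGF: Binomial(n=8, p=0.3), G(z) = (1 - p + p*z)^n; the same
# machinery applies to a PGF derived from a PCR amplification model.
n, p = 8, 0.3
pmf = pmf_from_pgf(lambda z: (1 - p + p * z) ** n, N=16)
direct = [comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]
```

A production implementation would use an FFT rather than the O(N^2) sums above.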

  8. Computer model of Raritan River Basin water-supply system in central New Jersey

    USGS Publications Warehouse

    Dunne, Paul; Tasker, Gary D.

    1996-01-01

    This report describes a computer model of the Raritan River Basin water-supply system in central New Jersey. The computer model provides a technical basis for evaluating the effects of alternative patterns of operation of the Raritan River Basin water-supply system during extended periods of below-average precipitation. The computer model is a continuity-accounting model consisting of a series of interconnected nodes. At each node, the inflow volume, outflow volume, and change in storage are determined and recorded for each month. The model runs with a given set of operating rules and water-use requirements including releases, pumpages, and diversions. The model can be used to assess the hypothetical performance of the Raritan River Basin water-supply system in past years under alternative sets of operating rules. It also can be used to forecast the likelihood of specified outcomes, such as the depletion of reservoir contents below a specified threshold or of streamflows below statutory minimum passing flows, for a period of up to 12 months. The model was constructed on the basis of current reservoir capacities and the natural, unregulated monthly runoff values recorded at U.S. Geological Survey streamflow-gaging stations in the basin.
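    The node bookkeeping described above can be sketched as follows; the capacity, inflow, and demand figures are invented for illustration:

```python
def simulate_node(storage, capacity, inflows, demands, min_release=0.0):
    """Monthly continuity accounting at one reservoir node:
    storage' = storage + inflow - demand - release, clipped to
    [0, capacity]; excess above capacity spills downstream, and any
    deficit is recorded as an unmet-demand shortfall."""
    record = []
    for inflow, demand in zip(inflows, demands):
        storage += inflow - demand - min_release
        spill = max(0.0, storage - capacity)
        shortfall = max(0.0, -storage)
        storage = min(max(storage, 0.0), capacity)
        record.append((storage, spill, shortfall))
    return record

# Hypothetical node: 100-unit reservoir, three months of inflow/demand.
history = simulate_node(storage=80.0, capacity=100.0,
                        inflows=[40.0, 5.0, 0.0],
                        demands=[10.0, 25.0, 95.0])
```

A basin model chains such nodes so that one node's spill and releases become a downstream node's inflow.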

  9. Comprehensive Modeling and Visualization of Cardiac Anatomy and Physiology from CT Imaging and Computer Simulations

    PubMed Central

    Sun, Peng; Zhou, Haoyin; Ha, Seongmin; Hartaigh, Bríain ó; Truong, Quynh A.; Min, James K.

    2016-01-01

    In clinical cardiology, both anatomy and physiology are needed to diagnose cardiac pathologies. CT imaging and computer simulations provide valuable and complementary data for this purpose. However, it remains challenging to gain useful information from the large amount of high-dimensional diverse data. The current tools are not adequately integrated to visualize anatomic and physiologic data from a complete yet focused perspective. We introduce a new computer-aided diagnosis framework, which allows for comprehensive modeling and visualization of cardiac anatomy and physiology from CT imaging data and computer simulations, with a primary focus on ischemic heart disease. The following visual information is presented: (1) Anatomy from CT imaging: geometric modeling and visualization of cardiac anatomy, including four heart chambers, left and right ventricular outflow tracts, and coronary arteries; (2) Function from CT imaging: motion modeling, strain calculation, and visualization of four heart chambers; (3) Physiology from CT imaging: quantification and visualization of myocardial perfusion and contextual integration with coronary artery anatomy; (4) Physiology from computer simulation: computation and visualization of hemodynamics (e.g., coronary blood velocity, pressure, shear stress, and fluid forces on the vessel wall). Feedback from cardiologists has confirmed the practical utility of integrating these features for the purpose of computer-aided diagnosis of ischemic heart disease. PMID:26863663

  10. in vitro Models of Human Embryonic Mesenchymal Transitions in Morphogenesis

    EPA Science Inventory

    Our ability to predict human developmental consequences produced by exposure to environmental chemicals is limited by the current experimental and computational models. Human heart defects are among the most common type of birth defects and affect 1% of children (~40,000 children)...

  11. 14-qubit entanglement: creation and coherence

    NASA Astrophysics Data System (ADS)

    Barreiro, Julio

    2011-05-01

    We report the creation of multiparticle entangled states with up to 14 qubits. By investigating the coherence of up to 8 ions over time, we observe a decay proportional to the square of the number of qubits. The observed decay agrees with a theoretical model which assumes a system affected by correlated, Gaussian phase noise. This model holds for the majority of current experimental systems developed towards quantum computation and quantum metrology. Work done in collaboration with Thomas Monz, Philipp Schindler, Michael Chwalla, Daniel Nigg, William A. Coish, Maximilian Harlander, Wolfgang Haensel, Markus Hennrich, and Rainer Blatt.

  12. Mathematical model of the current density for the 30-cm engineering model thruster

    NASA Technical Reports Server (NTRS)

    Cuffel, R. F.

    1975-01-01

    Mathematical models are presented for both the singly and doubly charged ion current densities downstream of the 30-cm engineering model thruster with 0.5% compensated dished grids. These models are based on the experimental measurements of Vahrenkamp at a 2-amp ion beam operating condition. The cylindrically symmetric beam of constant velocity ions is modeled with continuous radial source and focusing functions across 'plane' grids with similar angular distribution functions. A computer program is used to evaluate the double integral for current densities in the near field and to obtain a far field approximation beyond 10 grid radii. The utility of the model is demonstrated for (1) calculating the directed thrust and (2) determining the impingement levels on various spacecraft surfaces from a two-axis gimballed, 2 x 3 thruster array.

  13. Prospective estimation of organ dose in CT under tube current modulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tian, Xiaoyu, E-mail: xt3@duke.edu; Li, Xiang; Segars, W. Paul

    Purpose: Computed tomography (CT) has been widely used worldwide as a tool for medical diagnosis and imaging. However, despite its significant clinical benefits, CT radiation dose at the population level has become a subject of public attention and concern. In this light, optimizing radiation dose has become a core responsibility for the CT community. As a fundamental step to manage and optimize dose, it may be beneficial to have accurate and prospective knowledge about the radiation dose for an individual patient. In this study, the authors developed a framework to prospectively estimate organ dose for chest and abdominopelvic CT exams under tube current modulation (TCM). Methods: The organ dose is mainly dependent on two key factors: patient anatomy and irradiation field. A prediction process was developed to accurately model both factors. To model the anatomical diversity and complexity in the patient population, the authors used a previously developed library of computational phantoms with broad distributions of sizes, ages, and genders. A selected clinical patient, represented by a computational phantom in the study, was optimally matched with another computational phantom in the library to obtain a representation of the patient’s anatomy. To model the irradiation field, a previously validated Monte Carlo program was used to model CT scanner systems. The tube current profiles were modeled using a ray-tracing program, as previously reported, that theoretically emulated the variability of modulation profiles from major CT machine manufacturers [Li et al., Phys. Med. Biol. 59, 4525–4548 (2014)].
The prediction of organ dose was achieved using the following process: (1) CTDIvol-normalized organ dose coefficients (h_organ) for fixed tube current were first estimated as the prediction basis for the computational phantoms; (2) each computational phantom, regarded as a clinical patient, was optimally matched with one computational phantom in the library; (3) to account for the effect of the TCM scheme, a weighted organ-specific CTDIvol [denoted (CTDIvol)_organ,weighted] was computed for each organ based on the TCM profile and the anatomy of the “matched” phantom; (4) the organ dose was predicted by multiplying the weighted organ-specific CTDIvol by the organ dose coefficients (h_organ). To quantify the prediction accuracy, each predicted organ dose was compared with the corresponding organ dose simulated from the Monte Carlo program with the TCM profile explicitly modeled. Results: The predicted organ dose showed good agreement with the simulated organ dose across all organs and modulation profiles. The average percentage error in organ dose estimation was generally within 20% across all organs and modulation profiles, except for organs located in the pelvic and shoulder regions. For an average CTDIvol of a CT exam of 10 mGy, the average error at full modulation strength (α = 1) across all organs was 0.91 mGy for chest exams, and 0.82 mGy for abdominopelvic exams. Conclusions: This study developed a quantitative model to predict organ dose for clinical chest and abdominopelvic scans. Such information may aid in the design of optimized CT protocols in relation to a targeted level of image quality.
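    Steps (3) and (4) of the prediction process reduce to a weighted average. The sketch below uses invented coefficients and a three-segment tube-current profile; a real implementation would derive the weights from the matched phantom's anatomy:

```python
def organ_weighted_ctdi_vol(tube_current_mA, organ_weights, ctdi_per_mA):
    """Organ-specific weighted CTDIvol: average the TCM tube-current
    profile with weights reflecting how strongly each table position
    irradiates the organ (weights sum to 1), scaled by the scanner's
    CTDIvol output per unit tube current."""
    return ctdi_per_mA * sum(mA * w
                             for mA, w in zip(tube_current_mA, organ_weights))

def predict_organ_dose(h_organ, weighted_ctdi):
    """Step (4): predicted dose = h_organ * organ-weighted CTDIvol."""
    return h_organ * weighted_ctdi

# Illustrative numbers only: 3-segment TCM profile (mA), organ weights,
# 0.05 mGy per mA, and a hypothetical dose coefficient of 1.2.
weighted = organ_weighted_ctdi_vol([100.0, 200.0, 300.0],
                                   [0.2, 0.5, 0.3], ctdi_per_mA=0.05)
dose_mGy = predict_organ_dose(h_organ=1.2, weighted_ctdi=weighted)
```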

  14. 3-D time-domain induced polarization tomography: a new approach based on a source current density formulation

    NASA Astrophysics Data System (ADS)

    Soueid Ahmed, A.; Revil, A.

    2018-04-01

    Induced polarization (IP) of porous rocks can be associated with a secondary source current density, which is proportional to both the intrinsic chargeability and the primary (applied) current density. This gives the possibility of reformulating the time domain induced polarization (TDIP) problem as a time-dependent self-potential-type problem. This new approach implies a change of strategy regarding data acquisition and inversion, allowing major time savings for both. For inverting TDIP data, we first retrieve the electrical resistivity distribution. Then, we use this electrical resistivity distribution to reconstruct the primary current density during the injection/retrieval of the (primary) current between the current electrodes A and B. The time-lapse secondary source current density distribution is determined given the primary source current density and a distribution of chargeability (forward modelling step). The inverse problem is linear between the secondary voltages (measured at all the electrodes) and the computed secondary source current density. A kernel matrix relating the secondary observed voltages data to the source current density model is computed once (using the electrical conductivity distribution), and then used throughout the inversion process. This recovered source current density model is in turn used to estimate the time-dependent chargeability (normalized voltages) in each cell of the domain of interest. Assuming a Cole-Cole model for simplicity, we can reconstruct the 3-D distributions of the relaxation time τ and the Cole-Cole exponent c by fitting the intrinsic chargeability decay curve to a Cole-Cole relaxation model for each cell. Two simple cases are studied in detail to explain this new approach. In the first case, we estimate the Cole-Cole parameters as well as the source current density field from a synthetic TDIP data set.
Our approach is successfully able to reveal the presence of the anomaly and to invert its Cole-Cole parameters. In the second case, we perform a laboratory sandbox experiment in which we mix a volume of burning coal and sand. The algorithm is able to localize the burning coal both in terms of electrical conductivity and chargeability.

  15. Fate of microplastics and mesoplastics carried by surface currents and wind waves: A numerical model approach in the Sea of Japan.

    PubMed

    Iwasaki, Shinsuke; Isobe, Atsuhiko; Kako, Shin'ichiro; Uchida, Keiichi; Tokai, Tadashi

    2017-08-15

    A numerical model was established to reproduce the oceanic transport processes of microplastics and mesoplastics in the Sea of Japan. A particle tracking model, where surface ocean currents were given by a combination of a reanalysis ocean current product and Stokes drift computed separately by a wave model, simulated particle movement. The model results corresponded with the field survey. Modeled results indicated the micro- and mesoplastics are moved northeastward by the Tsushima Current. Subsequently, Stokes drift selectively moves mesoplastics during winter toward the Japanese coast, resulting in increased contributions of mesoplastics south of 39°N. Additionally, Stokes drift also transports micro- and mesoplastics out to the sea area south of the subpolar front, where the northeastward Tsushima Current carries them into the open ocean via the Tsugaru and Soya straits. Average transit time of modeled particles in the Sea of Japan is drastically reduced when including Stokes drift in the model. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
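    The transport scheme can be caricatured with a forward-Euler tracker; the constant velocity fields below are idealized placeholders, not the reanalysis and wave-model products used in the study:

```python
def advect(pos, steps, dt, current, stokes, include_stokes=True):
    """Forward-Euler particle tracking; the advecting velocity is the
    surface current plus, optionally, the wave-induced Stokes drift."""
    x, y = pos
    for _ in range(steps):
        u, v = current(x, y)
        if include_stokes:
            us, vs = stokes(x, y)
            u, v = u + us, v + vs
        x, y = x + u * dt, y + v * dt
    return x, y

# Idealized, constant fields in m/s.
current = lambda x, y: (0.2, 0.2)   # northeastward background flow
stokes = lambda x, y: (0.05, 0.0)   # eastward wave-induced drift
with_waves = advect((0.0, 0.0), 10, 3600.0, current, stokes)
no_waves = advect((0.0, 0.0), 10, 3600.0, current, stokes,
                  include_stokes=False)
```

Comparing trajectories with and without the Stokes term is exactly how the study isolates its effect on transit time.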

  16. Field-aligned current sources in the high-latitude ionosphere

    NASA Technical Reports Server (NTRS)

    Barbosa, D. D.

    1979-01-01

    The paper determines the electric potential in a plane which is fed current from a pair of field-aligned current sheets. The ionospheric conductivity is modelled as a constant with an enhanced conductivity annular ring. It is shown that field-aligned current distributions are arbitrary functions of azimuth angle (MLT) and thus allow for asymmetric potential configurations over the pole cap. In addition, ionospheric surface currents are computed by means of stream functions. Finally, the discussion relates these methods to the electrical characteristics of the magnetosphere.

  17. Towards cortex sized artificial neural systems.

    PubMed

    Johansson, Christopher; Lansner, Anders

    2007-01-01

    We propose, implement, and discuss an abstract model of the mammalian neocortex. This model is instantiated with a sparse recurrently connected neural network that has spiking leaky integrator units and continuous Hebbian learning. First, we study the structure, modularization, and size of neocortex, and then we describe a generic computational model of the cortical circuitry. A characterizing feature of the model is that it is based on the modularization of neocortex into hypercolumns and minicolumns. Both floating- and fixed-point arithmetic implementations of the model are presented along with simulation results. We conclude that an implementation on a cluster computer is not communication but computation bounded. A mouse and rat cortex sized version of our model executes in 44% and 23% of real time, respectively. Further, an instance of the model with 1.6 × 10^6 units and 2 × 10^11 connections performed noise reduction and pattern completion. These implementations represent the current frontier of large-scale abstract neural network simulations in terms of network size and running speed.
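    A minimal, non-optimized sketch of the two unit-level ingredients named above (spiking leaky-integrator dynamics and Hebbian weight updates); all constants are placeholders, not the paper's parameters:

```python
def step_potential(v, inputs, weights, tau=10.0, threshold=1.0, dt=1.0):
    """One leaky-integrator update: dv/dt = -v/tau + weighted input.
    Returns (new_v, spiked); the potential resets to 0 after a spike."""
    v = v + dt * (-v / tau + sum(w * s for w, s in zip(weights, inputs)))
    if v >= threshold:
        return 0.0, True
    return v, False

def hebbian_update(weights, pre, post, lr=0.1):
    """Continuous Hebbian learning: dw_i = lr * pre_i * post."""
    return [w + lr * p * post for w, p in zip(weights, pre)]

# Two updates from rest with a constant input drive the unit past threshold.
v, spiked = step_potential(0.0, [1, 1], [0.3, 0.4])
v, spiked = step_potential(v, [1, 1], [0.3, 0.4])   # spikes on this step
new_w = hebbian_update([0.3, 0.4], [1.0, 1.0], post=1.0)
```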

  18. Physiological and modeling evidence for focal transcranial electrical brain stimulation in humans: A basis for high-definition tDCS

    PubMed Central

    Edwards, Dylan; Cortes, Mar; Datta, Abhishek; Minhas, Preet; Wassermann, Eric M.; Bikson, Marom

    2015-01-01

    Transcranial Direct Current Stimulation (tDCS) is a non-invasive, low-cost, well-tolerated technique producing lasting modulation of cortical excitability. Behavioral and therapeutic outcomes of tDCS are linked to the targeted brain regions, but there is little evidence that current reaches the brain as intended. We aimed to: (1) validate a computational model for estimating cortical electric fields in human transcranial stimulation, and (2) assess the magnitude and spread of cortical electric field with a novel High-Definition tDCS (HD-tDCS) scalp montage using a 4×1-Ring electrode configuration. In three healthy adults, Transcranial Electrical Stimulation (TES) over primary motor cortex (M1) was delivered using the 4×1 montage (4× cathode, surrounding a single central anode; montage radius ~3 cm) with sufficient intensity to elicit a discrete muscle twitch in the hand. The estimated current distribution in M1 was calculated using the individualized MRI-based model, and compared with the observed motor response across subjects. The response magnitude was quantified with stimulation over motor cortex as well as anterior and posterior to motor cortex. In each case the model data were consistent with the motor response across subjects. The estimated cortical electric fields with the 4×1 montage were compared (area, magnitude, direction) for TES and tDCS in each subject. We provide direct evidence in humans that TES with a 4×1-Ring configuration can activate motor cortex and that current does not substantially spread outside the stimulation area. Computational models predict that both TES and tDCS waveforms using the 4×1-Ring configuration generate electric fields in cortex with comparable gross current distribution, and preferentially directed normal (inward) currents. The agreement of modeling and experimental data for both current delivery and focality support the use of the HD-tDCS 4×1-Ring montage for cortically targeted neuromodulation. PMID:23370061

  19. Fractal model of polarization switching kinetics in ferroelectrics under nonequilibrium conditions of electron irradiation

    NASA Astrophysics Data System (ADS)

    Maslovskaya, A. G.; Barabash, T. K.

    2018-03-01

    The paper presents the results of fractal and multifractal analysis of the polarization switching current in ferroelectrics under electron irradiation, which allows statistical memory effects in the domain structure dynamics to be estimated. A mathematical model of the formation of the electron-beam-induced polarization current in ferroelectrics was proposed, taking into account the fractal nature of the domain structure dynamics. To implement the model, a computational scheme was constructed using a numerical approximation of the governing fractional differential equation. Features of the electron-beam-induced polarization switching process in ferroelectrics were characterized as the control parameters of the model were varied.
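    One standard way to discretize a fractional-order equation of this kind (shown here purely as an illustration; the paper's exact scheme is not reproduced) is the Grünwald-Letnikov approximation, applied below to the fractional relaxation equation D^α y = -λy. For α = 1 it collapses to the implicit Euler rule, a useful sanity check:

```python
def gl_weights(alpha, n):
    """Grunwald-Letnikov weights w_k = (-1)^k * C(alpha, k), via the
    recursion w_k = w_{k-1} * (1 - (alpha + 1) / k)."""
    w = [1.0]
    for k in range(1, n + 1):
        w.append(w[-1] * (1.0 - (alpha + 1.0) / k))
    return w

def fractional_relaxation(alpha, lam, y0, h, steps):
    """Implicit GL scheme for D^alpha y = -lam * y, y(0) = y0:
    y_n = -(sum_{k=1}^{n} w_k y_{n-k}) / (1 + lam * h^alpha)."""
    w = gl_weights(alpha, steps)
    y = [y0]
    denom = 1.0 + lam * h ** alpha
    for n in range(1, steps + 1):
        hist = sum(w[k] * y[n - k] for k in range(1, n + 1))
        y.append(-hist / denom)
    return y

ys = fractional_relaxation(1.0, 1.0, 1.0, 0.1, 2)   # alpha = 1 sanity case
ys_frac = fractional_relaxation(0.8, 1.0, 1.0, 0.1, 5)
```

The growing history sum is what encodes the memory effects the fractal analysis in the paper is after.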

  20. Progress on the FabrIc for Frontier Experiments project at Fermilab

    DOE PAGES

    Box, Dennis; Boyd, Joseph; Dykstra, Dave; ...

    2015-12-23

    The FabrIc for Frontier Experiments (FIFE) project is an ambitious, major-impact initiative within the Fermilab Scientific Computing Division designed to lead the computing model for Fermilab experiments. FIFE is a collaborative effort between experimenters and computing professionals to design and develop integrated computing models for experiments of varying needs and infrastructure. The major focus of the FIFE project is the development, deployment, and integration of Open Science Grid solutions for high throughput computing, data management, database access, and collaboration within an experiment. To accomplish this goal, FIFE has developed workflows that utilize Open Science Grid sites along with dedicated and commercial cloud resources. The FIFE project has made significant progress integrating into experiment computing operations several services, including new job submission services, software and reference data distribution through CVMFS repositories, a flexible data transfer client, and access to opportunistic resources on the Open Science Grid. The progress with current experiments and plans for expansion to additional projects are discussed. FIFE has taken a leading role in the definition of the computing model for Fermilab experiments, aided in the design of computing for experiments beyond Fermilab, and will continue to define the future direction of high throughput computing for future physics experiments worldwide.

  1. The FIFE Project at Fermilab

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Box, D.; Boyd, J.; Di Benedetto, V.

    2016-01-01

    The FabrIc for Frontier Experiments (FIFE) project is an initiative within the Fermilab Scientific Computing Division designed to steer the computing model for non-LHC Fermilab experiments across multiple physics areas. FIFE is a collaborative effort between experimenters and computing professionals to design and develop integrated computing models for experiments of varying size, needs, and infrastructure. The major focus of the FIFE project is the development, deployment, and integration of solutions for high throughput computing, data management, database access and collaboration management within an experiment. To accomplish this goal, FIFE has developed workflows that utilize Open Science Grid compute sites along with dedicated and commercial cloud resources. The FIFE project has made significant progress integrating into experiment computing operations several services including a common job submission service, software and reference data distribution through CVMFS repositories, flexible and robust data transfer clients, and access to opportunistic resources on the Open Science Grid. The progress with current experiments and plans for expansion with additional projects will be discussed. FIFE has taken the leading role in defining the computing model for Fermilab experiments, aided in the design of experiments beyond those hosted at Fermilab, and will continue to define the future direction of high throughput computing for future physics experiments worldwide.

  2. LANL* V1.0: a radiation belt drift shell model suitable for real-time and reanalysis applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koller, Josep; Reeves, Geoffrey D; Friedel, Reiner H W

    2008-01-01

    Space weather modeling, forecasts, and predictions, especially for the radiation belts in the inner magnetosphere, require detailed information about the Earth's magnetic field. Results depend on the magnetic field model and the L* (pronounced L-star) values used to describe particle drift shells. Space weather models require integrating particle motions along trajectories that encircle the Earth. Numerical integration typically takes on the order of 10^5 calls to a magnetic field model, which makes the L* calculations very slow, in particular when using a dynamic and more accurate magnetic field model. Researchers currently tend to pick simplistic models over more accurate ones, risking large inaccuracies and even wrong conclusions. For example, magnetic field models affect the calculation of electron phase space density through the adiabatic invariants, including the drift shell value L*. We present here a new method using a surrogate model based on a neural network technique to replace the time-consuming L* calculations made with modern magnetic field models. The advantage of surrogate models (or meta-models) is that they can compute the same output in a fraction of the time while adding only a marginal error. Our drift shell model LANL* (Los Alamos National Lab L-star) is based on L* calculations using the TSK03 model. The surrogate model has currently been tested and validated only for geosynchronous regions, but the method is generally applicable to any satellite orbit. Computations with the new model are several million times faster than the standard integration method while adding less than 1% error. Currently, real-time applications for forecasting and even nowcasting inner magnetospheric space weather are limited partly by the long computing time of accurate L* values; without them, real-time applications are limited in accuracy.
    Reanalysis of past conditions in the inner magnetosphere is used to understand physical processes and their effects. Without sufficiently accurate L* values, the interpretation of reanalysis results becomes difficult and uncertain. However, with a method that can calculate accurate L* values orders of magnitude faster, analyzing whole solar cycles' worth of data becomes feasible.
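    The surrogate idea can be sketched in a few lines. Here a polynomial fit stands in for the neural network, and the "expensive" function and all values are invented; the point is only that a cheap model trained on precomputed outputs reproduces them to well under 1% error:

    ```python
    import numpy as np

    # Hypothetical stand-in for an expensive L* computation (the real one
    # needs ~1e5 magnetic-field-model evaluations per drift shell).
    def expensive_lstar(pitch_angle):
        return 6.6 - 0.5 * np.cos(pitch_angle)  # toy closed form

    # "Train" a cheap surrogate on a sample of precomputed values.
    train_x = np.linspace(0.0, np.pi, 50)
    train_y = expensive_lstar(train_x)
    coeffs = np.polyfit(train_x, train_y, deg=6)  # surrogate = polynomial fit
    surrogate = np.poly1d(coeffs)

    # The surrogate reproduces the expensive model with marginal error.
    test_x = np.linspace(0.0, np.pi, 500)
    rel_err = np.max(np.abs(surrogate(test_x) - expensive_lstar(test_x))
                     / expensive_lstar(test_x))
    print(f"max relative error: {rel_err:.2e}")
    ```

    Evaluating the fitted polynomial costs a handful of multiply-adds, whereas each call to the real model would cost a full numerical integration.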

  3. Modeling of a carbon nanotube ultracapacitor.

    PubMed

    Orphanou, Antonis; Yamada, Toshishige; Yang, Cary Y

    2012-03-09

    The modeling of carbon nanotube ultracapacitor (CNU) performance based on the simulation of electrolyte ion motion between the cathode and the anode is described. Using a molecular dynamics (MD) approach, the equilibrium positions of the electrode charges interacting through the Coulomb potential are determined, which in turn yield the equipotential surface and electric field associated with the capacitor. With an applied ac voltage, the current is computed based on the nanotube and electrolyte particle distribution and interaction, resulting in the frequency-dependent impedance Z(ω). From the current and impedance profiles, the Nyquist and cyclic voltammetry (CV) plots are then extracted. The results of these calculations compare well with existing experimental data. A lumped-element equivalent circuit for the CNU is proposed and the impedance computed from this circuit correlates well with the simulated and measured impedances.
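    The lumped-element view can be illustrated without the MD machinery. The sketch below (not the authors' code; R and C values are invented) computes the frequency-dependent impedance of a series-RC equivalent circuit, the kind of data from which Nyquist plots are drawn:

    ```python
    import numpy as np

    # Impedance of a hypothetical series-RC equivalent circuit:
    # Z(w) = R + 1/(j*w*C).
    R, C = 10.0, 1e-3  # illustrative resistance (ohm) and capacitance (F)

    def impedance(omega):
        return R + 1.0 / (1j * omega * C)

    omegas = np.logspace(0, 5, 6)  # rad/s
    Z = impedance(omegas)
    # Nyquist plot data: Re(Z) on x, -Im(Z) on y; for a series RC the
    # locus is a vertical line at Re(Z) = R.
    for w, z in zip(omegas, Z):
        print(f"w={w:9.1f}  Re={z.real:6.2f}  -Im={-z.imag:12.2f}")
    ```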

  4. The Application of Jason-1 Measurements to Estimate the Global Near Surface Ocean Circulation for Climate Research

    NASA Technical Reports Server (NTRS)

    Niiler, Peran P.

    2004-01-01

    The scientific objective of this research program was to utilize drifter data, Jason-1 altimeter data, and a variety of wind data to determine the time-mean and time-variable wind-driven surface currents of the global ocean. Accomplishing this task required the interpolation of 6-hourly winds onto drifter tracks and the computation of the wind-coherent motions of the drifters. These calculations showed that the Ekman current model proposed by Ralph and Niiler (RN99) for the tropical Pacific was valid for all the oceans south of 40N latitude. Improvements to the RN99 model were computed, and poster presentations of the results were given in several ocean science venues, including the November 2004 GODAE meeting in St. Petersburg, FL.
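    Interpolating 6-hourly wind analyses onto drifter observation times is, at its simplest, linear interpolation in time. A minimal sketch (purely illustrative; the sample values are invented):

    ```python
    # Linearly interpolate 6-hourly wind analyses to an arbitrary
    # observation time t (hours).
    def interp_wind(times_h, winds, t):
        pairs = list(zip(times_h, winds))
        for (t0, w0), (t1, w1) in zip(pairs, pairs[1:]):
            if t0 <= t <= t1:
                frac = (t - t0) / (t1 - t0)
                return w0 + frac * (w1 - w0)
        raise ValueError("t outside analysis window")

    times = [0, 6, 12, 18]      # 6-hourly analysis times (h)
    u = [2.0, 5.0, 4.0, 3.0]    # a wind component (m/s)
    print(interp_wind(times, u, 7.5))  # -> 4.75
    ```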

  5. The diagnosis and forecast system of hydrometeorological characteristics for the White, Barents, Kara and Pechora Seas

    NASA Astrophysics Data System (ADS)

    Fomin, Vladimir; Diansky, Nikolay; Gusev, Anatoly; Kabatchenko, Ilia; Panasenkova, Irina

    2017-04-01

    The diagnosis and forecast system for simulating hydrometeorological characteristics of the Russian Western Arctic seas is presented. It performs atmospheric forcing computation with the regional non-hydrostatic Weather Research and Forecasting (WRF) atmosphere model at 15 km spatial resolution, computation of circulation, sea level, temperature, salinity and sea ice with the marine circulation model INMOM (Institute of Numerical Mathematics Ocean Model) at 2.7 km spatial resolution, and computation of wind wave parameters using the Russian wind-wave model (RWWM) at 5 km spatial resolution. Verification is performed for air temperature, air pressure, wind velocity, water temperature, currents, sea level anomaly, and wave characteristics such as wave height and wave period. The results of the hydrometeorological characteristic verification are presented for both retrospective and forecast computations. The retrospective simulation of the hydrometeorological characteristics for the White, Barents, Kara and Pechora Seas was performed with the diagnosis and forecast system for the period 1986-2015. The important features of the Kara Sea circulation are presented, and the water exchange between the Pechora and Kara Seas is described. The importance of using a non-hydrostatic atmospheric circulation model for atmospheric forcing computation in coastal areas is demonstrated. From the computation results, extreme values of hydrometeorological characteristics were obtained for the Russian Western Arctic seas.

  6. A Systematic Investigation of Computation Models for Predicting Adverse Drug Reactions (ADRs)

    PubMed Central

    Kuang, Qifan; Wang, MinQi; Li, Rong; Dong, YongCheng; Li, Yizhou; Li, Menglong

    2014-01-01

    Background Early and accurate identification of adverse drug reactions (ADRs) is critically important for drug development and clinical safety. Computer-aided prediction of ADRs has attracted increasing attention in recent years, and many computational models have been proposed. However, because of the lack of systematic analysis and comparison of the different computational models, there remain limitations in designing more effective algorithms and selecting more useful features. There is therefore an urgent need to review and analyze previous computational models to obtain general conclusions that can provide useful guidance to construct more effective computational models to predict ADRs. Principal Findings In the current study, the main work is to compare and analyze the performance of existing computational methods to predict ADRs, by implementing and evaluating additional algorithms that have previously been used for predicting drug targets. Our results indicated that topological and intrinsic features were complementary to an extent and that the Jaccard coefficient had an important and general effect on the prediction of drug-ADR associations. By comparing the structure of each algorithm, we found that the final formulas of these algorithms could all be converted to linear form; based on this finding, we propose a new algorithm called the general weighted profile method, which yielded the best overall performance among the algorithms investigated in this paper. Conclusion Several meaningful conclusions and useful findings regarding the prediction of ADRs are provided for selecting optimal features and algorithms. PMID:25180585
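    A weighted-profile style scorer can be sketched in a few lines. This is a hedged illustration, not the paper's algorithm: the drug names, ADR names, and profiles below are invented, and the candidate drug's score for each ADR is simply a Jaccard-similarity-weighted sum over known drug-ADR profiles:

    ```python
    # Jaccard similarity of two item sets.
    def jaccard(a, b):
        a, b = set(a), set(b)
        return len(a & b) / len(a | b) if a | b else 0.0

    # Hypothetical known drug -> ADR association profiles.
    known = {
        "drugA": {"nausea", "headache"},
        "drugB": {"nausea", "rash"},
        "drugC": {"dizziness"},
    }
    # Candidate drug characterised (crudely) by its partially known profile.
    candidate = {"nausea"}

    # Score each ADR by similarity-weighted votes from known profiles.
    all_adrs = {adr for profile in known.values() for adr in profile}
    scores = {adr: sum(jaccard(candidate, prof) * (adr in prof)
                       for prof in known.values())
              for adr in all_adrs}
    best = max(scores, key=scores.get)
    print(best)  # -> nausea
    ```

    Because the score is a weighted linear combination of binary association indicators, this has the linear form the authors report all the compared algorithms reduce to.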

  7. A Study of Current Trends and Issues Related to Technical/Engineering Design Graphics.

    ERIC Educational Resources Information Center

    Clark, Aaron C.; Scales, Alice

    2000-01-01

    Presents results from a survey of engineering design graphics educators who responded to questions related to current trends and issues in the profession of graphics education. Concludes that there is a clear trend in institutions towards the teaching of constraint-based modeling and computer-aided manufacturing. (Author/YDS)

  8. Regulatory Technology Development Plan - Sodium Fast Reactor. Mechanistic Source Term - Trial Calculation. Work Plan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grabaskas, David; Bucknor, Matthew; Jerden, James

    2016-02-01

    The overall objective of the SFR Regulatory Technology Development Plan (RTDP) effort is to identify and address potential impediments to the SFR regulatory licensing process. In FY14, an analysis by Argonne identified the development of an SFR-specific MST methodology as an existing licensing gap with high regulatory importance and a potentially long lead-time to closure. This work was followed by an initial examination of the current state-of-knowledge regarding SFR source term development (ANL-ART-3), which reported several potential gaps. Among these were the potential inadequacies of current computational tools to properly model and assess the transport and retention of radionuclides during a metal fuel pool-type SFR core damage incident. The objective of the current work is to determine the adequacy of existing computational tools, and the associated knowledge database, for the calculation of an SFR MST. To accomplish this task, a trial MST calculation will be performed using available computational tools to establish their limitations with regard to relevant radionuclide release/retention/transport phenomena. The application of existing modeling tools will provide a definitive test to assess their suitability for an SFR MST calculation, while also identifying potential gaps in the current knowledge base and providing insight into open issues regarding regulatory criteria/requirements. The findings of this analysis will assist in determining future research and development needs.

  9. Eddy Current Influences on the Dynamic Behaviour of Magnetic Suspension Systems

    NASA Technical Reports Server (NTRS)

    Britcher, Colin P.; Bloodgood, Dale V.

    1998-01-01

    This report summarizes results from a multi-year research effort at NASA Langley Research Center aimed at developing an improved capability for practical modelling of eddy current effects in magnetic suspension systems. Particular attention is paid to large-gap systems, although generic results applicable to both large-gap and small-gap systems are presented. It is shown that eddy currents can significantly affect the dynamic behavior of magnetic suspension systems, but that these effects are amenable to modelling and measurement. Theoretical frameworks are presented, together with comparisons of computed and experimental data, particularly for the Large Angle Magnetic Suspension Test Fixture at NASA Langley Research Center and the Annular Suspension and Pointing System at Old Dominion University. In both cases, practical computations are capable of providing reasonable estimates of important performance-related parameters. The most difficult case is seen to be that of eddy currents in highly permeable material, due to the small skin depths. Problems associated with specification of material properties and areas for future research are discussed.
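    The skin-depth difficulty mentioned above follows directly from the standard formula delta = sqrt(2*rho / (omega * mu0 * mu_r)): permeability appears in the denominator. A quick numerical sketch (material values are illustrative, not from the report):

    ```python
    import math

    # Skin depth delta = sqrt(2*rho / (omega * mu0 * mu_r)).
    MU0 = 4e-7 * math.pi  # permeability of free space (H/m)

    def skin_depth(rho, mu_r, freq_hz):
        omega = 2 * math.pi * freq_hz
        return math.sqrt(2 * rho / (omega * MU0 * mu_r))

    # Copper (mu_r ~ 1) vs. a highly permeable iron alloy at 60 Hz.
    d_cu = skin_depth(rho=1.7e-8, mu_r=1.0, freq_hz=60.0)
    d_fe = skin_depth(rho=1.0e-7, mu_r=4000.0, freq_hz=60.0)
    print(f"copper: {d_cu * 1000:.1f} mm, iron alloy: {d_fe * 1000:.2f} mm")
    ```

    The eddy currents in the permeable material are confined to a layer more than an order of magnitude thinner, which is what makes them hard to resolve computationally.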

  10. Computational Modeling Reveals Key Contributions of KCNQ and hERG Currents to the Malleability of Uterine Action Potentials Underpinning Labor

    PubMed Central

    Tong, Wing-Chiu; Tribe, Rachel M.; Smith, Roger; Taggart, Michael J.

    2014-01-01

    The electrical excitability of uterine smooth muscle cells is a key determinant of the contraction of the organ during labor and is manifested by spontaneous, periodic action potentials (APs). Near the end of term, APs vary in shape and size, reflecting an ability to change the frequency, duration and amplitude of uterine contractions. A recent mathematical model quantified several ionic features of the electrical excitability in uterine smooth muscle cells. It replicated many of the experimentally recorded uterine AP configurations, but its limitations were evident when trying to simulate the long-duration bursting APs characteristic of labor. A computational parameter search suggested that delayed rectifying K+ currents could be a key model component requiring improvement to produce the longer-lasting bursting APs. Of the delayed rectifying K+ current family, it is of interest that KCNQ and hERG channels have been reported to be gestationally regulated in the uterus. These currents exhibit features similar to the broadly defined uterine I_K1 of the original mathematical model. We thus formulated new quantitative descriptions for several I_KCNQ and I_hERG currents. Incorporation of these currents into the uterine cell model enabled simulations of the long-lasting bursting APs. Moreover, we used this modified model to simulate the effects of different contributions of I_KCNQ and I_hERG on AP form. Our findings suggest that alterations in the expression of hERG and KCNQ channels can potentially provide a mechanism for fine tuning of AP forms, lending a malleability for changing between plateau-like and long-lasting bursting-type APs as uterine cells prepare for parturition. PMID:25474527
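    Delayed-rectifier K+ currents of this kind are conventionally written in Hodgkin-Huxley form, I_K = g_K * n * (V - E_K), with a gating variable n relaxing toward a voltage-dependent steady state. The sketch below uses that generic form under a voltage clamp; the parameter values are illustrative, not those of the uterine model:

    ```python
    import math

    # Generic delayed-rectifier K+ current: I_K = g_K * n * (V - E_K).
    g_K, E_K = 0.5, -83.0  # conductance (nS) and reversal potential (mV), illustrative

    def n_inf(V):
        """Steady-state activation of the gating variable n."""
        return 1.0 / (1.0 + math.exp(-(V + 20.0) / 10.0))

    V, dt, tau_n = 0.0, 0.1, 50.0  # clamp potential (mV), step (ms), time constant (ms)
    n = n_inf(-80.0)               # start from a hyperpolarized rest
    for _ in range(5000):          # 500 ms voltage clamp at 0 mV
        n += dt * (n_inf(V) - n) / tau_n
    I_K = g_K * n * (V - E_K)
    print(f"n = {n:.3f}, I_K = {I_K:.1f} pA")
    ```

    Changing g_K (a stand-in for channel expression level) rescales I_K directly, which is the sense in which altered KCNQ/hERG expression can retune AP shape.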

  11. Interoperability of Neuroscience Modeling Software

    PubMed Central

    Cannon, Robert C.; Gewaltig, Marc-Oliver; Gleeson, Padraig; Bhalla, Upinder S.; Cornelis, Hugo; Hines, Michael L.; Howell, Fredrick W.; Muller, Eilif; Stiles, Joel R.; Wils, Stefan; De Schutter, Erik

    2009-01-01

    Neuroscience increasingly uses computational models to assist in the exploration and interpretation of complex phenomena. As a result, considerable effort is invested in the development of software tools and technologies for numerical simulations and for the creation and publication of models. The diversity of related tools leads to the duplication of effort and hinders model reuse. Development practices and technologies that support interoperability between software systems therefore play an important role in making the modeling process more efficient and in ensuring that published models can be reliably and easily reused. Various forms of interoperability are possible including the development of portable model description standards, the adoption of common simulation languages or the use of standardized middleware. Each of these approaches finds applications within the broad range of current modeling activity. However more effort is required in many areas to enable new scientific questions to be addressed. Here we present the conclusions of the “Neuro-IT Interoperability of Simulators” workshop, held at the 11th computational neuroscience meeting in Edinburgh (July 19-20 2006; http://www.cnsorg.org). We assess the current state of interoperability of neural simulation software and explore the future directions that will enable the field to advance. PMID:17873374

  12. A spiking network model of cerebellar Purkinje cells and molecular layer interneurons exhibiting irregular firing

    PubMed Central

    Lennon, William; Hecht-Nielsen, Robert; Yamazaki, Tadashi

    2014-01-01

    While the anatomy of the cerebellar microcircuit is well-studied, how it implements cerebellar function is not understood. A number of models have been proposed to describe this mechanism but few emphasize the role of the vast network Purkinje cells (PKJs) form with the molecular layer interneurons (MLIs)—the stellate and basket cells. We propose a model of the MLI-PKJ network composed of simple spiking neurons incorporating the major anatomical and physiological features. In computer simulations, the model reproduces the irregular firing patterns observed in PKJs and MLIs in vitro and a shift toward faster, more regular firing patterns when inhibitory synaptic currents are blocked. In the model, the time between PKJ spikes is shown to be proportional to the amount of feedforward inhibition from an MLI on average. The two key elements of the model are: (1) spontaneously active PKJs and MLIs due to an endogenous depolarizing current, and (2) adherence to known anatomical connectivity along a parasagittal strip of cerebellar cortex. We propose this model to extend previous spiking network models of the cerebellum and for further computational investigation into the role of irregular firing and MLIs in cerebellar learning and function. PMID:25520646
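    The first key element, spontaneous activity from an endogenous depolarizing current, can be illustrated with a leaky integrate-and-fire cell (a simpler neuron model than the one in the paper; all parameter values are invented):

    ```python
    # Leaky integrate-and-fire cell made spontaneously active by a
    # constant endogenous depolarizing current.
    V_REST, V_THRESH, V_RESET = -65.0, -50.0, -65.0  # mV
    TAU_M, R_M, I_ENDO = 20.0, 10.0, 2.0             # ms, Mohm, nA
    DT, T_TOTAL = 0.1, 1000.0                        # ms

    V, spikes, t = V_REST, 0, 0.0
    while t < T_TOTAL:
        # Membrane relaxes toward V_REST + R_M * I_ENDO = -45 mV,
        # which is above threshold, so the cell fires tonically.
        V += DT * (-(V - V_REST) + R_M * I_ENDO) / TAU_M
        if V >= V_THRESH:
            V = V_RESET
            spikes += 1
        t += DT
    print(f"{spikes} spikes in 1 s")
    ```

    Adding stochastic inhibitory synaptic currents from MLIs to such a cell lengthens and irregularizes the interspike intervals, which is the relationship the model quantifies.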

  13. Application of Psychological Theories in Agent-Based Modeling: The Case of the Theory of Planned Behavior.

    PubMed

    Scalco, Andrea; Ceschi, Andrea; Sartori, Riccardo

    2018-01-01

    It is likely that computer simulations will assume a greater role in the near future in investigating and understanding reality (Rand & Rust, 2011). In particular, agent-based models (ABMs) represent a method of investigating social phenomena that blends the knowledge of the social sciences with the advantages of virtual simulations. Within this context, the development of algorithms able to recreate the reasoning engine of autonomous virtual agents represents one of the most fragile aspects, and it is crucial to ground such models in well-supported psychological theoretical frameworks. For this reason, the present work discusses the application of the theory of planned behavior (TPB; Ajzen, 1991) in the context of agent-based modeling: it is argued that this framework might be more helpful than others in developing a valid representation of human behavior in computer simulations. Accordingly, the current contribution considers issues related to applying the model proposed by the TPB inside computer simulations and suggests potential solutions, in the hope of narrowing the distance between the fields of psychology and computer science.

  14. Computer-aided design of the human aortic root.

    PubMed

    Ovcharenko, E A; Klyshnikov, K U; Vlad, A R; Sizova, I N; Kokov, A N; Nushtaev, D V; Yuzhalin, A E; Zhuravleva, I U

    2014-11-01

    The development of computer-based 3D models of the aortic root is one of the most important problems in constructing the prostheses for transcatheter aortic valve implantation. In the current study, we analyzed data from 117 patients with and without aortic valve disease and computed tomography data from 20 patients without aortic valvular diseases in order to estimate the average values of the diameter of the aortic annulus and other aortic root parameters. Based on these data, we developed a 3D model of human aortic root with unique geometry. Furthermore, in this study we show that by applying different material properties to the aortic annulus zone in our model, we can significantly improve the quality of the results of finite element analysis. To summarize, here we present four 3D models of human aortic root with unique geometry based on computational analysis of ECHO and CT data. We suggest that our models can be utilized for the development of better prostheses for transcatheter aortic valve implantation. Copyright © 2014 Elsevier Ltd. All rights reserved.

  15. An EMTP system level model of the PMAD DC test bed

    NASA Technical Reports Server (NTRS)

    Dravid, Narayan V.; Kacpura, Thomas J.; Tam, Kwa-Sur

    1991-01-01

    A power management and distribution direct current (PMAD DC) test bed was set up at the NASA Lewis Research Center to investigate Space Station Freedom Electric Power Systems issues. Efficiency of test bed operation significantly improves with a computer simulation model of the test bed as an adjunct tool of investigation. Such a model is developed using the Electromagnetic Transients Program (EMTP) and is available to the test bed developers and experimenters. The computer model is assembled on a modular basis. Device models of different types can be incorporated into the system model with only a few lines of code. A library of the various model types is created for this purpose. Simulation results and corresponding test bed results are presented to demonstrate model validity.

  16. Dealing with Diversity in Computational Cancer Modeling

    PubMed Central

    Johnson, David; McKeever, Steve; Stamatakos, Georgios; Dionysiou, Dimitra; Graf, Norbert; Sakkalis, Vangelis; Marias, Konstantinos; Wang, Zhihui; Deisboeck, Thomas S.

    2013-01-01

    This paper discusses the need for interconnecting computational cancer models from different sources and scales within clinically relevant scenarios to increase the accuracy of the models and speed up their clinical adaptation, validation, and eventual translation. We briefly review current interoperability efforts drawing upon our experiences with the development of in silico models for predictive oncology within a number of European Commission Virtual Physiological Human initiative projects on cancer. A clinically relevant scenario, addressing brain tumor modeling that illustrates the need for coupling models from different sources and levels of complexity, is described. General approaches to enabling interoperability using XML-based markup languages for biological modeling are reviewed, concluding with a discussion on efforts towards developing cancer-specific XML markup to couple multiple component models for predictive in silico oncology. PMID:23700360

  17. Combining wet and dry research: experience with model development for cardiac mechano-electric structure-function studies

    PubMed Central

    Quinn, T. Alexander; Kohl, Peter

    2013-01-01

    Since the development of the first mathematical cardiac cell model 50 years ago, computational modelling has become an increasingly powerful tool for the analysis of data and for the integration of information related to complex cardiac behaviour. Current models build on decades of iteration between experiment and theory, representing a collective understanding of cardiac function. All models, whether computational, experimental, or conceptual, are simplified representations of reality and, like tools in a toolbox, suitable for specific applications. Their range of applicability can be explored (and expanded) by iterative combination of ‘wet’ and ‘dry’ investigation, where experimental or clinical data are used to first build and then validate computational models (allowing integration of previous findings, quantitative assessment of conceptual models, and projection across relevant spatial and temporal scales), while computational simulations are utilized for plausibility assessment, hypotheses-generation, and prediction (thereby defining further experimental research targets). When implemented effectively, this combined wet/dry research approach can support the development of a more complete and cohesive understanding of integrated biological function. This review illustrates the utility of such an approach, based on recent examples of multi-scale studies of cardiac structure and mechano-electric function. PMID:23334215

  18. High resolution modelling and observation of wind-driven surface currents in a semi-enclosed estuary

    NASA Astrophysics Data System (ADS)

    Nash, S.; Hartnett, M.; McKinstry, A.; Ragnoli, E.; Nagle, D.

    2012-04-01

    Hydrodynamic circulation in estuaries is primarily driven by tides, river inflows and surface winds. While tidal and river data can be quite easily obtained for input to hydrodynamic models, sourcing accurate surface wind data is problematic. Firstly, the wind data used in hydrodynamic models is usually measured on land and can be quite different in magnitude and direction from offshore winds. Secondly, surface winds are spatially-varying but due to a lack of data it is common practice to specify a non-varying wind speed and direction across the full extents of a model domain. These problems can lead to inaccuracies in the surface currents computed by three-dimensional hydrodynamic models. In the present research, a wind forecast model is coupled with a three-dimensional numerical model of Galway Bay, a semi-enclosed estuary on the west coast of Ireland, to investigate the effect of surface wind data resolution on model accuracy. High resolution and low resolution wind fields are specified to the model and the computed surface currents are compared with high resolution surface current measurements obtained from two high frequency SeaSonde-type Coastal Ocean Dynamics Applications Radars (CODAR). The wind forecast models used for the research are Harmonie cy361.3, running on 2.5 and 0.5km spatial grids for the low resolution and high resolution models respectively. The low-resolution model runs over an Irish domain on 540x500 grid points with 60 vertical levels and a 60s timestep and is driven by ECMWF boundary conditions. The nested high-resolution model uses 300x300 grid points on 60 vertical levels and a 12s timestep. EFDC (Environmental Fluid Dynamics Code) is used for the hydrodynamic model. The Galway Bay model has ten vertical layers and is resolved spatially and temporally at 150m and 4 sec respectively. The hydrodynamic model is run for selected hindcast dates when wind fields were highly energetic. 
Spatially- and temporally-varying wind data is provided by offline coupling with the wind forecast models. Modelled surface currents show good correlation with CODAR observed currents and the resolution of the surface wind data is shown to be important for model accuracy.

  19. Efficient prediction of terahertz quantum cascade laser dynamics from steady-state simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Agnew, G.; Lim, Y. L.; Nikolić, M.

    2015-04-20

    Terahertz-frequency quantum cascade lasers (THz QCLs) based on bound-to-continuum active regions are difficult to model owing to their large number of quantum states. We present a computationally efficient reduced rate equation (RE) model that reproduces the experimentally observed variation of THz power with respect to drive current and heat-sink temperature. We also present dynamic (time-domain) simulations under a range of drive currents and predict an increase in modulation bandwidth as the current approaches the peak of the light-current curve, as observed experimentally in mid-infrared QCLs. We account for the temperature and bias dependence of the carrier lifetimes, gain, and injection efficiency, calculated from a full rate equation model. The temperature dependence of the simulated threshold current, emitted power, and cut-off current are thus all reproduced accurately with only one fitting parameter, the interface roughness, in the full REs. We propose that the model could therefore be used for rapid dynamical simulation of QCL designs.
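    The rate-equation approach can be illustrated with a generic two-level carrier/photon pair (this is not the authors' reduced THz-QCL model; every parameter value below is invented). Integrating to steady state shows the threshold behaviour of photon number versus drive current:

    ```python
    # Generic laser rate equations, forward-Euler integrated to steady state.
    Q = 1.602e-19                  # electron charge (C)
    TAU_N, TAU_P = 1e-9, 2e-12     # carrier and photon lifetimes (s)
    G0, N_TR = 1e4, 1e7            # differential gain (1/s per carrier), transparency

    def steady_state_photons(current_amps, steps=200_000, dt=1e-12):
        N, S = 0.0, 1.0            # carrier number, photon number (seed)
        for _ in range(steps):
            G = G0 * (N - N_TR)    # net stimulated-emission rate (1/s)
            N += dt * (current_amps / Q - N / TAU_N - G * S)
            S += dt * (G * S - S / TAU_P)
            S = max(S, 1.0)        # spontaneous-emission seed floor
        return S

    below = steady_state_photons(1e-3)   # 1 mA: below threshold (~10 mA here)
    above = steady_state_photons(20e-3)  # 20 mA: lasing
    print(f"photons below threshold: {below:.1f}, above: {above:.3g}")
    ```

    The same time-domain integration, run with a modulated drive current, is the kind of dynamic simulation the abstract describes.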

  20. The Bayesian Revolution Approaches Psychological Development

    ERIC Educational Resources Information Center

    Shultz, Thomas R.

    2007-01-01

    This commentary reviews five articles that apply Bayesian ideas to psychological development, some with psychology experiments, some with computational modeling, and some with both experiments and modeling. The reviewed work extends the current Bayesian revolution into tasks often studied in children, such as causal learning and word learning, and…

  1. Practical Issues in Interactive Multimedia Design.

    ERIC Educational Resources Information Center

    James, Jeff

    This paper describes a range of computer assisted learning software models--linear, unstructured, and ideal--and discusses issues such as control, interactivity, and ease-of-programming. It also introduces a "compromise model" used for a package currently under development at the Hong Kong Polytechnic University, which is intended to…

  2. The Use of a Computer-Controlled Random Access Slide Projector for Rapid Information Display.

    ERIC Educational Resources Information Center

    Muller, Mark T.

    A 35mm random access slide projector operated in conjunction with a computer terminal was adapted to meet the need for a more rapid and complex graphic display mechanism than is currently available with teletypewriter terminals. The model projector can be operated manually to provide for a maintenance checkout of the electromechanical system.…

  3. Taking the Mystery Out of Research in Computing Information Systems: A New Approach to Teaching Research Paradigm Architecture.

    ERIC Educational Resources Information Center

    Heslin, J. Alexander, Jr.

    In senior-level undergraduate research courses in Computer Information Systems (CIS), students are required to read and assimilate a large volume of current research literature. One course objective is to demonstrate to the student that there are patterns or models or paradigms of research. A new approach in identifying research paradigms is…

  4. Computer-Aided Air-Traffic Control In The Terminal Area

    NASA Technical Reports Server (NTRS)

    Erzberger, Heinz

    1995-01-01

    Developmental computer-aided system for automated management and control of arrival traffic at large airport includes three integrated subsystems: the Traffic Management Advisor, the Descent Advisor, and the Final Approach Spacing Tool. A database that includes current wind measurements and mathematical models of the performance of aircraft types contributes to effective operation of the system.

  5. An Assessment of Feedback Procedures and Information Provided to Instructors within Computer Managed Learning Environments--Implications for Instruction and Software Redesign.

    ERIC Educational Resources Information Center

    Kotesky, Arturo A.

    Feedback procedures and information provided to instructors within computer managed learning environments were assessed to determine current usefulness and meaningfulness to users, and to present the design of a different instructor feedback instrument. Kaufman's system model was applied to accomplish the needs assessment phase of the study; and…

  6. CFD Methods and Tools for Multi-Element Airfoil Analysis

    NASA Technical Reports Server (NTRS)

    Rogers, Stuart E.; George, Michael W. (Technical Monitor)

    1995-01-01

    This lecture will discuss the computational tools currently available for high-lift multi-element airfoil analysis. It will present an overview of a number of different numerical approaches, their current capabilities, shortcomings, and computational costs. The lecture will be limited to viscous methods, including inviscid/boundary-layer coupling methods and incompressible and compressible Reynolds-averaged Navier-Stokes methods. Both structured and unstructured grid generation approaches will be presented. Two different structured grid procedures are outlined: one uses multi-block patched grids, the other overset chimera grids. Turbulence and transition modeling will be discussed.

  7. Expected orbit determination performance for the TOPEX/Poseidon mission

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nerem, R.S.; Putney, B.H.; Marshall, J.A.

    1993-03-01

    The TOPEX/Poseidon (T/P) mission, launched during the summer of 1992, has the requirement that the radial component of its orbit must be computed to an accuracy of 13 cm root-mean-square (rms) or better, allowing measurements of the sea surface height to be computed to similar accuracy when the satellite height is differenced with the altimeter measurements. This will be done by combining precise satellite tracking measurements with precise models of the forces acting on the satellite. The Space Geodesy Branch at Goddard Space Flight Center (GSFC), as part of the T/P precision orbit determination (POD) Team, has the responsibility within NASA for the T/P precise orbit computations. The prelaunch activities of the T/P POD Team have been mainly directed towards developing improved models of the static and time-varying gravitational forces acting on T/P and precise models for the non-conservative forces perturbing the orbit of T/P, such as atmospheric drag, solar and Earth radiation pressure, and thermal imbalances. The radial orbit error budget for T/P allows 10 cm rms error due to gravity field mismodeling, 3 cm due to solid Earth and ocean tides, 6 cm due to radiative forces, and 3 cm due to atmospheric drag. A prelaunch assessment of the current modeling accuracies for these forces indicates that the radial orbit error requirements can be achieved with the current models, and can probably be surpassed once T/P tracking data are used to fine-tune the models. Provided that the performance of the T/P spacecraft is nominal, the precise orbits computed by the T/P POD Team should be accurate to 13 cm or better radially.
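    Assuming the budget terms are independent, they combine by root-sum-square, which shows how the individual allocations fit under the 13 cm requirement:

    ```python
    import math

    # Radial orbit error budget terms quoted in the abstract (cm, rms).
    terms = {"gravity field": 10.0, "tides": 3.0, "radiative forces": 6.0, "drag": 3.0}
    rss = math.sqrt(sum(v ** 2 for v in terms.values()))
    print(f"RSS radial error: {rss:.1f} cm")  # under the 13 cm requirement
    ```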

  8. A computational model of self-efficacy's various effects on performance: Moving the debate forward.

    PubMed

    Vancouver, Jeffrey B; Purl, Justin D

    2017-04-01

    Self-efficacy, which is one's belief in one's capacity, has been found to both positively and negatively influence effort and performance. The reasons for these different effects have been a major topic of debate among social-cognitive and perceptual control theorists. In particular, the finding of various self-efficacy effects has been motivated by a perceptual control theory view of self-regulation that social-cognitive theorists question. To provide more clarity to the theoretical arguments, a computational model of the multiple processes presumed to create the positive, negative, and null effects of self-efficacy is presented. Building on an existing computational model of goal choice that produces a positive effect for self-efficacy, the current article adds a symbolic processing structure used during goal striving that explains the negative self-efficacy effect observed in recent studies. Moreover, the multiple processes, operating together, allow the model to recreate the various effects found in a published study of feedback ambiguity's moderating role on the self-efficacy to performance relationship (Schmidt & DeShon, 2010). Discussion focuses on the implications of the model for the self-efficacy debate, alternative computational models, the overlap between control theory and social-cognitive theory explanations, the value of using computational models for resolving theoretical disputes, and future research and directions the model inspires. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
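    The control-theoretic intuition behind a negative self-efficacy effect can be caricatured in a toy goal-striving loop (illustrative only, not Vancouver and Purl's model; all names and values are invented): the agent adjusts effort to close the gap between goal and perceived performance, and higher perceived capacity leads it to judge that less effort is needed, slowing progress within a fixed time budget:

    ```python
    # Toy negative-feedback goal-striving loop. Self-efficacy here is a
    # stand-in scalar: higher values mean the agent believes less effort
    # is required to close a given goal-performance gap.
    def simulate(self_efficacy, goal=100.0, steps=10):
        performance = 0.0
        for _ in range(steps):
            gap = goal - performance
            effort = gap / self_efficacy  # higher belief -> less effort allocated
            performance += 0.5 * effort   # effort produces progress
        return performance

    low = simulate(self_efficacy=1.0)
    high = simulate(self_efficacy=4.0)
    print(f"low SE: {low:.1f}, high SE: {high:.1f}")
    ```

    Both agents eventually reach the goal, but within the fixed horizon the high-self-efficacy agent lags, reproducing the qualitative sign of the negative effect.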

  9. Modeling Ni-Cd performance. Planned alterations to the Goddard battery model

    NASA Technical Reports Server (NTRS)

    Jagielski, J. M.

    1986-01-01

    The Goddard Space Flight Center (GSFC) currently has a preliminary computer model to simulate Nickel Cadmium (Ni-Cd) battery performance. The basic methodology of the model was described in the paper entitled Fundamental Algorithms of the Goddard Battery Model. At present, the model is undergoing alterations to increase its efficiency, accuracy, and generality. A review of the present battery model is given, and the planned changes to the model are described.

  10. Computer animation for minimally invasive surgery: computer system requirements and preferred implementations

    NASA Astrophysics Data System (ADS)

    Pieper, Steven D.; McKenna, Michael; Chen, David; McDowall, Ian E.

    1994-04-01

    We are interested in the application of computer animation to surgery. Our current project, a navigation and visualization tool for knee arthroscopy, relies on real-time computer graphics and the human interface technologies associated with virtual reality. We believe that this new combination of techniques will lead to improved surgical outcomes and decreased health care costs. To meet these expectations in the medical field, the system must be safe, usable, and cost-effective. In this paper, we outline some of the most important hardware and software specifications in the areas of video input and output, spatial tracking, stereoscopic displays, computer graphics models and libraries, mass storage and network interfaces, and operating systems. Since this is a fairly new combination of technologies and a new application, our justifications for these specifications are drawn from the current generation of surgical technology and by analogy to other fields where virtual reality technology has been more extensively applied and studied.

  11. QuEST for malware type-classification

    NASA Astrophysics Data System (ADS)

    Vaughan, Sandra L.; Mills, Robert F.; Grimaila, Michael R.; Peterson, Gilbert L.; Oxley, Mark E.; Dube, Thomas E.; Rogers, Steven K.

    2015-05-01

    Current cyber-related security and safety risks are unprecedented, due in no small part to information overload and skilled cyber-analyst shortages. Advances in decision support and Situation Awareness (SA) tools are required to support analysts in risk mitigation. Inspired by human intelligence, research in Artificial Intelligence (AI) and Computational Intelligence (CI) has provided successful engineering solutions in complex domains including cyber. Current AI approaches aggregate large volumes of data to infer the general from the particular, i.e. inductive reasoning (pattern-matching), and generally cannot infer answers not previously programmed. Humans, by contrast, though rarely able to reason over large volumes of data, have successfully reached the top of the food chain by inferring situations from partial or even partially incorrect information, i.e. abductive reasoning (pattern-completion): generating a hypothetical explanation of observations. In order to achieve an engineering advantage in computational decision support and SA, we leverage recent research on human consciousness and the role it plays in decision making by modeling qualia, the units of subjective experience which generate consciousness. This paper introduces a novel computational implementation of a Cognitive Modeling Architecture (CMA) which incorporates concepts of consciousness. We apply our model to the malware type-classification task. The underlying methodology and theories are generalizable to many domains.

  12. Multiscale modeling and computation of optically manipulated nano devices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bao, Gang, E-mail: baog@zju.edu.cn; Liu, Di, E-mail: richardl@math.msu.edu; Luo, Songting, E-mail: luos@iastate.edu

    2016-07-01

    We present a multiscale modeling and computational scheme for optical-mechanical responses of nanostructures. The multi-physical nature of the problem is a result of the interaction between the electromagnetic (EM) field, the molecular motion, and the electronic excitation. To balance accuracy and complexity, we adopt the semi-classical approach that the EM field is described classically by the Maxwell equations, and the charged particles follow the Schrödinger equations quantum mechanically. To overcome the numerical challenge of solving the high dimensional multi-component many-body Schrödinger equations, we further simplify the model with the Ehrenfest molecular dynamics to determine the motion of the nuclei, and use the Time-Dependent Current Density Functional Theory (TD-CDFT) to calculate the excitation of the electrons. This leads to a system of coupled equations that computes the electromagnetic field, the nuclear positions, and the electronic current and charge densities simultaneously. In the regime of linear responses, the resonant frequencies initiating the out-of-equilibrium optical-mechanical responses can be formulated as an eigenvalue problem. A self-consistent multiscale method is designed to deal with the well separated space scales. The isomerization of azobenzene is presented as a numerical example.

  13. Some Programs Should Not Run on Laptops - Providing Programmatic Access to Applications Via Web Services

    NASA Astrophysics Data System (ADS)

    Gupta, V.; Gupta, N.; Gupta, S.; Field, E.; Maechling, P.

    2003-12-01

    Modern laptop computers, and personal computers, can provide capabilities that are, in many ways, comparable to workstations or departmental servers. However, this doesn't mean we should run all computations on our local computers. We have identified several situations in which it is preferable to implement our seismological application programs in a distributed, server-based, computing model. In this model, application programs on the user's laptop, or local computer, invoke programs that run on an organizational server, and the results are returned to the invoking system. Situations in which a server-based architecture may be preferred include: (a) a program is written in a language, or written for an operating environment, that is unsupported on the local computer, (b) software libraries or utilities required to execute a program are not available on the user's computer, (c) a computational program is physically too large, or computationally too expensive, to run on a user's computer, (d) a user community wants to enforce a consistent method of performing a computation by standardizing on a single implementation of a program, and (e) the computational program may require current information that is not available to all client computers. Until recently, distributed, server-based, computational capabilities were implemented using client/server architectures. In these architectures, client programs were often written in the same language, and they executed in the same computing environment, as the servers. Recently, a new distributed computational model, called Web Services, has been developed. Web Services are based on Internet standards such as XML, SOAP, WSDL, and UDDI. Web Services offer the promise of platform- and language-independent distributed computing. To investigate this new computational model, and to provide useful services to the SCEC Community, we have implemented several computational and utility programs using a Web Service architecture.
We have hosted these Web Services as a part of the SCEC Community Modeling Environment (SCEC/CME) ITR Project (http://www.scec.org/cme). We have implemented Web Services for several of the reasons cited previously. For example, we implemented a FORTRAN-based Earthquake Rupture Forecast (ERF) as a Web Service for use by client computers that don't support a FORTRAN runtime environment. We implemented a Generic Mapping Tool (GMT) Web Service for use by systems that don't have local access to GMT. We implemented a Hazard Map Calculator Web Service to execute Hazard calculations that are too computationally intensive to run on a local system. We implemented a Coordinate Conversion Web Service to enforce a standard and consistent method for converting between UTM and Lat/Lon. Our experience developing these services indicates both strengths and weaknesses in current Web Service technology. Client programs that utilize Web Services typically need network access, a significant disadvantage at times. Programs with simple input and output parameters were the easiest to implement as Web Services, while programs with complex parameter-types required a significant amount of additional development. We also noted that Web Services are very data-oriented, and adapting object-oriented software into the Web Service model proved problematic. Also, the Web Service approach of converting data types into XML format for network transmission has significant inefficiencies for some data sets.
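
    The data-oriented, XML-serialization step behind SOAP-era service calls can be sketched with the standard library alone. The service name and parameters below are hypothetical illustrations, not the actual SCEC/CME API.

```python
# Minimal sketch of serializing a flat parameter dictionary into a
# SOAP-style XML request body, using only the standard library.
import xml.etree.ElementTree as ET

def build_request(service, params):
    """Serialize a flat parameter dict into a simple XML request body."""
    root = ET.Element(service)
    for name, value in params.items():
        ET.SubElement(root, name).text = str(value)
    return ET.tostring(root, encoding="unicode")

# Simple, flat parameter types like these were the easy case noted above;
# nested object graphs are where the XML mapping becomes painful.
body = build_request("ConvertCoordinates",
                     {"easting": 500000.0, "northing": 3762155.0, "zone": 11})
print(body)
```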

  14. Generalized Advanced Propeller Analysis System (GAPAS). Volume 2: Computer program user manual

    NASA Technical Reports Server (NTRS)

    Glatt, L.; Crawford, D. R.; Kosmatka, J. B.; Swigart, R. J.; Wong, E. W.

    1986-01-01

    The Generalized Advanced Propeller Analysis System (GAPAS) computer code is described. GAPAS was developed to analyze advanced technology multi-bladed propellers which operate on aircraft with speeds up to Mach 0.8 and altitudes up to 40,000 feet. GAPAS includes technology for analyzing the aerodynamic, structural, and acoustic performance of propellers. The computer code was developed for the CDC 7600 computer and is currently available for industrial use on the NASA Langley computer. A description of all the analytical models incorporated in GAPAS is included. Sample calculations are also described, as well as user requirements for modifying the analysis system. Computer system core requirements and running times are also discussed.

  15. Lightning NOx Statistics Derived by NASA Lightning Nitrogen Oxides Model (LNOM) Data Analyses

    NASA Technical Reports Server (NTRS)

    Koshak, William; Peterson, Harold

    2013-01-01

    What is the LNOM? The NASA Marshall Space Flight Center (MSFC) Lightning Nitrogen Oxides Model (LNOM) [Koshak et al., 2009, 2010, 2011; Koshak and Peterson 2011, 2013] analyzes VHF Lightning Mapping Array (LMA) and National Lightning Detection Network™ (NLDN) data to estimate the lightning nitrogen oxides (LNOx) produced by individual flashes. Figure 1 provides an overview of LNOM functionality. Benefits of LNOM: (1) Does away with unrealistic "vertical stick" lightning channel models for estimating LNOx; (2) Uses ground-based VHF data that maps out the true channel in space and time to < 100 m accuracy; (3) Therefore, true channel segment height (ambient air density) is used to compute LNOx; (4) True channel length is used! (typically tens of kilometers since the channel has many branches and "wiggles"); (5) A distinction between ground and cloud flashes is made; (6) For ground flashes, the actual peak current from the NLDN is used to compute NOx from the lightning return stroke; (7) NOx is computed for several other lightning discharge processes (based on Cooray et al., 2009 theory): (a) Hot core of stepped leaders and dart leaders, (b) Corona sheath of stepped leader, (c) K-change, (d) Continuing Currents, and (e) M-components; and (8) LNOM statistics (see later) can be used to parameterize LNOx production for regional air quality models (like CMAQ), and for global chemical transport models (like GEOS-Chem).
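
    The contrast between a "vertical stick" channel and a mapped, branched channel can be illustrated by summing NOx over 3-D channel segments weighted by ambient air density. The channel coordinates, the exponential density profile, and the per-meter yield below are all hypothetical values for illustration, not LNOM's actual inputs.

```python
import math

def air_density(z_m, rho0=1.225, scale_h=8500.0):
    """Simple exponential atmosphere (kg/m^3); an assumed profile."""
    return rho0 * math.exp(-z_m / scale_h)

def channel_nox(points, yield_per_m=1.0):
    """Sum NOx over channel segments, weighting by density at the segment midpoint."""
    total = 0.0
    for (x0, y0, z0), (x1, y1, z1) in zip(points, points[1:]):
        seg = math.dist((x0, y0, z0), (x1, y1, z1))
        total += yield_per_m * seg * air_density((z0 + z1) / 2) / air_density(0.0)
    return total

# A wiggly path between the same endpoints is much longer than the
# straight "stick", so it produces correspondingly more NOx.
stick = [(0, 0, 0), (0, 0, 6000)]
mapped = [(0, 0, 0), (2000, 1000, 3000), (-1500, 2500, 4500), (0, 0, 6000)]
print(channel_nox(stick), channel_nox(mapped))
```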

  16. Simulation studies of carbon nanotube field-effect transistors

    NASA Astrophysics Data System (ADS)

    John, David Llewellyn

    Simulation studies of carbon nanotube field-effect transistors (CNFETs) are presented using models of increasing rigour and versatility that have been systematically developed. Firstly, it is demonstrated how one may compute the standard tight-binding band structure. From this foundation, a self-consistent solution for computing the equilibrium energy band diagram of devices with Schottky-barrier source and drain contacts is developed. While this does provide insight into the likely behaviour of CNFETs, a non-equilibrium model is required in order to predict the current-voltage relation. To this end, the effective-mass approximation is utilized, where a parabolic fit to the band structure is used in order to develop a Schrödinger-Poisson solver. This model is employed to predict both DC behaviour and switching times for CNFETs, and was one of the first models that captured quantum effects, such as tunneling and resonance, in these devices. In addition, this model has been used in order to validate compact models that incorporated tunneling via the WKB approximation. A modified WKB derivation is provided in order to account for the non-zero reflection of carriers above a potential energy step. In order to allow for greater flexibility in the CNFET geometries, and to lift the effective-mass approximation, a non-equilibrium Green's function method is finally developed, which uses an atomistic tight-binding Hamiltonian to model doped-contact, as opposed to Schottky-barrier-contact, devices. This approach benefits from being able to account for both inter- and intra-band tunneling, and from utilizing a quadratic matrix equation in order to improve the computation time for the required self-energy matrices. Within this technique, an expression for the local inter-atomic current is derived in order to provide more detailed information than the usual compact expression for the terminal current.
With this final model, an investigation is presented into the effects of geometrical variations, contact thicknesses, and azimuthal variation in the surface potential of the nanotube.
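
    The "standard tight-binding band structure" step can be sketched with the textbook zone-folding formula for zigzag (n,0) tubes (Saito et al.); the hopping energy value is a commonly quoted number, not a parameter taken from the thesis.

```python
import math

# Zone-folding sketch of the pi-band structure of a zigzag (n,0) carbon
# nanotube, using the nearest-neighbor tight-binding dispersion.
# gamma (hopping energy, eV) is a commonly quoted literature value.

def zigzag_gap(n, gamma=2.7):
    """Band gap (eV) at k = 0 from the zone-folded subbands q = 1..n."""
    gaps = []
    for q in range(1, n + 1):
        c = math.cos(q * math.pi / n)
        # |E(k=0)| for subband q of a zigzag tube; max() guards float roundoff
        e = gamma * math.sqrt(max(0.0, 1.0 + 4.0 * c + 4.0 * c * c))
        gaps.append(2.0 * e)
    return min(gaps)

# Zone folding predicts metallic behavior when n is a multiple of 3:
print(zigzag_gap(9), zigzag_gap(10))  # ~0 eV vs ~0.9 eV
```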

  17. On efficiency of fire simulation realization: parallelization with greater number of computational meshes

    NASA Astrophysics Data System (ADS)

    Valasek, Lukas; Glasa, Jan

    2017-12-01

    Current fire simulation systems are capable of exploiting the advantages of available high-performance computing (HPC) platforms and of modeling fires efficiently in parallel. In this paper, the efficiency of a corridor fire simulation on an HPC computer cluster is discussed. The parallel MPI version of the Fire Dynamics Simulator is used for testing the efficiency of selected strategies for allocating the computational resources of the cluster using a greater number of computational cores. Simulation results indicate that if the number of cores used is not equal to a multiple of the total number of cluster node cores, there are allocation strategies which provide more efficient calculations.
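
    The allocation question above reduces to how a requested core count maps onto whole nodes; a minimal sketch, with a hypothetical node size that may differ from the paper's cluster:

```python
# How a requested core count maps onto cluster nodes; the 16-core node
# size is a hypothetical example.

def allocation(requested_cores, cores_per_node=16):
    """Return (nodes used, cores left idle on the last node)."""
    nodes = -(-requested_cores // cores_per_node)  # ceiling division
    idle = nodes * cores_per_node - requested_cores
    return nodes, idle

# 48 cores fill 3 nodes exactly; 40 cores strand 8 idle cores on the
# third node, which is where allocation strategy starts to matter.
print(allocation(48), allocation(40))
```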

  18. The Human Brain Project and neuromorphic computing

    PubMed Central

    Calimera, Andrea; Macii, Enrico; Poncino, Massimo

    Summary Understanding how the brain manages billions of processing units connected via kilometers of fibers and trillions of synapses, while consuming a few tens of watts, could provide the key to a completely new category of hardware (neuromorphic computing systems). In order to achieve this, a paradigm shift for computing as a whole is needed, which will see it moving away from current “bit precise” computing models and towards new techniques that exploit the stochastic behavior of simple, reliable, very fast, low-power computing devices embedded in intensely recursive architectures. In this paper we summarize how these objectives will be pursued in the Human Brain Project. PMID:24139655

  19. Global Weather Prediction and High-End Computing at NASA

    NASA Technical Reports Server (NTRS)

    Lin, Shian-Jiann; Atlas, Robert; Yeh, Kao-San

    2003-01-01

    We demonstrate current capabilities of the NASA finite-volume General Circulation Model in high-resolution global weather prediction, and discuss its development path in the foreseeable future. This model can be regarded as a prototype of a future NASA Earth modeling system intended to unify development activities cutting across various disciplines within the NASA Earth Science Enterprise.

  20. Preliminary eddy current modelling for the large angle magnetic suspension test fixture

    NASA Technical Reports Server (NTRS)

    Britcher, Colin

    1994-01-01

    This report presents some recent developments in the mathematical modeling of the Large Angle Magnetic Suspension Test Fixture (LAMSTF) at NASA Langley Research Center. It is shown that eddy current effects are significant, but may be amenable to analysis, modeling, and measurement. A theoretical framework is presented, together with a comparison of computed and experimental data.

  1. Use of computational modeling combined with advanced visualization to develop strategies for the design of crop ideotypes to address food security

    DOE PAGES

    Christensen, A. J.; Srinivasan, V.; Hart, J. C.; ...

    2018-03-17

    Sustainable crop production is a contributing factor to current and future food security. Innovative technologies are needed to design strategies that will achieve higher crop yields on less land and with fewer resources. Computational modeling coupled with advanced scientific visualization enables researchers to explore and interact with complex agriculture, nutrition, and climate data to predict how crops will respond to untested environments. These virtual observations and predictions can direct the development of crop ideotypes designed to meet future yield and nutritional demands. This review surveys modeling strategies for the development of crop ideotypes and scientific visualization technologies that have led to discoveries in “big data” analysis. Combined modeling and visualization approaches have been used to realistically simulate crops and to guide selection that immediately enhances crop quantity and quality under challenging environmental conditions. Lastly, this survey of current and developing technologies indicates that integrative modeling and advanced scientific visualization may help overcome challenges in agriculture and nutrition data as large-scale and multidimensional data become available in these fields.

  2. Use of computational modeling combined with advanced visualization to develop strategies for the design of crop ideotypes to address food security

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Christensen, A. J.; Srinivasan, V.; Hart, J. C.

    Sustainable crop production is a contributing factor to current and future food security. Innovative technologies are needed to design strategies that will achieve higher crop yields on less land and with fewer resources. Computational modeling coupled with advanced scientific visualization enables researchers to explore and interact with complex agriculture, nutrition, and climate data to predict how crops will respond to untested environments. These virtual observations and predictions can direct the development of crop ideotypes designed to meet future yield and nutritional demands. This review surveys modeling strategies for the development of crop ideotypes and scientific visualization technologies that have led to discoveries in “big data” analysis. Combined modeling and visualization approaches have been used to realistically simulate crops and to guide selection that immediately enhances crop quantity and quality under challenging environmental conditions. Lastly, this survey of current and developing technologies indicates that integrative modeling and advanced scientific visualization may help overcome challenges in agriculture and nutrition data as large-scale and multidimensional data become available in these fields.

  3. Use of computational modeling combined with advanced visualization to develop strategies for the design of crop ideotypes to address food security.

    PubMed

    Christensen, A J; Srinivasan, Venkatraman; Hart, John C; Marshall-Colon, Amy

    2018-05-01

    Sustainable crop production is a contributing factor to current and future food security. Innovative technologies are needed to design strategies that will achieve higher crop yields on less land and with fewer resources. Computational modeling coupled with advanced scientific visualization enables researchers to explore and interact with complex agriculture, nutrition, and climate data to predict how crops will respond to untested environments. These virtual observations and predictions can direct the development of crop ideotypes designed to meet future yield and nutritional demands. This review surveys modeling strategies for the development of crop ideotypes and scientific visualization technologies that have led to discoveries in "big data" analysis. Combined modeling and visualization approaches have been used to realistically simulate crops and to guide selection that immediately enhances crop quantity and quality under challenging environmental conditions. This survey of current and developing technologies indicates that integrative modeling and advanced scientific visualization may help overcome challenges in agriculture and nutrition data as large-scale and multidimensional data become available in these fields.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Potash, Peter J.; Bell, Eric B.; Harrison, Joshua J.

    Predictive models for tweet deletion have been a relatively unexplored area of Twitter-related computational research. We first approach the deletion of tweets as a spam detection problem, applying a small set of handcrafted features to improve upon the current state-of-the-art in predicting deleted tweets. Next, we apply our approach to a dataset of deleted tweets that better reflects the current deletion rate. Since tweets are deleted for reasons beyond just the presence of spam, we apply topic modeling and text embeddings in order to capture the semantic content of tweets that can lead to tweet deletion. Our goal is to create an effective model that has a low-dimensional feature space and is also language-independent. A lean model would be computationally advantageous for processing high volumes of Twitter data, which can reach 9,885 tweets per second. Our results show that a small set of spam-related features combined with word topics and character-level text embeddings provide the best F1 score when trained with a random forest model. The highest precision of the deleted tweet class is achieved by a modification of paragraph2vec to capture author identity.

  5. Use of computational modeling combined with advanced visualization to develop strategies for the design of crop ideotypes to address food security

    PubMed Central

    Christensen, A J; Srinivasan, Venkatraman; Hart, John C; Marshall-Colon, Amy

    2018-01-01

    Abstract Sustainable crop production is a contributing factor to current and future food security. Innovative technologies are needed to design strategies that will achieve higher crop yields on less land and with fewer resources. Computational modeling coupled with advanced scientific visualization enables researchers to explore and interact with complex agriculture, nutrition, and climate data to predict how crops will respond to untested environments. These virtual observations and predictions can direct the development of crop ideotypes designed to meet future yield and nutritional demands. This review surveys modeling strategies for the development of crop ideotypes and scientific visualization technologies that have led to discoveries in “big data” analysis. Combined modeling and visualization approaches have been used to realistically simulate crops and to guide selection that immediately enhances crop quantity and quality under challenging environmental conditions. This survey of current and developing technologies indicates that integrative modeling and advanced scientific visualization may help overcome challenges in agriculture and nutrition data as large-scale and multidimensional data become available in these fields. PMID:29562368

  6. Turbomachinery CFD on parallel computers

    NASA Technical Reports Server (NTRS)

    Blech, Richard A.; Milner, Edward J.; Quealy, Angela; Townsend, Scott E.

    1992-01-01

    The role of multistage turbomachinery simulation in the development of propulsion system models is discussed. Particularly, the need for simulations with higher fidelity and faster turnaround time is highlighted. It is shown how such fast simulations can be used in engineering-oriented environments. The use of parallel processing to achieve the required turnaround times is discussed. Current work by several researchers in this area is summarized. Parallel turbomachinery CFD research at the NASA Lewis Research Center is then highlighted. These efforts are focused on implementing the average-passage turbomachinery model on MIMD, distributed memory parallel computers. Performance results are given for inviscid, single blade row and viscous, multistage applications on several parallel computers, including networked workstations.

  7. Ionospheric convection inferred from interplanetary magnetic field-dependent Birkeland currents

    NASA Technical Reports Server (NTRS)

    Rasmussen, C. E.; Schunk, R. W.

    1988-01-01

    Computer simulations of ionospheric convection have been performed, combining empirical models of Birkeland currents with a model of ionospheric conductivity in order to investigate IMF-dependent convection characteristics. Birkeland currents representing conditions in the northern polar cap for a negative IMF By component are used. Two possibilities are considered: (1) the morning cell shifting into the polar cap as the IMF turns northward, and this cell and a distorted evening cell providing for sunward flow in the polar cap; and (2) the existence of a three-cell pattern when the IMF is strongly northward.

  8. Ecological systems as computer networks: Long distance sea dispersal as a communication medium between island plant populations.

    PubMed

    Sanaa, Adnen; Ben Abid, Samir; Boulila, Abdennacer; Messaoud, Chokri; Boussaid, Mohamed; Ben Fadhel, Najeh

    2016-06-01

    Ecological systems are known to exchange genetic material through animal migration and, for plants, seed dispersal. Isolated plant populations have developed long distance dispersal as a means of propagation, relying on meteorological and hydrological vectors such as anemochory and hydrochory for coast-, island- and river-bank-dwelling species. Long distance dispersal by water, in particular in the case of islands linked by water currents, calls for an analogy with computer networks, where each island and nearby mainland site plays the role of a network node, the water currents play the role of a transmission channel, and water-borne seeds act as data packets. In this paper we explore this analogy to model long distance dispersal of seeds among island and mainland populations connected by water currents, in order to model and predict their future genetic diversity. The case of Pancratium maritimum L. populations in Tunisia is used as a proof of concept, where their genetic diversity is extrapolated. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
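
    The island-network analogy can be sketched as a directed graph: islands and mainland sites as nodes, water currents as edges. The toy adjacency below is invented for illustration; it is not the Tunisian site network from the paper.

```python
# Islands and mainland sites as nodes; directed water currents as edges.
from collections import deque

currents = {  # node -> downstream nodes reachable by water currents
    "mainland": ["island_a"],
    "island_a": ["island_b"],
    "island_b": ["island_c"],
    "island_c": [],
}

def reachable(source, graph):
    """Nodes whose gene pool the source can seed via current-borne propagules."""
    seen, queue = {source}, deque([source])
    while queue:
        node = queue.popleft()
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen - {source}

# Directionality matters: downstream islands receive migrants (and alleles)
# from the mainland, but not the reverse.
print(reachable("mainland", currents), reachable("island_c", currents))
```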

  9. Hybrid reduced order modeling for assembly calculations

    DOE PAGES

    Bang, Youngsuk; Abdel-Khalik, Hany S.; Jessee, Matthew A.; ...

    2015-08-14

    While the accuracy of assembly calculations has greatly improved due to the increase in computer power enabling more refined description of the phase space and use of more sophisticated numerical algorithms, the computational cost continues to increase, which limits the full utilization of their effectiveness for routine engineering analysis. Reduced order modeling is a mathematical vehicle that scales down the dimensionality of large-scale numerical problems to enable their repeated executions on small computing environments, often available to end users. This is done by capturing the most dominant underlying relationships between the model's inputs and outputs. Previous works demonstrated the use of reduced order modeling for a single-physics code, such as a radiation transport calculation. This paper extends those works to coupled code systems as currently employed in assembly calculations. Finally, numerical tests are conducted using realistic SCALE assembly models with resonance self-shielding, neutron transport, and nuclide transmutation/depletion models representing the components of the coupled code system.
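
    The core idea of capturing dominant input/output relationships in a low-dimensional basis can be sketched with a proper-orthogonal-decomposition (POD) style truncated SVD. The synthetic snapshot data below are illustrative; the paper's SCALE models are far richer.

```python
import numpy as np

# Minimal POD sketch: snapshots of a "large" model that secretly live in
# a 3-dimensional subspace, compressed via truncated SVD.
rng = np.random.default_rng(0)
modes_true = rng.standard_normal((200, 3))   # 3 hidden dominant modes
coeffs = rng.standard_normal((3, 50))
snapshots = modes_true @ coeffs              # 50 runs of a 200-dim model

U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
r = int(np.sum(s > 1e-10 * s[0]))            # effective rank = dominant modes
basis = U[:, :r]

# Project a new full-order state into the reduced space and back;
# reconstruction is (numerically) exact because the data are rank-3.
x = modes_true @ rng.standard_normal(3)
x_rec = basis @ (basis.T @ x)
print(r, np.linalg.norm(x - x_rec))
```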

  10. Computational Modeling in Structural Materials Processing

    NASA Technical Reports Server (NTRS)

    Meyyappan, Meyya; Arnold, James O. (Technical Monitor)

    1997-01-01

    High temperature materials such as silicon carbide, a variety of nitrides, and ceramic matrix composites find use in the aerospace, automotive, and machine tool industries and in high speed civil transport applications. Chemical vapor deposition (CVD) is widely used in processing such structural materials. Variations of CVD include deposition on substrates, coating of fibers, inside cavities and on complex objects, and infiltration within preforms, called chemical vapor infiltration (CVI). Our current knowledge of the process mechanisms, ability to optimize processes, and scale-up for large scale manufacturing is limited. In this regard, computational modeling of the processes is valuable since a validated model can be used as a design tool. The effort is similar to traditional chemically reacting flow modeling, with emphasis on multicomponent diffusion, thermal diffusion, large sets of homogeneous reactions, and surface chemistry. In the case of CVI, models for pore infiltration are needed. In the present talk, examples of SiC, nitride, and boron deposition from the author's past work will be used to illustrate the utility of computational process modeling.

  11. Computational Reconstruction of Pacemaking and Intrinsic Electroresponsiveness in Cerebellar Golgi Cells

    PubMed Central

    Solinas, Sergio; Forti, Lia; Cesana, Elisabetta; Mapelli, Jonathan; De Schutter, Erik; D'Angelo, Egidio

    2007-01-01

    The Golgi cells have been recently shown to beat regularly in vitro (Forti et al., 2006. J. Physiol. 574, 711–729). Four main currents were shown to be involved, namely a persistent sodium current (I Na-p), an h current (I h), an SK-type calcium-dependent potassium current (I K-AHP), and a slow M-like potassium current (I K-slow). Together with others, these ionic currents could also take part in different aspects of neuronal excitability, such as responses to depolarizing and hyperpolarizing current injection. However, the ionic mechanisms and their interactions remained largely hypothetical. In this work, we have investigated the mechanisms of Golgi cell excitability by developing a computational model. The model predicts that pacemaking is sustained by subthreshold oscillations tightly coupled to spikes. I Na-p and I K-slow emerged as the critical determinants of oscillations. I h also played a role by setting the oscillatory mechanism into the appropriate membrane potential range. I K-AHP, though taking part in the oscillation, appeared primarily involved in regulating the interspike interval (ISI) following spikes. The combination with other currents, in particular a resurgent sodium current (I Na-r) and an A-current (I K-A), allowed a precise regulation of response frequency and delay. These results provide a coherent reconstruction of the ionic mechanisms determining Golgi cell intrinsic electroresponsiveness and suggest important implications for cerebellar signal processing, which will be fully developed in a companion paper (Solinas et al., 2008. Front. Neurosci. 2:4). PMID:18946520
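
    The interplay of a fast depolarizing current and a slow restorative current that sustains pacemaking can be caricatured with the two-variable FitzHugh-Nagumo model. This is a generic relaxation-oscillator illustration, not the detailed conductance-based Golgi cell model of the paper.

```python
# FitzHugh-Nagumo caricature of pacemaking: a fast depolarizing variable
# opposed by a slow recovery variable yields a stable limit cycle.

def fhn(i_app=0.5, dt=0.1, steps=4000, a=0.7, b=0.8, eps=0.08):
    v, w, trace = -1.0, 1.0, []
    for _ in range(steps):
        dv = v - v ** 3 / 3.0 - w + i_app   # fast (depolarizing) variable
        dw = eps * (v + a - b * w)          # slow (recovery) variable
        v, w = v + dt * dv, w + dt * dw
        trace.append(v)
    return trace

# With sustained drive i_app, the system settles onto a limit cycle:
trace = fhn()[2000:]                        # discard the transient
print(max(trace) - min(trace))              # large swing => regular spiking
```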

  12. EEG-Based Quantification of Cortical Current Density and Dynamic Causal Connectivity Generalized across Subjects Performing BCI-Monitored Cognitive Tasks

    PubMed Central

    Courellis, Hristos; Mullen, Tim; Poizner, Howard; Cauwenberghs, Gert; Iversen, John R.

    2017-01-01

Quantification of dynamic causal interactions among brain regions constitutes an important component of conducting research and developing applications in experimental and translational neuroscience. Furthermore, cortical networks with dynamic causal connectivity in brain-computer interface (BCI) applications offer a more comprehensive view of brain states implicated in behavior than do individual brain regions. However, models of cortical network dynamics are difficult to generalize across subjects because current electroencephalography (EEG) signal analysis techniques are limited in their ability to reliably localize sources across subjects. We propose an algorithmic and computational framework for identifying cortical networks across subjects in which dynamic causal connectivity is modeled among user-selected cortical regions of interest (ROIs). We demonstrate the strength of the proposed framework using a “reach/saccade to spatial target” cognitive task performed by 10 right-handed individuals. Modeling of causal cortical interactions was accomplished through measurement of cortical activity using EEG, application of independent component clustering to identify cortical ROIs as network nodes, estimation of cortical current density using cortically constrained low resolution electromagnetic brain tomography (cLORETA), multivariate autoregressive (MVAR) modeling of representative cortical activity signals from each ROI, and quantification of the dynamic causal interaction among the identified ROIs using the Short-time direct Directed Transfer function (SdDTF). The resulting cortical network and the computed causal dynamics among its nodes exhibited physiologically plausible behavior, consistent with past results reported in the literature. This physiological plausibility of the results strengthens the framework's applicability in reliably capturing complex brain functionality, which is required by applications such as diagnostics and BCI. PMID:28566997
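
    The MVAR step of this pipeline can be sketched in a few lines: fit x_t = A x_{t-1} + e_t by least squares, where the off-diagonal entries of A carry the directed channel-to-channel influence that frequency-domain measures such as the SdDTF are built from. The data here are synthetic; this is not the authors' pipeline:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # First-order MVAR model x_t = A x_{t-1} + e_t on synthetic two-channel
    # data; A[1, 0] != 0 encodes a directed influence of channel 0 on channel 1.
    A_true = np.array([[0.5, 0.0],
                       [0.4, 0.3]])
    n = 20000
    x = np.zeros((n, 2))
    for t in range(1, n):
        x[t] = A_true @ x[t-1] + 0.1 * rng.standard_normal(2)

    # Ordinary least squares: regress x_t on x_{t-1}.
    X, Y = x[:-1], x[1:]
    A_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T
    ```

    Real applications fit higher model orders and then evaluate the coefficient matrices in the frequency domain; the recovered off-diagonal structure of A is the raw material for that step.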

  13. Turbomachinery computational fluid dynamics: asymptotes and paradigm shifts.

    PubMed

    Dawes, W N

    2007-10-15

This paper reviews the development of computational fluid dynamics (CFD) specifically for turbomachinery simulations and with a particular focus on application to problems with complex geometry. The review is structured by considering this development as a series of paradigm shifts, followed by asymptotes. The original S1-S2 blade-blade-throughflow model is briefly described, followed by the development of two-dimensional then three-dimensional blade-blade analysis. This in turn evolved from inviscid to viscous analysis and then from steady to unsteady flow simulations. This development trajectory led over a surprisingly small number of years to an accepted approach: a 'CFD orthodoxy'. A very important current area of intense interest and activity in turbomachinery simulation is in accounting for real geometry effects, not just in the secondary air and turbine cooling systems but also associated with the primary path. The requirements here are threefold: capturing and representing these geometries in a computer model; making rapid design changes to these complex geometries; and managing the very large associated computational models on PC clusters. Accordingly, the challenges in the application of the current CFD orthodoxy to complex geometries are described in some detail. The main aim of this paper is to argue that the current CFD orthodoxy is on a new asymptote and is not in fact suited for application to complex geometries and that a paradigm shift must be sought. In particular, the new paradigm must be geometry centric and inherently parallel without serial bottlenecks. The main contribution of this paper is to describe such a potential paradigm shift, inspired by the animation industry, based on a fundamental shift in perspective from explicit to implicit geometry and then illustrate this with a number of applications to turbomachinery.

  14. Efficacy of the SU(3) scheme for ab initio large-scale calculations beyond the lightest nuclei

    DOE PAGES

    Dytrych, T.; Maris, P.; Launey, K. D.; ...

    2016-06-22

We report on the computational characteristics of ab initio nuclear structure calculations in a symmetry-adapted no-core shell model (SA-NCSM) framework. We examine the computational complexity of the current implementation of the SA-NCSM approach, dubbed LSU3shell, by analyzing ab initio results for 6Li and 12C in large harmonic oscillator model spaces and SU(3)-selected subspaces. We demonstrate LSU3shell’s strong-scaling properties achieved with highly-parallel methods for computing the many-body matrix elements. Results compare favorably with complete model space calculations and significant memory savings are achieved in physically important applications. In particular, a well-chosen symmetry-adapted basis affords memory savings in calculations of states with a fixed total angular momentum in large model spaces while exactly preserving translational invariance.

  15. Computational Modeling of Cobalt-Based Water Oxidation: Current Status and Future Challenges

    PubMed Central

    Schilling, Mauro; Luber, Sandra

    2018-01-01

A lot of effort is nowadays put into the development of novel water oxidation catalysts. In this context, mechanistic studies are crucial in order to elucidate the reaction mechanisms governing this complex process and to derive new design paradigms and strategies for improving the stability and efficiency of those catalysts. This review is focused on recent theoretical mechanistic studies in the field of homogeneous cobalt-based water oxidation catalysts. In the first part, computational methodologies and protocols are summarized and evaluated on the basis of their applicability toward real catalytic or smaller model systems, whereby special emphasis is laid on the choice of an appropriate model system. In the second part, an overview of mechanistic studies is presented, from which conceptual guidelines are drawn on how to approach novel studies of catalysts and how to further develop the field of computational modeling of water oxidation reactions. PMID:29721491

  16. Computational Modeling of Cobalt-based Water Oxidation: Current Status and Future Challenges

    NASA Astrophysics Data System (ADS)

    Schilling, Mauro; Luber, Sandra

    2018-04-01

A lot of effort is nowadays put into the development of novel water oxidation catalysts. In this context, mechanistic studies are crucial in order to elucidate the reaction mechanisms governing this complex process and to derive new design paradigms and strategies for improving the stability and efficiency of those catalysts. This review is focused on recent theoretical mechanistic studies in the field of homogeneous cobalt-based water oxidation catalysts. In the first part, computational methodologies and protocols are summarized and evaluated on the basis of their applicability towards real catalytic or smaller model systems, whereby special emphasis is laid on the choice of an appropriate model system. In the second part, an overview of mechanistic studies is presented, from which conceptual guidelines are drawn on how to approach novel studies of catalysts and how to further develop the field of computational modeling of water oxidation reactions.

  17. Efficacy of the SU(3) scheme for ab initio large-scale calculations beyond the lightest nuclei

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dytrych, T.; Maris, Pieter; Launey, K. D.

    2016-06-09

We report on the computational characteristics of ab initio nuclear structure calculations in a symmetry-adapted no-core shell model (SA-NCSM) framework. We examine the computational complexity of the current implementation of the SA-NCSM approach, dubbed LSU3shell, by analyzing ab initio results for 6Li and 12C in large harmonic oscillator model spaces and SU(3)-selected subspaces. We demonstrate LSU3shell's strong-scaling properties achieved with highly-parallel methods for computing the many-body matrix elements. Results compare favorably with complete model space calculations and significant memory savings are achieved in physically important applications. In particular, a well-chosen symmetry-adapted basis affords memory savings in calculations of states with a fixed total angular momentum in large model spaces while exactly preserving translational invariance.

  18. Fluid–Structure Interaction Analysis of Papillary Muscle Forces Using a Comprehensive Mitral Valve Model with 3D Chordal Structure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Toma, Milan; Jensen, Morten Ø.; Einstein, Daniel R.

    2015-07-17

Numerical models of native heart valves are being used to study valve biomechanics to aid design and development of repair procedures and replacement devices. These models have evolved from simple two-dimensional approximations to complex three-dimensional, fully coupled fluid-structure interaction (FSI) systems. Such simulations are useful for predicting the mechanical and hemodynamic loading on implanted valve devices. A current challenge for improving the accuracy of these predictions is choosing and implementing modeling boundary conditions. In order to address this challenge, we are utilizing an advanced in-vitro system to validate FSI conditions for the mitral valve system. Explanted ovine mitral valves were mounted in an in vitro setup, and structural data for the mitral valve was acquired with μCT. Experimental data from the in-vitro ovine mitral valve system were used to validate the computational model. As the valve closes, the hemodynamic data, high speed leaflet dynamics, and force vectors from the in-vitro system were compared to the results of the FSI simulation computational model. The total force of 2.6 N per papillary muscle is matched by the computational model. In vitro and in vivo force measurements are important in validating and adjusting material parameters in computational models. The simulations can then be used to answer questions that are otherwise not possible to investigate experimentally. This work is important to maximize the validity of computational models of not just the mitral valve, but any biomechanical aspect using computational simulation in designing medical devices.

  19. Fluid-Structure Interaction Analysis of Papillary Muscle Forces Using a Comprehensive Mitral Valve Model with 3D Chordal Structure.

    PubMed

    Toma, Milan; Jensen, Morten Ø; Einstein, Daniel R; Yoganathan, Ajit P; Cochran, Richard P; Kunzelman, Karyn S

    2016-04-01

Numerical models of native heart valves are being used to study valve biomechanics to aid design and development of repair procedures and replacement devices. These models have evolved from simple two-dimensional approximations to complex three-dimensional, fully coupled fluid-structure interaction (FSI) systems. Such simulations are useful for predicting the mechanical and hemodynamic loading on implanted valve devices. A current challenge for improving the accuracy of these predictions is choosing and implementing modeling boundary conditions. In order to address this challenge, we are utilizing an advanced in vitro system to validate FSI conditions for the mitral valve system. Explanted ovine mitral valves were mounted in an in vitro setup, and structural data for the mitral valve was acquired with μCT. Experimental data from the in vitro ovine mitral valve system were used to validate the computational model. As the valve closes, the hemodynamic data, high speed leaflet dynamics, and force vectors from the in vitro system were compared to the results of the FSI simulation computational model. The total force of 2.6 N per papillary muscle is matched by the computational model. In vitro and in vivo force measurements enable validating and adjusting material parameters to improve the accuracy of computational models. The simulations can then be used to answer questions that are otherwise not possible to investigate experimentally. This work is important to maximize the validity of computational models of not just the mitral valve, but any biomechanical aspect using computational simulation in designing medical devices.

  20. Parallel computing for automated model calibration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burke, John S.; Danielson, Gary R.; Schulz, Douglas A.

    2002-07-29

Natural resources model calibration is a significant burden on computing and staff resources in modeling efforts. Most assessments must consider multiple calibration objectives (for example, magnitude and timing of stream flow peak). An automated calibration process that allows real-time updating of data/models is needed, allowing scientists to focus effort on improving models. We are in the process of building a fully featured multi-objective calibration tool capable of processing multiple models cheaply and efficiently using null cycle computing. Our parallel processing and calibration software routines have been written generically, but our focus has been on natural resources model calibration. So far, the natural resources models have been friendly to parallel calibration efforts in that they require no inter-process communication, need only a small amount of input data, and output only a small amount of statistical information for each calibration run. A typical auto-calibration run might involve running a model 10,000 times with a variety of input parameters, producing summary statistical output. In the past, model calibration has been done against individual models for each data set. The individual model runs are relatively fast, ranging from seconds to minutes. The process was run on a single computer using a simple iterative process. We have completed two auto-calibration prototypes and are currently designing a more feature-rich tool. Our prototypes have focused on running the calibration in a distributed-computing, cross-platform environment. They allow incorporation of 'smart' calibration parameter generation (using artificial intelligence processing techniques). Null cycle computing similar to SETI@home has also been a focus of our efforts. This paper details the design of the latest prototype and discusses our plans for the next revision of the software.
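
    The embarrassingly parallel pattern described here (independent runs, small inputs, small statistical outputs) can be sketched as follows; the toy model y = a·x and a local thread pool stand in for a real natural-resources model and a distributed cluster:

    ```python
    import numpy as np
    from concurrent.futures import ThreadPoolExecutor

    # Each calibration run is independent: evaluate one candidate parameter
    # value against observations and return only a summary statistic.
    x_obs = np.linspace(0.0, 10.0, 50)
    y_obs = 2.5 * x_obs          # synthetic observations; true parameter a = 2.5

    def run_model(a):
        residuals = a * x_obs - y_obs
        return a, float(np.sum(residuals ** 2))   # (parameter, sum of squared errors)

    candidates = np.linspace(0.0, 5.0, 101)
    with ThreadPoolExecutor(max_workers=4) as pool:
        results = list(pool.map(run_model, candidates))

    best_a, best_sse = min(results, key=lambda r: r[1])
    ```

    Because no inter-process communication is required, swapping the thread pool for a process pool or a null-cycle volunteer-computing backend changes only the executor, not the calibration logic.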

  1. Computational model of chromosome aberration yield induced by high- and low-LET radiation exposures.

    PubMed

    Ponomarev, Artem L; George, Kerry; Cucinotta, Francis A

    2012-06-01

    We present a computational model for calculating the yield of radiation-induced chromosomal aberrations in human cells based on a stochastic Monte Carlo approach and calibrated using the relative frequencies and distributions of chromosomal aberrations reported in the literature. A previously developed DNA-fragmentation model for high- and low-LET radiation called the NASARadiationTrackImage model was enhanced to simulate a stochastic process of the formation of chromosomal aberrations from DNA fragments. The current version of the model gives predictions of the yields and sizes of translocations, dicentrics, rings, and more complex-type aberrations formed in the G(0)/G(1) cell cycle phase during the first cell division after irradiation. As the model can predict smaller-sized deletions and rings (<3 Mbp) that are below the resolution limits of current cytogenetic analysis techniques, we present predictions of hypothesized small deletions that may be produced as a byproduct of properly repaired DNA double-strand breaks (DSB) by nonhomologous end-joining. Additionally, the model was used to scale chromosomal exchanges in two or three chromosomes that were obtained from whole-chromosome FISH painting analysis techniques to whole-genome equivalent values.
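
    The stochastic formation of exchange-type aberrations from DNA fragments can be caricatured with a toy Monte Carlo: place double-strand breaks at random, rejoin free ends pairwise at random, and score joins between different chromosomes. This sketches the general idea only, not the NASARadiationTrackImage model:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Toy aberration-yield Monte Carlo. Each DSB produces two free ends;
    # random pairwise rejoining of ends from different chromosomes is
    # scored as a crude stand-in for exchange-type aberrations.
    def score_exchanges(n_breaks=10, n_chromosomes=46):
        chrom_of_break = rng.integers(0, n_chromosomes, n_breaks)
        ends = np.repeat(chrom_of_break, 2)   # two free ends per DSB
        rng.shuffle(ends)
        pairs = ends.reshape(-1, 2)           # random pairwise rejoining
        return int(np.sum(pairs[:, 0] != pairs[:, 1]))

    yields = [score_exchanges() for _ in range(1000)]
    ```

    The real model distinguishes translocations, dicentrics, rings, and complex aberrations by tracking fragment positions and centromeres; this toy only counts interchromosomal joins.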

  2. A machine learning approach for real-time modelling of tissue deformation in image-guided neurosurgery.

    PubMed

    Tonutti, Michele; Gras, Gauthier; Yang, Guang-Zhong

    2017-07-01

Accurate reconstruction and visualisation of soft tissue deformation in real time is crucial in image-guided surgery, particularly in augmented reality (AR) applications. Current deformation models are characterised by a trade-off between accuracy and computational speed. We propose an approach to derive a patient-specific deformation model for brain pathologies by combining the results of pre-computed finite element method (FEM) simulations with machine learning algorithms. The models can be computed instantaneously and offer an accuracy comparable to FEM models. A brain tumour is used as the subject of the deformation model. Load-driven FEM simulations are performed on a tetrahedral brain mesh afflicted by a tumour. Forces of varying magnitudes, positions, and inclination angles are applied onto the brain's surface. Two machine learning algorithms, artificial neural networks (ANNs) and support vector regression (SVR), are employed to derive a model that can predict the resulting deformation for each node in the tumour's mesh. The tumour deformation can be predicted in real time given relevant information about the geometry of the anatomy and the load, all of which can be measured instantly during a surgical operation. The models can predict the position of the nodes with errors below 0.3 mm, below the general threshold of surgical accuracy and suitable for high-fidelity AR systems. The SVR models perform better than the ANNs, with positional errors for SVR models reaching under 0.2 mm. The results represent an improvement over existing deformation models for real-time applications, providing smaller errors and high patient-specificity. The proposed approach addresses the current needs of image-guided surgical systems and has the potential to be employed to model the deformation of any type of soft tissue. Copyright © 2017 Elsevier B.V. All rights reserved.
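
    The surrogate-modeling idea (train a fast regressor on precomputed FEM results, then predict instantaneously) can be sketched with RBF kernel ridge regression as a plain-numpy stand-in for SVR; the "FEM" function, inputs, and parameters below are all hypothetical:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Placeholder for an expensive offline FEM run: scalar nodal displacement
    # as a smooth function of a hypothetical (force, angle) load.
    def fem(load):
        f, theta = load
        return f * np.cos(theta) * 0.01

    X_train = rng.uniform([0.0, 0.0], [10.0, 1.5], size=(200, 2))
    y_train = np.array([fem(p) for p in X_train])

    def rbf(A, B, gamma=0.5):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    # Kernel ridge fit: solve (K + lambda*I) alpha = y once, offline.
    K = rbf(X_train, X_train)
    alpha = np.linalg.solve(K + 1e-6 * np.eye(len(K)), y_train)

    def predict(X):
        # A single matrix product at run time, like the paper's surrogate.
        return rbf(X, X_train) @ alpha

    X_test = rng.uniform([0.0, 0.0], [10.0, 1.5], size=(50, 2))
    err = np.abs(predict(X_test) - np.array([fem(p) for p in X_test]))
    ```

    The expensive simulations are confined to the training phase; at run time the surrogate reduces to one kernel evaluation per query, which is what makes real-time AR use plausible.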

  3. Dynamics of aircraft antiskid braking systems. [conducted at the Langley aircraft landing loads and traction facility

    NASA Technical Reports Server (NTRS)

    Tanner, J. A.; Stubbs, S. M.; Dreher, R. C.; Smith, E. G.

    1982-01-01

    A computer study was performed to assess the accuracy of three brake pressure-torque mathematical models. The investigation utilized one main gear wheel, brake, and tire assembly of a McDonnell Douglas DC-9 series 10 airplane. The investigation indicates that the performance of aircraft antiskid braking systems is strongly influenced by tire characteristics, dynamic response of the antiskid control valve, and pressure-torque response of the brake. The computer study employed an average torque error criterion to assess the accuracy of the models. The results indicate that a variable nonlinear spring with hysteresis memory function models the pressure-torque response of the brake more accurately than currently used models.

  4. SIVEH: numerical computing simulation of wireless energy-harvesting sensor nodes.

    PubMed

    Sanchez, Antonio; Blanc, Sara; Climent, Salvador; Yuste, Pedro; Ors, Rafael

    2013-09-04

The paper presents a numerical energy harvesting model for sensor nodes, SIVEH (Simulator I-V for EH), based on I-V hardware tracking. I-V tracking is demonstrated to be more accurate than traditional energy modeling techniques when some of the components present different power dissipation at either different operating voltages or drawn currents. SIVEH numerical computing allows fast simulation of long periods of time (days, weeks, months or years) using real solar radiation curves. Moreover, SIVEH modeling has been enhanced with sleep time rate dynamic adjustment, while seeking energy-neutral operation. This paper presents the model description, a functional verification and a critical comparison with the classic energy approach.
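
    The core claim, that tracking the I-V characteristic beats a fixed-power assumption when the drawn current depends on supply voltage, can be illustrated numerically; the I-V curve and voltage profile below are invented for illustration and are not SIVEH's models:

    ```python
    import numpy as np

    # Hypothetical I-V characteristic of a sensor node: the drawn current
    # rises with supply voltage (values are illustrative only, in amps).
    def node_current(v):
        return 1e-3 + 2e-3 * (v - 2.0)

    dt = 1.0                                 # time step (s)
    v_supply = np.linspace(3.0, 2.2, 3600)   # storage voltage sagging over an hour

    # I-V tracking: energy follows the instantaneous operating point.
    e_iv = np.sum(v_supply * node_current(v_supply)) * dt
    # Classic approach: assume the nominal fixed power draw throughout.
    e_const = np.sum(3.0 * node_current(3.0) * np.ones_like(v_supply)) * dt
    ```

    The fixed-power estimate overstates consumption here because it ignores the falling operating point, which is the kind of error I-V tracking removes.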

  5. SIVEH: Numerical Computing Simulation of Wireless Energy-Harvesting Sensor Nodes

    PubMed Central

    Sanchez, Antonio; Blanc, Sara; Climent, Salvador; Yuste, Pedro; Ors, Rafael

    2013-01-01

    The paper presents a numerical energy harvesting model for sensor nodes, SIVEH (Simulator I–V for EH), based on I–V hardware tracking. I–V tracking is demonstrated to be more accurate than traditional energy modeling techniques when some of the components present different power dissipation at either different operating voltages or drawn currents. SIVEH numerical computing allows fast simulation of long periods of time—days, weeks, months or years—using real solar radiation curves. Moreover, SIVEH modeling has been enhanced with sleep time rate dynamic adjustment, while seeking energy-neutral operation. This paper presents the model description, a functional verification and a critical comparison with the classic energy approach. PMID:24008287

  6. Laboratory modeling and analysis of aircraft-lightning interactions

    NASA Technical Reports Server (NTRS)

    Turner, C. D.; Trost, T. F.

    1982-01-01

    Modeling studies of the interaction of a delta wing aircraft with direct lightning strikes were carried out using an approximate scale model of an F-106B. The model, which is three feet in length, is subjected to direct injection of fast current pulses supplied by wires, which simulate the lightning channel and are attached at various locations on the model. Measurements are made of the resulting transient electromagnetic fields using time derivative sensors. The sensor outputs are sampled and digitized by computer. The noise level is reduced by averaging the sensor output from ten input pulses at each sample time. Computer analysis of the measured fields includes Fourier transformation and the computation of transfer functions for the model. Prony analysis is also used to determine the natural frequencies of the model. Comparisons of model natural frequencies extracted by Prony analysis with those for in flight direct strike data usually show lower damping in the in flight case. This is indicative of either a lightning channel with a higher impedance than the wires on the model, only one attachment point, or short streamers instead of a long channel.

  7. An Analogue VLSI Implementation of the Meddis Inner Hair Cell Model

    NASA Astrophysics Data System (ADS)

    McEwan, Alistair; van Schaik, André

    2003-12-01

    The Meddis inner hair cell model is a widely accepted, but computationally intensive computer model of mammalian inner hair cell function. We have produced an analogue VLSI implementation of this model that operates in real time in the current domain by using translinear and log-domain circuits. The circuit has been fabricated on a chip and tested against the Meddis model for (a) rate level functions for onset and steady-state response, (b) recovery after masking, (c) additivity, (d) two-component adaptation, (e) phase locking, (f) recovery of spontaneous activity, and (g) computational efficiency. The advantage of this circuit, over other electronic inner hair cell models, is its nearly exact implementation of the Meddis model which can be tuned to behave similarly to the biological inner hair cell. This has important implications on our ability to simulate the auditory system in real time. Furthermore, the technique of mapping a mathematical model of first-order differential equations to a circuit of log-domain filters allows us to implement real-time neuromorphic signal processors for a host of models using the same approach.

  8. Prediction of the structure of fuel sprays in gas turbine combustors

    NASA Technical Reports Server (NTRS)

    Shuen, J. S.

    1985-01-01

    The structure of fuel sprays in a combustion chamber is theoretically investigated using computer models of current interest. Three representative spray models are considered: (1) a locally homogeneous flow (LHF) model, which assumes infinitely fast interphase transport rates; (2) a deterministic separated flow (DSF) model, which considers finite rates of interphase transport but ignores effects of droplet/turbulence interactions; and (3) a stochastic separated flow (SSF) model, which considers droplet/turbulence interactions using random sampling for turbulence properties in conjunction with random-walk computations for droplet motion and transport. Two flow conditions are studied to investigate the influence of swirl on droplet life histories and the effects of droplet/turbulence interactions on flow properties. Comparison of computed results with the experimental data show that general features of the flow structure can be predicted with reasonable accuracy using the two separated flow models. In contrast, the LHF model overpredicts the rate of development of the flow. While the SSF model provides better agreement with measurements than the DSF model, definitive evaluation of the significance of droplet/turbulence interaction is not achieved due to uncertainties in the spray initial conditions.

  9. A 2D finite element study on the role of material properties on eddy current losses in soft magnetic composites

    NASA Astrophysics Data System (ADS)

    Ren, Xiaotao; Corcolle, Romain; Daniel, Laurent

    2016-02-01

The use of soft magnetic composites (SMCs) in electrical engineering applications is growing. SMCs provide an effective alternative to laminated steels because they exhibit a high permeability with low eddy current losses. Losses are a critical feature in the design of electrical machines, and it is necessary to evaluate the role of microstructure and constitutive properties of SMCs during the predesign stage. In this paper we propose a simplified finite element approach to compute eddy current losses in these materials. The computations make it possible to quantify the role of the exciting source and material properties on eddy current losses. This analysis can later be used in the development of homogenization models for SMCs. Contribution to the topical issue "Numelec 2015 - Elected submissions", edited by Adel Razek

  10. Computer modeling of current collection by the CHARGE-2 mother payload

    NASA Technical Reports Server (NTRS)

    Mandell, M. J.; Lilley, J. R., Jr.; Katz, I.; Neubert, T.; Myers, Neil B.

    1990-01-01

    The three-dimensional computer codes NASCAP/LEO and POLAR have been used to calculate current collection by the mother payload of the CHARGE-2 rocket under conditions of positive and negative potential up to several hundred volts. For negative bias (ion collection), the calculations lie about 25 percent above the data, indicating that the ions were less dense, colder, or heavier than the input parameters. For positive bias (electron collection), NASCAP/LEO and POLAR calculations show similar agreement with the measurements at the highest altitudes. This agreement indicates that the current is classically magnetically limited, even during electron beam emission. However, the calculated values fall well below the data at lower altitudes. It is suggested that beam-plasma-neutral interactions are responsible for the high values of collected current at altitudes below 240 km.

  11. NASA's Use of Human Behavior Models for Concept Development and Evaluation

    NASA Technical Reports Server (NTRS)

    Gore, Brian F.

    2012-01-01

Overview of NASA's use of computational approaches and methods to support research goals in human performance modeling, with a focus on examples of the methods used in Code TH and TI at NASA Ames, followed by an in-depth review of MIDAS' current FAA work.

  12. Hierarchical Bayesian Models of Subtask Learning

    ERIC Educational Resources Information Center

    Anglim, Jeromy; Wynton, Sarah K. A.

    2015-01-01

    The current study used Bayesian hierarchical methods to challenge and extend previous work on subtask learning consistency. A general model of individual-level subtask learning was proposed focusing on power and exponential functions with constraints to test for inconsistency. To study subtask learning, we developed a novel computer-based booking…

  13. Current Density and Continuity in Discretized Models

    ERIC Educational Resources Information Center

    Boykin, Timothy B.; Luisier, Mathieu; Klimeck, Gerhard

    2010-01-01

    Discrete approaches have long been used in numerical modelling of physical systems in both research and teaching. Discrete versions of the Schrodinger equation employing either one or several basis functions per mesh point are often used by senior undergraduates and beginning graduate students in computational physics projects. In studying…

  14. Augmenting Literacy: The Role of Expertise in Digital Writing

    ERIC Educational Resources Information Center

    Van Ittersum, Derek

    2011-01-01

    This essay presents a model of reflective use of writing technologies, one that provides a means of more fully exploiting the possibilities of these tools for transforming writing activity. Derived from the work of computer designer Douglas Engelbart, the "bootstrapping" model of reflective use extends current arguments in the field…

  15. Modelling Digital Thunder

    ERIC Educational Resources Information Center

    Blanco, Francesco; La Rocca, Paola; Petta, Catia; Riggi, Francesco

    2009-01-01

    An educational model simulation of the sound produced by lightning in the sky has been employed to demonstrate realistic signatures of thunder and its connection to the particular structure of the lightning channel. Algorithms used in the past have been revisited and implemented, making use of current computer techniques. The basic properties of…

  16. Predicting the regeneration of Appalachian hardwoods: adapting the REGEN model for the Appalachian Plateau

    Treesearch

    Lance A. Vickers; Thomas R. Fox; David L. Loftis; David A. Boucugnani

    2013-01-01

    The difficulty of achieving reliable oak (Quercus spp.) regeneration is well documented. Application of silvicultural techniques to facilitate oak regeneration largely depends on current regeneration potential. A computer model to assess regeneration potential based on existing advanced reproduction in Appalachian hardwoods was developed by David...

  17. Assessment of Spacecraft Systems Integration Using the Electric Propulsion Interactions Code (EPIC)

    NASA Technical Reports Server (NTRS)

    Mikellides, Ioannis G.; Kuharski, Robert A.; Mandell, Myron J.; Gardner, Barbara M.; Kauffman, William J. (Technical Monitor)

    2002-01-01

    SAIC is currently developing the Electric Propulsion Interactions Code 'EPIC', an interactive computer tool that allows the construction of a 3-D spacecraft model, and the assessment of interactions between its subsystems and the plume from an electric thruster. EPIC unites different computer tools to address the complexity associated with the interaction processes. This paper describes the overall architecture and capability of EPIC including the physics and algorithms that comprise its various components. Results from selected modeling efforts of different spacecraft-thruster systems are also presented.

  18. Mesoscopic modelling and simulation of soft matter.

    PubMed

    Schiller, Ulf D; Krüger, Timm; Henrich, Oliver

    2017-12-20

    The deformability of soft condensed matter often requires modelling of hydrodynamical aspects to gain quantitative understanding. This, however, requires specialised methods that can resolve the multiscale nature of soft matter systems. We review a number of the most popular simulation methods that have emerged, such as Langevin dynamics, dissipative particle dynamics, multi-particle collision dynamics, sometimes also referred to as stochastic rotation dynamics, and the lattice-Boltzmann method. We conclude this review with a short glance at current compute architectures for high-performance computing and community codes for soft matter simulation.

  19. Markov Jump-Linear Performance Models for Recoverable Flight Control Computers

    NASA Technical Reports Server (NTRS)

    Zhang, Hong; Gray, W. Steven; Gonzalez, Oscar R.

    2004-01-01

    Single event upsets in digital flight control hardware induced by atmospheric neutrons can reduce system performance and possibly introduce a safety hazard. One method currently under investigation to help mitigate the effects of these upsets is NASA Langley s Recoverable Computer System. In this paper, a Markov jump-linear model is developed for a recoverable flight control system, which will be validated using data from future experiments with simulated and real neutron environments. The method of tracking error analysis and the plan for the experiments are also described.

  20. Identifying influential data points in hydrological model calibration and their impact on streamflow predictions

    NASA Astrophysics Data System (ADS)

    Wright, David; Thyer, Mark; Westra, Seth

    2015-04-01

    Highly influential data points are those that have a disproportionately large impact on model performance, parameters and predictions. However, in current hydrological modelling practice the relative influence of individual data points on hydrological model calibration is not commonly evaluated. This presentation illustrates and evaluates several influence diagnostic tools that hydrological modellers can use to assess the relative influence of data. The feasibility and importance of including influence detection diagnostics as a standard tool in hydrological model calibration is discussed. Two classes of influence diagnostics are evaluated: (1) computationally demanding numerical "case deletion" diagnostics; and (2) computationally efficient analytical diagnostics, based on Cook's distance. These diagnostics are compared against hydrologically oriented diagnostics that describe changes in the model parameters (measured through the Mahalanobis distance), performance (objective function displacement) and predictions (mean and maximum streamflow). These influence diagnostics are applied to two case studies: a stage/discharge rating curve model, and a conceptual rainfall-runoff model (GR4J). Removing a single data point from the calibration resulted in differences to mean flow predictions of up to 6% for the rating curve model, and differences to mean and maximum flow predictions of up to 10% and 17%, respectively, for the hydrological model. When using the Nash-Sutcliffe efficiency in calibration, the computationally cheaper Cook's distance metrics produce similar results to the case-deletion metrics at a fraction of the computational cost. However, Cook's distance is adapted from linear regression with inherent assumptions about the data and is therefore less flexible than case deletion. Influential point detection diagnostics show great potential to improve current hydrological modelling practices by identifying highly influential data points. 
The findings of this study establish the feasibility and importance of including influential point detection diagnostics as a standard tool in hydrological model calibration. They provide the hydrologist with important information on whether model calibration is susceptible to a small number of highly influential data points. This enables the hydrologist to make a more informed decision on whether to (1) remove or retain the calibration data; (2) adjust the calibration strategy and/or hydrological model to reduce the susceptibility of model predictions to a small number of influential observations.
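
The two diagnostic classes compared in this record can be contrasted in a rough sketch for ordinary least squares, the setting Cook's distance was originally derived for. The data and model below are invented for illustration and are not from the study.

```python
import numpy as np

def cooks_distance(X, y):
    """Analytical influence diagnostic for ordinary least squares."""
    n, p = X.shape
    h = np.diag(X @ np.linalg.inv(X.T @ X) @ X.T)      # leverages
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ beta
    s2 = resid @ resid / (n - p)                        # residual variance
    return resid ** 2 / (p * s2) * h / (1.0 - h) ** 2

def case_deletion_influence(X, y):
    """Numerical diagnostic: refit with each point removed and measure the
    mean shift in fitted values (one refit per observation, hence costly)."""
    n = X.shape[0]
    beta_full = np.linalg.lstsq(X, y, rcond=None)[0]
    shifts = np.empty(n)
    for i in range(n):
        keep = np.arange(n) != i
        beta_i = np.linalg.lstsq(X[keep], y[keep], rcond=None)[0]
        shifts[i] = np.abs(X @ (beta_i - beta_full)).mean()
    return shifts
```

On well-behaved data both diagnostics typically rank the same points as most influential; the study's caveat is that Cook's distance inherits linear-regression assumptions that a rating curve or rainfall-runoff model may violate.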

  1. Particle based plasma simulation for an ion engine discharge chamber

    NASA Astrophysics Data System (ADS)

    Mahalingam, Sudhakar

    Design of the next generation of ion engines can benefit from detailed computer simulations of the plasma in the discharge chamber. In this work a complete particle based approach has been taken to model the discharge chamber plasma. This is the first discharge chamber model that does not make simplifying continuum assumptions about the particle motion. Because of the long mean free paths of the particles in the discharge chamber, continuum models are questionable. The PIC-MCC model developed in this work tracks the following particles: neutrals, singly charged ions, doubly charged ions, secondary electrons, and primary electrons. The trajectories of these particles are determined using the Newton-Lorentz equation of motion, including the effects of magnetic and electric fields. Particle collisions are determined using an MCC statistical technique. A large number of collision processes and particle-wall interactions are included in the model. The magnetic fields produced by the permanent magnets are determined using Maxwell's equations. The electric fields are determined using an approximate input electric field coupled with a dynamic determination of the electric fields caused by the charged particles. In this work, inclusion of the dynamic electric field calculation is made possible by using an inflated plasma permittivity value in the Poisson solver. This allows dynamic electric field calculation with minimal computational requirements in terms of both computer memory and run time. In addition, a number of other numerical procedures such as parallel processing have been implemented to shorten the computational time. The primary results are those modeling the discharge chamber of NASA's NSTAR ion engine at its full operating power. Convergence of numerical results such as the total number of particles inside the discharge chamber, average energy of the plasma particles, discharge current, beam current and beam efficiency is obtained. 
Steady state results for the particle number density distributions and particle loss rates to the walls are presented. Comparisons of numerical results with experimental measurements such as currents and the particle number density distributions are made. Results from a parametric study and from an alternative magnetic field design are also given.
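
The Newton-Lorentz equation mentioned in this abstract is typically integrated in PIC codes with an explicit scheme such as the Boris push; the sketch below shows that standard scheme in generic form, not the authors' implementation.

```python
import numpy as np

def boris_push(x, v, E, B, q_over_m, dt):
    """One step of the standard Boris scheme for the Newton-Lorentz equation
    dv/dt = (q/m)(E + v x B): half electric kick, magnetic rotation,
    second half kick, then a position update."""
    v_minus = v + 0.5 * q_over_m * dt * E
    t = 0.5 * q_over_m * dt * B                    # rotation vector
    s = 2.0 * t / (1.0 + t @ t)
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)
    v_new = v_plus + 0.5 * q_over_m * dt * E
    return x + v_new * dt, v_new
```

With E = 0 the rotation step conserves kinetic energy exactly, which is the property that makes this family of integrators attractive for long plasma runs.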

  2. Evaluation of the discrete vortex wake cross flow model using vector computers. Part 1: Theory and application

    NASA Technical Reports Server (NTRS)

    1979-01-01

    The objective of the current program was to modify a discrete vortex wake method to efficiently compute the aerodynamic forces and moments on high-fineness-ratio bodies (f approximately 10.0). The approach is to increase computational efficiency by structuring the program to take advantage of new computer vector software and by developing new algorithms when vector software cannot be used efficiently. An efficient program was written and substantial savings were achieved. Several test cases were run for fineness ratios up to f = 16.0 and angles of attack up to 50 degrees.

  3. Space mapping method for the design of passive shields

    NASA Astrophysics Data System (ADS)

    Sergeant, Peter; Dupré, Luc; Melkebeek, Jan

    2006-04-01

    The aim of the paper is to find the optimal geometry of a passive shield for the reduction of the magnetic stray field of an axisymmetric induction heater. For the optimization, a space mapping algorithm is used that requires two models. The first is an accurate model with a high computational effort as it contains finite element models. The second is less accurate, but it has a low computational effort as it uses an analytical model: the shield is replaced by a number of mutually coupled coils. The currents in the shield are found by solving an electrical circuit. Space mapping combines both models to obtain the optimal passive shield fast and accurately. The presented optimization technique is compared with gradient, simplex, and genetic algorithms.
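
The two-model idea described in this abstract can be sketched as a toy 1-D aggressive space mapping loop: the cheap coarse model is optimized freely, and each iteration spends only one evaluation of the expensive fine model. The models and numbers below are invented for illustration.

```python
import numpy as np

# Toy models: the expensive "fine" model is a shifted copy of the cheap
# "coarse" model, so the ideal mapping between parameter spaces is a
# simple translation.
coarse = lambda x: x ** 2
fine = lambda x: (x + 0.7) ** 2      # pretend each call is a costly FE solve

def space_mapping(fine, coarse, target, x0, n_iter=8):
    """Aggressive space mapping in 1-D: optimize the coarse model freely,
    and use one fine evaluation per iteration for parameter extraction."""
    grid = np.linspace(0.0, 5.0, 10001)
    z_star = grid[np.argmin(np.abs(coarse(grid) - target))]   # coarse optimum
    x = x0
    for _ in range(n_iter):
        f_x = fine(x)                                         # the only fine call
        p = grid[np.argmin(np.abs(coarse(grid) - f_x))]       # extraction
        x = x - (p - z_star)                                  # ASM update
    return x

x_opt = space_mapping(fine, coarse, target=0.25, x0=0.0)
```

Here the loop reaches fine(x_opt) ≈ 0.25 after one or two fine-model calls, where a direct gradient or genetic optimizer would spend many.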

  4. Coalescent: an open-science framework for importance sampling in coalescent theory.

    PubMed

    Tewari, Susanta; Spouge, John L

    2015-01-01

    Background. In coalescent theory, computer programs often use importance sampling to calculate likelihoods and other statistical quantities. An importance sampling scheme can exploit human intuition to improve statistical efficiency of computations, but unfortunately, in the absence of general computer frameworks on importance sampling, researchers often struggle to translate new sampling schemes computationally or benchmark against different schemes, in a manner that is reliable and maintainable. Moreover, most studies use computer programs lacking a convenient user interface or the flexibility to meet the current demands of open science. In particular, current computer frameworks can only evaluate the efficiency of a single importance sampling scheme or compare the efficiencies of different schemes in an ad hoc manner. Results. We have designed a general framework (http://coalescent.sourceforge.net; language: Java; License: GPLv3) for importance sampling that computes likelihoods under the standard neutral coalescent model of a single, well-mixed population of constant size over time, following the infinite sites model of mutation. The framework models the necessary core concepts, comes integrated with several data sets of varying size, implements the standard competing proposals, and integrates tightly with our previous framework for calculating exact probabilities. For a given dataset, it computes the likelihood and provides the maximum likelihood estimate of the mutation parameter. Well-known benchmarks in the coalescent literature validate the accuracy of the framework. The framework provides an intuitive user interface with minimal clutter. For performance, the framework switches automatically to modern multicore hardware, if available. It runs on three major platforms (Windows, Mac and Linux). Extensive tests and coverage make the framework reliable and maintainable. Conclusions. 
In coalescent theory, many studies of computational efficiency consider only effective sample size. Here, we evaluate proposals in the coalescent literature, and discover that the order of efficiency among the three importance sampling schemes changes when one considers running time as well as effective sample size. We also describe a computational technique called "just-in-time delegation" that improves the trade-off between running time and precision by constructing improved importance sampling schemes from existing ones. Thus, our systems approach is a potential solution to the "2^8 programs problem" highlighted by Felsenstein, because it provides the flexibility to include or exclude various features of similar coalescent models or importance sampling schemes.
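
The core computation such frameworks benchmark can be sketched generically: draw from a proposal, weight each draw by the target/proposal density ratio, average the weights for the likelihood, and report the Kish effective sample size. The Gaussian toy problem below is illustrative only, not the coalescent model itself.

```python
import numpy as np

rng = np.random.default_rng(1)

def is_likelihood(target, proposal_pdf, draw, n):
    """Importance sampling estimate of an integral (e.g. a likelihood or
    normalizing constant), plus the Kish effective sample size (ESS)."""
    xs = draw(n)
    w = target(xs) / proposal_pdf(xs)          # importance weights
    return w.mean(), w.sum() ** 2 / (w ** 2).sum()

# Toy problem: integrate the unnormalized Gaussian exp(-x^2/2), whose exact
# value is sqrt(2*pi), using an N(0, 2^2) proposal.
target = lambda x: np.exp(-0.5 * x ** 2)
proposal_pdf = lambda x: np.exp(-x ** 2 / 8.0) / np.sqrt(8.0 * np.pi)
draw = lambda n: rng.normal(0.0, 2.0, n)

Z, ess = is_likelihood(target, proposal_pdf, draw, 200_000)
```

The trade-off the abstract highlights is visible here: a proposal with a higher ESS per draw can still lose on wall-clock time if each draw is expensive to generate.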

  5. Towards anatomic scale agent-based modeling with a massively parallel spatially explicit general-purpose model of enteric tissue (SEGMEnT_HPC).

    PubMed

    Cockrell, Robert Chase; Christley, Scott; Chang, Eugene; An, Gary

    2015-01-01

    Perhaps the greatest challenge currently facing the biomedical research community is the ability to integrate highly detailed cellular and molecular mechanisms to represent clinical disease states as a pathway to engineer effective therapeutics. This is particularly evident in the representation of organ-level pathophysiology in terms of abnormal tissue structure, which, through histology, remains a mainstay in disease diagnosis and staging. As such, being able to generate anatomic scale simulations is a highly desirable goal. While computational limitations have previously constrained the size and scope of multi-scale computational models, advances in the capacity and availability of high-performance computing (HPC) resources have greatly expanded the ability of computational models of biological systems to achieve anatomic, clinically relevant scale. Diseases of the intestinal tract are prime examples of pathophysiological processes that manifest at multiple scales of spatial resolution, with structural abnormalities present at the microscopic, macroscopic and organ levels. In this paper, we describe a novel, massively parallel computational model of the gut, the Spatially Explicit General-purpose Model of Enteric Tissue_HPC (SEGMEnT_HPC), which extends an existing model of the gut epithelium, SEGMEnT, in order to create cell-for-cell anatomic scale simulations. We present an example implementation of SEGMEnT_HPC that simulates the pathogenesis of ileal pouchitis, an important clinical entity that affects patients following remedial surgery for ulcerative colitis.

  6. Human Modeling for Ground Processing Human Factors Engineering Analysis

    NASA Technical Reports Server (NTRS)

    Stambolian, Damon B.; Lawrence, Brad A.; Stelges, Katrine S.; Steady, Marie-Jeanne O.; Ridgwell, Lora C.; Mills, Robert E.; Henderson, Gena; Tran, Donald; Barth, Tim

    2011-01-01

    There have been many advancements and accomplishments over the last few years using human modeling for human factors engineering analysis for the design of spacecraft. The key methods used for this are motion capture and computer generated human models. The focus of this paper is to explain the human modeling currently used at Kennedy Space Center (KSC) and the future plans for human modeling in future spacecraft designs.

  7. Numerical simulation of the flow about the F-18 HARV at high angle of attack

    NASA Technical Reports Server (NTRS)

    Murman, Scott M.

    1994-01-01

    This report summarizes research done over the past two years as part of NASA Grant NCC 2-729. This research has been aimed at validating numerical methods for computing the flow about the complete F-18 HARV at alpha = 30 deg and alpha = 45 deg. At 30 deg angle of attack, the flow about the F-18 is dominated by the formation, and subsequent breakdown, of strong vortices over the wing leading-edge extensions (LEX). As the angle of attack is increased to alpha = 45 deg, the fuselage forebody of the F-18 contains significant laminar and transitional regions which are not present at alpha = 30 deg. Further, the flow over the LEX at alpha = 45 deg is dominated by an unsteady shedding in time, rather than strong coherent vortices. This complex physics, combined with the complex geometry of a full aircraft configuration, provides a challenge for current computational fluid dynamics (CFD) techniques. The following sections present the numerical method and grid generation scheme that was used, a review of prior research done to numerically model the F-18 HARV, and a discussion of the current research. The current research is broken into two main topics: the effect of engine-inlet mass-flow rate on the F-18 vortex breakdown position, and the results using a refined F-18 computational model to compute the flow at alpha = 30 deg and alpha = 45 deg.

  8. Numerical simulation of the flow about the F-18 HARV at high angle of attack

    NASA Technical Reports Server (NTRS)

    Murman, Scott M.

    1995-01-01

    This research has been aimed at validating numerical methods for computing the flow about the complete F-18 HARV at alpha = 30 deg and alpha = 45 deg. At 30 deg angle of attack, the flow about the F-18 is dominated by the formation, and subsequent breakdown, of strong vortices over the wing leading-edge extensions (LEX). As the angle of attack is increased to alpha = 45 deg, the fuselage forebody of the F-18 contains significant laminar and transitional regions which are not present at alpha = 30 deg. Further, the flow over the LEX at alpha = 45 deg is dominated by an unsteady shedding in time, rather than strong coherent vortices. This complex physics, combined with the complex geometry of a full-aircraft configuration, provides a challenge for current computational fluid dynamics (CFD) techniques. The following sections present the numerical method and grid generation scheme that was used, a review of prior research done to numerically model the F-18 HARV, and a discussion of the current research. The current research is broken into three main topics; the effect of engine-inlet mass-flow rate on the F-18 vortex breakdown position, the results using a refined F-18 computational model to compute the flow at alpha = 30 deg and alpha = 45 deg, and research done using the simplified geometry of an ogive-cylinder configuration to investigate the physics of unsteady shear-layer shedding. The last section briefly summarizes the discussion.

  9. A feasibility study on porting the community land model onto accelerators using OpenACC

    DOE PAGES

    Wang, Dali; Wu, Wei; Winkler, Frank; ...

    2014-01-01

    As environmental models (such as the Accelerated Climate Model for Energy (ACME), the Parallel Reactive Flow and Transport Model (PFLOTRAN), the Arctic Terrestrial Simulator (ATS), etc.) become more and more complicated, we face enormous challenges in porting those applications onto hybrid computing architectures. OpenACC appears to be a very promising technology; therefore, we have conducted a feasibility analysis of porting the Community Land Model (CLM), a terrestrial ecosystem model within the Community Earth System Model (CESM). Specifically, we used an automatic function testing platform to extract a small computing kernel out of CLM, then applied this kernel to the actual CLM dataflow procedure, and investigated the strategy of data parallelization and the benefit of data movement provided by the current implementation of OpenACC. Even though it is a non-intensive kernel, on a single 16-core computing node the performance (based on the actual computation time using one GPU) of the OpenACC implementation is 2.3 times faster than that of the OpenMP implementation using a single OpenMP thread, but 2.8 times slower than the OpenMP implementation using 16 threads. On multiple nodes, the MPI+OpenACC implementation demonstrated very good scalability on up to 128 GPUs on 128 computing nodes. This study also provides useful information for looking into the potential benefits of the "deep copy" capability and "routine" feature of the OpenACC standard. In conclusion, we believe that our experience with the environmental model CLM can benefit many other scientific research programs interested in porting their large-scale scientific codes onto high-end computers empowered by hybrid computing architectures using OpenACC.

  10. Fast Computation on the Modern Battlefield

    DTIC Science & Technology

    2015-04-01

    the performance of offloading systems in current and future scenarios. The modularity of this model allows system designers to replace model...goals were simplicity and modularity . We wanted the model to not necessarily answer every question for every scenario, but rather expose easy to...acquisitions for future systems. Again, because of the modularity of the model, it is possible for designers to substitute the most accurate value for

  11. Use of Displacement Damage Dose in an Engineering Model of GaAs Solar Cell Radiation Damage

    NASA Technical Reports Server (NTRS)

    Morton, T. L.; Chock, R.; Long, K. J.; Bailey, S.; Messenger, S. R.; Walters, R. J.; Summers, G. P.

    2005-01-01

    Current methods for calculating radiation damage to solar cells are well documented in the GaAs Solar Cell Radiation Handbook (JPL 96-9). An alternative, the displacement damage dose (D(sub d)) method, has been developed by Summers et al. This method is currently being implemented in the SAVANT computer program.
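
In outline, the displacement damage dose method collapses particle type and energy into a single dose variable, D_d = NIEL x fluence, and fits one characteristic degradation curve against it. A minimal sketch follows; the fit constants are placeholders for illustration, not handbook values for any real cell.

```python
import numpy as np

def displacement_damage_dose(fluence_cm2, niel_mev_cm2_g):
    """D_d [MeV/g] = particle fluence [cm^-2] x NIEL [MeV cm^2/g]."""
    return fluence_cm2 * niel_mev_cm2_g

def remaining_factor(dd, c=0.1, dx=1e9):
    """Characteristic degradation curve P/P0 = 1 - C*log10(1 + D_d/D_x).
    C and D_x are fit parameters; the defaults here are placeholders."""
    return 1.0 - c * np.log10(1.0 + dd / dx)
```

Because electron and proton data collapse onto one curve in D_d, a single ground-test fit can, in principle, predict degradation for an arbitrary mission spectrum.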

  12. Human performance cognitive-behavioral modeling: a benefit for occupational safety.

    PubMed

    Gore, Brian F

    2002-01-01

    Human Performance Modeling (HPM) is a computer-aided job analysis software methodology used to generate predictions of complex human-automation integration and system flow patterns with the goal of improving operator and system safety. The use of HPM tools has recently been increasing due to reductions in computational cost, augmentations in the tools' fidelity, and usefulness in the generated output. An examination of an Air Man-machine Integration Design and Analysis System (Air MIDAS) model evaluating complex human-automation integration currently underway at NASA Ames Research Center will highlight the importance to occupational safety of considering both cognitive and physical aspects of performance when researching human error.

  13. BACT Simulation User Guide (Version 7.0)

    NASA Technical Reports Server (NTRS)

    Waszak, Martin R.

    1997-01-01

    This report documents the structure and operation of a simulation model of the Benchmark Active Control Technology (BACT) Wind-Tunnel Model. The BACT system was designed, built, and tested at NASA Langley Research Center as part of the Benchmark Models Program and was developed to perform wind-tunnel experiments to obtain benchmark quality data to validate computational fluid dynamics and computational aeroelasticity codes, to verify the accuracy of current aeroservoelasticity design and analysis tools, and to provide an active controls testbed for evaluating new and innovative control algorithms for flutter suppression and gust load alleviation. The BACT system has been especially valuable as a control system testbed.

  14. Human performance cognitive-behavioral modeling: a benefit for occupational safety

    NASA Technical Reports Server (NTRS)

    Gore, Brian F.

    2002-01-01

    Human Performance Modeling (HPM) is a computer-aided job analysis software methodology used to generate predictions of complex human-automation integration and system flow patterns with the goal of improving operator and system safety. The use of HPM tools has recently been increasing due to reductions in computational cost, augmentations in the tools' fidelity, and usefulness in the generated output. An examination of an Air Man-machine Integration Design and Analysis System (Air MIDAS) model evaluating complex human-automation integration currently underway at NASA Ames Research Center will highlight the importance to occupational safety of considering both cognitive and physical aspects of performance when researching human error.

  15. Optimizing ion channel models using a parallel genetic algorithm on graphical processors.

    PubMed

    Ben-Shalom, Roy; Aviv, Amit; Razon, Benjamin; Korngreen, Alon

    2012-01-01

    We have recently shown that we can semi-automatically constrain models of voltage-gated ion channels by combining a stochastic search algorithm with ionic currents measured using multiple voltage-clamp protocols. Although numerically successful, this approach is highly demanding computationally, with optimization on a high-performance Linux cluster typically lasting several days. To solve this computational bottleneck we converted our optimization algorithm to work on a graphical processing unit (GPU) using NVIDIA's CUDA. Parallelizing the process on a Fermi graphic computing engine from NVIDIA increased the speed ∼180 times over an application running on an 80-node Linux cluster, considerably reducing simulation times. This application allows users to optimize models of ion channel kinetics on a single, inexpensive desktop "supercomputer," greatly reducing the time and cost of building models relevant to neuronal physiology. We also demonstrate that the point of algorithm parallelization is crucial to its performance. We substantially reduced computing time by solving the ODEs (ordinary differential equations) in a way that massively reduces memory transfers to and from the GPU. This approach may be applied to speed up other data-intensive applications requiring iterative solutions of ODEs. Copyright © 2012 Elsevier B.V. All rights reserved.
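
The optimization described above can be caricatured on a CPU in a few lines: a toy channel-activation model fitted by a minimal real-coded genetic algorithm (truncation selection, arithmetic crossover, Gaussian mutation). This is an illustrative stand-in, not the authors' CUDA code; the per-generation cost loop is what a GPU version evaluates in parallel.

```python
import numpy as np

rng = np.random.default_rng(0)

def model_current(params, t):
    """Toy activation current under a voltage step: I(t) = g*(1 - exp(-t/tau))."""
    g, tau = params
    return g * (1.0 - np.exp(-t / tau))

def fit_ga(target_trace, t, pop_size=60, n_gen=120):
    """Minimal GA: truncation selection, arithmetic crossover, Gaussian mutation."""
    pop = rng.uniform([0.1, 0.1], [10.0, 10.0], size=(pop_size, 2))
    for _ in range(n_gen):
        costs = np.array([np.mean((model_current(p, t) - target_trace) ** 2)
                          for p in pop])
        elite = pop[np.argsort(costs)[: pop_size // 2]]        # keep best half
        a = elite[rng.integers(len(elite), size=pop_size)]
        b = elite[rng.integers(len(elite), size=pop_size)]
        pop = np.clip(0.5 * (a + b) + rng.normal(0.0, 0.05, (pop_size, 2)),
                      0.01, 10.0)
    costs = np.array([np.mean((model_current(p, t) - target_trace) ** 2)
                      for p in pop])
    return pop[np.argmin(costs)]

t = np.linspace(0.0, 10.0, 100)
true_params = np.array([3.0, 2.0])           # conductance, time constant
best = fit_ga(model_current(true_params, t), t)
```

The real problem replaces the toy model with ODE-based channel kinetics and multiple voltage-clamp protocols, which is why moving the ODE solves onto the GPU, rather than only the cost evaluation, mattered so much for memory traffic.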

  16. Operational forecasting with the subgrid technique on the Elbe Estuary

    NASA Astrophysics Data System (ADS)

    Sehili, Aissa

    2017-04-01

    Modern remote sensing technologies can deliver very detailed land surface height data that should be considered for more accurate simulations. In that case, and even if some compromise is made with regard to grid resolution of an unstructured grid, simulations still will require large grids which can be computationally very demanding. The subgrid technique, first published by Casulli (2009), is based on the idea of making use of the available detailed subgrid bathymetric information while performing computations on relatively coarse grids permitting large time steps. Consequently, accuracy and efficiency are drastically enhanced if compared to the classical linear method, where the underlying bathymetry is solely discretized by the computational grid. The algorithm guarantees rigorous mass conservation and nonnegative water depths for any time step size. Computational grid-cells are permitted to be wet, partially wet or dry and no drying threshold is needed. The subgrid technique is used in an operational forecast model for water level, current velocity, salinity and temperature of the Elbe estuary in Germany. Comparison is performed with the comparatively highly resolved classical unstructured grid model UnTRIM. The daily meteorological forcing data are delivered by the German Weather Service (DWD) using the ICON-EU model. Open boundary data are delivered by the coastal model BSHcmod of the German Federal Maritime and Hydrographic Agency (BSH). Comparison of predicted water levels between classical and subgrid model shows a very good agreement. The speedup in computational performance due to the use of the subgrid technique is about a factor of 20. A typical daily forecast can be carried out within less than 10 minutes on standard PC-like hardware. The model is capable of permanently delivering highly resolved temporal and spatial information on water level, current velocity, salinity and temperature for the whole estuary. 
The model also offers the possibility to recalculate any previous situation. This can be helpful, for instance, to reconstruct the context in which a certain event, such as an accident, occurred. In addition to measurement, the model can be used to improve navigability by adjusting the tidal transit schedule for container vessels that depend on the tide to approach or leave the port of Hamburg.
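
The central subgrid idea, computing on a coarse cell while integrating depth over the detailed subgrid bathymetry so that cells can be wet, partially wet or dry with no drying threshold, can be sketched as follows (the geometry is invented for the example):

```python
import numpy as np

def cell_volume(eta, z_sub, d_area):
    """Water volume in one coarse cell, integrating depth over the subgrid
    bathymetry z_sub (bottom elevations). Nonnegative by construction."""
    return np.maximum(eta - z_sub, 0.0).sum() * d_area

def wet_fraction(eta, z_sub):
    """Fraction of the coarse cell lying below the free surface eta."""
    return np.mean(z_sub < eta)

# One coarse cell over a uniformly sloping bottom, resolved by a 10x10 subgrid.
z = np.linspace(0.0, 1.0, 100).reshape(10, 10)
```

In the classical linear method the whole cell flips between wet and dry at once; here the volume-surface relation V(eta) varies smoothly as the cell partially floods, which is what lets a coarse grid with large time steps retain accuracy.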

  17. a Physical Parameterization of Snow Albedo for Use in Climate Models.

    NASA Astrophysics Data System (ADS)

    Marshall, Susan Elaine

    The albedo of a natural snowcover is highly variable, ranging from 90 percent for clean, new snow to 30 percent for old, dirty snow. This range in albedo represents a difference in surface energy absorption of 10 to 70 percent of incident solar radiation. Most general circulation models (GCMs) fail to calculate the surface snow albedo accurately, yet the results of these models are sensitive to the assumed value of the snow albedo. This study replaces the current simple empirical parameterizations of snow albedo with a physically-based parameterization which is accurate (within +/- 3% of theoretical estimates) yet efficient to compute. The parameterization is designed as a FORTRAN subroutine (called SNOALB) which can be easily implemented into model code. The subroutine requires less than 0.02 seconds of computer time (CRAY X-MP) per call and adds only one new parameter to the model calculations, the snow grain size. The snow grain size can be calculated according to one of the two methods offered in this thesis. All other input variables to the subroutine are available from a climate model. The subroutine calculates a visible, near-infrared and solar (0.2-5 μm) snow albedo and offers a choice of two wavelengths (0.7 and 0.9 μm) at which the solar spectrum is separated into the visible and near-infrared components. The parameterization is incorporated into the National Center for Atmospheric Research (NCAR) Community Climate Model, version 1 (CCM1), and the results of a five-year, seasonal cycle, fixed hydrology experiment are compared to the current model snow albedo parameterization. The results show the SNOALB albedos to be comparable to the old CCM1 snow albedos for current climate conditions, with generally higher visible and lower near-infrared snow albedos using the new subroutine. 
However, this parameterization offers a greater predictability for climate change experiments outside the range of current snow conditions because it is physically-based and not tuned to current empirical results.
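
The grain-size dependence at the heart of such a parameterization (visible albedo falling weakly, near-infrared albedo falling strongly with grain size) can be illustrated in a few lines. The coefficients below are invented for the sketch and are not the SNOALB fit.

```python
import numpy as np

def snow_albedo(grain_radius_um, vis_fraction=0.5):
    """Illustrative two-band snow albedo versus grain size: albedo decreases
    roughly with the square root of grain radius, weakly in the visible band
    and strongly in the near-infrared. All coefficients are hypothetical."""
    r = np.sqrt(grain_radius_um)
    alb_vis = np.clip(0.98 - 0.002 * r, 0.0, 1.0)   # visible, weak dependence
    alb_nir = np.clip(0.75 - 0.015 * r, 0.0, 1.0)   # near-IR, strong dependence
    return vis_fraction * alb_vis + (1.0 - vis_fraction) * alb_nir
```

A physically based form like this extrapolates outside the range of current snow conditions, which is the predictability argument the thesis makes for climate-change experiments.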

  18. Cyberinfrastructure to Support Collaborative and Reproducible Computational Hydrologic Modeling

    NASA Astrophysics Data System (ADS)

    Goodall, J. L.; Castronova, A. M.; Bandaragoda, C.; Morsy, M. M.; Sadler, J. M.; Essawy, B.; Tarboton, D. G.; Malik, T.; Nijssen, B.; Clark, M. P.; Liu, Y.; Wang, S. W.

    2017-12-01

    Creating cyberinfrastructure to support reproducibility of computational hydrologic models is an important research challenge. Addressing this challenge requires open and reusable code and data with machine and human readable metadata, organized in ways that allow others to replicate results and verify published findings. Specific digital objects that must be tracked for reproducible computational hydrologic modeling include (1) raw initial datasets, (2) data processing scripts used to clean and organize the data, (3) processed model inputs, (4) model results, and (5) the model code with an itemization of all software dependencies and computational requirements. HydroShare is a cyberinfrastructure under active development designed to help users store, share, and publish digital research products in order to improve reproducibility in computational hydrology, with an architecture supporting hydrologic-specific resource metadata. Researchers can upload data required for modeling, add hydrology-specific metadata to these resources, and use the data directly within HydroShare.org for collaborative modeling using tools like CyberGIS, Sciunit-CLI, and JupyterHub that have been integrated with HydroShare to run models using notebooks, Docker containers, and cloud resources. Current research aims to implement the Structure For Unifying Multiple Modeling Alternatives (SUMMA) hydrologic model within HydroShare to support hypothesis-driven hydrologic modeling while also taking advantage of the HydroShare cyberinfrastructure. The goal of this integration is to create the cyberinfrastructure that supports hypothesis-driven model experimentation, education, and training efforts by lowering barriers to entry, reducing the time spent on informatics technology and software development, and supporting collaborative research within and across research groups.

  19. Parallel Computing for Brain Simulation.

    PubMed

    Pastur-Romay, L A; Porto-Pazos, A B; Cedron, F; Pazos, A

    2017-01-01

    The human brain is the most complex system in the known universe and therefore one of its greatest mysteries. It provides human beings with extraordinary abilities, yet how and why most of these abilities are produced is still not understood. For decades, researchers have been trying to make computers reproduce these abilities, focusing both on understanding the nervous system and on processing data more efficiently than before. Their aim is to make computers process information similarly to the brain. Important technological developments and vast multidisciplinary projects have made it possible to create the first simulations with a number of neurons similar to that of a human brain. This paper presents an up-to-date review of the main research projects that are trying to simulate and/or emulate the human brain. They employ different types of computational models using parallel computing: digital models, analog models and hybrid models. This review includes the current applications of these works, as well as future trends. It covers works that pursue advanced progress in Neuroscience as well as others that seek new discoveries in Computer Science (neuromorphic hardware, machine learning techniques). Their most outstanding characteristics are summarized and the latest advances and future plans are presented. In addition, this review points out the importance of considering not only neurons: computational models of the brain should also include glial cells, given the proven importance of astrocytes in information processing. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.

  20. Classroom Tips.

    ERIC Educational Resources Information Center

    Crain, Cheryl

    1994-01-01

    Presents six teaching ideas from teachers in Foothills Schools, Alberta, Canada. Includes suggested activities on local government, computer uses in social studies, Canadian history, current events, and world studies. Provides models of a passport application, passports, and visas. (CFR)
