Sample records for the query "computer simulations illustrate"

  1. Computer Simulation of the Neuronal Action Potential.

    ERIC Educational Resources Information Center

    Solomon, Paul R.; And Others

    1988-01-01

    A series of computer simulations of the neuronal resting and action potentials is described. Discusses the use of simulations to overcome the difficulties of traditional instruction, such as blackboard illustration, which can only depict these events at one point in time. Describes the system requirements necessary to run the simulations.…

  2. Monte Carlo Computer Simulation of a Rainbow.

    ERIC Educational Resources Information Center

    Olson, Donald; And Others

    1990-01-01

    Discusses making a computer-simulated rainbow using principles of physics, such as reflection and refraction. Provides a BASIC program for the simulation. Appends a program illustrating the effects of dispersion of the colors. (YP)
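
    The record's program is in BASIC; as a hedged sketch of the same physics in Python (assumptions: refractive index 1.333 and a single internal reflection, i.e., the primary bow only), Monte Carlo rays concentrate near the familiar rainbow angle:

      import numpy as np

      n = 1.333                                  # refractive index of water
      b = np.sqrt(np.random.rand(1_000_000))     # impact parameters, uniform over the drop's cross-section
      theta_i = np.arcsin(b)                     # angle of incidence
      theta_r = np.arcsin(b / n)                 # angle of refraction (Snell's law)
      D = 2 * theta_i - 4 * theta_r + np.pi      # total deviation after one internal reflection
      angle = 180.0 - np.degrees(D)              # scattering angle from the antisolar point
      hist, edges = np.histogram(angle, bins=90, range=(0.0, 90.0))
      print("rays pile up near", edges[hist.argmax()], "degrees")   # the primary bow, near 42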

  3. Chaos in the Classroom.

    ERIC Educational Resources Information Center

    Jackett, Dwane

    1990-01-01

    Described is a science activity which illustrates the principle of uncertainty using a computer simulation of bacterial reproduction. Procedures and results are discussed. Several illustrations of results are provided. The availability of a computer program is noted. (CW)

  4. A Primer on Simulation and Gaming.

    ERIC Educational Resources Information Center

    Barton, Richard F.

    In a primer intended for the administrative professions, for the behavioral sciences, and for education, simulation and its various aspects are defined, illustrated, and explained. Man-model simulation, man-computer simulation, all-computer simulation, and analysis are discussed as techniques for studying object systems (parts of the "real…

  5. Computers in Biological Education: Simulation Approaches. Genetics and Evolution. CAL Research Group Technical Report No. 13.

    ERIC Educational Resources Information Center

    Murphy, P. J.

    Three examples of genetics and evolution simulation concerning Mendelian inheritance, genetic mapping, and natural selection are used to illustrate the use of simulations in modeling scientific/natural processes. First described is the HERED series, which illustrates such phenomena as incomplete dominance, multiple alleles, lethal alleles,…

  6. Computer Simulations as an Integral Part of Intermediate Macroeconomics.

    ERIC Educational Resources Information Center

    Millerd, Frank W.; Robertson, Alastair R.

    1987-01-01

    Describes the development of two interactive computer simulations which were fully integrated with other course materials. The simulations illustrate the effects of various real and monetary "demand shocks" on aggregate income, interest rates, and components of spending and economic output. Includes an evaluation of the simulations'…

  7. Simulation Insights Using "R"

    ERIC Educational Resources Information Center

    Kostadinov, Boyan

    2013-01-01

    This article attempts to introduce the reader to computational thinking and solving problems involving randomness. The main technique being employed is the Monte Carlo method, using the freely available software "R for Statistical Computing." The author illustrates the computer simulation approach by focusing on several problems of…
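
    The article works in R; the same computational-thinking exercise carries over directly to Python. Since the record's list of problems is truncated, the birthday-coincidence probability below is our stand-in example, not necessarily one of the author's:

      import numpy as np

      rng = np.random.default_rng(0)
      trials = 100_000
      # P(at least two of 30 people share a birthday), by straight simulation
      hits = sum(len(np.unique(rng.integers(0, 365, size=30))) < 30
                 for _ in range(trials))
      print(hits / trials)        # ~0.71 (exact value 0.7063)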

  8. Teaching by Simulation with Personal Computers.

    ERIC Educational Resources Information Center

    Randall, James E.

    1978-01-01

    Describes the use of a small digital computer to simulate a peripheral nerve demonstration in which the action potential responses to pairs of stimuli are used to illustrate the properties of excitable membranes. (Author/MA)

  9. Evaluation of Visual Computer Simulator for Computer Architecture Education

    ERIC Educational Resources Information Center

    Imai, Yoshiro; Imai, Masatoshi; Moritoh, Yoshio

    2013-01-01

    This paper presents a trial evaluation, conducted in 2009-2011, of a visual computer simulator developed to serve simultaneously as both an instruction facility and a learning tool. It illustrates an example of Computer Architecture education for university students and usage of an e-Learning tool for Assembly Programming in order to…

  10. Computing the apparent centroid of radar targets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, C.E.

    1996-12-31

    A high-frequency multibounce radar scattering code was used as a simulation platform for demonstrating an algorithm to compute the apparent radar centroid (ARC) of specific radar targets. To illustrate this simulation process, several target models were used. Simulation results for a sphere model were used to determine the errors of approximation associated with the simulation, verifying the process. The severity of glint-induced tracking errors was also illustrated using a model of an F-15 aircraft. It was shown, in a deterministic manner, that the ARC of a target can fall well outside its physical extent. Finally, the apparent radar centroid simulation, based on a ray casting procedure, is well suited for use on most massively parallel computing platforms and could lead to the development of a near real-time radar tracking simulation for applications such as endgame fuzing, survivability, and vulnerability analyses using specific radar targets and fuze algorithms.
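
    The report's multibounce ray-casting code is not reproduced here, but the headline effect (an ARC outside the physical extent) already appears in the textbook two-scatterer glint model; the sketch below is that simplified model, with a made-up separation and amplitude ratio:

      import numpy as np

      # Two point scatterers at -L/2 and +L/2, amplitude ratio rho,
      # relative carrier phase theta; arc is the apparent radar centroid.
      L, rho = 1.0, 0.9
      theta = np.linspace(0.0, 2.0 * np.pi, 9)
      arc = (L / 2) * (rho**2 - 1) / (1 + rho**2 + 2 * rho * np.cos(theta))
      print(arc)   # near theta = pi, |arc| >> L/2: the centroid leaves the target body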

  11. Simulation and control of a 20 kHz spacecraft power system

    NASA Technical Reports Server (NTRS)

    Wasynczuk, O.; Krause, P. C.

    1988-01-01

    A detailed computer representation of four Mapham inverters connected in a series-parallel arrangement has been implemented. System performance is illustrated by computer traces for the four Mapham inverters connected to a Litz cable with parallel resistance and dc receiver loads at the receiving end of the transmission cable. Methods of voltage control and load sharing between the inverters are demonstrated. Also, the detailed computer representation is used to design and to demonstrate the advantages of a feed-forward voltage control strategy. It is illustrated that, with a computer simulation of this type, the performance and control of spacecraft power systems may be investigated with relative ease.

  12. COED Transactions, Vol. X, No. 10, October 1978. Simulation of a Sampled-Data System on a Hybrid Computer.

    ERIC Educational Resources Information Center

    Mitchell, Eugene E., Ed.

    The simulation of a sampled-data system on a full parallel hybrid computer is described. The sampled-data system simulated illustrates proportional-integral-derivative (PID) discrete control of a continuous second-order process representing a stirred tank. The stirred tank is simulated using continuous analog components, while PID…

  13. Simulating complex intracellular processes using object-oriented computational modelling.

    PubMed

    Johnson, Colin G; Goldman, Jacki P; Gullick, William J

    2004-11-01

    The aim of this paper is to give an overview of computer modelling and simulation in cellular biology, in particular as applied to complex biochemical processes within the cell. This is illustrated by the use of the techniques of object-oriented modelling, where the computer is used to construct abstractions of objects in the domain being modelled, and these objects then interact within the computer to simulate the system and allow emergent properties to be observed. The paper also discusses the role of computer simulation in understanding complexity in biological systems, and the kinds of information which can be obtained about biology via simulation.

  14. Space-filling designs for computer experiments: A review

    DOE PAGES

    Joseph, V. Roshan

    2016-01-29

    Improving the quality of a product or process using a computer simulator is a much less expensive option than real physical testing. However, simulation using computationally intensive computer models can be time consuming, and therefore directly performing the optimization on the computer simulator can be infeasible. Experimental design and statistical modeling techniques can be used to overcome this problem. This article reviews experimental designs known as space-filling designs that are suitable for computer simulations. In the review, special emphasis is given to a recently developed space-filling design called the maximum projection design. Furthermore, its advantages are illustrated using a simulation conducted for optimizing a milling process.
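
    The review's featured construction is the maximum projection (MaxPro) design; as a simpler space-filling illustration under the same Latin-hypercube constraint, a maximin design can be picked out by random restarts (the run counts here are arbitrary):

      import numpy as np
      from scipy.spatial.distance import pdist

      def latin_hypercube(n, d, rng):
          # one point per row/column stratum, jittered within its cell
          return (np.argsort(rng.random((d, n)), axis=1).T + rng.random((n, d))) / n

      def maximin_lhd(n, d, tries=200, seed=0):
          # keep the candidate whose smallest pairwise distance is largest
          rng = np.random.default_rng(seed)
          return max((latin_hypercube(n, d, rng) for _ in range(tries)),
                     key=lambda x: pdist(x).min())

      design = maximin_lhd(n=20, d=3)   # 20 runs for a 3-input simulator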

  15. Space-filling designs for computer experiments: A review

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Joseph, V. Roshan

    Improving the quality of a product or process using a computer simulator is a much less expensive option than real physical testing. However, simulation using computationally intensive computer models can be time consuming, and therefore directly performing the optimization on the computer simulator can be infeasible. Experimental design and statistical modeling techniques can be used to overcome this problem. This article reviews experimental designs known as space-filling designs that are suitable for computer simulations. In the review, special emphasis is given to a recently developed space-filling design called the maximum projection design. Furthermore, its advantages are illustrated using a simulation conducted for optimizing a milling process.

  16. Particle simulation of plasmas and stellar systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tajima, T.; Clark, A.; Craddock, G.G.

    1985-04-01

    A computational technique is introduced which allows the student and researcher an opportunity to observe the physical behavior of a class of many-body systems. A series of examples is offered which illustrates the diversity of problems that may be studied using particle simulation. These simulations were in fact assigned as homework in a course on computational physics.

  17. Computer Simulation as an Aid for Management of an Information System.

    ERIC Educational Resources Information Center

    Simmonds, W. H.; And Others

    The aim of this study was to develop methods, based upon computer simulation, of designing information systems and illustrate the use of these methods by application to an information service. The method developed is based upon Monte Carlo and discrete event simulation techniques and is described in an earlier report - Sira report R412 Organizing…

  18. Intricacies of modern supercomputing illustrated with recent advances in simulations of strongly correlated electron systems

    NASA Astrophysics Data System (ADS)

    Schulthess, Thomas C.

    2013-03-01

    The continued thousand-fold improvement in sustained application performance per decade on modern supercomputers keeps opening new opportunities for scientific simulations. But supercomputers have become very complex machines, built with thousands or tens of thousands of complex nodes consisting of multiple CPU cores or, most recently, a combination of CPU and GPU processors. Efficient simulations on such high-end computing systems require tailored algorithms that optimally map numerical methods to particular architectures. These intricacies will be illustrated with simulations of strongly correlated electron systems, where the development of quantum cluster methods, Monte Carlo techniques, as well as their optimal implementation by means of algorithms with improved data locality and high arithmetic density have gone hand in hand with evolving computer architectures. The present work would not have been possible without continued access to computing resources at the National Center for Computational Science of Oak Ridge National Laboratory, which is funded by the Facilities Division of the Office of Advanced Scientific Computing Research, and the Swiss National Supercomputing Center (CSCS) that is funded by ETH Zurich.

  19. Paper simulation techniques in user requirements analysis for interactive computer systems

    NASA Technical Reports Server (NTRS)

    Ramsey, H. R.; Atwood, M. E.; Willoughby, J. K.

    1979-01-01

    This paper describes the use of a technique called 'paper simulation' in the analysis of user requirements for interactive computer systems. In a paper simulation, the user solves problems with the aid of a 'computer', as in normal man-in-the-loop simulation. In this procedure, though, the computer does not exist, but is simulated by the experimenters. This allows simulated problem solving early in the design effort, and allows the properties and degree of structure of the system and its dialogue to be varied. The technique, and a method of analyzing the results, are illustrated with examples from a recent paper simulation exercise involving a Space Shuttle flight design task.

  20. Arrhythmic risk biomarkers for the assessment of drug cardiotoxicity: from experiments to computer simulations

    PubMed Central

    Corrias, A.; Jie, X.; Romero, L.; Bishop, M. J.; Bernabeu, M.; Pueyo, E.; Rodriguez, B.

    2010-01-01

    In this paper, we illustrate how advanced computational modelling and simulation can be used to investigate drug-induced effects on cardiac electrophysiology and on specific biomarkers of pro-arrhythmic risk. To do so, we first perform a thorough literature review of proposed arrhythmic risk biomarkers from the ionic to the electrocardiogram levels. The review highlights the variety of proposed biomarkers, the complexity of the mechanisms of drug-induced pro-arrhythmia and the existence of significant animal species differences in drug-induced effects on cardiac electrophysiology. Predicting drug-induced pro-arrhythmic risk solely using experiments is challenging both preclinically and clinically, as attested by the rise in the cost of releasing new compounds to the market. Computational modelling and simulation has significantly contributed to the understanding of cardiac electrophysiology and arrhythmias over the last 40 years. In the second part of this paper, we illustrate how state-of-the-art open source computational modelling and simulation tools can be used to simulate multi-scale effects of drug-induced ion channel block in ventricular electrophysiology at the cellular, tissue and whole ventricular levels for different animal species. We believe that the use of computational modelling and simulation in combination with experimental techniques could be a powerful tool for the assessment of drug safety pharmacology. PMID:20478918

  1. Performance evaluation using SYSTID time domain simulation. [computer-aid design and analysis for communication systems

    NASA Technical Reports Server (NTRS)

    Tranter, W. H.; Ziemer, R. E.; Fashano, M. J.

    1975-01-01

    This paper reviews the SYSTID technique for performance evaluation of communication systems using time-domain computer simulation. An example program illustrates the language. The inclusion of both Gaussian and impulse noise models makes accurate simulation possible in a wide variety of environments. A very flexible postprocessor makes possible accurate and efficient performance evaluation.

  2. The SIMs Meet ESL Incorporating Authentic Computer Simulation Games into the Language Classroom

    ERIC Educational Resources Information Center

    Miller, Megan; Hegelheimer, Volker

    2006-01-01

    Despite their motivational appeal to learners, innovative and technologically advanced computer simulation games targeting native English speakers frequently remain beyond the competence of ESL learners as independent didactic tools. Guided by Chapelle's (2001) criteria for determining CALL task appropriateness, this paper illustrates how the…

  3. Designing and Introducing Ethical Dilemmas into Computer-Based Business Simulations

    ERIC Educational Resources Information Center

    Schumann, Paul L.; Scott, Timothy W.; Anderson, Philip H.

    2006-01-01

    This article makes two contributions to the teaching of business ethics literature. First, it describes the steps involved in developing effective ethical dilemmas to incorporate into a computer-based business simulation. Second, it illustrates these steps by presenting two ethical dilemmas that an instructor can incorporate into any business…

  4. Massively parallel quantum computer simulator

    NASA Astrophysics Data System (ADS)

    De Raedt, K.; Michielsen, K.; De Raedt, H.; Trieu, B.; Arnold, G.; Richter, M.; Lippert, Th.; Watanabe, H.; Ito, N.

    2007-01-01

    We describe portable software to simulate universal quantum computers on massively parallel computers. We illustrate the use of the simulation software by running various quantum algorithms on different computer architectures, such as an IBM BlueGene/L, an IBM Regatta p690+, a Hitachi SR11000/J1, a Cray X1E, an SGI Altix 3700, and clusters of PCs running Windows XP. We study the performance of the software by simulating quantum computers containing up to 36 qubits, using up to 4096 processors and up to 1 TB of memory. Our results demonstrate that the simulator exhibits nearly ideal scaling as a function of the number of processors and suggest that the simulation software described in this paper may also serve as a benchmark for testing high-end parallel computers.
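
    The 36-qubit ceiling quoted above follows directly from the state-vector representation: 2^36 complex double-precision amplitudes occupy exactly 1 TiB. A minimal single-node sketch of the core operation, applying a one-qubit gate by tensor reshaping (the generic technique, not the paper's distributed implementation):

      import numpy as np

      def apply_1q(state, gate, k, n):
          # apply a 2x2 gate to qubit k of an n-qubit state vector
          psi = np.moveaxis(state.reshape((2,) * n), k, 0)
          psi = np.tensordot(gate, psi, axes=([1], [0]))
          return np.moveaxis(psi, 0, k).reshape(-1)

      n = 3
      H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)    # Hadamard gate
      state = np.zeros(2**n, dtype=complex)
      state[0] = 1.0                                  # |000>
      for k in range(n):                              # H on every qubit
          state = apply_1q(state, H, k, n)
      print(np.allclose(np.abs(state)**2, 1 / 2**n))  # uniform superposition: True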

  5. Computing Optimal Stochastic Portfolio Execution Strategies: A Parametric Approach Using Simulations

    NASA Astrophysics Data System (ADS)

    Moazeni, Somayeh; Coleman, Thomas F.; Li, Yuying

    2010-09-01

    Computing optimal stochastic portfolio execution strategies under appropriate risk considerations presents a great computational challenge. We investigate a parametric approach for computing optimal stochastic strategies using Monte Carlo simulations. This approach reduces computational complexity by computing the coefficients of a parametric representation of a stochastic dynamic strategy based on static optimization. Constraints can be handled similarly, using appropriate penalty functions. We illustrate the proposed approach by minimizing the expected execution cost and the Conditional Value-at-Risk (CVaR).
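
    The CVaR term in the paper's objective is a simple tail average over simulated costs; the estimator below is standard, while the lognormal cost distribution is a placeholder rather than the paper's market-impact model:

      import numpy as np

      def cvar(losses, alpha=0.95):
          # mean loss in the worst (1 - alpha) tail of the simulated distribution
          return losses[losses >= np.quantile(losses, alpha)].mean()

      rng = np.random.default_rng(1)
      cost = rng.lognormal(mean=0.0, sigma=0.5, size=100_000)   # placeholder execution costs
      print(cost.mean(), cvar(cost, alpha=0.95))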

  6. A real-time digital computer program for the simulation of a single rotor helicopter

    NASA Technical Reports Server (NTRS)

    Houck, J. A.; Gibson, L. H.; Steinmetz, G. G.

    1974-01-01

    A computer program was developed for the study of a single-rotor helicopter on the Langley Research Center real-time digital simulation system. Descriptions of helicopter equations and data, program subroutines (including flow charts and listings), real-time simulation system routines, and program operation are included. Program usage is illustrated by standard check cases and a representative flight case.

  7. Application of the TEMPEST computer code for simulating hydrogen distribution in model containment structures. [PWR; BWR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trent, D.S.; Eyler, L.L.

    In this study several aspects of simulating hydrogen distribution in geometric configurations relevant to reactor containment structures were investigated using the TEMPEST computer code. Of particular interest was the performance of the TEMPEST turbulence model in a density-stratified environment. Computed results illustrated that the TEMPEST numerical procedures predicted the measured phenomena with good accuracy under a variety of conditions and that the turbulence model used is a viable approach in complex turbulent flow simulation.

  8. Computationally efficient multibody simulations

    NASA Technical Reports Server (NTRS)

    Ramakrishnan, Jayant; Kumar, Manoj

    1994-01-01

    Computationally efficient approaches to the solution of the dynamics of multibody systems are presented in this work. The computational efficiency is derived from both the algorithmic and implementational standpoint. Order(n) approaches provide a new formulation of the equations of motion eliminating the assembly and numerical inversion of a system mass matrix as required by conventional algorithms. Computational efficiency is also gained in the implementation phase by the symbolic processing and parallel implementation of these equations. Comparison of this algorithm with existing multibody simulation programs illustrates the increased computational efficiency.

  9. 'Towers in the Tempest' Computer Animation Submission

    NASA Technical Reports Server (NTRS)

    Shirah, Greg

    2008-01-01

    The following describes a computer animation that has been submitted to the ACM/SIGGRAPH 2008 computer graphics conference: 'Towers in the Tempest' clearly communicates recent scientific research into how hurricanes intensify. This intensification can be caused by a phenomenon called a 'hot tower.' For the first time, research meteorologists have run complex atmospheric simulations at a very fine temporal resolution of 3 minutes. Combining this simulation data with satellite observations enables detailed study of 'hot towers.' The science of 'hot towers' is described using satellite observation data, conceptual illustrations, and volumetric atmospheric simulation data. The movie starts by showing a 'hot tower' observed in three-dimensional precipitation radar data of Hurricane Bonnie from NASA's Tropical Rainfall Measuring Mission (TRMM) spacecraft. Next, the dynamics of a hurricane and the formation of 'hot towers' are briefly explained using conceptual illustrations. Finally, volumetric cloud, wind, and vorticity data from a supercomputer simulation of Hurricane Bonnie are shown using volume-rendering techniques such as ray marching.

  10. Generalized dynamic engine simulation techniques for the digital computer

    NASA Technical Reports Server (NTRS)

    Sellers, J.; Teren, F.

    1974-01-01

    Recently advanced simulation techniques have been developed for the digital computer and used as the basis for development of a generalized dynamic engine simulation computer program, called DYNGEN. This computer program can analyze the steady state and dynamic performance of many kinds of aircraft gas turbine engines. Without changes to the basic program, DYNGEN can analyze one- or two-spool turbofan engines. The user must supply appropriate component performance maps and design-point information. Examples are presented to illustrate the capabilities of DYNGEN in the steady state and dynamic modes of operation. The analytical techniques used in DYNGEN are briefly discussed, and its accuracy is compared with a comparable simulation using the hybrid computer. The impact of DYNGEN and similar all-digital programs on future engine simulation philosophy is also discussed.

  11. Generalized dynamic engine simulation techniques for the digital computer

    NASA Technical Reports Server (NTRS)

    Sellers, J.; Teren, F.

    1974-01-01

    Recently advanced simulation techniques have been developed for the digital computer and used as the basis for development of a generalized dynamic engine simulation computer program, called DYNGEN. This computer program can analyze the steady state and dynamic performance of many kinds of aircraft gas turbine engines. Without changes to the basic program DYNGEN can analyze one- or two-spool turbofan engines. The user must supply appropriate component performance maps and design-point information. Examples are presented to illustrate the capabilities of DYNGEN in the steady state and dynamic modes of operation. The analytical techniques used in DYNGEN are briefly discussed, and its accuracy is compared with a comparable simulation using the hybrid computer. The impact of DYNGEN and similar all-digital programs on future engine simulation philosophy is also discussed.

  12. Generalized dynamic engine simulation techniques for the digital computers

    NASA Technical Reports Server (NTRS)

    Sellers, J.; Teren, F.

    1975-01-01

    Recently advanced simulation techniques have been developed for the digital computer and used as the basis for development of a generalized dynamic engine simulation computer program, called DYNGEN. This computer program can analyze the steady state and dynamic performance of many kinds of aircraft gas turbine engines. Without changes to the basic program, DYNGEN can analyze one- or two-spool turbofan engines. The user must supply appropriate component performance maps and design point information. Examples are presented to illustrate the capabilities of DYNGEN in the steady state and dynamic modes of operation. The analytical techniques used in DYNGEN are briefly discussed, and its accuracy is compared with a comparable simulation using the hybrid computer. The impact of DYNGEN and similar digital programs on future engine simulation philosophy is also discussed.

  13. Simulation Applications at NASA Ames Research Center

    NASA Technical Reports Server (NTRS)

    Inouye, M.

    1984-01-01

    Aeronautical applications of simulation technology at Ames Research Center are described. The largest wind tunnel in the world is used to determine the flow field and aerodynamic characteristics of various aircraft, helicopter, and missile configurations. Large computers are used to obtain similar results through numerical solutions of the governing equations. Capabilities are illustrated by computer simulations of turbulence, aileron buzz, and an exhaust jet. Flight simulators are used to assess the handling qualities of advanced aircraft, particularly during takeoff and landing.

  14. Implementation of Parallel Dynamic Simulation on Shared-Memory vs. Distributed-Memory Environments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin, Shuangshuang; Chen, Yousu; Wu, Di

    2015-12-09

    Power system dynamic simulation computes the system response to a sequence of large disturbances, such as sudden changes in generation or load, or a network short circuit followed by protective branch-switching operations. It consists of a large set of differential and algebraic equations, which is computationally intensive and challenging to solve using a single-processor-based dynamic simulation solution. High-performance computing (HPC) based parallel computing is a very promising technology to speed up the computation and facilitate the simulation process. This paper presents two different parallel implementations of power grid dynamic simulation using Open Multi-processing (OpenMP) on a shared-memory platform, and Message Passing Interface (MPI) on distributed-memory clusters, respectively. The differences between the parallel simulation algorithms and architectures of the two HPC technologies are illustrated, and their performances for running parallel dynamic simulation are compared and demonstrated.

  15. User's manual for a computer program for simulating intensively managed allowable cut.

    Treesearch

    Robert W. Sassaman; Ed Holt; Karl Bergsvik

    1972-01-01

    Detailed operating instructions are described for SIMAC, a computerized forest simulation model which calculates the allowable cut assuming volume regulation for forests with intensively managed stands. A sample problem illustrates the required inputs and expected output. SIMAC is written in FORTRAN IV and runs on a CDC 6400 computer with a SCOPE 3.3 operating system....

  16. Implementation of the EM Algorithm in the Estimation of Item Parameters: The BILOG Computer Program.

    ERIC Educational Resources Information Center

    Mislevy, Robert J.; Bock, R. Darrell

    This paper reviews the basic elements of the EM approach to estimating item parameters and illustrates its use with one simulated and one real data set. In order to illustrate the use of the BILOG computer program, runs for 1-, 2-, and 3-parameter models are presented for the two sets of data. First is a set of responses from 1,000 persons to five…

  17. Applications of a Pharmacokinetic Simulation Program in Pharmacy Courses.

    ERIC Educational Resources Information Center

    Ingram, D.; And Others

    1979-01-01

    Presents a multicompartment model which illustrates aspects of drug absorption, distribution, and elimination in the human body for a course in pharmacokinetics. The course work consists of the interpretation of computer generated simulated data. (Author/CMV)

  18. Light reflection models for computer graphics.

    PubMed

    Greenberg, D P

    1989-04-14

    During the past 20 years, computer graphic techniques for simulating the reflection of light have progressed so that today images of photorealistic quality can be produced. Early algorithms considered direct lighting only, but global illumination phenomena with indirect lighting, surface interreflections, and shadows can now be modeled with ray tracing, radiosity, and Monte Carlo simulations. This article describes the historical development of computer graphic algorithms for light reflection and pictorially illustrates what will be commonly available in the near future.

  19. Gene regulatory networks: a coarse-grained, equation-free approach to multiscale computation.

    PubMed

    Erban, Radek; Kevrekidis, Ioannis G; Adalsteinsson, David; Elston, Timothy C

    2006-02-28

    We present computer-assisted methods for analyzing stochastic models of gene regulatory networks. The main idea that underlies this equation-free analysis is the design and execution of appropriately initialized short bursts of stochastic simulations; the results of these are processed to estimate coarse-grained quantities of interest, such as mesoscopic transport coefficients. In particular, using a simple model of a genetic toggle switch, we illustrate the computation of an effective free energy Phi and of a state-dependent effective diffusion coefficient D that characterize an unavailable effective Fokker-Planck equation. Additionally we illustrate the linking of equation-free techniques with continuation methods for performing a form of stochastic "bifurcation analysis"; estimation of mean switching times in the case of a bistable switch is also implemented in this equation-free context. The accuracy of our methods is tested by direct comparison with long-time stochastic simulations. This type of equation-free analysis appears to be a promising approach to computing features of the long-time, coarse-grained behavior of certain classes of complex stochastic models of gene regulatory networks, circumventing the need for long Monte Carlo simulations.
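
    The recipe in this abstract (short, appropriately initialized bursts processed into coarse-grained transport coefficients) can be shown on a toy birth-death process standing in for the toggle switch; drift and diffusion at a state x0 are estimated from the mean and variance of burst increments. The rate constants are invented for illustration:

      import numpy as np

      rng = np.random.default_rng(0)

      def burst(x0, t_end, k_birth=10.0, k_death=0.1):
          # one short Gillespie run of a birth-death process (toy stand-in)
          x, t = x0, 0.0
          while True:
              total = k_birth + k_death * x
              t += rng.exponential(1.0 / total)
              if t > t_end:
                  return x
              x += 1 if rng.random() * total < k_birth else -1

      def coarse_coefficients(x0, dt=0.1, reps=2000):
          # equation-free estimates: drift = <dx>/dt, diffusion = var(dx)/(2 dt)
          dx = np.array([burst(x0, dt) - x0 for _ in range(reps)])
          return dx.mean() / dt, dx.var() / (2 * dt)

      for x0 in (50, 100, 150):
          print(x0, coarse_coefficients(x0))   # drift changes sign at the fixed point x = 100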

  20. Six-degree-of-freedom missile simulation using the ADI AD 100 digital computer and ADSIM simulation language

    NASA Technical Reports Server (NTRS)

    Zwaanenburg, Koos

    1989-01-01

    The use of an AD 100 computer and the ADSIM language in the six-degree-of-freedom digital simulation of an air-to-ground missile is illustrated. The missile is launched from a moving platform, typically a helicopter, and is capable of striking a mobile target up to 10 kilometers away. The missile could be any tactical missile. The performance numbers of the AD 100 show that it is possible to implement a high performance missile model in a real-time simulation without the problems associated with an implementation on a general purpose computer using FORTRAN.

  1. Structural Durability of Damaged Metallic Panel Repaired with Composite Patches

    NASA Technical Reports Server (NTRS)

    Minnetyan, Levon; Chamis, Christos C.

    1997-01-01

    Structural durability/damage tolerance characteristics of an aluminum tension specimen possessing a short crack and repaired by applying a fiber composite surface patch is investigated via computational simulation. The composite patch is made of graphite/epoxy plies with various layups. An integrated computer code that accounts for all possible failure modes is utilized for the simulation of combined fiber-composite/aluminum structural degradation under loading. Damage initiation, growth, accumulation, and propagation to structural fracture are included in the simulation. Results show the structural degradation stages due to tensile loading and illustrate the use of computational simulation for the investigation of a composite patch repaired cracked metallic panel.

  2. Numerical Simulation of Multicomponent Chromatography Using Spreadsheets.

    ERIC Educational Resources Information Center

    Frey, Douglas D.

    1990-01-01

    Illustrated is the use of spreadsheet programs for implementing finite difference numerical simulations of chromatography as an instructional tool in a separations course. Discussed are differential equations, discretization and integration, spreadsheet development, computer requirements, and typical simulation results. (CW)

  3. FARSITE: Fire Area Simulator-model development and evaluation

    Treesearch

    Mark A. Finney

    1998-01-01

    A computer simulation model, FARSITE, includes existing fire behavior models for surface, crown, spotting, point-source fire acceleration, and fuel moisture. The model's components and assumptions are documented. Simulations were run for simple conditions that illustrate the effect of individual fire behavior models on two-dimensional fire growth.

  4. Recording Images Observed Using Ripple Tanks

    ERIC Educational Resources Information Center

    Auty, Geoff

    2018-01-01

    Diagrams and photographs (or computer simulations) should not replace effective observations of the wave properties that can be illustrated using a ripple tank, but they can provide support when discussing and revising what has been observed. This article explains and illustrates a route towards successful photography, which is much easier with…

  5. Helical gears with circular arc teeth: Generation, geometry, precision and adjustment to errors, computer aided simulation of conditions of meshing and bearing contact

    NASA Technical Reports Server (NTRS)

    Litvin, Faydor L.; Tsay, Chung-Biau

    1987-01-01

    The authors have proposed a method for the generation of circular arc helical gears which is based on the application of standard equipment, worked out all aspects of the geometry of the gears, proposed methods for the computer aided simulation of conditions of meshing and bearing contact, investigated the influence of manufacturing and assembly errors, and proposed methods for the adjustment of gears to these errors. The results of computer aided solutions are illustrated with computer graphics.

  6. Pharmacology Experiments on the Computer.

    ERIC Educational Resources Information Center

    Keller, Daniel

    1990-01-01

    A computer program that replaces a set of pharmacology and physiology laboratory experiments on live animals or isolated organs is described and illustrated. Five experiments are simulated: dose-effect relationships on smooth muscle, blood pressure and catecholamines, neuromuscular signal transmission, acetylcholine and the circulation, and…

  7. Computational structural mechanics for engine structures

    NASA Technical Reports Server (NTRS)

    Chamis, C. C.

    1989-01-01

    The computational structural mechanics (CSM) program at Lewis encompasses: (1) fundamental aspects of formulating and solving structural mechanics problems, and (2) development of integrated software systems to computationally simulate the performance, durability, and life of engine structures. It is structured mainly to supplement, complement, and, whenever possible, replace costly experimental efforts, which are unavoidable during engineering research and development programs. Specific objectives include: investigating the unique advantages of parallel and multiprocessor systems for reformulating and solving structural mechanics problems and for formulating and solving multidisciplinary mechanics problems; and developing integrated structural-system computational simulators for predicting structural performance, evaluating newly developed methods, and identifying and prioritizing improved or missing methods. Herein the CSM program is summarized, with emphasis on the Engine Structures Computational Simulator (ESCS). Typical results obtained using ESCS are described to illustrate its versatility.

  8. A computer simulator for development of engineering system design methodologies

    NASA Technical Reports Server (NTRS)

    Padula, S. L.; Sobieszczanski-Sobieski, J.

    1987-01-01

    A computer program designed to simulate and improve engineering system design methodology is described. The simulator mimics the qualitative behavior and data couplings occurring among the subsystems of a complex engineering system. It eliminates the engineering analyses in the subsystems by replacing them with judiciously chosen analytical functions. With the cost of analysis eliminated, the simulator is used for experimentation with a large variety of candidate algorithms for multilevel design optimization to choose the best ones for the actual application. Thus, the simulator serves as a development tool for multilevel design optimization strategy. The simulator concept, implementation, and status are described and illustrated with examples.

  9. Simulation of Coast Guard Vessel Traffic Service Operations by Model and Experiment

    DOT National Transportation Integrated Search

    1980-09-01

    A technique for computer simulation of operations of U.S. Coast Guard Vessel Traffic Services is described and verified with data obtained in four field studies. Uses of the technique are discussed and illustrated. A field experiment is described in ...

  10. Concurrent heterogeneous neural model simulation on real-time neuromimetic hardware.

    PubMed

    Rast, Alexander; Galluppi, Francesco; Davies, Sergio; Plana, Luis; Patterson, Cameron; Sharp, Thomas; Lester, David; Furber, Steve

    2011-11-01

    Dedicated hardware is becoming increasingly essential to simulate emerging very-large-scale neural models. Equally, however, it needs to be able to support multiple models of the neural dynamics, possibly operating simultaneously within the same system. This may be necessary either to simulate large models with heterogeneous neural types, or to simplify simulation and analysis of detailed, complex models in a large simulation by isolating the new model to a small subpopulation of a larger overall network. The SpiNNaker neuromimetic chip is a dedicated neural processor able to support such heterogeneous simulations. Implementing these models on-chip uses an integrated library-based tool chain incorporating the emerging PyNN interface that allows a modeller to input a high-level description and use an automated process to generate an on-chip simulation. Simulations using both LIF and Izhikevich models demonstrate the ability of the SpiNNaker system to generate and simulate heterogeneous networks on-chip, while illustrating, through the network-scale effects of wavefront synchronisation and burst gating, methods that can provide effective behavioural abstractions for large-scale hardware modelling. SpiNNaker's asynchronous virtual architecture permits greater scope for model exploration, with scalable levels of functional and temporal abstraction, than conventional (or neuromorphic) computing platforms. The complete system illustrates a potential path to understanding the neural model of computation, by building (and breaking) neural models at various scales, connecting the blocks, then comparing them against the biology: computational cognitive neuroscience. Copyright © 2011 Elsevier Ltd. All rights reserved.
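
    Of the two neuron models the paper runs on SpiNNaker, the Izhikevich model is the more compact to state. The sketch below follows Izhikevich's published 2003 update rule with regular-spiking parameters and a constant drive current; it is the reference floating-point form, not SpiNNaker's fixed-point on-chip code:

      # Izhikevich neuron, regular-spiking parameters, 1 ms time step
      a, b, c, d = 0.02, 0.2, -65.0, 8.0
      v, u, I = c, b * c, 10.0
      spikes = []
      for t in range(1000):                  # 1 s of constant input current
          for _ in range(2):                 # two 0.5 ms substeps, as in Izhikevich (2003)
              v += 0.5 * (0.04 * v * v + 5 * v + 140 - u + I)
          u += a * (b * v - u)
          if v >= 30.0:                      # spike: reset membrane and recovery variables
              spikes.append(t)
              v, u = c, u + d
      print(len(spikes), "spikes in 1 s")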

  11. Design analysis and computer-aided performance evaluation of shuttle orbiter electrical power system. Volume 1: Summary

    NASA Technical Reports Server (NTRS)

    1974-01-01

    Studies were conducted to develop appropriate space shuttle electrical power distribution and control (EPDC) subsystem simulation models and to apply the computer simulations to systems analysis of the EPDC. A previously developed software program (SYSTID) was adapted for this purpose. The following objectives were attained: (1) significant enhancement of the SYSTID time domain simulation software, (2) generation of functionally useful shuttle EPDC element models, and (3) illustrative simulation results in the analysis of EPDC performance, under the conditions of fault, current pulse injection due to lightning, and circuit protection sizing and reaction times.

  12. A simulation study of hardwood rootstock populations in young loblolly pine plantations

    Treesearch

    David R. Weise; Glenn R. Glover

    1988-01-01

    A computer program to simulate spatial distribution of hardwood rootstock populations is presented. Nineteen 3- to 6-year-old loblolly pine (Pinus taeda L.) plantations in Alabama and Georgia were measured to provide information for the simulator. Spatial pattern, expressed as Pielou's nonrandomness index (PNI), ranged from 0.47 to 2.45. Scatterplots illustrated no...

  13. Thermalized Drude Oscillators with the LAMMPS Molecular Dynamics Simulator.

    PubMed

    Dequidt, Alain; Devémy, Julien; Pádua, Agílio A H

    2016-01-25

    LAMMPS is a very customizable molecular dynamics simulation software, which can be used to simulate a large diversity of systems. We introduce a new package for simulation of polarizable systems with LAMMPS using thermalized Drude oscillators. The implemented functionalities are described and are illustrated by examples. The implementation was validated by comparing simulation results with published data and using a reference software. Computational performance is also analyzed.

  14. User interfaces for computational science: A domain specific language for OOMMF embedded in Python

    NASA Astrophysics Data System (ADS)

    Beg, Marijan; Pepper, Ryan A.; Fangohr, Hans

    2017-05-01

    Computer simulations are used widely across the engineering and science disciplines, including in the research and development of magnetic devices using computational micromagnetics. In this work, we identify and review different approaches to configuring simulation runs: (i) the re-compilation of source code, (ii) the use of configuration files, (iii) the graphical user interface, and (iv) embedding the simulation specification in an existing programming language to express the computational problem. We identify the advantages and disadvantages of different approaches and discuss their implications on effectiveness and reproducibility of computational studies and results. Following on from this, we design and describe a domain specific language for micromagnetics that is embedded in the Python language, and allows users to define the micromagnetic simulations they want to carry out in a flexible way. We have implemented this micromagnetic simulation description language together with a computational backend that executes the simulation task using the Object Oriented MicroMagnetic Framework (OOMMF). We illustrate the use of this Python interface for OOMMF by solving the micromagnetic standard problem 4. All the code is publicly available and is open source.

  15. Computational algorithms for simulations in atmospheric optics.

    PubMed

    Konyaev, P A; Lukin, V P

    2016-04-20

    A computer simulation technique for atmospheric and adaptive optics based on parallel programming is discussed. A parallel propagation algorithm is designed and a modified spectral-phase method for computer generation of 2D time-variant random fields is developed. Temporal power spectra of Laguerre-Gaussian beam fluctuations are considered as an example to illustrate the applications discussed. Implementation of the proposed algorithms using Intel MKL and IPP libraries and NVIDIA CUDA technology is shown to be very fast and accurate. The hardware system for the computer simulation is an off-the-shelf desktop with an Intel Core i7-4790K CPU operating at a turbo-speed frequency up to 5 GHz and an NVIDIA GeForce GTX-960 graphics accelerator with 1024 1.5 GHz processors.
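
    The paper's generator is a modified spectral-phase method; the unmodified, textbook version (shape white noise in Fourier space with the square root of a target power spectrum and random phases) takes only a few lines of NumPy. The -11/3 spectral slope below is an assumption chosen to mimic Kolmogorov-like turbulence:

      import numpy as np

      def random_field(n=256, dx=0.01, slope=-11.0 / 3.0, seed=0):
          # 2D random field with power-law spectrum |k|**slope (random-phase method)
          rng = np.random.default_rng(seed)
          k = 2 * np.pi * np.fft.fftfreq(n, d=dx)
          kx, ky = np.meshgrid(k, k)
          k2 = kx**2 + ky**2
          k2[0, 0] = np.inf                   # suppress the undefined k = 0 mode
          amp = k2 ** (slope / 4.0)           # sqrt of the power spectrum
          phase = np.exp(2j * np.pi * rng.random((n, n)))
          return np.fft.ifft2(amp * phase).real

      screen = random_field()                 # e.g., one turbulent phase screen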

  16. Insight Center | Computational Science | NREL

    Science.gov Websites

    …effectively convey information and illustrate research findings to stakeholders and visitors… turbine array simulations… Observational data span from the nanostructures of biomass pretreatments to the…

  17. Computer Assisted Instruction in Economics: An Approach for Illustrating General Equilibrium Concepts.

    ERIC Educational Resources Information Center

    Gillespie, Robert W.

    A market exchange simulation utilizing the PLATO computer-assisted instructional system at the University of Illinois has been designed to teach students the principles of a general equilibrium system. It serves a laboratory function which supplements traditional instruction by stimulating students' interests and providing them with illustrations…

  18. Use of Computer Simulation in Designing and Evaluating a Proposed Rough Mill for Furniture Interior Parts

    Treesearch

    Philip A. Araman

    1977-01-01

    The design of a rough mill for the production of interior furniture parts is used to illustrate a simulation technique for analyzing and evaluating established and proposed sequential production systems. Distributions representing the real-world random characteristics of lumber, equipment feed speeds and delay times are programmed into the simulation. An example is...

  19. Case studies of simulation models of recreation use

    Treesearch

    David N. Cole

    2005-01-01

    Computer simulation models can be usefully applied to many different outdoor recreation situations. Model outputs can also be used for a wide variety of planning and management purposes. The intent of this chapter is to use a collection of 12 case studies to illustrate how simulation models have been used in a wide range of recreation situations and for diverse...

  20. A users evaluation of SAMIS. [Solar Array Manufacturing Industry Simulation

    NASA Technical Reports Server (NTRS)

    Grenon, L. A.; Coleman, M. G.

    1981-01-01

    SAMIS, the Solar Array Manufacturing Industry Simulation computer program, was developed by the Jet Propulsion Laboratory (JPL) to provide a method whereby manufacturers or potential manufacturers of photovoltaics could simulate a solar industry using their own particular approach. This paper analyzes the usefulness of SAMIS to a growing photovoltaic industry and clearly illustrates its limitations as viewed by an industrial user.

  1. Electromagnetic Showers at High Energy

    ERIC Educational Resources Information Center

    Loos, J. S.; Dawson, S. L.

    1978-01-01

    Some of the properties of electromagnetic showers observed in an experimental study are illustrated. Experimental data and results from quantum electrodynamics are discussed. Data and theory are compared using computer simulation. (BB)

  2. A computer graphics based model for scattering from objects of arbitrary shapes in the optical region

    NASA Technical Reports Server (NTRS)

    Goel, Narendra S.; Rozehnal, Ivan; Thompson, Richard L.

    1991-01-01

    A computer-graphics-based model, named DIANA, is presented for generation of objects of arbitrary shape and for calculating bidirectional reflectances and scattering from them, in the visible and infrared region. The computer generation is based on a modified Lindenmayer system approach which makes it possible to generate objects of arbitrary shapes and to simulate their growth, dynamics, and movement. Rendering techniques are used to display an object on a computer screen with appropriate shading and shadowing and to calculate the scattering and reflectance from the object. The technique is illustrated with scattering from canopies of simulated corn plants.
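
    DIANA's generator is described as a modified Lindenmayer system; the unmodified string-rewriting core is tiny. The bracketed rule below is a classic textbook plant, chosen for illustration rather than taken from DIANA:

      # Bracketed L-system: F = grow forward, [ ] = push/pop state, + / - = turn
      rules = {"F": "F[+F]F[-F]F"}

      def grow(axiom, generations):
          s = axiom
          for _ in range(generations):
              s = "".join(rules.get(ch, ch) for ch in s)
          return s

      print(grow("F", 2))   # the string is then rendered with turtle-style graphics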

  3. Optimized Materials From First Principles Simulations: Are We There Yet?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Galli, G; Gygi, F

    2005-07-26

    In the past thirty years, the use of scientific computing has become pervasive in all disciplines: collection and interpretation of most experimental data is carried out using computers, and physical models in computable form, with various degrees of complexity and sophistication, are utilized in all fields of science. However, full prediction of physical and chemical phenomena based on the basic laws of Nature, using computer simulations, is a revolution still in the making, and it involves some formidable theoretical and computational challenges. We illustrate the progress and successes obtained in recent years in predicting fundamental properties of materials in condensed phases and at the nanoscale, using ab initio quantum simulations. We also discuss open issues related to the validation of the approximate, first-principles theories used in large-scale simulations, and the resulting complex interplay between computation and experiment. Finally, we describe some applications, with focus on nanostructures and liquids, both at ambient and under extreme conditions.

  4. Models and Simulations as a Service: Exploring the Use of Galaxy for Delivering Computational Models

    PubMed Central

    Walker, Mark A.; Madduri, Ravi; Rodriguez, Alex; Greenstein, Joseph L.; Winslow, Raimond L.

    2016-01-01

    We describe the ways in which Galaxy, a web-based reproducible research platform, can be used for web-based sharing of complex computational models. Galaxy allows users to seamlessly customize and run simulations on cloud computing resources, a concept we refer to as Models and Simulations as a Service (MaSS). To illustrate this application of Galaxy, we have developed a tool suite for simulating a high spatial-resolution model of the cardiac Ca2+ spark that requires supercomputing resources for execution. We also present tools for simulating models encoded in the SBML and CellML model description languages, thus demonstrating how Galaxy's reproducible research features can be leveraged by existing technologies. Finally, we demonstrate how the Galaxy workflow editor can be used to compose integrative models from constituent submodules. This work represents an important and, to our knowledge, novel approach to making computational simulations more accessible to the broader scientific community. PMID:26958881

  5. NASA's supercomputing experience

    NASA Technical Reports Server (NTRS)

    Bailey, F. Ron

    1990-01-01

    A brief overview of NASA's recent experience in supercomputing is presented from two perspectives: early systems development and advanced supercomputing applications. NASA's role in supercomputing systems development is illustrated by discussion of activities carried out by the Numerical Aerodynamic Simulation Program. Current capabilities in advanced technology applications are illustrated with examples in turbulence physics, aerodynamics, aerothermodynamics, chemistry, and structural mechanics. Capabilities in science applications are illustrated by examples in astrophysics and atmospheric modeling. Future directions and NASA's new High Performance Computing Program are briefly discussed.

  6. On the concept of the interactive information and simulation system for gas dynamics and multiphysics problems

    NASA Astrophysics Data System (ADS)

    Bessonov, O.; Silvestrov, P.

    2017-02-01

    This paper describes the general idea and the first implementation of the Interactive Information and Simulation System, an integrated environment that combines computational modules for modeling the aerodynamics and aerothermodynamics of re-entry space vehicles with a large collection of different information materials on this topic. The internal organization and the composition of the system are described and illustrated. Examples of the computational and information output are presented. The system has a unified implementation for Windows and Linux operating systems and can be deployed on any modern high-performance personal computer.

  7. Computational Materials: Modeling and Simulation of Nanostructured Materials and Systems

    NASA Technical Reports Server (NTRS)

    Gates, Thomas S.; Hinkley, Jeffrey A.

    2003-01-01

    The paper provides details on the structure and implementation of the Computational Materials program at the NASA Langley Research Center. Examples are given that illustrate the suggested approaches to predicting the behavior and influencing the design of nanostructured materials such as high-performance polymers, composites, and nanotube-reinforced polymers. Primary simulation and measurement methods applicable to multi-scale modeling are outlined. Key challenges including verification and validation of models are highlighted and discussed within the context of NASA's broad mission objectives.

  8. Cell illustrator 4.0: a computational platform for systems biology.

    PubMed

    Nagasaki, Masao; Saito, Ayumu; Jeong, Euna; Li, Chen; Kojima, Kaname; Ikeda, Emi; Miyano, Satoru

    2011-01-01

    Cell Illustrator is a software platform for Systems Biology that uses the concept of Petri net for modeling and simulating biopathways. It is intended for biological scientists working at the bench. The latest version of Cell Illustrator 4.0 uses Java Web Start technology and is enhanced with new capabilities, including: automatic graph grid layout algorithms using ontology information; tools using Cell System Markup Language (CSML) 3.0 and Cell System Ontology 3.0; parameter search module; high-performance simulation module; CSML database management system; conversion from CSML model to programming languages (FORTRAN, C, C++, Java, Python and Perl); import from SBML, CellML, and BioPAX; and export to SVG and HTML. Cell Illustrator employs an extension of hybrid Petri net in an object-oriented style so that biopathway models can include objects such as DNA sequence, molecular density, 3D localization information, transcription with frame-shift, translation with codon table, as well as biochemical reactions.
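
    Cell Illustrator's hybrid Petri nets add continuous quantities and rich biological annotations, but the discrete skeleton underneath (a marking, pre/post incidence matrices, and enabled-transition firing) fits in a short sketch; the two-place, two-transition net here is illustrative only:

      import numpy as np

      rng = np.random.default_rng(0)
      pre = np.array([[1, 0],         # tokens consumed (rows: places, cols: transitions)
                      [0, 1]])
      post = np.array([[0, 1],        # tokens produced
                       [1, 0]])
      marking = np.array([3, 0])      # initial tokens per place

      for _ in range(6):
          enabled = np.flatnonzero((marking[:, None] >= pre).all(axis=0))
          if enabled.size == 0:
              break                   # deadlock: nothing can fire
          t = rng.choice(enabled)
          marking = marking - pre[:, t] + post[:, t]
          print(marking)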

  9. Cell Illustrator 4.0: a computational platform for systems biology.

    PubMed

    Nagasaki, Masao; Saito, Ayumu; Jeong, Euna; Li, Chen; Kojima, Kaname; Ikeda, Emi; Miyano, Satoru

    2010-01-01

    Cell Illustrator is a software platform for Systems Biology that uses the concept of Petri net for modeling and simulating biopathways. It is intended for biological scientists working at the bench. The latest version of Cell Illustrator 4.0 uses Java Web Start technology and is enhanced with new capabilities, including: automatic graph grid layout algorithms using ontology information; tools using Cell System Markup Language (CSML) 3.0 and Cell System Ontology 3.0; parameter search module; high-performance simulation module; CSML database management system; conversion from CSML model to programming languages (FORTRAN, C, C++, Java, Python and Perl); import from SBML, CellML, and BioPAX; and export to SVG and HTML. Cell Illustrator employs an extension of hybrid Petri net in an object-oriented style so that biopathway models can include objects such as DNA sequence, molecular density, 3D localization information, transcription with frame-shift, translation with codon table, as well as biochemical reactions.

  10. Computer-Simulation Surrogates for Optimization: Application to Trapezoidal Ducts and Axisymmetric Bodies

    NASA Technical Reports Server (NTRS)

    Otto, John C.; Paraschivoiu, Marius; Yesilyurt, Serhat; Patera, Anthony T.

    1995-01-01

    Engineering design and optimization efforts using computational systems rapidly become resource intensive. The goal of the surrogate-based approach is to perform a complete optimization with limited resources. In this paper we present a Bayesian-validated approach that informs the designer as to how well the surrogate performs; in particular, our surrogate framework provides precise (albeit probabilistic) bounds on the errors incurred in the surrogate-for-simulation substitution. The theory and algorithms of our computer-simulation surrogate framework are first described. The utility of the framework is then demonstrated through two illustrative examples: maximization of the flowrate of fully developed flow in trapezoidal ducts; and design of an axisymmetric body that achieves a target Stokes drag.

  11. Applications of CFD and visualization techniques

    NASA Technical Reports Server (NTRS)

    Saunders, James H.; Brown, Susan T.; Crisafulli, Jeffrey J.; Southern, Leslie A.

    1992-01-01

    In this paper, three applications are presented to illustrate current techniques for flow calculation and visualization. The first two applications use a commercial computational fluid dynamics (CFD) code, FLUENT, performed on a Cray Y-MP. The results are animated with the aid of data visualization software, apE. The third application simulates a particulate deposition pattern using techniques inspired by developments in nonlinear dynamical systems. These computations were performed on personal computers.

  12. Computational Phenotyping in Psychiatry: A Worked Example

    PubMed Central

    2016-01-01

    Abstract Computational psychiatry is a rapidly emerging field that uses model-based quantities to infer the behavioral and neuronal abnormalities that underlie psychopathology. If successful, this approach promises key insights into (pathological) brain function as well as a more mechanistic and quantitative approach to psychiatric nosology—structuring therapeutic interventions and predicting response and relapse. The basic procedure in computational psychiatry is to build a computational model that formalizes a behavioral or neuronal process. Measured behavioral (or neuronal) responses are then used to infer the model parameters of a single subject or a group of subjects. Here, we provide an illustrative overview over this process, starting from the modeling of choice behavior in a specific task, simulating data, and then inverting that model to estimate group effects. Finally, we illustrate cross-validation to assess whether between-subject variables (e.g., diagnosis) can be recovered successfully. Our worked example uses a simple two-step maze task and a model of choice behavior based on (active) inference and Markov decision processes. The procedural steps and routines we illustrate are not restricted to a specific field of research or particular computational model but can, in principle, be applied in many domains of computational psychiatry. PMID:27517087

  13. Computational Phenotyping in Psychiatry: A Worked Example.

    PubMed

    Schwartenbeck, Philipp; Friston, Karl

    2016-01-01

    Computational psychiatry is a rapidly emerging field that uses model-based quantities to infer the behavioral and neuronal abnormalities that underlie psychopathology. If successful, this approach promises key insights into (pathological) brain function as well as a more mechanistic and quantitative approach to psychiatric nosology-structuring therapeutic interventions and predicting response and relapse. The basic procedure in computational psychiatry is to build a computational model that formalizes a behavioral or neuronal process. Measured behavioral (or neuronal) responses are then used to infer the model parameters of a single subject or a group of subjects. Here, we provide an illustrative overview over this process, starting from the modeling of choice behavior in a specific task, simulating data, and then inverting that model to estimate group effects. Finally, we illustrate cross-validation to assess whether between-subject variables (e.g., diagnosis) can be recovered successfully. Our worked example uses a simple two-step maze task and a model of choice behavior based on (active) inference and Markov decision processes. The procedural steps and routines we illustrate are not restricted to a specific field of research or particular computational model but can, in principle, be applied in many domains of computational psychiatry.

  14. Teaching Simulation and Computer-Aided Separation Optimization in Liquid Chromatography by Means of Illustrative Microsoft Excel Spreadsheets

    ERIC Educational Resources Information Center

    Fasoula, S.; Nikitas, P.; Pappa-Louisi, A.

    2017-01-01

    A series of Microsoft Excel spreadsheets were developed to simulate the process of separation optimization under isocratic and simple gradient conditions. The optimization procedure is performed in a stepwise fashion using simple macros for an automatic application of this approach. The proposed optimization approach involves modeling of the peak…

  15. A Computer Simulation Comparing the Incentive Structures of Dictatorships and Democracies

    ERIC Educational Resources Information Center

    Nishikawa, Katsuo A.; Jaeger, Joseph

    2011-01-01

    The draw of simulations is that by replicating a simplified version of reality they can illustrate the repercussions that individual choices create. Students can play the role of a judge, an ambassador, or a parliamentarian and can experience first hand how their decisions play out. As a discipline, we assume that such practices are an improvement…

  16. Adding an Intelligent Tutoring System to an Existing Training Simulation

    DTIC Science & Technology

    2006-01-01

    to apply information in a job should be the goal of training. Also, conventional IMI is not able to meaningfully incorporate use of free-play simulators...incorporating desktop free-play simulators into computer-based training since the software can stand in for a human tutor in all the roles. Existing IMI...ITS can integrate free-play simulators and IMI. Figure 3 illustrates the interaction between BC2010 and the ITS

  17. Simulation of the stress computation in shells

    NASA Technical Reports Server (NTRS)

    Salama, M.; Utku, S.

    1978-01-01

    A self-teaching computer program is described, whereby the stresses in thin shells can be computed with good accuracy using the best-fit approach. The program is designed for use in interactive game mode to allow the structural engineer to learn about (1) the major sources of difficulties and associated errors in the computation of stresses in thin shells, (2) possible ways to reduce the errors, and (3) the trade-off between computational cost and accuracy. Included are a derivation of the computational approach, a program description, and several examples illustrating the program usage.

  18. Divide and conquer approach to quantum Hamiltonian simulation

    NASA Astrophysics Data System (ADS)

    Hadfield, Stuart; Papageorgiou, Anargyros

    2018-04-01

    We show a divide and conquer approach for simulating quantum mechanical systems on quantum computers. We can obtain fast simulation algorithms using Hamiltonian structure. Considering a sum of Hamiltonians we split them into groups, simulate each group separately, and combine the partial results. Simulation is customized to take advantage of the properties of each group, and hence yield refined bounds to the overall simulation cost. We illustrate our results using the electronic structure problem of quantum chemistry, where we obtain significantly improved cost estimates under very mild assumptions.
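
    The splitting idea can be sketched classically with matrix exponentials: group the Hamiltonian terms, exponentiate each group, and combine the partial results in a Trotter product. The random 4x4 Hermitian terms below are placeholders; the paper's algorithms and cost bounds target quantum hardware, where each group exponential becomes a quantum circuit.

        # Divide-and-conquer Hamiltonian simulation sketch (classical emulation).
        import numpy as np
        from scipy.linalg import expm

        rng = np.random.default_rng(1)

        def random_hermitian(n):
            A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
            return (A + A.conj().T) / 2

        terms = [random_hermitian(4) for _ in range(6)]
        groups = [terms[:3], terms[3:]]        # split the sum into two groups
        t, r = 1.0, 100                        # total time, Trotter steps

        step = np.eye(4, dtype=complex)
        for g in groups:                       # simulate each group separately
            step = expm(-1j * sum(g) * t / r) @ step
        U = np.eye(4, dtype=complex)
        for _ in range(r):                     # combine the partial results
            U = step @ U

        exact = expm(-1j * sum(terms) * t)
        print("Trotter error:", np.linalg.norm(U - exact, 2))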

  19. A brief introduction to computer-intensive methods, with a view towards applications in spatial statistics and stereology.

    PubMed

    Mattfeldt, Torsten

    2011-04-01

    Computer-intensive methods may be defined as data analytical procedures involving a huge number of highly repetitive computations. We mention resampling methods with replacement (bootstrap methods), resampling methods without replacement (randomization tests) and simulation methods. The resampling methods are based on simple and robust principles and are largely free from distributional assumptions. Bootstrap methods may be used to compute confidence intervals for a scalar model parameter and for summary statistics from replicated planar point patterns, and for significance tests. For some simple models of planar point processes, point patterns can be simulated by elementary Monte Carlo methods. The simulation of models with more complex interaction properties usually requires more advanced computing methods. In this context, we mention simulation of Gibbs processes with Markov chain Monte Carlo methods using the Metropolis-Hastings algorithm. An alternative to simulations on the basis of a parametric model consists of stochastic reconstruction methods. The basic ideas behind the methods are briefly reviewed and illustrated by simple worked examples in order to encourage novices in the field to use computer-intensive methods.
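
    As a minimal illustration of the bootstrap idea described above, the following sketch computes a percentile confidence interval for the mean of a skewed sample; the data are synthetic and the statistic is deliberately simple.

        # Bootstrap percentile confidence interval for a scalar statistic.
        import numpy as np

        rng = np.random.default_rng(42)
        data = rng.gamma(shape=2.0, scale=1.5, size=80)   # skewed sample

        B = 5000
        boot_means = np.array([
            rng.choice(data, size=data.size, replace=True).mean()
            for _ in range(B)
        ])
        lo, hi = np.percentile(boot_means, [2.5, 97.5])
        print(f"mean = {data.mean():.3f}, 95% bootstrap CI = ({lo:.3f}, {hi:.3f})")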

  20. Human-computer interaction in multitask situations

    NASA Technical Reports Server (NTRS)

    Rouse, W. B.

    1977-01-01

    Human-computer interaction in multitask decisionmaking situations is considered, and it is proposed that humans and computers have overlapping responsibilities. Queueing theory is employed to model this dynamic approach to the allocation of responsibility between human and computer. Results of simulation experiments are used to illustrate the effects of several system variables including number of tasks, mean time between arrivals of action-evoking events, human-computer speed mismatch, probability of computer error, probability of human error, and the level of feedback between human and computer. Current experimental efforts are discussed and the practical issues involved in designing human-computer systems for multitask situations are considered.

  1. PyNEST: A Convenient Interface to the NEST Simulator.

    PubMed

    Eppler, Jochen Martin; Helias, Moritz; Muller, Eilif; Diesmann, Markus; Gewaltig, Marc-Oliver

    2008-01-01

    The neural simulation tool NEST (http://www.nest-initiative.org) is a simulator for heterogeneous networks of point neurons or neurons with a small number of compartments. It aims at simulations of large neural systems with more than 10^4 neurons and 10^7 to 10^9 synapses. NEST is implemented in C++ and can be used on a large range of architectures, from single-core laptops through multi-core desktop computers to supercomputers with thousands of processor cores. Python (http://www.python.org) is a modern programming language that has recently received considerable attention in Computational Neuroscience. Python is easy to learn and has many extension modules for scientific computing (e.g. http://www.scipy.org). In this contribution we describe PyNEST, the new user interface to NEST. PyNEST combines NEST's efficient simulation kernel with the simplicity and flexibility of Python. Compared to NEST's native simulation language SLI, PyNEST makes it easier to set up simulations, generate stimuli, and analyze simulation results. We describe how PyNEST connects NEST and Python and how it is implemented. With a number of examples, we illustrate how it is used.

  2. PyNEST: A Convenient Interface to the NEST Simulator

    PubMed Central

    Eppler, Jochen Martin; Helias, Moritz; Muller, Eilif; Diesmann, Markus; Gewaltig, Marc-Oliver

    2008-01-01

    The neural simulation tool NEST (http://www.nest-initiative.org) is a simulator for heterogeneous networks of point neurons or neurons with a small number of compartments. It aims at simulations of large neural systems with more than 10^4 neurons and 10^7 to 10^9 synapses. NEST is implemented in C++ and can be used on a large range of architectures, from single-core laptops through multi-core desktop computers to supercomputers with thousands of processor cores. Python (http://www.python.org) is a modern programming language that has recently received considerable attention in Computational Neuroscience. Python is easy to learn and has many extension modules for scientific computing (e.g. http://www.scipy.org). In this contribution we describe PyNEST, the new user interface to NEST. PyNEST combines NEST's efficient simulation kernel with the simplicity and flexibility of Python. Compared to NEST's native simulation language SLI, PyNEST makes it easier to set up simulations, generate stimuli, and analyze simulation results. We describe how PyNEST connects NEST and Python and how it is implemented. With a number of examples, we illustrate how it is used. PMID:19198667
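
    A PyNEST session of the kind the paper describes can be very short. The sketch below drives one integrate-and-fire neuron with Poisson noise and counts its spikes. Model and device names follow recent NEST releases (e.g., the spike recorder was called spike_detector in NEST 2.x), so details may differ across versions, and a working NEST installation is required; the rate and weight values are arbitrary illustrations.

        # One neuron driven by Poisson noise, expressed in PyNEST.
        import nest

        nest.ResetKernel()
        neuron = nest.Create("iaf_psc_alpha")                  # point neuron
        noise = nest.Create("poisson_generator",
                            params={"rate": 80000.0})          # noisy drive (Hz)
        recorder = nest.Create("spike_recorder")               # NEST 3.x name

        nest.Connect(noise, neuron, syn_spec={"weight": 1.2})
        nest.Connect(neuron, recorder)

        nest.Simulate(1000.0)                                  # milliseconds
        print(recorder.get("n_events"), "spikes recorded")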

  3. The Navier-Stokes computer

    NASA Technical Reports Server (NTRS)

    Nosenchuck, D. M.; Littman, M. G.

    1986-01-01

    The Navier-Stokes computer (NSC) has been developed for solving problems in fluid mechanics involving complex flow simulations that require more speed and capacity than provided by current and proposed Class VI supercomputers. The machine is a parallel processing supercomputer with several new architectural elements which can be programmed to address a wide range of problems meeting the following criteria: (1) the problem is numerically intensive, and (2) the code makes use of long vectors. A simulation of two-dimensional nonsteady viscous flows is presented to illustrate the architecture, programming, and some of the capabilities of the NSC.

  4. Using spatial principles to optimize distributed computing for enabling the physical science discoveries

    PubMed Central

    Yang, Chaowei; Wu, Huayi; Huang, Qunying; Li, Zhenlong; Li, Jing

    2011-01-01

    Contemporary physical science studies rely on the effective analyses of geographically dispersed spatial data and simulations of physical phenomena. Single computers and generic high-end computing are not sufficient to process the data for complex physical science analysis and simulations, which can be successfully supported only through distributed computing, best optimized through the application of spatial principles. Spatial computing, the computing aspect of a spatial cyberinfrastructure, refers to a computing paradigm that utilizes spatial principles to optimize distributed computers to catalyze advancements in the physical sciences. Spatial principles govern the interactions between scientific parameters across space and time by providing the spatial connections and constraints to drive the progression of the phenomena. Therefore, spatial computing studies could better position us to leverage spatial principles in simulating physical phenomena and, by extension, advance the physical sciences. Using geospatial science as an example, this paper illustrates through three research examples how spatial computing could (i) enable data intensive science with efficient data/services search, access, and utilization, (ii) facilitate physical science studies with enabling high-performance computing capabilities, and (iii) empower scientists with multidimensional visualization tools to understand observations and simulations. The research examples demonstrate that spatial computing is of critical importance to design computing methods to catalyze physical science studies with better data access, phenomena simulation, and analytical visualization. We envision that spatial computing will become a core technology that drives fundamental physical science advancements in the 21st century. PMID:21444779

  5. Using spatial principles to optimize distributed computing for enabling the physical science discoveries.

    PubMed

    Yang, Chaowei; Wu, Huayi; Huang, Qunying; Li, Zhenlong; Li, Jing

    2011-04-05

    Contemporary physical science studies rely on the effective analyses of geographically dispersed spatial data and simulations of physical phenomena. Single computers and generic high-end computing are not sufficient to process the data for complex physical science analysis and simulations, which can be successfully supported only through distributed computing, best optimized through the application of spatial principles. Spatial computing, the computing aspect of a spatial cyberinfrastructure, refers to a computing paradigm that utilizes spatial principles to optimize distributed computers to catalyze advancements in the physical sciences. Spatial principles govern the interactions between scientific parameters across space and time by providing the spatial connections and constraints to drive the progression of the phenomena. Therefore, spatial computing studies could better position us to leverage spatial principles in simulating physical phenomena and, by extension, advance the physical sciences. Using geospatial science as an example, this paper illustrates through three research examples how spatial computing could (i) enable data intensive science with efficient data/services search, access, and utilization, (ii) facilitate physical science studies with enabling high-performance computing capabilities, and (iii) empower scientists with multidimensional visualization tools to understand observations and simulations. The research examples demonstrate that spatial computing is of critical importance to design computing methods to catalyze physical science studies with better data access, phenomena simulation, and analytical visualization. We envision that spatial computing will become a core technology that drives fundamental physical science advancements in the 21st century.

  6. Dynamic simulation of Static Var Compensators in distribution systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koessler, R.J.

    1992-08-01

    This paper is a system study guide for the correction of voltage dips due to large motor startups with Static Var Compensators (SVCs). The method utilizes time simulations, which are an important aid in equipment design and specification. The paper illustrates the process of setting up a computer model and performing time simulations. The study process is demonstrated through an example, the Shawnee feeder in the Niagara Mohawk Power Corporation service area.

  7. 3D numerical simulation of transient processes in hydraulic turbines

    NASA Astrophysics Data System (ADS)

    Cherny, S.; Chirkov, D.; Bannikov, D.; Lapin, V.; Skorospelov, V.; Eshkunova, I.; Avdushenko, A.

    2010-08-01

    An approach for numerical simulation of 3D hydraulic turbine flows in transient operating regimes is presented. The method is based on a coupled solution of incompressible RANS equations, the runner rotation equation, and water hammer equations. The issue of setting appropriate boundary conditions is considered in detail. As an illustration, the simulation results for a runaway process are presented. The evolution of the vortex structure and its effect on computed runaway traces are analyzed.

  8. Cross-Shear Implementation in Sliding-Distance-Coupled Finite Element Analysis of Wear in Metal-on-Polyethylene Total Joint Arthroplasty: Intervertebral Total Disc Replacement as an Illustrative Application

    PubMed Central

    Goreham-Voss, Curtis M.; Hyde, Philip J.; Hall, Richard M.; Fisher, John; Brown, Thomas D.

    2010-01-01

    Computational simulations of wear of orthopaedic total joint replacement implants have proven to be a valuable complement to laboratory physical simulators for pre-clinical estimation of abrasive/adhesive wear propensity. This class of numerical formulations has primarily involved implementation of the Archard/Lancaster relationship, with local wear computed as the product of (finite element) contact stress, sliding speed, and a bearing-couple-dependent wear factor. The present study introduces an augmentation, whereby the influence of interface cross-shearing motion transverse to the prevailing molecular orientation of the polyethylene articular surface is taken into account in assigning the instantaneous local wear factor. The formulation augment is implemented within a widely-utilized commercial finite element software environment (ABAQUS). Using a contemporary metal-on-polyethylene total disc replacement (ProDisc-L) as an illustrative implant, physically validated computational results are presented to document the role of cross-shearing effects in alternative laboratory consensus testing protocols. Going forward, this formulation permits systematically accounting for cross-shear effects in parametric computational wear studies of metal-on-polyethylene joint replacements, heretofore a substantial limitation of such analyses. PMID:20399432

  9. Computational Fluid Dynamics of Whole-Body Aircraft

    NASA Astrophysics Data System (ADS)

    Agarwal, Ramesh

    1999-01-01

    The current state of the art in computational aerodynamics for whole-body aircraft flowfield simulations is described. Recent advances in geometry modeling, surface and volume grid generation, and flow simulation algorithms have led to accurate flowfield predictions for increasingly complex and realistic configurations. As a result, computational aerodynamics has emerged as a crucial enabling technology for the design and development of flight vehicles. Examples illustrating the current capability for the prediction of transport and fighter aircraft flowfields are presented. Unfortunately, accurate modeling of turbulence remains a major difficulty in the analysis of viscosity-dominated flows. In the future, inverse design methods, multidisciplinary design optimization methods, artificial intelligence technology, and massively parallel computer technology will be incorporated into computational aerodynamics, opening up greater opportunities for improved product design at substantially reduced costs.

  10. Automatic code generation in SPARK: Applications of computer algebra and compiler-compilers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nataf, J.M.; Winkelmann, F.

    We show how computer algebra and compiler-compilers are used for automatic code generation in the Simulation Problem Analysis and Research Kernel (SPARK), an object-oriented environment for modeling complex physical systems that can be described by differential-algebraic equations. After a brief overview of SPARK, we describe the use of computer algebra in SPARK's symbolic interface, which generates solution code for equations that are entered in symbolic form. We also describe how the Lex/Yacc compiler-compiler is used to achieve important extensions to the SPARK simulation language, including parametrized macro objects and steady-state resetting of a dynamic simulation. The application of these methods to solving the partial differential equations for two-dimensional heat flow is illustrated.

  11. Automatic code generation in SPARK: Applications of computer algebra and compiler-compilers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nataf, J.M.; Winkelmann, F.

    We show how computer algebra and compiler-compilers are used for automatic code generation in the Simulation Problem Analysis and Research Kernel (SPARK), an object-oriented environment for modeling complex physical systems that can be described by differential-algebraic equations. After a brief overview of SPARK, we describe the use of computer algebra in SPARK's symbolic interface, which generates solution code for equations that are entered in symbolic form. We also describe how the Lex/Yacc compiler-compiler is used to achieve important extensions to the SPARK simulation language, including parametrized macro objects and steady-state resetting of a dynamic simulation. The application of these methods to solving the partial differential equations for two-dimensional heat flow is illustrated.
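
    The symbolic-interface idea, entering an equation symbolically and emitting solution code, can be sketched with a modern computer algebra system. The example below uses SymPy purely as a stand-in for the kind of symbolic interface described above; the equation is hypothetical and SPARK's actual toolchain is not reproduced.

        # Computer algebra to code generation: differentiate a residual
        # symbolically and emit C code for a Newton solver.
        import sympy as sp

        x, y, a = sp.symbols("x y a")
        residual = a * x + sp.sin(x) - y      # equation residual f(x) = 0
        jacobian = sp.diff(residual, x)       # derivative for Newton's method

        print(sp.ccode(residual, assign_to="res"))  # C code for the residual
        print(sp.ccode(jacobian, assign_to="jac"))  # C code for its derivative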

  12. Evaluating Computer-Based Simulations, Multimedia and Animations that Help Integrate Blended Learning with Lectures in First Year Statistics

    ERIC Educational Resources Information Center

    Neumann, David L.; Neumann, Michelle M.; Hood, Michelle

    2011-01-01

    The discipline of statistics seems well suited to the integration of technology in a lecture as a means to enhance student learning and engagement. Technology can be used to simulate statistical concepts, create interactive learning exercises, and illustrate real world applications of statistics. The present study aimed to better understand the…

  13. Strength computation of forged parts taking into account strain hardening and damage

    NASA Astrophysics Data System (ADS)

    Cristescu, Michel L.

    2004-06-01

    Modern non-linear simulation software, such as FORGE 3 (registered trademark of TRANSVALOR), is able to compute the residual stresses, the strain hardening and the damage during the forging process. A thermally dependent elasto-visco-plastic law is used to simulate the behavior of the material of the hot forged piece. A modified Lemaitre law coupled with elasticity, plasticity and thermal effects is used to simulate the damage. After the simulation of the different steps of the forging process, the part is cooled and then virtually machined, in order to obtain the finished part. An elastic computation is then performed to equilibrate the residual stresses, so that we obtain the true geometry of the finished part after machining. The response of the part to the loadings it will sustain during its life is then computed, taking into account the residual stresses, the strain hardening and the damage that occur during forging. This process is illustrated by the forging, virtual machining and stress analysis of an aluminium wheel hub.

  14. Simulating Quantile Models with Applications to Economics and Management

    NASA Astrophysics Data System (ADS)

    Machado, José A. F.

    2010-05-01

    The massive increase in the speed of computers over the past forty years has changed the way that social scientists, applied economists and statisticians approach their trades, and also the very nature of the problems that they can feasibly tackle. The new methods that make intensive use of computer power go by the names of "computer-intensive" or "simulation" methods. My lecture will start with a bird's-eye view of the uses of simulation in Economics and Statistics. Then I will turn to my own research on uses of computer-intensive methods. From a methodological point of view, the question I address is how to infer marginal distributions having estimated a conditional quantile process ("Counterfactual Decomposition of Changes in Wage Distributions Using Quantile Regression," Journal of Applied Econometrics 20, 2005). Illustrations will be provided of the use of the method to perform counterfactual analysis in several different areas of knowledge.
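
    A conditional quantile process of the kind the lecture builds on can be estimated with off-the-shelf tools. The sketch below fits quantile regressions at several quantiles to synthetic heteroscedastic data using statsmodels; the counterfactual decomposition itself involves further steps (estimating the full quantile process and integrating over covariate distributions) that are omitted here.

        # Conditional quantile estimation on synthetic heteroscedastic data.
        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(7)
        n = 500
        x = rng.uniform(0, 10, n)
        y = 1.0 + 0.5 * x + rng.normal(scale=0.3 + 0.1 * x, size=n)
        df = pd.DataFrame({"x": x, "y": y})

        for q in (0.1, 0.5, 0.9):
            fit = smf.quantreg("y ~ x", df).fit(q=q)
            # Slopes differ across quantiles because the noise grows with x.
            print(f"q={q}: slope = {fit.params['x']:.3f}")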

  15. Stochastic hybrid systems for studying biochemical processes.

    PubMed

    Singh, Abhyudai; Hespanha, João P

    2010-11-13

    Many protein and mRNA species occur at low molecular counts within cells, and hence are subject to large stochastic fluctuations in copy numbers over time. Development of computationally tractable frameworks for modelling stochastic fluctuations in population counts is essential to understand how noise at the cellular level affects biological function and phenotype. We show that stochastic hybrid systems (SHSs) provide a convenient framework for modelling the time evolution of population counts of different chemical species involved in a set of biochemical reactions. We illustrate recently developed techniques that allow fast computations of the statistical moments of the population count, without having to run computationally expensive Monte Carlo simulations of the biochemical reactions. Finally, we review different examples from the literature that illustrate the benefits of using SHSs for modelling biochemical processes.
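
    For the simplest reaction sets, the moment computations mentioned above reduce to a small closed ODE system. The sketch below integrates the exact first- and second-moment equations of a birth-death process (production at rate k, degradation at rate g times the count), recovering the Poisson steady state without any Monte Carlo runs; the general SHS machinery of the paper handles much richer models.

        # Closed moment dynamics for a birth-death process: no sampling needed.
        from scipy.integrate import solve_ivp

        k, g = 10.0, 0.5          # production and degradation rates

        def moment_odes(t, m):
            m1, m2 = m            # E[x], E[x^2]
            dm1 = k - g * m1
            dm2 = k + g * m1 + 2 * (k * m1 - g * m2)
            return [dm1, dm2]

        sol = solve_ivp(moment_odes, (0, 20), [0.0, 0.0])
        m1, m2 = sol.y[:, -1]
        print(f"steady mean ~ {m1:.2f} (exact {k/g:.2f}), "
              f"variance ~ {m2 - m1**2:.2f} (Poisson: equals the mean)")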

  16. Making Sense of the Data from Complex Assessments.

    ERIC Educational Resources Information Center

    Mislevy, Robert J.; Steinberg, Linda S.; Breyer, F. Jay; Almond, Russell G.; Johnson, Lynn

    2002-01-01

    Presents a design framework that incorporates integrated structures for modeling knowledge and skills, designing tasks, and extracting and synthesizing evidence. Illustrates these ideas in the context of a project that assesses problem solving in dental hygiene through computer-based simulations. (SLD)

  17. Using Computer Simulations for Promoting Model-based Reasoning. Epistemological and Educational Dimensions

    NASA Astrophysics Data System (ADS)

    Develaki, Maria

    2017-11-01

    Scientific reasoning is particularly pertinent to science education since it is closely related to the content and methodologies of science and contributes to scientific literacy. Much of the research in science education investigates the appropriate framework and teaching methods and tools needed to promote students' ability to reason and evaluate in a scientific way. This paper aims (a) to contribute to an extended understanding of the nature and pedagogical importance of model-based reasoning and (b) to exemplify how using computer simulations can support students' model-based reasoning. We provide first a background for both scientific reasoning and computer simulations, based on the relevant philosophical views and the related educational discussion. This background suggests that the model-based framework provides an epistemologically valid and pedagogically appropriate basis for teaching scientific reasoning and for helping students develop sounder reasoning and decision-taking abilities and explains how using computer simulations can foster these abilities. We then provide some examples illustrating the use of computer simulations to support model-based reasoning and evaluation activities in the classroom. The examples reflect the procedure and criteria for evaluating models in science and demonstrate the educational advantages of their application in classroom reasoning activities.

  18. Coarse-grained computation for particle coagulation and sintering processes by linking Quadrature Method of Moments with Monte-Carlo

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zou, Yu, E-mail: yzou@princeton.edu; Kavousanakis, Michail E., E-mail: mkavousa@princeton.edu; Kevrekidis, Ioannis G., E-mail: yannis@princeton.edu

    2010-07-20

    The study of particle coagulation and sintering processes is important in a variety of research studies ranging from cell fusion and dust motion to aerosol formation applications. These processes are traditionally simulated using either Monte-Carlo methods or integro-differential equations for particle number density functions. In this paper, we present a computational technique for cases where we believe that accurate closed evolution equations for a finite number of moments of the density function exist in principle, but are not explicitly available. The so-called equation-free computational framework is then employed to numerically obtain the solution of these unavailable closed moment equations by exploiting (through intelligent design of computational experiments) the corresponding fine-scale (here, Monte-Carlo) simulation. We illustrate the use of this method by accelerating the computation of evolving moments of uni- and bivariate particle coagulation and sintering through short simulation bursts of a constant-number Monte-Carlo scheme.

  19. Development of mpi_EPIC model for global agroecosystem modeling

    DOE PAGES

    Kang, Shujiang; Wang, Dali; Nichols, Jeff A.; ...

    2014-12-31

    Models that address policy-maker concerns about multi-scale effects of food and bioenergy production systems are computationally demanding. We integrated the message passing interface algorithm into the process-based EPIC model to accelerate computation of ecosystem effects. Simulation performance was further enhanced by applying the Vampir framework. When this enhanced mpi_EPIC model was tested, total execution time for a global 30-year simulation of a switchgrass cropping system was shortened to less than 0.5 hours on a supercomputer. The results illustrate that mpi_EPIC using parallel design can balance simulation workloads and facilitate large-scale, high-resolution analysis of agricultural production systems, management alternatives and environmental effects.

  20. Protocols for Molecular Dynamics Simulations of RNA Nanostructures.

    PubMed

    Kim, Taejin; Kasprzak, Wojciech K; Shapiro, Bruce A

    2017-01-01

    Molecular dynamics (MD) simulations have been used as one of the main research tools to study a wide range of biological systems and bridge the gap between X-ray crystallography or NMR structures and biological mechanism. In the field of RNA nanostructures, MD simulations have been used to fix steric clashes in computationally designed RNA nanostructures, characterize the dynamics, and investigate the interaction between RNA and other biomolecules such as delivery agents and membranes. In this chapter we present examples of computational protocols for molecular dynamics simulations in explicit and implicit solvent using the Amber Molecular Dynamics Package. We also show examples of post-simulation analysis steps and briefly mention selected tools beyond the Amber package. Limitations of the methods, tools, and protocols are also discussed. Most of the examples are illustrated for a small RNA duplex (helix), but the protocols are applicable to any nucleic acid structure, subject only to the computational speed and memory limitations of the hardware available to the user.

  1. AHPCRC (Army High Performance Computing Rsearch Center) Bulletin. Volume 1, Issue 4

    DTIC Science & Technology

    2011-01-01

    Computational and Mathematical Engineering, Stanford University. Molecular Dynamics Models of Antimicrobial ...simulations using low-fidelity Reynolds-averaged models illustrate the limited predictive capabilities of these schemes. The predictions for scalar and...driving force. The AHPCRC group has used their models to predict nonuniform concentration profiles across small channels as a result of variations

  2. Framework and algorithms for illustrative visualizations of time-varying flows on unstructured meshes

    DOE PAGES

    Rattner, Alexander S.; Guillen, Donna Post; Joshi, Alark; ...

    2016-03-17

    Photo- and physically realistic techniques are often insufficient for visualization of fluid flow simulations, especially for 3D and time-varying studies. Substantial research effort has been dedicated to the development of non-photorealistic and illustration-inspired visualization techniques for compact and intuitive presentation of such complex datasets. However, a great deal of work has been reproduced in this field, as many research groups have developed specialized visualization software. Additionally, interoperability between illustrative visualization software is limited due to diverse processing and rendering architectures employed in different studies. In this investigation, a framework for illustrative visualization is proposed, and implemented in MarmotViz, a ParaView plug-in, enabling its use on a variety of computing platforms with various data file formats and mesh geometries. Region-of-interest identification and feature-tracking algorithms incorporated into this tool are described. Implementations of multiple illustrative effect algorithms are also presented to demonstrate the use and flexibility of this framework. Here, by providing an integrated framework for illustrative visualization of CFD data, MarmotViz can serve as a valuable asset for the interpretation of simulations of ever-growing scale.

  3. eScience for molecular-scale simulations and the eMinerals project.

    PubMed

    Salje, E K H; Artacho, E; Austen, K F; Bruin, R P; Calleja, M; Chappell, H F; Chiang, G-T; Dove, M T; Frame, I; Goodwin, A L; Kleese van Dam, K; Marmier, A; Parker, S C; Pruneda, J M; Todorov, I T; Trachenko, K; Tyer, R P; Walker, A M; White, T O H

    2009-03-13

    We review the work carried out within the eMinerals project to develop eScience solutions that facilitate a new generation of molecular-scale simulation work. Technological developments include integration of compute and data systems, development of collaborative frameworks and new researcher-friendly tools for grid job submission, XML data representation, information delivery, metadata harvesting and metadata management. A number of diverse science applications will illustrate how these tools are being used for large parameter-sweep studies, an emerging type of study for which the integration of computing, data and collaboration is essential.

  4. Correction for spatial averaging in laser speckle contrast analysis

    PubMed Central

    Thompson, Oliver; Andrews, Michael; Hirst, Evan

    2011-01-01

    Practical laser speckle contrast analysis systems face a problem of spatial averaging of speckles, due to the pixel size in the cameras used. Existing practice is to use a system factor in speckle contrast analysis to account for spatial averaging. The linearity of the system factor correction has not previously been confirmed. The problem of spatial averaging is illustrated using computer simulation of time-integrated dynamic speckle, and the linearity of the correction confirmed using both computer simulation and experimental results. The valid linear correction allows various useful compromises in the system design. PMID:21483623
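
    The spatial-averaging effect is easy to reproduce numerically. The sketch below synthesizes a fully developed speckle field (unit contrast at a point) by Fourier-transforming a random-phase aperture, then bins pixels to mimic a camera with larger pixels and watches the measured contrast fall. The geometry is idealized and exposure-time integration is ignored, so this is an illustration of the mechanism, not the paper's correction procedure.

        # Spatial averaging lowers measured speckle contrast.
        import numpy as np

        rng = np.random.default_rng(3)
        n = 512
        r = 32                                 # aperture radius sets speckle size

        # Random-phase aperture -> correlated speckle field via FFT.
        pupil = np.zeros((n, n), dtype=complex)
        yy, xx = np.ogrid[-n//2:n//2, -n//2:n//2]
        mask = xx**2 + yy**2 < r**2
        pupil[mask] = np.exp(2j * np.pi * rng.random(mask.sum()))
        I = np.abs(np.fft.fft2(np.fft.ifftshift(pupil))) ** 2

        def contrast(img):
            return img.std() / img.mean()

        print("point-sampled contrast:", round(contrast(I), 3))   # ~1.0
        for b in (2, 4, 8):                    # simulate larger camera pixels
            binned = I.reshape(n//b, b, n//b, b).sum(axis=(1, 3))
            print(f"{b}x{b} pixel binning:", round(contrast(binned), 3))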

  5. Simulating effectiveness of helicopter evasive manoeuvres to RPG attack

    NASA Astrophysics Data System (ADS)

    Anderson, D.; Thomson, D. G.

    2010-04-01

    The survivability of helicopters under attack by ground troops using rocket propelled grenades has been amply illustrated over the past decade. Given that an RPG is unguided and it is infeasible to cover helicopters in thick armour, existing optical countermeasures are ineffective; the solution is to compute an evasive manoeuvre. In this paper, an RPG/helicopter engagement model is presented. Manoeuvre profiles are defined in the missile approach warning sensor camera image plane using a local maximum acceleration vector. The required control inputs are then computed using inverse simulation techniques. Assessments of platform survivability in several engagement scenarios are presented.

  6. Integration of scheduling and discrete event simulation systems to improve production flow planning

    NASA Astrophysics Data System (ADS)

    Krenczyk, D.; Paprocka, I.; Kempa, W. M.; Grabowik, C.; Kalinowski, K.

    2016-08-01

    The increased availability of data and of computer-aided technologies such as MRP I/II, ERP and MES systems allows producers to be more adaptive to market dynamics and to improve production scheduling. Integration of production scheduling with computer modelling, simulation and visualization systems can be useful in the analysis of production system constraints related to the efficiency of manufacturing systems. An integration methodology based on a semi-automatic model generation method is proposed, eliminating problems associated with model complexity and with the labour-intensive and time-consuming process of simulation model creation. Data mapping and data transformation techniques for the proposed method have been applied. This approach is illustrated through examples of a practical implementation of the proposed method using the KbRS scheduling system and the Enterprise Dynamics simulation system.

  7. Materials by numbers: Computations as tools of discovery

    PubMed Central

    Landman, Uzi

    2005-01-01

    Current issues pertaining to theoretical simulations of materials, with a focus on systems of nanometer-scale dimensions, are discussed. The use of atomistic simulations as high-resolution numerical experiments, enabling and guiding formulation and testing of analytic theoretical descriptions, is demonstrated through studies of the generation and breakup of nanojets, which have led to the derivation of a stochastic hydrodynamic description. Subsequently, I illustrate the use of computations and simulations as tools of discovery, with examples that include the self-organized formation of nanowires, the surprising nanocatalytic activity of small aggregates of gold, a metal that in bulk form is notorious for being chemically inert, and the emergence of rotating electron molecules in two-dimensional quantum dots. I conclude with a brief discussion of some key challenges in nanomaterials simulations. PMID:15870210

  8. Accelerating functional verification of an integrated circuit

    DOEpatents

    Deindl, Michael; Ruedinger, Jeffrey Joseph; Zoellin, Christian G.

    2015-10-27

    Illustrative embodiments include a method, system, and computer program product for accelerating functional verification in simulation testing of an integrated circuit (IC). Using a processor and a memory, a serial operation is replaced with a direct register access operation, wherein the serial operation is configured to perform bit shifting operation using a register in a simulation of the IC. The serial operation is blocked from manipulating the register in the simulation of the IC. Using the register in the simulation of the IC, the direct register access operation is performed in place of the serial operation.

  9. Computation of incompressible viscous flows through artificial heart devices with moving boundaries

    NASA Technical Reports Server (NTRS)

    Kiris, Cetin; Rogers, Stuart; Kwak, Dochan; Chang, I-Dee

    1991-01-01

    The extension of computational fluid dynamics techniques to artificial heart flow simulations is illustrated. Unsteady incompressible Navier-Stokes equations written in 3-D generalized curvilinear coordinates are solved iteratively at each physical time step until the incompressibility condition is satisfied. The solution method is based on the pseudocompressibility approach and uses an implicit upwind differencing scheme together with the Gauss-Seidel line relaxation method. The efficiency and robustness of the time-accurate formulation of the algorithm are tested by computing the flow through model geometries. A channel flow with a moving indentation is computed and validated against experimental measurements and other numerical solutions. In order to handle the geometric complexity and the moving boundary problems, a zonal method and an overlapping grid embedding scheme are used, respectively. Steady-state solutions for the flow through a tilting disk heart valve were compared against experimental measurements, and good agreement was obtained. The flow computation during valve opening and closing is carried out to illustrate the moving boundary capability.

  10. Improvements to the FATOLA computer program including nosewheel steering: Supplemental instruction manual

    NASA Technical Reports Server (NTRS)

    Carden, H. D.; Mcgehee, J. R.

    1978-01-01

    Modifications to a multidegree of freedom flexible aircraft take-off and landing analysis (FATOLA) computer program, which improved its simulation capabilities, are discussed, and supplemental instructions for use of the program are included. Sample analytical results which illustrate the capabilities of an added nosewheel steering option indicate consistent behavior of the airplane tracking, attitude, motions, and loads for the landing cases and steering situations which were investigated.

  11. Extraordinary Oscillations of an Ordinary Forced Pendulum

    ERIC Educational Resources Information Center

    Butikov, Eugene I.

    2008-01-01

    Several well-known and newly discovered counterintuitive regular and chaotic modes of the sinusoidally driven rigid planar pendulum are discussed and illustrated by computer simulations. The software supporting the investigation offers many interesting predefined examples that demonstrate various peculiarities of this famous physical model.…

  12. Simulation of multistage turbine flows

    NASA Technical Reports Server (NTRS)

    Adamczyk, John J.; Mulac, Richard A.

    1987-01-01

    A flow model has been developed for analyzing multistage turbomachinery flows. This model, referred to as the average passage flow model, describes the time-averaged flow field within a typical passage of a blade row embedded within a multistage configuration. Computer resource requirements, supporting empirical modeling, formulation, code development, and multitasking and storage are discussed. Illustrations from simulations of the space shuttle main engine (SSME) fuel turbine performed to date are given.

  13. The discovery of the causes of leprosy: A computational analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Corruble, V.; Ganascia, J.G.

    1996-12-31

    The role played by inductive inference has been studied extensively in the field of Scientific Discovery. The work presented here tackles the problem of induction in medical research. The discovery of the causes of leprosy is analyzed and simulated using computational means. An inductive algorithm is proposed, which is successful in simulating some essential steps in the progress of the understanding of the disease. It also allows us to simulate the false reasoning of previous centuries through the introduction of some medical a priori notions inherited from archaic medicine. Corroborating previous research, this problem illustrates the importance of the social and cultural environment on the way inductive inference is performed in medicine.

  14. System reliability of randomly vibrating structures: Computational modeling and laboratory testing

    NASA Astrophysics Data System (ADS)

    Sundar, V. S.; Ammanagi, S.; Manohar, C. S.

    2015-09-01

    The problem of determining the system reliability of randomly vibrating structures arises in many application areas of engineering. We discuss in this paper approaches based on Monte Carlo simulations and laboratory testing to tackle problems of time-variant system reliability estimation. The strategy we adopt is based on the application of Girsanov's transformation to the governing stochastic differential equations, which enables estimation of the probability of failure with a significantly smaller number of samples than is needed in a direct simulation study. Notably, we show that the ideas from Girsanov's transformation based Monte Carlo simulations can be extended to laboratory testing to assess the system reliability of engineering structures with a reduced number of samples, and hence with reduced testing times. Illustrative examples include computational studies on a 10-degree-of-freedom nonlinear system model and laboratory/computational investigations of the road load response of an automotive system tested on a four-post test rig.
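
    The variance-reduction idea behind the Girsanov approach can be shown with a static toy analogue: shift the sampling density toward the failure region and correct with the likelihood ratio. The sketch below estimates a small Gaussian tail probability; in the paper, the same change of measure is applied to the driving noise of stochastic differential equations.

        # Importance sampling for a rare failure event P(X > a), X ~ N(0, 1).
        import numpy as np

        rng = np.random.default_rng(5)
        a, N = 4.0, 10_000                    # failure threshold, sample budget

        # Direct Monte Carlo: almost no samples land in the failure region.
        x = rng.normal(size=N)
        p_direct = np.mean(x > a)

        # Importance sampling: draw from N(a, 1), reweight by the density
        # ratio phi(y) / phi(y - a) = exp(-a*y + a^2/2).
        y = rng.normal(loc=a, size=N)
        weights = np.exp(-a * y + a**2 / 2)
        p_is = np.mean((y > a) * weights)

        print(f"direct: {p_direct:.2e}, importance-sampled: {p_is:.2e} "
              f"(exact ~ 3.17e-05)")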

  15. A space-efficient quantum computer simulator suitable for high-speed FPGA implementation

    NASA Astrophysics Data System (ADS)

    Frank, Michael P.; Oniciuc, Liviu; Meyer-Baese, Uwe H.; Chiorescu, Irinel

    2009-05-01

    Conventional vector-based simulators for quantum computers are quite limited in the size of the quantum circuits they can handle, due to the worst-case exponential growth of even sparse representations of the full quantum state vector as a function of the number of quantum operations applied. However, this exponential-space requirement can be avoided by using general space-time tradeoffs long known to complexity theorists, which can be appropriately optimized for this particular problem in a way that also illustrates some interesting reformulations of quantum mechanics. In this paper, we describe the design and empirical space/time complexity measurements of a working software prototype of a quantum computer simulator that avoids excessive space requirements. Due to its space-efficiency, this design is well-suited to embedding in single-chip environments, permitting especially fast execution that avoids access latencies to main memory. We plan to prototype our design on a standard FPGA development board.

  16. Simulating realistic predator signatures in quantitative fatty acid signature analysis

    USGS Publications Warehouse

    Bromaghin, Jeffrey F.

    2015-01-01

    Diet estimation is an important field within quantitative ecology, providing critical insights into many aspects of ecology and community dynamics. Quantitative fatty acid signature analysis (QFASA) is a prominent method of diet estimation, particularly for marine mammal and bird species. Investigators using QFASA commonly use computer simulation to evaluate statistical characteristics of diet estimators for the populations they study. Similar computer simulations have been used to explore and compare the performance of different variations of the original QFASA diet estimator. In both cases, computer simulations involve bootstrap sampling prey signature data to construct pseudo-predator signatures with known properties. However, bootstrap sample sizes have been selected arbitrarily and pseudo-predator signatures therefore may not have realistic properties. I develop an algorithm to objectively establish bootstrap sample sizes that generates pseudo-predator signatures with realistic properties, thereby enhancing the utility of computer simulation for assessing QFASA estimator performance. The algorithm also appears to be computationally efficient, resulting in bootstrap sample sizes that are smaller than those commonly used. I illustrate the algorithm with an example using data from Chukchi Sea polar bears (Ursus maritimus) and their marine mammal prey. The concepts underlying the approach may have value in other areas of quantitative ecology in which bootstrap samples are post-processed prior to their use.

  17. Composite Load Spectra for Select Space Propulsion Structural Components

    NASA Technical Reports Server (NTRS)

    Ho, Hing W.; Newell, James F.

    1994-01-01

    Generic load models are described with multiple levels of progressive sophistication to simulate the composite (combined) load spectra (CLS) that are induced in space propulsion system components representative of Space Shuttle Main Engines (SSME), such as transfer ducts, turbine blades and liquid oxygen (LOX) posts. These generic (coupled) models combine deterministic models for dynamic, acoustic, high-pressure, high-rotational-speed, and similar loads using statistically varying coefficients. These coefficients are then determined using advanced probabilistic simulation methods, with and without strategically selected experimental data. The entire simulation process is implemented in a CLS computer code. Applications of the computer code to various components, in conjunction with the PSAM (Probabilistic Structural Analysis Method), to perform probabilistic load evaluations and life predictions are also described to illustrate the effectiveness of the coupled model approach.

  18. An Interactive Simulation Program for Exploring Computational Models of Auto-Associative Memory.

    PubMed

    Fink, Christian G

    2017-01-01

    While neuroscience students typically learn about activity-dependent plasticity early in their education, they often struggle to conceptually connect modification at the synaptic scale with network-level neuronal dynamics, not to mention with their own everyday experience of recalling a memory. We have developed an interactive simulation program (based on the Hopfield model of auto-associative memory) that enables the user to visualize the connections generated by any pattern of neural activity, as well as to simulate the network dynamics resulting from such connectivity. An accompanying set of student exercises introduces the concepts of pattern completion, pattern separation, and sparse versus distributed neural representations. Results from a conceptual assessment administered before and after students worked through these exercises indicate that the simulation program is a useful pedagogical tool for illustrating fundamental concepts of computational models of memory.
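
    The underlying model is compact enough to sketch in full. The code below implements a standard Hopfield auto-associative memory with Hebbian weights and sign updates, and demonstrates pattern completion from a corrupted cue; it is the textbook model, not the authors' actual teaching software.

        # Hopfield auto-associative memory: store patterns, recall from a cue.
        import numpy as np

        rng = np.random.default_rng(11)
        n, n_patterns = 100, 5
        patterns = rng.choice([-1, 1], size=(n_patterns, n))

        # Hebbian learning: connections from co-activity, no self-weights.
        W = patterns.T @ patterns / n
        np.fill_diagonal(W, 0)

        def recall(state, steps=20):
            """Synchronous updates until the network settles."""
            for _ in range(steps):
                new = np.sign(W @ state)
                new[new == 0] = 1
                if np.array_equal(new, state):
                    break
                state = new
            return state

        # Cue with a corrupted version of pattern 0 (20% of bits flipped).
        cue = patterns[0].copy()
        cue[rng.choice(n, size=20, replace=False)] *= -1
        print("overlap after recall:", recall(cue) @ patterns[0] / n)  # ~1.0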

  19. Cyber Technology for Materials and Structures in Aeronautics and Aerospace

    NASA Technical Reports Server (NTRS)

    Pipes, R. Byron

    1999-01-01

    This report summarizes efforts undertaken during the 1998-99 program year and includes a survey of the field of computational mechanics, a discussion of biomimetics and intelligent simulation, a survey of the field of biomimetics, and an illustration of biomimetics and computational mechanics through the example of the high-performance composite tensile structure. In addition, the preliminary results of a state-of-the-art survey of composite materials technology are presented.

  20. Modeling the data management system of Space Station Freedom with DEPEND

    NASA Technical Reports Server (NTRS)

    Olson, Daniel P.; Iyer, Ravishankar K.; Boyd, Mark A.

    1993-01-01

    Some of the features and capabilities of the DEPEND simulation-based modeling tool are described. A study of a 1553B local bus subsystem of the Space Station Freedom Data Management System (SSF DMS) is used to illustrate some types of system behavior that can be important to reliability and performance evaluations of this type of spacecraft. A DEPEND model of the subsystem is used to illustrate how these types of system behavior can be modeled, and shows what kinds of engineering and design questions can be answered through the use of these modeling techniques. DEPEND's process-based simulation environment is shown to provide a flexible method for modeling complex interactions between hardware and software elements of a fault-tolerant computing system.

  1. Fast simulation techniques for switching converters

    NASA Technical Reports Server (NTRS)

    King, Roger J.

    1987-01-01

    Techniques for simulating a switching converter are examined. The state equations for the equivalent circuits, which represent the switching converter, are presented and explained. The uses of the Newton-Raphson iteration, low ripple approximation, half-cycle symmetry, and discrete time equations to compute the interval durations are described. An example is presented in which these methods are illustrated by applying them to a parallel-loaded resonant inverter with three equivalent circuits for its continuous mode of operation.
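
    The equivalent-circuit approach lends itself to a short sketch: within each switch state the converter is a linear system that can be advanced exactly with a matrix exponential. The example below marches a hypothetical buck converter (not the paper's parallel-loaded resonant inverter) through fixed-duty switching periods; the paper additionally locates the switching instants with Newton-Raphson iteration, which is omitted here.

        # Piecewise-linear converter simulation via exact matrix-exponential steps.
        import numpy as np
        from scipy.linalg import expm

        L, C, R, Vin = 100e-6, 47e-6, 5.0, 12.0
        # State x = [inductor current, capacitor voltage].
        A = np.array([[0.0, -1.0 / L], [1.0 / C, -1.0 / (R * C)]])
        b_on = np.array([Vin / L, 0.0])     # switch closed
        b_off = np.array([0.0, 0.0])        # switch open, freewheeling path

        def advance(x, b, dt):
            """Exact step of x' = A x + b over dt via an augmented exponential."""
            M = np.zeros((3, 3))
            M[:2, :2], M[:2, 2] = A, b
            Phi = expm(M * dt)
            return Phi[:2, :2] @ x + Phi[:2, 2]

        x, T, duty = np.zeros(2), 10e-6, 0.5
        for _ in range(2000):               # march periods to steady state
            x = advance(x, b_on, duty * T)
            x = advance(x, b_off, (1 - duty) * T)
        print("steady output voltage ~", round(x[1], 2), "V")  # ~ duty * Vin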

  2. Computer simulations of transport through membranes: passive diffusion, pores, channels and transporters.

    PubMed

    Tieleman, D Peter

    2006-10-01

    A key function of biological membranes is to provide mechanisms for the controlled transport of ions, nutrients, metabolites, peptides and proteins between a cell and its environment. We are using computer simulations to study several processes involved in transport. In model membranes, the distribution of small molecules can be accurately calculated; we are making progress towards understanding the factors that determine the partitioning behaviour in the inhomogeneous lipid environment, with implications for drug distribution, membrane protein folding and the energetics of voltage gating. Lipid bilayers can be simulated at a scale that is sufficiently large to study significant defects, such as those caused by electroporation. Computer simulations of complex membrane proteins, such as potassium channels and ATP-binding cassette (ABC) transporters, can give detailed information about the atomistic dynamics that form the basis of ion transport, selectivity, conformational change and the molecular mechanism of ATP-driven transport. This is illustrated in the present review with recent simulation studies of the voltage-gated potassium channel KvAP and the ABC transporter BtuCD.

  3. Local rules simulation of the kinetics of virus capsid self-assembly.

    PubMed

    Schwartz, R; Shor, P W; Prevelige, P E; Berger, B

    1998-12-01

    A computer model is described for studying the kinetics of the self-assembly of icosahedral viral capsids. Solution of this problem is crucial to an understanding of the viral life cycle, which currently cannot be adequately addressed through laboratory techniques. The abstract simulation model employed to address this is based on the local rules theory (Proc. Natl. Acad. Sci. USA, 91:7732-7736). It is shown that the principle of local rules, generalized with a model of kinetics and other extensions, can be used to simulate complicated problems in self-assembly. This approach allows for a computationally tractable molecular dynamics-like simulation of coat protein interactions while retaining many relevant features of capsid self-assembly. Three simple simulation experiments are presented to illustrate the use of this model. These show the dependence of growth and malformation rates on the energetics of binding interactions, the tolerance of errors in binding positions, and the effect of the concentration of subunits in the examples. These experiments demonstrate a tradeoff within the model between growth rate and fidelity of assembly for the three parameters. A detailed discussion of the computational model is also provided.

  4. QM/MM free energy simulations: recent progress and challenges

    PubMed Central

    Lu, Xiya; Fang, Dong; Ito, Shingo; Okamoto, Yuko; Ovchinnikov, Victor

    2016-01-01

    Due to the higher computational cost relative to pure molecular mechanical (MM) simulations, hybrid quantum mechanical/molecular mechanical (QM/MM) free energy simulations particularly require a careful consideration of balancing computational cost and accuracy. Here we review several recent developments in free energy methods most relevant to QM/MM simulations and discuss several topics motivated by these developments, using simple but informative examples that involve processes in water. For chemical reactions, we highlight the value of invoking enhanced sampling techniques (e.g., replica-exchange) in umbrella sampling calculations and the value of including collective environmental variables (e.g., hydration level) in metadynamics simulations; we also illustrate the sensitivity of string calculations, especially free energy along the path, to various parameters in the computation. Alchemical free energy simulations with a specific thermodynamic cycle are used to probe the effect of including the first solvation shell into the QM region when computing solvation free energies. For cases where high-level QM/MM potential functions are needed, we analyze two different approaches: the QM/MM-MFEP method of Yang and co-workers and perturbative correction to low-level QM/MM free energy results. For the examples analyzed here, both approaches seem productive, although care needs to be exercised when analyzing the perturbative corrections. PMID:27563170

  5. Simulation Speed Analysis and Improvements of Modelica Models for Building Energy Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jorissen, Filip; Wetter, Michael; Helsen, Lieve

    This paper presents an approach for speeding up Modelica models. Insight is provided into how Modelica models are solved and what determines the tool’s computational speed. Aspects such as algebraic loops, code efficiency and integrator choice are discussed. This is illustrated using simple building simulation examples and Dymola. The generality of the work is in some cases verified using OpenModelica. Using this approach, a medium sized office building including building envelope, heating ventilation and air conditioning (HVAC) systems and control strategy can be simulated at a speed five hundred times faster than real time.

  6. Efficient evaluation of wireless real-time control networks.

    PubMed

    Horvath, Peter; Yampolskiy, Mark; Koutsoukos, Xenofon

    2015-02-11

    In this paper, we present a system simulation framework for the design and performance evaluation of complex wireless cyber-physical systems. We describe the simulator architecture and the specific developments that are required to simulate cyber-physical systems relying on multi-channel, multihop mesh networks. We introduce realistic and efficient physical layer models and a system simulation methodology, which provides statistically significant performance evaluation results with low computational complexity. The capabilities of the proposed framework are illustrated using the example of WirelessHART, a centralized, real-time, multi-hop mesh network designed for industrial control and monitoring applications.

  7. Simulating and Synthesizing Substructures Using Neural Network and Genetic Algorithms

    NASA Technical Reports Server (NTRS)

    Liu, Youhua; Kapania, Rakesh K.; VanLandingham, Hugh F.

    1997-01-01

    The feasibility of simulating and synthesizing substructures by computational neural network models is illustrated by investigating a statically indeterminate beam, using both 1-D and 2-D plane-stress models. The beam can be decomposed into two cantilevers with free-end loads. By training neural networks to simulate the cantilever responses to different loads, the original beam problem can be solved as a match-up between two subsystems under compatible interface conditions. Genetic algorithms are successfully used to solve the match-up problem. The simulated results are found to be in good agreement with the analytical or FEM solutions.
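
    The match-up idea can be sketched in a few lines of Python; the closed-form cantilever deflections below stand in for the paper's trained neural-network surrogates, and the rigidity, lengths, load, and GA settings are invented:

        import random

        E_I = 2.0e6                      # flexural rigidity (assumed units)
        L1, L2, P = 1.0, 1.5, 1000.0     # cantilever lengths, total end load

        def tip_deflection(load, length):
            # Stand-in for a trained network mapping load -> response.
            return load * length**3 / (3.0 * E_I)

        def mismatch(R):
            """Compatibility residual when interface reaction R is assumed."""
            return abs(tip_deflection(R, L1) - tip_deflection(P - R, L2))

        # Minimal real-coded genetic algorithm for the interface match-up.
        pop = [random.uniform(0.0, P) for _ in range(40)]
        for _ in range(200):
            pop.sort(key=mismatch)
            parents = pop[:10]           # elitist selection
            children = [min(max(random.choice(parents)
                                + random.gauss(0.0, 0.05 * P), 0.0), P)
                        for _ in range(30)]
            pop = parents + children

        best = min(pop, key=mismatch)
        print(f"interface reaction = {best:.1f} "
              f"(analytical: {P * L2**3 / (L1**3 + L2**3):.1f})")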

  8. The Application of Artificial Intelligence Principles to Teaching and Training

    ERIC Educational Resources Information Center

    Shaw, Keith

    2008-01-01

    This paper compares and contrasts the use of AI principles in industrial training with more normal computer-based training (CBT) approaches. A number of applications of CBT are illustrated (for example simulations, tutorial presentations, fault diagnosis, management games, industrial relations exercises) and compared with an alternative approach…

  9. Computational effects of inlet representation on powered hypersonic, airbreathing models

    NASA Technical Reports Server (NTRS)

    Huebner, Lawrence D.; Tatum, Kenneth E.

    1993-01-01

    Computational results are presented to illustrate the powered aftbody effects of representing the scramjet inlet on a generic hypersonic vehicle with a fairing, to divert the external flow, as compared to an operating flow-through scramjet inlet. This study is pertinent to the ground testing of hypersonic, airbreathing models employing scramjet exhaust flow simulation in typical small-scale hypersonic wind tunnels. The comparison of aftbody effects due to inlet representation is well-suited for computational study, since small model size typically precludes the ability to ingest flow into the inlet and perform exhaust simulation at the same time. Two-dimensional analysis indicates that, although flowfield differences exist for the two types of inlet representations, little, if any, difference in surface aftbody characteristics is caused by fairing over the inlet.

  10. fissioncore: A desktop-computer simulation of a fission-bomb core

    NASA Astrophysics Data System (ADS)

    Cameron Reed, B.; Rohe, Klaus

    2014-10-01

    A computer program, fissioncore, has been developed to deterministically simulate the growth of the number of neutrons within an exploding fission-bomb core. The program allows users to explore the dependence of criticality conditions on parameters such as nuclear cross-sections, core radius, number of secondary neutrons liberated per fission, and the distance between nuclei. Simulations clearly illustrate the existence of a critical radius given a particular set of parameter values, as well as how the exponential growth of the neutron population (the condition that characterizes criticality) depends on these parameters. No understanding of neutron diffusion theory is necessary to appreciate the logic of the program or the results. The code is freely available in FORTRAN, C, and Java and is configured so that modifications to accommodate more refined physical conditions are possible.
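
    The published code is in FORTRAN, C, and Java; the flavor of the calculation can be conveyed by an even cruder one-group toy in Python. The secondary yield, mean free path, and non-escape estimate below are rough illustrative assumptions, not fissioncore's algorithm:

        import math

        nu, lam = 2.6, 0.04   # secondaries per fission; mean free path (m)

        def k_eff(radius):
            # Crude probability that a neutron interacts before escaping.
            p_nonescape = 1.0 - math.exp(-radius / lam)
            return nu * p_nonescape

        r_crit = -lam * math.log(1.0 - 1.0 / nu)   # radius where k_eff == 1
        print(f"toy critical radius: {100 * r_crit:.1f} cm")

        for radius in (0.8 * r_crit, r_crit, 1.2 * r_crit):
            n, k = 1.0, k_eff(radius)
            pops = []
            for _ in range(6):                     # six neutron generations
                pops.append(round(n, 3))
                n *= k
            print(f"R = {100 * radius:4.1f} cm, k_eff = {k:.3f}, N = {pops}")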

  11. A Framework for the Optimization of Discrete-Event Simulation Models

    NASA Technical Reports Server (NTRS)

    Joshi, B. D.; Unal, R.; White, N. H.; Morris, W. D.

    1996-01-01

    With the growing use of computer modeling and simulation in all aspects of engineering, the scope of traditional optimization has to be extended to include simulation models. Some unique aspects have to be addressed when optimizing via stochastic simulation models: the optimization procedure has to explicitly account for the randomness inherent in the stochastic measures predicted by the model. This paper outlines a general-purpose framework for the optimization of terminating discrete-event simulation models. The methodology combines a chance-constraint approach for problem formulation with standard statistical estimation and analysis techniques. The applicability of the optimization framework is illustrated by minimizing the operation and support resources of a launch vehicle through a simulation model.
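
    A minimal sketch of the chance-constraint formulation follows; the toy queueing "simulation", threshold, and confidence procedure are invented stand-ins for the paper's launch-vehicle model:

        import random

        def mission_time(resources, rng):
            """Invented terminating simulation: makespan of 100 exponential
            tasks processed on `resources` parallel servers."""
            finish = [0.0] * resources
            for _ in range(100):
                i = finish.index(min(finish))
                finish[i] += rng.expovariate(1.0)
            return max(finish)

        def satisfies(resources, t_max=12.0, p_req=0.95, reps=400):
            """Chance constraint P(time <= t_max) >= p_req, checked against
            the lower 95% confidence bound on the estimated probability."""
            rng = random.Random(resources)   # reproducible replications
            hits = sum(mission_time(resources, rng) <= t_max
                       for _ in range(reps))
            p_hat = hits / reps
            half_width = 1.96 * (p_hat * (1.0 - p_hat) / reps) ** 0.5
            return p_hat - half_width >= p_req

        resources = 1
        while not satisfies(resources):      # minimize subject to constraint
            resources += 1
        print("minimum resources meeting the chance constraint:", resources)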

  12. [Design of a miniaturized blood temperature-varying system based on computer distributed control].

    PubMed

    Xu, Qiang; Zhou, Zhaoying; Peng, Jiegang; Zhu, Junhua

    2007-10-01

    Varying blood temperature has been widely applied in clinical practice, for example in extracorporeal circulation for whole-body perfusion hyperthermia (WBPH), body rewarming, and blood temperature control in organ transplantation. This paper reports a novel distributed computer control (DCS)-based blood temperature-varying system that includes a therapy management function and whose hardware and software can be extended easily. Simulation results illustrate that the system provides precise temperature control with good performance under various operating conditions.

  13. Dynamics of Electronically Excited Species in Gaseous and Condensed Phase

    DTIC Science & Technology

    1989-12-01

    Topics include heatbath models of condensed-phase helium, (3) the development of models of condensed-phase hydrogen, and (4) the development of simulation procedures for solutions. The report's contents cover modelling and computer experiments, Monte Carlo simulations of helium bubble states, and heatbath models for helium bubble states; illustrations include He-He* potential energy curves and couplings for a two-state model, the cross section for He(1P) quenching to He(3S), and opacity.

  14. Modeling And Simulation Of Bar Code Scanners Using Computer Aided Design Software

    NASA Astrophysics Data System (ADS)

    Hellekson, Ron; Campbell, Scott

    1988-06-01

    Many optical systems have demanding requirements to package the system in a small three-dimensional space. The use of computer graphics tools can be a tremendous aid to the designer in analyzing the optical problems created by smaller and less costly systems. The Spectra Physics grocery store bar code scanner employs an especially complex three-dimensional scan pattern to read bar code labels. By using a specially written program which interfaces with a computer-aided design system, we have simulated many of the functions of this complex optical system. In this paper we illustrate how a recent version of the scanner was designed. We discuss the use of computer graphics in the design process, including interactive tweaking of the scan pattern, analysis of collected light, analysis of scan pattern density, and analysis of the manufacturing tolerances used to build the scanner.

  15. The Living Heart Project: A robust and integrative simulator for human heart function.

    PubMed

    Baillargeon, Brian; Rebelo, Nuno; Fox, David D; Taylor, Robert L; Kuhl, Ellen

    2014-11-01

    The heart is not only our most vital, but also our most complex organ: precisely controlled by the interplay of electrical and mechanical fields, it consists of four chambers and four valves, which act in concert to regulate its filling, ejection, and overall pump function. While numerous computational models exist to study either the electrical or the mechanical response of its individual chambers, the integrative electro-mechanical response of the whole heart remains poorly understood. Here we present a proof-of-concept simulator for a four-chamber human heart model created from computed tomography and magnetic resonance images. We illustrate the governing equations of excitation-contraction coupling and discretize them using a single, unified finite element environment. To illustrate the basic features of our model, we visualize the electrical potential and the mechanical deformation across the human heart throughout its cardiac cycle. To compare our simulation against common metrics of cardiac function, we extract the pressure-volume relationship and show that it agrees well with clinical observations. Our prototype model allows us to explore and understand the key features, physics, and technologies needed to create an integrative, predictive model of the living human heart. Ultimately, our simulator will open opportunities to probe landscapes of clinical parameters, and guide device design and treatment planning in cardiac diseases such as stenosis, regurgitation, or prolapse of the aortic, pulmonary, tricuspid, or mitral valve.

  16. Hierarchical Simulation of Hot Composite Structures

    NASA Technical Reports Server (NTRS)

    Chamis, C. C.; Murthy, P. L. N.; Singhal, S. N.

    1993-01-01

    Computational procedures are described to simulate the thermal and mechanical behavior of high-temperature metal matrix composites (HT-MMCs) in the following three broad areas: (1) behavior of HT-MMCs from micromechanics to laminate via the Metal Matrix Composite Analyzer (METCAN); (2) tailoring of HT-MMC behavior for optimum specific performance via Metal Matrix Laminate Tailoring (MMLT); and (3) HT-MMC structural response for hot structural components via the High Temperature Composite Analyzer (HITCAN). Representative results from each area are presented to illustrate the effectiveness of the computational simulation procedures. The sample case results show that METCAN can be used to simulate material behavior such as strength, stress-strain response, and cyclic life in HT-MMCs; MMLT can be used to tailor the fabrication process for optimum performance, such as the in-service load-carrying capacity of HT-MMCs; and HITCAN can be used to evaluate static fracture and fatigue life of hot pressurized metal matrix composite rings.

  17. Neurophysiological model of the normal and abnormal human pupil

    NASA Technical Reports Server (NTRS)

    Krenz, W.; Robin, M.; Barez, S.; Stark, L.

    1985-01-01

    Anatomical, experimental, and computer simulation studies were used to determine the structure of the neurophysiological model of the pupil size control system. The computer simulation of this model demonstrates the role played by each of the elements in the neurological pathways influencing the size of the pupil. Simulations of the effect of drugs and common abnormalities in the system help to illustrate the workings of the pathways and processes involved. The simulation program allows the user to select pupil condition (normal or an abnormality), specific site along the neurological pathway (retina, hypothalamus, etc.), drug class input (barbiturate, narcotic, etc.), stimulus/response mode, display mode, stimulus type and input waveform, stimulus or background intensity and frequency, the input and output conditions, and the response at the neuroanatomical site. The model can be used as a teaching aid or as a tool for testing hypotheses regarding the system.

  18. Gravity Behaves Like That?

    NASA Astrophysics Data System (ADS)

    Pazmino, John

    2007-02-01

    Many concepts of chaotic action in astrodynamics can be appreciated through simulations with home computers and software, and numerous astrodynamical cases are illustrated here. Although chaos theory is now applied to spaceflight trajectories, this presentation employs only inert bodies with no onboard impulse, e.g., from rockets or outgassing. Other nongravitational effects, such as atmospheric drag, solar pressure, and radiation, are also ignored. The ability to simulate gravity behavior, even if not completely rigorously, on small mass-market computers allows home astronomers, scientists outside orbital mechanics, and students in middle and high school to gain a fuller understanding of the new approach to astrodynamics. The simulations can also help a lay audience visualize gravity behavior during press conferences, briefings, and public lectures. No review, evaluation, or critique of the programs shown in this presentation is intended. The results from these simulations are not valid for, and must not be used for, making Earth-collision predictions.

  19. PyFly: A fast, portable aerodynamics simulator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garcia, Daniel; Ghommem, M.; Collier, Nathaniel O.

    Here, we present a fast, user-friendly implementation of a potential flow solver based on the unsteady vortex lattice method (UVLM), namely PyFly. UVLM computes the aerodynamic loads applied on lifting surfaces while capturing unsteady effects such as the added-mass forces, the growth of bound circulation, and the wake, while assuming that the flow separation location is known a priori. The method is based on discretizing the body surface into a lattice of vortex rings and relies on the Biot–Savart law to construct the velocity field at every point in the simulated domain. We introduce the pointwise approximation approach to simulate the interactions of the far-field vortices, to overcome the computational burden associated with the classical implementation of UVLM. The computational framework uses the Python programming language to provide an easy-to-handle user interface, while the computational kernels are written in Fortran. The mixed-language approach enables high performance regarding solution time and great flexibility regarding ease of code adaptation to different system configurations and applications. The computational tool predicts the unsteady aerodynamic behavior of multiple moving bodies (e.g., flapping wings, rotating blades, suspension bridges) subject to incoming air. The aerodynamic simulator can also deal with enclosure effects, multi-body interactions, and B-spline representation of body shapes. Finally, we simulate different aerodynamic problems to illustrate the usefulness and effectiveness of PyFly.
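
    The Biot–Savart building block at the core of any UVLM code can be sketched as follows; this is the generic textbook form (cf. Katz and Plotkin), not PyFly's actual Fortran kernel, and the cutoff value is an arbitrary choice:

        import numpy as np

        def segment_velocity(p, a, b, gamma, eps=1e-9):
            """Velocity induced at point p by a straight vortex filament from
            a to b carrying circulation gamma."""
            r1, r2 = p - a, p - b
            cross = np.cross(r1, r2)
            cross_sq = float(np.dot(cross, cross))
            if cross_sq < eps:           # field point on the filament axis
                return np.zeros(3)
            r0 = b - a
            coeff = gamma / (4.0 * np.pi * cross_sq)
            return coeff * cross * float(np.dot(
                r0, r1 / np.linalg.norm(r1) - r2 / np.linalg.norm(r2)))

        def ring_velocity(p, corners, gamma):
            """Sum the contributions of the four edges of one vortex ring."""
            return sum(segment_velocity(p, corners[i], corners[(i + 1) % 4],
                                        gamma) for i in range(4))

        square = [np.array(c, float) for c in
                  [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]]
        print(ring_velocity(np.array([0.5, 0.5, 1.0]), square, gamma=1.0))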

  1. The Integrated Plasma Simulator: A Flexible Python Framework for Coupled Multiphysics Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Foley, Samantha S; Elwasif, Wael R; Bernholdt, David E

    2011-11-01

    High-fidelity coupled multiphysics simulations are an increasingly important aspect of computational science. In many domains, however, there has been very limited experience with simulations of this sort; therefore, research in coupled multiphysics often requires computational frameworks with significant flexibility to respond to the changing directions of the physics and mathematics. This paper presents the Integrated Plasma Simulator (IPS), a framework designed for loosely coupled simulations of fusion plasmas. The IPS provides users with a simple component architecture into which a wide range of existing plasma physics codes can be inserted as components. Simulations can take advantage of multiple levels of parallelism supported in the IPS, and can be controlled by a high-level "driver" component, or by other coordination mechanisms, such as an asynchronous event service. We describe the requirements and design of the framework, and how they were implemented in the Python language. We also illustrate the flexibility of the framework by providing examples of different types of simulations that utilize various features of the IPS.
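
    The component/driver pattern described here can be caricatured in a few lines of Python; this is a toy illustration of the architecture, not the IPS API:

        class Component:
            """Common interface wrapping an existing physics code."""
            def step(self, t):
                raise NotImplementedError

        class EquilibriumSolver(Component):
            def step(self, t):
                print(f"[t={t:.1f}] update plasma equilibrium")

        class RFHeating(Component):
            def step(self, t):
                print(f"[t={t:.1f}] deposit RF power")

        class Driver:
            """High-level driver advancing all components through coupled
            steps; a real framework could dispatch these concurrently or
            coordinate them through an event service."""
            def __init__(self, components):
                self.components = components

            def run(self, t_end, dt):
                t = 0.0
                while t < t_end:
                    for component in self.components:
                        component.step(t)
                    t += dt

        Driver([EquilibriumSolver(), RFHeating()]).run(t_end=2.0, dt=1.0)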

  2. Explicit polarization (X-Pol) potential using ab initio molecular orbital theory and density functional theory.

    PubMed

    Song, Lingchun; Han, Jaebeom; Lin, Yen-lin; Xie, Wangshen; Gao, Jiali

    2009-10-29

    The explicit polarization (X-Pol) method has been examined using ab initio molecular orbital theory and density functional theory. The X-Pol potential was designed to provide a novel theoretical framework for developing next-generation force fields for biomolecular simulations. Importantly, the X-Pol method is general and can be employed with any level of electronic structure theory. The present study illustrates the implementation of the X-Pol method using ab initio Hartree-Fock theory and hybrid density functional theory. The computational results are illustrated by considering a set of bimolecular complexes of small organic molecules and ions with water. The computed interaction energies and hydrogen-bond geometries are in good accord with CCSD(T) calculations and B3LYP/aug-cc-pVDZ optimizations.

  3. Developing Tools for Research on School Leadership Development: An Illustrative Case of a Computer Simulation

    ERIC Educational Resources Information Center

    Showanasai, Parinya; Lu, Jiafang; Hallinger, Philip

    2013-01-01

    Purpose: The extant literature on school leadership development is dominated by conceptual analysis, descriptive studies of current practice, critiques of current practice, and prescriptions for better ways to approach practice. Relatively few studies have examined impact of leadership development using experimental methods, among which even fewer…

  4. Learning and Understanding System Stability Using Illustrative Dynamic Texture Examples

    ERIC Educational Resources Information Center

    Liu, Huaping; Xiao, Wei; Zhao, Hongyan; Sun, Fuchun

    2014-01-01

    System stability is a basic concept in courses on dynamic system analysis and control for undergraduate students with computer science backgrounds. Typically, this was taught using a simple simulation example of an inverted pendulum. Unfortunately, many difficult issues arise in the learning and understanding of the concepts of stability,…

  5. Speckle imaging techniques of the turbulence degraded images

    NASA Astrophysics Data System (ADS)

    Liu, Jin; Huang, Zongfu; Mao, Hongjun; Liang, Yonghui

    2018-03-01

    We propose a speckle imaging algorithm in which an improved form of the spectral ratio is used to obtain the Fried parameter, and a filter is used to reduce high-frequency noise effects. The algorithm improves the quality of the reconstructed images. Its performance is illustrated by computer simulations.

  6. SYSTID - A flexible tool for the analysis of communication systems.

    NASA Technical Reports Server (NTRS)

    Dawson, C. T.; Tranter, W. H.

    1972-01-01

    Description of the System Time Domain Simulation (SYSTID) computer-aided analysis program which is specifically structured for communication systems analysis. The SYSTID program is user oriented so that very little knowledge of computer techniques and very little programming ability are required for proper application. The program is designed so that the user can go from a system block diagram to an accurate simulation by simply programming a single English language statement for each block in the system. The mathematical and functional models available in the SYSTID library are presented. An example problem is given which illustrates the ease of modeling communication systems. Examples of the outputs available are presented, and proposed improvements are summarized.

  7. Practical Unitary Simulator for Non-Markovian Complex Processes

    NASA Astrophysics Data System (ADS)

    Binder, Felix C.; Thompson, Jayne; Gu, Mile

    2018-06-01

    Stochastic processes are as ubiquitous throughout the quantitative sciences as they are notorious for being difficult to simulate and predict. In this Letter, we propose a unitary quantum simulator for discrete-time stochastic processes which requires less internal memory than any classical analogue throughout the simulation. The simulator's internal memory requirements equal those of the best previous quantum models. However, in contrast to previous models, it only requires a (small) finite-dimensional Hilbert space. Moreover, since the simulator operates unitarily throughout, it avoids any unnecessary information loss. We provide a stepwise construction for simulators for a large class of stochastic processes hence directly opening the possibility for experimental implementations with current platforms for quantum computation. The results are illustrated for an example process.

  8. Steady-state and dynamic characteristics of a 20-kHz spacecraft power system - Control of harmonic resonance

    NASA Technical Reports Server (NTRS)

    Wasynczuk, O.; Krause, P. C.; Biess, J. J.; Kapustka, R.

    1990-01-01

    A detailed computer simulation was used to illustrate the steady-state and dynamic operating characteristics of a 20-kHz resonant spacecraft power system. The simulated system consists of a parallel-connected set of DC-inductor resonant inverters (drivers), a 440-V cable, a node transformer, a 220-V cable, and a transformer-rectifier-filter (TRF) AC-to-DC receiver load. Also included in the system are a 1-kW 0.8-pf RL load and a double-LC filter connected at the receiving end of the 20-kHz AC system. The detailed computer simulation was used to illustrate the normal steady-state operating characteristics and the dynamic system performance following, for example, TRF startup. It is shown that without any filtering the given system exhibits harmonic resonances due to an interaction between the switching of the source and/or load converters and the AC system. However, the double-LC filter at the receiving end of the AC system and harmonic traps connected in series with each of the drivers significantly reduce the harmonic distortion of the 20-kHz bus voltage. Significant additional improvement in the waveform quality can be achieved by including a double-LC filter with each driver.

  9. Simulation software: engineer processes before reengineering.

    PubMed

    Lepley, C J

    2001-01-01

    People make decisions all the time using intuition. But what happens when you are asked: "Are you sure your predictions are accurate? How much will a mistake cost? What are the risks associated with this change?" Once a new process is engineered, it is difficult to analyze what would have been different if other options had been chosen. Simulating a process can help senior clinical officers solve complex patient flow problems and avoid wasted efforts. Simulation software can give you the data you need to make decisions. The author introduces concepts, methodologies, and applications of computer-aided simulation to illustrate their use in making decisions to improve workflow design.

  10. Simulation of wetlands forest vegetation dynamics

    USGS Publications Warehouse

    Phipps, R.L.

    1979-01-01

    A computer program, SWAMP, was designed to simulate the effects of flood frequency and depth to water table on southern wetlands forest vegetation dynamics. By incorporating these hydrologic characteristics into the model, forest vegetation and vegetation dynamics can be simulated. The model, based on data from the White River National Wildlife Refuge near De Witt, Arkansas, "grows" individual trees on a 20 x 20-m plot, taking into account the effects on tree growth of flooding, depth to water table, shade tolerance, overtopping and crowding, and the probability of death and reproduction. A potential application of the model is illustrated with simulations of tree fruit production following flood-control implementation and lumbering. © 1979.

  11. An experimental and numerical investigation of shock-wave induced turbulent boundary-layer separation at hypersonic speeds

    NASA Technical Reports Server (NTRS)

    Marvin, J. G.; Horstman, C. C.; Rubesin, M. W.; Coakley, T. J.; Kussoy, M. I.

    1975-01-01

    An experiment designed to test and guide computations of the interaction of an impinging shock wave with a turbulent boundary layer is described. Detailed mean flow-field and surface data are presented for two shock strengths which resulted in attached and separated flows, respectively. Numerical computations, employing the complete time-averaged Navier-Stokes equations along with algebraic eddy-viscosity and turbulent Prandtl number models to describe shear stress and heat flux, are used to illustrate the dependence of the computations on the particulars of the turbulence models. Models appropriate for zero-pressure-gradient flows predicted the overall features of the flow fields, but were deficient in predicting many of the details of the interaction regions. Improvements to the turbulence model parameters were sought through a combination of detailed data analysis and computer simulations which tested the sensitivity of the solutions to model parameter changes. Computer simulations using these improvements are presented and discussed.

  12. A Simple and Efficient Computational Approach to Chafed Cable Time-Domain Reflectometry Signature Prediction

    NASA Technical Reports Server (NTRS)

    Kowalski, Marc Edward

    2009-01-01

    A method for the prediction of time-domain signatures of chafed coaxial cables is presented. The method is quasi-static in nature, and is thus efficient enough to be included in inference and inversion routines. Unlike previous models proposed, no restriction on the geometry or size of the chafe is required in the present approach. The model is validated and its speed is illustrated via comparison to simulations from a commercial, three-dimensional electromagnetic simulator.

  13. Quantal Response: Estimation and Inference

    DTIC Science & Technology

    2014-09-01

    considered. The CI-based test is just another way of looking at the Wald test. A small-sample simulation illustrates aberrant behavior of the Wald/CI ... asymptotic power computation (Eq. 36) exhibits this behavior but not to such an extent as the simulated small-sample power. Sample size is n = 11 and ... as |m1 − m0| increases, but the power of the Wald test actually decreases for large |m1 − m0| and eventually π → α. This type of behavior was reported as

  14. Simulating smokers' acceptance of modifications in a cessation program.

    PubMed Central

    Spoth, R

    1992-01-01

    Recent research has underscored the importance of assessing barriers to smokers' acceptance of cessation programs. This paper illustrates the use of computer simulations to gauge smokers' response to program modifications which may produce barriers to participation. It also highlights methodological issues encountered in conducting this work. Computer simulations were based on conjoint analysis, a consumer research method which enables measurement of smokers' relative preference for various modifications of cessation programs. Results from two studies are presented in this paper. The primary study used a randomly selected sample of 218 adult smokers who participated in a computer-assisted phone interview. Initially, the study assessed smokers' relative utility rating of 30 features of cessation programs. Utility data were used in computer-simulated comparisons of a low-cost, self-help oriented program under development and five other existing programs. A baseline version of the program under development and two modifications (for example, use of a support group with a higher level of cost) were simulated. Both the baseline version and the modifications received a favorable response vis-à-vis comparison programs. Modifications requiring higher program costs were, however, associated with moderately reduced levels of favorable consumer response. The second study used a sample of 70 smokers who responded to an expanded set of smoking cessation program features focusing on program packaging. This secondary study incorporated in-person, computer-assisted interviews at a shopping mall, with smokers viewing an artist's mock-up of various program options on display. A similar pattern of responses to simulated program modifications emerged, with monetary cost apparently playing a key role. The significance of conjoint-based computer simulation as a tool in program development or dissemination, salient methodological issues, and implications for further research are discussed. PMID:1738813
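
    The conjoint-based comparison can be sketched with a first-choice share simulation; the part-worth utilities, features, and program definitions below are invented, not the study's data:

        import random

        FEATURES = ["self_help", "support_group", "low_cost", "high_cost"]

        def preference_shares(programs, respondents):
            """Share of respondents whose summed part-worth utility is
            highest for each program (first-choice rule)."""
            wins = {name: 0 for name in programs}
            for utility in respondents:
                best = max(programs, key=lambda name:
                           sum(utility[f] for f in programs[name]))
                wins[best] += 1
            return {name: n / len(respondents) for name, n in wins.items()}

        rng = random.Random(0)
        respondents = [{f: rng.gauss(0.0, 1.0) for f in FEATURES}
                       for _ in range(218)]
        programs = {"baseline": ["self_help", "low_cost"],
                    "modified": ["self_help", "support_group", "high_cost"]}
        print(preference_shares(programs, respondents))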

  15. NAS: The first year

    NASA Technical Reports Server (NTRS)

    Bailey, F. R.; Kutler, Paul

    1988-01-01

    Discussed are the capabilities of NASA's Numerical Aerodynamic Simulation (NAS) Program and its application as an advanced supercomputing system for computational fluid dynamics (CFD) research. First, the paper describes the NAS computational system, called the NAS Processing System Network, and the advanced computational capabilities it offers as a consequence of carrying out the NAS pathfinder objective. Second, it presents examples of pioneering CFD research accomplished during NAS's first operational year. Examples are included which illustrate CFD applications for predicting fluid phenomena, complementing and supplementing experimentation, and aiding in design. Finally, pacing elements and future directions for CFD and NAS are discussed.

  16. A Perspective on Computational Aerothermodynamics at NASA

    NASA Technical Reports Server (NTRS)

    Gnoffo, Peter A.

    2007-01-01

    The evolving role of computational aerothermodynamics (CA) within NASA over the past 20 years is reviewed. The presentation highlights contributions to understanding the Space Shuttle pitching moment anomaly observed in the first shuttle flight, prediction of a static instability for Mars Pathfinder, and the use of CA for damage assessment in post-Columbia mission support. In the view forward, several current challenges in computational fluid dynamics and aerothermodynamics for hypersonic vehicle applications are discussed. Example simulations are presented to illustrate capabilities and limitations. Opportunities to advance the state of the art in algorithms, grid generation and adaptation, and code validation are identified.

  17. Evaluation of automated decisionmaking methodologies and development of an integrated robotic system simulation. Appendix A: ROBSIM user's guide

    NASA Technical Reports Server (NTRS)

    Haley, D. C.; Almand, B. J.; Thomas, M. M.; Krauze, L. D.; Gremban, K. D.; Sanborn, J. C.; Kelley, J. H.; Depkovich, T. M.; Wolfe, W. J.; Nguyen, T.

    1986-01-01

    The purpose of the Robotics Simulation Program (ROBSIM) is to provide a broad range of computer capabilities to assist in the design, verification, simulation, and study of robotics systems. ROBSIM is a program written in FORTRAN 77 for use on a VAX 11/750 computer under the VMS operating system. This user's guide describes the capabilities of the ROBSIM programs, including the system definition function, the analysis tools function, and the postprocessor function. The options a user may encounter with each of these executables are explained in detail, and the different program prompts presented to the user are included. Some useful suggestions concerning the appropriate answers to be given by the user are provided. An example interactive run is enclosed for each of the main program services, and some of the capabilities are illustrated.

  18. Interactive visualization of Earth and Space Science computations

    NASA Technical Reports Server (NTRS)

    Hibbard, William L.; Paul, Brian E.; Santek, David A.; Dyer, Charles R.; Battaiola, Andre L.; Voidrot-Martinez, Marie-Francoise

    1994-01-01

    Computers have become essential tools for scientists simulating and observing nature. Simulations are formulated as mathematical models but are implemented as computer algorithms to simulate complex events. Observations are also analyzed and understood in terms of mathematical models, but the number of these observations usually dictates that we automate analyses with computer algorithms. In spite of their essential role, computers are also barriers to scientific understanding. Unlike hand calculations, automated computations are invisible and, because of the enormous numbers of individual operations in automated computations, the relation between an algorithm's input and output is often not intuitive. This problem is illustrated by the behavior of meteorologists responsible for forecasting weather. Even in this age of computers, many meteorologists manually plot weather observations on maps, then draw isolines of temperature, pressure, and other fields by hand (special pads of maps are printed for just this purpose). Similarly, radiologists use computers to collect medical data but are notoriously reluctant to apply image-processing algorithms to that data. To these scientists with life-and-death responsibilities, computer algorithms are black boxes that increase rather than reduce risk. The barrier between scientists and their computations can be bridged by techniques that make the internal workings of algorithms visible and that allow scientists to experiment with their computations. Here we describe two interactive systems developed at the University of Wisconsin-Madison Space Science and Engineering Center (SSEC) that provide these capabilities to Earth and space scientists.

  19. Integration of Molecular Dynamics Based Predictions into the Optimization of De Novo Protein Designs: Limitations and Benefits.

    PubMed

    Carvalho, Henrique F; Barbosa, Arménio J M; Roque, Ana C A; Iranzo, Olga; Branco, Ricardo J F

    2017-01-01

    Recent advances in de novo protein design have gained considerable insight from the intrinsic dynamics of proteins, owing to the integration of molecular dynamics simulation protocols into the state-of-the-art de novo protein design protocols in use today. With this protocol we illustrate how to set up and run a molecular dynamics simulation followed by a functional protein dynamics analysis. New users are introduced to some useful open-source computational tools, including the GROMACS molecular dynamics simulation software package and ProDy for protein structural dynamics analysis.
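
    As one plausible flavor of the ProDy step (not the chapter's exact protocol; the PDB identifier and mode count are arbitrary), an anisotropic network model of a protein's slow dynamics takes only a few lines:

        from prody import parsePDB, ANM

        # Fetch a structure and keep the C-alpha trace (1UBI is arbitrary).
        protein = parsePDB('1ubi')
        calphas = protein.select('calpha')

        # Build an anisotropic network model and compute the slowest normal
        # modes, a common proxy for functional protein dynamics.
        anm = ANM('ubiquitin ANM')
        anm.buildHessian(calphas)
        anm.calcModes(n_modes=10)
        print('slowest-mode eigenvalues:', anm.getEigvals()[:3])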

  1. Reaction-mediated entropic effect on phase separation in a binary polymer system

    NASA Astrophysics Data System (ADS)

    Sun, Shujun; Guo, Miaocai; Yi, Xiaosu; Zhang, Zuoguang

    2017-10-01

    We present a computer simulation to study the phase separation behavior induced by polymerization in a binary system comprising polymer chains and reactive monomers. We examined the influence of the interaction parameter between components and of the monomer concentration on the reaction-induced phase separation. The simulation results demonstrate that increasing the interaction parameter (an enthalpic effect) accelerates phase separation, while the entropic effect plays a key role in the process of phase separation. Furthermore, scanning electron microscopy observations reveal morphologies identical to those found in the theoretical simulations. This study may enrich our comprehension of phase separation in polymer mixtures.

  2. Ultrafast electron diffraction pattern simulations using GPU technology. Applications to lattice vibrations.

    PubMed

    Eggeman, A S; London, A; Midgley, P A

    2013-11-01

    Graphical processing units (GPUs) offer a cost-effective and powerful means to enhance the processing power of computers. Here we show how GPUs can greatly increase the speed of electron diffraction pattern simulations through the implementation of a novel method to generate the phase grating used in multislice calculations. The increase in speed is especially apparent when using large supercell arrays, and we illustrate the benefits of fast encoding of the transmission function representing the atomic potentials through the simulation of thermal diffuse scattering in silicon brought about by specific vibrational modes. © 2013 Elsevier B.V. All rights reserved.

  3. An expert system for municipal solid waste management simulation analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hsieh, M.C.; Chang, N.B.

    1996-12-31

    Optimization techniques have usually been used to model complicated metropolitan solid waste management systems in the search for the best dynamic combination of waste recycling, facility siting, and system operation, and they require sophisticated, well-defined interrelationships in the modeling process. This paper instead applies Concurrent Object-Oriented Simulation (COOS), a new simulation software construction method, to bridge the gap between the physical system and its computer representation. A case study of the Kaohsiung solid waste management system in Taiwan illustrates the analytical methodology of COOS and its implementation in the creation of an expert system.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Demeure, I.M.

    The research presented here is concerned with representation techniques and tools to support the design, prototyping, simulation, and evaluation of message-based parallel, distributed computations. The author describes ParaDiGM (Parallel, Distributed computation Graph Model), a visual representation technique for parallel, message-based distributed computations. ParaDiGM provides several views of a computation depending on the aspect of concern. It is made up of two complementary submodels: the DCPG (Distributed Computing Precedence Graph) model and the PAM (Process Architecture Model) model. DCPGs are precedence graphs used to express the functionality of a computation in terms of tasks, message passing, and data. PAM graphs are used to represent the partitioning of a computation into schedulable units or processes, and the pattern of communication among those units. There is a natural mapping between the two models. He illustrates the utility of ParaDiGM as a representation technique by applying it to various computations (e.g., an adaptive global optimization algorithm, the client-server model). ParaDiGM representations are concise; they can be used in documenting the design and implementation of parallel, distributed computations, in describing such computations to colleagues, and in comparing and contrasting various implementations of the same computation. He then describes VISA (VISual Assistant), a software tool to support the design, prototyping, and simulation of message-based parallel, distributed computations. VISA is based on the ParaDiGM model. In particular, it supports the editing of ParaDiGM graphs to describe the computations of interest, and the animation of these graphs to provide visual feedback during simulations. The graphs are supplemented with various attributes, simulation parameters, and interpretations, which are procedures that can be executed by VISA.

  5. Simulating initial attack with two fire containment models

    Treesearch

    Romain M. Mees

    1985-01-01

    Given a variable rate of fireline construction and an elliptical fire growth model, two methods for estimating the required number of resources, time to containment, and the resulting fire area were compared. Five examples illustrate some of the computational differences between the simple and the complex methods. The equations for the two methods can be used and...

  6. Trajectory of Charged Particle in Combined Electric and Magnetic Fields Using Interactive Spreadsheets

    ERIC Educational Resources Information Center

    Tambade, Popat S.

    2011-01-01

    The objective of this article is to graphically illustrate to students the physical phenomenon of the motion of a charged particle under the action of simultaneous electric and magnetic fields by simulating the particle's motion on a computer. The differential equations of motion are solved analytically and the path of the particle in three-dimensional space is…
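
    A numerical companion to such an exercise is easy to write outside the spreadsheet as well. The sketch below integrates m dv/dt = q(E + v x B) with the standard Boris pusher; the charge, mass, fields, and step sizes are illustrative:

        import numpy as np

        q, m, dt, steps = 1.0, 1.0, 0.01, 5000
        E = np.array([0.0, 1.0, 0.0])          # uniform electric field
        B = np.array([0.0, 0.0, 2.0])          # uniform magnetic field
        x = np.zeros(3)
        v = np.array([1.0, 0.0, 0.2])

        for _ in range(steps):
            # Boris scheme: half electric kick, magnetic rotation, half kick.
            v_minus = v + (q * E / m) * (dt / 2)
            t_vec = (q * B / m) * (dt / 2)
            s_vec = 2 * t_vec / (1 + np.dot(t_vec, t_vec))
            v_prime = v_minus + np.cross(v_minus, t_vec)
            v = v_minus + np.cross(v_prime, s_vec) + (q * E / m) * (dt / 2)
            x = x + v * dt

        print("final position:", x)   # helix plus the E x B drift along +x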

  7. Understanding the Theory and Practice of Molecular Spectroscopy: The Effects of Spectral Bandwidth

    ERIC Educational Resources Information Center

    Hirayama, Satoshi; Steer, Ronald P.

    2010-01-01

    The near-UV spectrum of benzene is used to illustrate the effects of variations in instrument spectral bandwidth on absorbance and molar absorptivity measurements and on the independence of values of quantities such as the oscillator strength that are based on integrated absorptivity. Excel-based computer simulations are provided that help develop…

  8. Boltzmann's "H"-Theorem and the Assumption of Molecular Chaos

    ERIC Educational Resources Information Center

    Boozer, A. D.

    2011-01-01

    We describe a simple dynamical model of a one-dimensional ideal gas and use computer simulations of the model to illustrate two fundamental results of kinetic theory: the Boltzmann transport equation and the Boltzmann "H"-theorem. Although the model is time-reversal invariant, both results predict that the behaviour of the gas is time-asymmetric.…
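
    The article's specific one-dimensional model is not reproduced here, but the signature behaviour of H is easy to demonstrate with a Kac-style toy gas in two dimensions (a stand-in model; particle numbers and bin counts are arbitrary):

        import numpy as np

        rng = np.random.default_rng(0)
        vel = rng.uniform(-1.0, 1.0, size=(20000, 2))   # non-Maxwellian start

        def H(vel, bins=50):
            """Coarse-grained H = sum f ln f dA over a velocity histogram."""
            f, edges = np.histogramdd(vel, bins=bins, density=True)
            dA = (edges[0][1] - edges[0][0]) * (edges[1][1] - edges[1][0])
            f = f[f > 0]
            return float(np.sum(f * np.log(f)) * dA)

        def collide(vel, rng):
            """Random pair 'collisions': rotate each pair's relative velocity
            by a random angle, conserving momentum and kinetic energy."""
            idx = rng.permutation(len(vel)).reshape(-1, 2)
            v1, v2 = vel[idx[:, 0]], vel[idx[:, 1]]
            cm, rel = (v1 + v2) / 2.0, (v1 - v2) / 2.0
            th = rng.uniform(0.0, 2.0 * np.pi, len(idx))
            rot = np.stack([np.cos(th) * rel[:, 0] - np.sin(th) * rel[:, 1],
                            np.sin(th) * rel[:, 0] + np.cos(th) * rel[:, 1]],
                           axis=1)
            vel[idx[:, 0]], vel[idx[:, 1]] = cm + rot, cm - rot

        for step in range(6):
            print(f"step {step}: H = {H(vel):+.3f}")  # H relaxes toward its
            collide(vel, rng)                         # equilibrium value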

  9. Engaging Undergraduate Math Majors in Geoscience Research using Interactive Simulations and Computer Art

    NASA Astrophysics Data System (ADS)

    Matott, L. S.; Hymiak, B.; Reslink, C. F.; Baxter, C.; Aziz, S.

    2012-12-01

    As part of the NSF-sponsored 'URGE (Undergraduate Research Group Experiences) to Compute' program, Dr. Matott has been collaborating with talented Math majors to explore the design of cost-effective systems to safeguard groundwater supplies from contaminated sites. Such activity is aided by a combination of groundwater modeling, simulation-based optimization, and high-performance computing, disciplines largely unfamiliar to the students at the outset of the program. To help train and engage the students, a number of interactive and graphical software packages were utilized. Examples include: (1) a tutorial for exploring the behavior of evolutionary algorithms and other heuristic optimizers commonly used in simulation-based optimization; (2) an interactive groundwater modeling package for exploring alternative pump-and-treat containment scenarios at a contaminated site in Billings, Montana; (3) the R software package for visualizing various concepts related to subsurface hydrology; and (4) a job visualization tool for exploring the behavior of numerical experiments run on a large distributed computing cluster. Further engagement and excitement in the program was fostered by entering (and winning) a computer art competition run by the Coalition for Academic Scientific Computation (CASC). The winning submission visualizes an exhaustively mapped optimization cost surface and dramatically illustrates the phenomenon of artificial minima: valley locations that correspond to designs whose costs are only partially optimal.

  10. A brief overview of computational structures technology related activities at NASA Lewis Research Center

    NASA Technical Reports Server (NTRS)

    Hopkins, Dale A.

    1992-01-01

    The presentation gives a partial overview of research and development underway in the Structures Division of LeRC, which collectively is referred to as the Computational Structures Technology Program. The activities in the program are diverse and encompass four major categories: (1) composite materials and structures; (2) probabilistic analysis and reliability; (3) design optimization and expert systems; and (4) computational methods and simulation. The approach of the program is comprehensive. It entails exploration of fundamental theories of structural mechanics to accurately represent the complex physics governing engine structural performance; formulation and implementation of computational techniques and integrated simulation strategies to provide accurate and efficient solutions of the governing theoretical models by exploiting the emerging advances in computer technology; and validation and verification through numerical and experimental tests to establish confidence and define the qualities and limitations of the resulting theoretical models and computational solutions. The program comprises both in-house and sponsored research activities. The remainder of the presentation provides a sample of activities to illustrate the breadth and depth of the program and to demonstrate the accomplishments and benefits that have resulted.

  11. Physical Processes and Applications of the Monte Carlo Radiative Energy Deposition (MRED) Code

    NASA Astrophysics Data System (ADS)

    Reed, Robert A.; Weller, Robert A.; Mendenhall, Marcus H.; Fleetwood, Daniel M.; Warren, Kevin M.; Sierawski, Brian D.; King, Michael P.; Schrimpf, Ronald D.; Auden, Elizabeth C.

    2015-08-01

    MRED is a Python-language scriptable computer application that simulates radiation transport. It is the computational engine for the online tool CRÈME-MC. MRED is based on C++ code from Geant4 with additional Fortran components to simulate electron transport and nuclear reactions with high precision. We provide a detailed description of the structure of MRED and the implementation of the simulation of physical processes used to simulate radiation effects in electronic devices and circuits. Extensive discussion and references are provided that illustrate the validation of models used to implement specific simulations of relevant physical processes. Several applications of MRED are summarized that demonstrate its ability to predict and describe basic physical phenomena associated with irradiation of electronic circuits and devices. These include effects from single particle radiation (including both direct ionization and indirect ionization effects), dose enhancement effects, and displacement damage effects. MRED simulations have also helped to identify new single event upset mechanisms not previously observed by experiment, but since confirmed, including upsets due to muons and energetic electrons.

  12. GPU-accelerated computational tool for studying the effectiveness of asteroid disruption techniques

    NASA Astrophysics Data System (ADS)

    Zimmerman, Ben J.; Wie, Bong

    2016-10-01

    This paper presents the development of a new Graphics Processing Unit (GPU) accelerated computational tool for asteroid disruption techniques. Numerical simulations are completed using the high-order spectral difference (SD) method. Due to the compact nature of the SD method, it is well suited for implementation with the GPU architecture; hence solutions are generated at orders of magnitude faster than the Central Processing Unit (CPU) counterpart. A multiphase model integrated with the SD method is introduced, and several asteroid disruption simulations are conducted, including kinetic-energy impactors, multi-kinetic-energy impactor systems, and nuclear options. Results illustrate the benefits of using multi-kinetic-energy impactor systems when compared to a single impactor system. In addition, the effectiveness of nuclear options is observed.

  13. Computable general equilibrium model fiscal year 2013 capability development report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Edwards, Brian Keith; Rivera, Michael Kelly; Boero, Riccardo

    This report documents progress made on continued development of the National Infrastructure Simulation and Analysis Center (NISAC) Computable General Equilibrium Model (NCGEM), developed in fiscal year 2012. In fiscal year 2013, NISAC refined the model's treatment of the labor market and performed tests to examine the properties of the solutions computed by the model. To examine these, developers conducted a series of 20 simulations for 20 U.S. States. Each of these simulations compared an economic baseline simulation with an alternative simulation that assumed a 20-percent reduction in overall factor productivity in the manufacturing industries of each State. Differences in the simulation results between the baseline and alternative simulations capture the economic impact of the reduction in factor productivity. While not every State is affected in precisely the same way, the reduction in manufacturing industry productivity negatively affects the manufacturing industries in each State to an extent proportional to the reduction in overall factor productivity. Moreover, overall economic activity decreases when manufacturing sector productivity is reduced. Developers ran two additional simulations: (1) a version of the model for the State of Michigan, with manufacturing divided into two sub-industries (automobile and other vehicle manufacturing as one sub-industry and the rest of manufacturing as the other sub-industry); and (2) a version of the model for the United States, divided into 30 industries. NISAC conducted these simulations to illustrate the flexibility of industry definitions in NCGEM and to examine its simulation properties in more detail.

  14. A brief history of the introduction of generalized ensembles to Markov chain Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Berg, Bernd A.

    2017-03-01

    The most efficient weights for Markov chain Monte Carlo calculations of physical observables are not necessarily those of the canonical ensemble. Generalized ensembles, which do not exist in nature but can be simulated on computers, lead often to a much faster convergence. In particular, they have been used for simulations of first order phase transitions and for simulations of complex systems in which conflicting constraints lead to a rugged free energy landscape. Starting off with the Metropolis algorithm and Hastings' extension, I present a minireview which focuses on the explosive use of generalized ensembles in the early 1990s. Illustrations are given, which range from spin models to peptides.
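
    The starting point of that history, the Metropolis rule with a pluggable ensemble weight w(E), fits in a few lines; the double-well energy and temperature below are illustrative, and a generalized ensemble amounts to swapping in a different w(E):

        import math, random

        def energy(x):
            """Double-well energy with a barrier at x = 0 (illustrative)."""
            return (x * x - 1.0) ** 2

        def metropolis(weight, steps=200000, step_size=0.4, seed=1):
            """Metropolis sampling with an arbitrary ensemble weight w(E):
            accept x -> x' with probability min(1, w(E') / w(E))."""
            rng = random.Random(seed)
            x = -1.0
            w = weight(energy(x))
            samples = []
            for _ in range(steps):
                x_new = x + rng.uniform(-step_size, step_size)
                w_new = weight(energy(x_new))
                if rng.random() < w_new / w:
                    x, w = x_new, w_new
                samples.append(x)
            return samples

        kT = 0.1
        canonical = metropolis(lambda e: math.exp(-e / kT))  # Boltzmann w(E)
        frac = sum(s > 0.0 for s in canonical) / len(canonical)
        print(f"fraction of samples in the right-hand well: {frac:.3f}")
        # At low kT the canonical walker rarely crosses the barrier; a
        # multicanonical w(E), chosen to flatten the energy histogram,
        # would restore efficient tunneling between the wells.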

  15. A geostatistical extreme-value framework for fast simulation of natural hazard events

    PubMed Central

    Stephenson, David B.

    2016-01-01

    We develop a statistical framework for simulating natural hazard events that combines extreme value theory and geostatistics. Robust generalized additive model forms represent generalized Pareto marginal distribution parameters while a Student’s t-process captures spatial dependence and gives a continuous-space framework for natural hazard event simulations. Efficiency of the simulation method allows many years of data (typically over 10 000) to be obtained at relatively little computational cost. This makes the model viable for forming the hazard module of a catastrophe model. We illustrate the framework by simulating maximum wind gusts for European windstorms, which are found to have realistic marginal and spatial properties, and validate well against wind gust measurements. PMID:27279768
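
    The general recipe can be sketched as follows, substituting a Gaussian copula for the paper's Student's t-process; the site layout, correlation length, and generalized Pareto parameters are invented:

        import numpy as np
        from scipy.stats import genpareto, norm

        rng = np.random.default_rng(42)
        n_sites, n_events = 50, 10000
        coords = rng.uniform(0, 100, size=(n_sites, 2))   # site locations, km

        # Exponential distance-decay correlation gives spatial dependence.
        d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
        cov = np.exp(-d / 30.0)

        latent = rng.multivariate_normal(np.zeros(n_sites), cov, size=n_events)
        u = norm.cdf(latent)                              # to uniform margins

        xi, sigma = 0.1, 5.0                              # GPD shape, scale
        gusts = genpareto.ppf(u, c=xi, scale=sigma)       # gust excesses, m/s
        print("per-site 99th percentile gust excess:",
              np.percentile(gusts, 99, axis=0).round(1)[:5])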

  16. Computational Experiments for Science and Engineering Education

    NASA Technical Reports Server (NTRS)

    Xie, Charles

    2011-01-01

    How to integrate simulation-based engineering and science (SBES) into the science curriculum smoothly is a challenging question. For the importance of SBES to be appreciated, the core value of simulations-that they help people understand natural phenomena and solve engineering problems-must be taught. A strategy to achieve this goal is to introduce computational experiments to the science curriculum to replace or supplement textbook illustrations and exercises and to complement or frame hands-on or wet lab experiments. In this way, students will have an opportunity to learn about SBES without compromising other learning goals required by the standards and teachers will welcome these tools as they strengthen what they are already teaching. This paper demonstrates this idea using a number of examples in physics, chemistry, and engineering. These exemplary computational experiments show that it is possible to create a curriculum that is both deeper and wider.

  17. Computational Combustion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Westbrook, C K; Mizobuchi, Y; Poinsot, T J

    2004-08-26

    Progress in the field of computational combustion over the past 50 years is reviewed. Particular attention is given to those classes of models that are common to most system modeling efforts, including fluid dynamics, chemical kinetics, liquid sprays, and turbulent flame models. The developments in combustion modeling are placed into the time-dependent context of the accompanying exponential growth in computer capabilities and Moore's Law. Superimposed on this steady growth, the occasional sudden advances in modeling capabilities are identified and their impacts are discussed. Integration of submodels into system models for spark ignition, diesel, and homogeneous charge compression ignition engines, surface and catalytic combustion, pulse combustion, and detonations is described. Finally, the current state of combustion modeling is illustrated by descriptions of a very large jet-lifted 3D turbulent hydrogen flame with direct numerical simulation and 3D large eddy simulations of practical gas burner combustion devices.

  18. Equation-free multiscale computation: algorithms and applications.

    PubMed

    Kevrekidis, Ioannis G; Samaey, Giovanni

    2009-01-01

    In traditional physicochemical modeling, one derives evolution equations at the (macroscopic, coarse) scale of interest; these are used to perform a variety of tasks (simulation, bifurcation analysis, optimization) using an arsenal of analytical and numerical techniques. For many complex systems, however, although one observes evolution at a macroscopic scale of interest, accurate models are only given at a more detailed (fine-scale, microscopic) level of description (e.g., lattice Boltzmann, kinetic Monte Carlo, molecular dynamics). Here, we review a framework for computer-aided multiscale analysis, which enables macroscopic computational tasks (over extended spatiotemporal scales) using only appropriately initialized microscopic simulation on short time and length scales. The methodology bypasses the derivation of macroscopic evolution equations when these equations conceptually exist but are not available in closed form, hence the term equation-free. We selectively discuss basic algorithms and underlying principles and illustrate the approach through representative applications. We also discuss potential difficulties and outline areas for future research.

  19. Pushing the frontiers of first-principles based computer simulations of chemical and biological systems.

    PubMed

    Brunk, Elizabeth; Ashari, Negar; Athri, Prashanth; Campomanes, Pablo; de Carvalho, F Franco; Curchod, Basile F E; Diamantis, Polydefkis; Doemer, Manuel; Garrec, Julian; Laktionov, Andrey; Micciarelli, Marco; Neri, Marilisa; Palermo, Giulia; Penfold, Thomas J; Vanni, Stefano; Tavernelli, Ivano; Rothlisberger, Ursula

    2011-01-01

    The Laboratory of Computational Chemistry and Biochemistry is active in the development and application of first-principles based simulations of complex chemical and biochemical phenomena. Here, we review some of our recent efforts in extending these methods to larger systems, longer time scales and increased accuracies. Their versatility is illustrated with a diverse range of applications, ranging from the determination of the gas phase structure of the cyclic decapeptide gramicidin S, to the study of G protein coupled receptors, the interaction of transition metal based anti-cancer agents with protein targets, the mechanism of action of DNA repair enzymes, the role of metal ions in neurodegenerative diseases and the computational design of dye-sensitized solar cells. Many of these projects are done in collaboration with experimental groups from the Institute of Chemical Sciences and Engineering (ISIC) at the EPFL.

  20. A potential-energy scaling model to simulate the initial stages of thin-film growth

    NASA Technical Reports Server (NTRS)

    Heinbockel, J. H.; Outlaw, R. A.; Walker, G. H.

    1983-01-01

    A solid-on-solid (SOS) Monte Carlo computer simulation employing a potential-energy scaling technique was used to model the initial stages of thin-film growth. The model monitors variations in the vertical interaction potential that occur due to the arrival or departure of selected adatoms or impurities at all sites in the 400-site array. Boltzmann ordered statistics are used to simulate fluctuations in vibrational energy at each site in the array, and the resulting site energy is compared with threshold levels of possible atomic events. In addition to adsorption, desorption, and surface migration, adatom incorporation and diffusion of a substrate atom to the surface are also included. The lateral interaction of nearest, second-nearest, and third-nearest neighbors is also considered. A series of computer experiments is conducted to illustrate the behavior of the model.

  1. Load management strategy for Particle-In-Cell simulations in high energy particle acceleration

    NASA Astrophysics Data System (ADS)

    Beck, A.; Frederiksen, J. T.; Dérouillat, J.

    2016-09-01

    In the wake of the intense effort made for the experimental CILEX project, numerical simulation campaigns have been carried out in order to finalize the design of the facility and to identify optimal laser and plasma parameters. These simulations bring, of course, important insight into the fundamental physics at play. As a by-product, they also characterize the quality of our theoretical and numerical models. In this paper, we compare the results given by different codes and point out algorithmic limitations both in terms of physical accuracy and computational performance. These limitations are illustrated in the context of electron laser wakefield acceleration (LWFA). The main limitation we identify in state-of-the-art Particle-In-Cell (PIC) codes is computational load imbalance. We propose an innovative algorithm to deal with this specific issue as well as milestones towards a modern, accurate high-performance PIC code for high energy particle acceleration.
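
    The paper's own balancing algorithm is not reproduced here, but the following Python sketch illustrates the generic idea behind load management in PIC codes: partition a 1-D row of cells so that each domain carries a roughly equal particle count, using a prefix sum of the per-cell load. The load profile is a made-up stand-in for a wakefield particle bunch.

      import numpy as np

      def balanced_cuts(particles_per_cell, n_domains):
          """Split a 1-D row of cells into contiguous domains with roughly
          equal particle counts, via the prefix sum of the per-cell load."""
          cum = np.cumsum(particles_per_cell)
          targets = cum[-1] * np.arange(1, n_domains) / n_domains
          return np.searchsorted(cum, targets)   # cells where domains end

      # Wakefield-like load: most particles bunched in a narrow region.
      load = np.ones(1000)
      load[450:480] += 200.0
      print("domain boundaries:", balanced_cuts(load, n_domains=4))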

  2. Computer software tool REALM for sustainable water allocation and management.

    PubMed

    Perera, B J C; James, B; Kularathna, M D U

    2005-12-01

    REALM (REsource ALlocation Model) is a generalised computer simulation package that models harvesting and bulk distribution of water resources within a water supply system. It is a modelling tool that can be applied to develop specific water allocation models. Like other water resource simulation software tools, REALM uses mass-balance accounting at nodes, while the movement of water within carriers is subject to capacity constraints. It uses a fast network linear programming algorithm to optimise the water allocation within the network during each simulation time step, in accordance with user-defined operating rules. This paper describes the main features of REALM and provides potential users with an appreciation of its capabilities. In particular, it describes two case studies covering major urban and rural water supply systems. These case studies illustrate REALM's capabilities in the use of stochastically generated data in water supply planning and management, modelling of environmental flows, and assessing security-of-supply issues.
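
    As an illustration of the mass-balance-plus-capacity formulation that such network linear programs solve at each time step, here is a deliberately tiny allocation problem in Python with SciPy; the node layout, capacities, and demands are invented and do not correspond to REALM's input format.

      import numpy as np
      from scipy.optimize import linprog

      # Decision variables: flows on two carriers feeding two demand nodes.
      supply = 100.0                    # water available at the source node
      caps = np.array([80.0, 60.0])     # carrier capacity constraints
      demands = np.array([70.0, 50.0])  # demand-node targets

      c = -np.ones(2)                   # maximize total delivery
      A_ub = np.ones((1, 2))            # mass balance at source: x1 + x2 <= supply
      b_ub = np.array([supply])
      bounds = [(0, min(cap, d)) for cap, d in zip(caps, demands)]

      res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
      print("allocations:", res.x, "total delivered:", -res.fun)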

  3. Advanced computations in plasma physics

    NASA Astrophysics Data System (ADS)

    Tang, W. M.

    2002-05-01

    Scientific simulation in tandem with theory and experiment is an essential tool for understanding complex plasma behavior. In this paper we review recent progress and future directions for advanced simulations in magnetically confined plasmas with illustrative examples chosen from magnetic confinement research areas such as microturbulence, magnetohydrodynamics, magnetic reconnection, and others. Significant recent progress has been made in both particle and fluid simulations of fine-scale turbulence and large-scale dynamics, giving increasingly good agreement between experimental observations and computational modeling. This was made possible by innovative advances in analytic and computational methods for developing reduced descriptions of physics phenomena spanning widely disparate temporal and spatial scales together with access to powerful new computational resources. In particular, the fusion energy science community has made excellent progress in developing advanced codes for which computer run-time and problem size scale well with the number of processors on massively parallel machines (MPP's). A good example is the effective usage of the full power of multi-teraflop (multi-trillion floating point computations per second) MPP's to produce three-dimensional, general geometry, nonlinear particle simulations which have accelerated progress in understanding the nature of turbulence self-regulation by zonal flows. It should be emphasized that these calculations, which typically utilized billions of particles for thousands of time-steps, would not have been possible without access to powerful present generation MPP computers and the associated diagnostic and visualization capabilities. In general, results from advanced simulations provide great encouragement for being able to include increasingly realistic dynamics to enable deeper physics insights into plasmas in both natural and laboratory environments. The associated scientific excitement should serve to stimulate improved cross-cutting collaborations with other fields and also to help attract bright young talent to plasma science.

  4. [Computer simulation of a clinical magnet resonance tomography scanner for training purposes].

    PubMed

    Hackländer, T; Mertens, H; Cramer, B M

    2004-08-01

    The idea for this project was born of the necessity to offer medical students an easy approach to the theoretical basics of magnetic resonance imaging. The aim was to simulate the features and functions of such a scanner on a commercially available computer by means of a computer program. The simulation was programmed in pure Java under the GNU General Public License and is freely available for any commercially available computer with a Windows, Macintosh or Linux operating system. The graphical user interface is oriented to a real scanner. In an external program, parameter images for the proton density and the relaxation times T1 and T2 are calculated on the basis of clinical examinations. From these, the image calculation is carried out in the simulation program pixel by pixel on the basis of a pulse sequence chosen and modified by the user. The images can be stored and printed. In addition, it is possible to display and modify k-space images. Seven classes of pulse sequences are implemented, and up to 14 relevant sequence parameters, such as repetition time and echo time, can be altered. Aliasing and motion artifacts can be simulated. As the image calculation only takes a few seconds, interactive working is possible. The simulation has been used in university education for more than a year, successfully illustrating the dependence of the MR images on the measuring parameters. This should facilitate students' approach to understanding MR imaging in the future.
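
    The pixel-by-pixel image calculation the authors describe can be illustrated with the textbook spin-echo signal model S = PD * (1 - exp(-TR/T1)) * exp(-TE/T2); the Python sketch below applies it to small made-up parameter maps and is not the simulator's actual code (which is Java).

      import numpy as np

      def spin_echo_signal(pd, t1, t2, tr, te):
          """Textbook spin-echo signal evaluated pixel by pixel.
          pd, t1, t2 are parameter images; tr, te are sequence settings (ms)."""
          return pd * (1.0 - np.exp(-tr / t1)) * np.exp(-te / t2)

      # Illustrative 2x2 parameter maps; values are not from the paper.
      pd = np.array([[1.0, 0.8], [0.9, 0.7]])
      t1 = np.array([[900.0, 600.0], [1200.0, 800.0]])
      t2 = np.array([[100.0, 80.0], [60.0, 90.0]])
      print(spin_echo_signal(pd, t1, t2, tr=500.0, te=20.0))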

  5. Probabilistic Simulation of Multi-Scale Composite Behavior

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.

    2012-01-01

    A methodology is developed to computationally assess the non-deterministic composite response at all composite scales (from micro to structural) due to the uncertainties in the constituent (fiber and matrix) properties, in the fabrication process, and in structural variables (primitive variables). The methodology is computationally efficient for simulating the probability distributions of composite behavior, such as material properties, laminate and structural responses. By-products of the methodology are probabilistic sensitivities of the composite primitive variables. The methodology has been implemented into the computer codes PICAN (Probabilistic Integrated Composite ANalyzer) and IPACS (Integrated Probabilistic Assessment of Composite Structures). The accuracy and efficiency of this methodology are demonstrated by simulating the uncertainties in typical composite laminates and comparing the results with the Monte Carlo simulation method. Available experimental data of composite laminate behavior at all scales fall within the scatter predicted by PICAN. Multi-scaling is extended to simulate probabilistic thermo-mechanical fatigue and to simulate the probabilistic design of a composite radome in order to illustrate its versatility. Results show that probabilistic fatigue can be simulated for different temperature amplitudes and for different cyclic stress magnitudes. Results also show that laminate configurations can be selected to increase the radome reliability by several orders of magnitude without increasing the laminate thickness--a unique feature of structural composites. The old reference denotes that nothing fundamental has been done since that time.

  6. A New Streamflow-Routing (SFR1) Package to Simulate Stream-Aquifer Interaction with MODFLOW-2000

    USGS Publications Warehouse

    Prudic, David E.; Konikow, Leonard F.; Banta, Edward R.

    2004-01-01

    The increasing concern for water and its quality requires improved methods to evaluate the interaction between streams and aquifers and the strong influence that streams can have on the flow and transport of contaminants through many aquifers. For this reason, a new Streamflow-Routing (SFR1) Package was written for use with the U.S. Geological Survey's MODFLOW-2000 ground-water flow model. The SFR1 Package is linked to the Lake (LAK3) Package, and both have been integrated with the Ground-Water Transport (GWT) Process of MODFLOW-2000 (MODFLOW-GWT). SFR1 replaces the previous Stream (STR1) Package, with the most important difference being that stream depth is computed at the midpoint of each reach instead of at the beginning of each reach, as was done in the original Stream Package. This approach allows for the addition and subtraction of water from runoff, precipitation, and evapotranspiration within each reach. Because the SFR1 Package computes stream depth differently than the original package, a different name was used to distinguish it from the original Stream (STR1) Package. The SFR1 Package has five options for simulating stream depth and four options for computing diversions from a stream. The options for computing stream depth are: a specified value; Manning's equation (using a wide rectangular channel or an eight-point cross section); a power equation; or a table of values that relate flow to depth and width. Each stream segment can have a different option. Outflow from lakes can be computed using the same options. Because the wetted perimeter is computed for the eight-point cross section and width is computed for the power equation and table of values, the streambed conductance term no longer needs to be calculated externally whenever the area of streambed changes as a function of flow. The concentration of solute is computed in a stream network when MODFLOW-GWT is used in conjunction with the SFR1 Package. The concentration of a solute in a stream reach is based on a mass-balance approach and accounts for exchanges with (inputs from or losses to) ground-water systems. Two test examples are used to illustrate some of the capabilities of the SFR1 Package. The first test simulation was designed to illustrate how pumping of ground water from an aquifer connected to streams can affect streamflow, depth, width, and streambed conductance using the different options. The second test simulation was designed to illustrate solute transport through interconnected lakes, streams, and aquifers. Because of the need to examine time series results from the model simulations, the Gage Package, first described in the LAK3 documentation, was revised to include time series results of selected variables (streamflows, stream depth and width, streambed conductance, solute concentrations, and solute loads) for specified stream reaches. The mass-balance or continuity approach for routing flow and solutes through a stream network may not be applicable for all interactions between streams and aquifers. The SFR1 Package is best suited for modeling long-term changes (months to hundreds of years) in ground-water flow and solute concentrations using averaged flows in streams. The Package is not recommended for modeling the transient exchange of water between streams and aquifers when the objective is to examine short-term (minutes to days) effects caused by rapidly changing streamflows.
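
    For the wide-rectangular-channel option, Manning's equation Q = (1/n) A R^(2/3) S^(1/2) reduces to a closed form for depth when the hydraulic radius is approximated by the depth. A small Python sketch, with illustrative values that are not taken from the SFR1 documentation:

      import math

      def manning_depth_wide_rect(q, width, n, slope):
          """Depth from Manning's equation for a wide rectangular channel,
          where hydraulic radius ~ depth, so Q = (1/n)*(w*h)*h**(2/3)*S**0.5.
          Solving for h gives h = (Q*n / (w*sqrt(S)))**(3/5). SI units."""
          return (q * n / (width * math.sqrt(slope))) ** 0.6

      # Illustrative discharge, width, roughness, and slope.
      print(manning_depth_wide_rect(q=5.0, width=10.0, n=0.035, slope=0.001))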

  7. Cognitive simulation as a tool for cognitive task analysis.

    PubMed

    Roth, E M; Woods, D D; Pople, H E

    1992-10-01

    Cognitive simulations are runnable computer programs that represent models of human cognitive activities. We show how one cognitive simulation built as a model of some of the cognitive processes involved in dynamic fault management can be used in conjunction with small-scale empirical data on human performance to uncover the cognitive demands of a task, to identify where intention errors are likely to occur, and to point to improvements in the person-machine system. The simulation, called Cognitive Environment Simulation or CES, has been exercised on several nuclear power plant accident scenarios. Here we report one case to illustrate how a cognitive simulation tool such as CES can be used to clarify the cognitive demands of a problem-solving situation as part of a cognitive task analysis.

  8. Nexus: A modular workflow management system for quantum simulation codes

    NASA Astrophysics Data System (ADS)

    Krogel, Jaron T.

    2016-01-01

    The management of simulation workflows represents a significant task for the individual computational researcher. Automation of the required tasks involved in simulation work can decrease the overall time to solution and reduce sources of human error. A new simulation workflow management system, Nexus, is presented to address these issues. Nexus is capable of automated job management on workstations and resources at several major supercomputing centers. Its modular design allows many quantum simulation codes to be supported within the same framework. Current support includes quantum Monte Carlo calculations with QMCPACK, density functional theory calculations with Quantum Espresso or VASP, and quantum chemical calculations with GAMESS. Users can compose workflows through a transparent, text-based interface, resembling the input file of a typical simulation code. A usage example is provided to illustrate the process.

  9. [Clinical and communication simulation workshop for fellows in gastroenterology: the trainees' perspective].

    PubMed

    Lang, Alon; Melzer, Ehud; Bar-Meir, Simon; Eliakim, Rami; Ziv, Amitai

    2006-11-01

    The continuing development of computer-based medical simulators provides an ideal platform for simulator-assisted training programs for medical trainees. Computer-based endoscopic simulators provide a virtual reality environment for training in endoscopic procedures. This study illustrates the use of a comprehensive training model combining endoscopic simulators with simulated (actor) patients (SP). The aim was to evaluate the effectiveness of a comprehensive simulation workshop from the trainee perspective. Four case studies were developed with emphasis on communication skills. Three workshops with 10 fellows each were conducted. During each workshop the trainees spent half of the time in SP case studies and the remaining half working with computerized endoscopic simulators with continuous guidance by an expert endoscopist. Questionnaires were completed by the fellows at the end of the workshop. Seventy percent of the fellows felt that the endoscopic simulator was close or very close to reality for gastroscopy, and 63% for colonoscopy. Eighty-eight percent thought the close guidance was important for the learning process with the simulator. Eighty percent felt that the case studies were an important learning experience for risk management. Further evaluation of multi-modality simulation workshops in gastroenterology training is needed to identify how best to incorporate this form of instruction into the training of gastroenterologists.

  10. Adjoint sensitivity analysis of plasmonic structures using the FDTD method.

    PubMed

    Zhang, Yu; Ahmed, Osman S; Bakr, Mohamed H

    2014-05-15

    We present an adjoint variable method for estimating the sensitivities of arbitrary responses with respect to the parameters of dispersive discontinuities in nanoplasmonic devices. Our theory is formulated in terms of the electric field components at the vicinity of perturbed discontinuities. The adjoint sensitivities are computed using at most one extra finite-difference time-domain (FDTD) simulation regardless of the number of parameters. Our approach is illustrated through the sensitivity analysis of an add-drop coupler consisting of a square ring resonator between two parallel waveguides. The computed adjoint sensitivities of the scattering parameters are compared with those obtained using the accurate but computationally expensive central finite difference approach.
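
    For contrast with the adjoint approach, the baseline central finite-difference estimator the authors compare against needs two extra solves per parameter. A generic Python sketch, with a toy response function standing in for the FDTD-computed scattering parameter:

      def central_difference_sensitivity(response, params, h=1e-6):
          """Central finite-difference gradient of a scalar response:
          dR/dp_i ~ (R(p + h*e_i) - R(p - h*e_i)) / (2h).
          Costs two extra solves per parameter, versus at most one
          extra simulation for the adjoint method."""
          grads = []
          for i in range(len(params)):
              up = list(params); up[i] += h
              dn = list(params); dn[i] -= h
              grads.append((response(up) - response(dn)) / (2.0 * h))
          return grads

      # Toy stand-in for an expensive FDTD-computed response.
      resp = lambda p: p[0] ** 2 + 3.0 * p[0] * p[1]
      print(central_difference_sensitivity(resp, [1.0, 2.0]))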

  11. An investigation of a mathematical model for atmospheric absorption spectra

    NASA Technical Reports Server (NTRS)

    Niple, E. R.

    1979-01-01

    A computer program that calculates absorption spectra for slant paths through the atmosphere is described. The program uses an efficient convolution technique (Romberg integration) to simulate instrument resolution effects. A brief information analysis is performed on a set of calculated spectra to illustrate how such techniques may be used to explore the quality of the information in a spectrum.
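
    To illustrate the kind of convolution integral involved, the sketch below implements Romberg integration (trapezoid estimates refined by Richardson extrapolation) and smears a monochromatic line with a Gaussian slit function; the line shape, slit width, and wavenumbers are invented, and this is not the described program.

      import numpy as np

      def romberg(f, a, b, levels=6):
          """Romberg integration: trapezoid estimates R[k][0] refined by
          Richardson extrapolation, R[k][j] accurate to O(h**(2j+2))."""
          R = [[0.0] * (levels + 1) for _ in range(levels + 1)]
          R[0][0] = 0.5 * (b - a) * (f(a) + f(b))
          for k in range(1, levels + 1):
              h = (b - a) / 2 ** k
              x = a + h * (2 * np.arange(2 ** (k - 1)) + 1)  # new midpoints
              R[k][0] = 0.5 * R[k - 1][0] + h * np.sum(f(x))
              for j in range(1, k + 1):
                  R[k][j] = R[k][j - 1] + (R[k][j - 1] - R[k - 1][j - 1]) / (4 ** j - 1)
          return R[levels][levels]

      # Smear a narrow absorption line with a Gaussian slit (illustrative).
      line = lambda v: np.exp(-((v - 1000.0) / 0.05) ** 2)
      slit = lambda v, v0, w=0.1: np.exp(-((v - v0) / w) ** 2) / (w * np.sqrt(np.pi))
      print(romberg(lambda v: line(v) * slit(v, 1000.0), 999.0, 1001.0))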

  12. Enhanced Teaching and Student Learning through a Simulator-Based Course in Chemical Unit Operations Design

    ERIC Educational Resources Information Center

    Ghasem, Nayef

    2016-01-01

    This paper illustrates a teaching technique used in computer applications in chemical engineering employed for designing various unit operation processes, where the students learn about unit operations by designing them. The aim of the course is not to teach design, but rather to teach the fundamentals and the function of unit operation processes…

  13. Hierarchy of simulation models for a turbofan gas engine

    NASA Technical Reports Server (NTRS)

    Longenbaker, W. E.; Leake, R. J.

    1977-01-01

    Steady-state and transient performance of an F-100-like turbofan gas engine are modeled by a computer program, DYNGEN, developed by NASA. The model employs block data maps and includes about 25 states. Low-order nonlinear analytical and linear techniques are described in terms of their application to the model. Experimental comparisons illustrating the accuracy of each model are presented.

  14. Incorporating approximation error in surrogate based Bayesian inversion

    NASA Astrophysics Data System (ADS)

    Zhang, J.; Zeng, L.; Li, W.; Wu, L.

    2015-12-01

    There is increasing interest in applying surrogates in inverse Bayesian modeling to reduce repetitive evaluations of the original model and thereby save computational cost. However, the approximation error of the surrogate model is usually overlooked, partly because it is difficult to evaluate for many surrogates. Previous studies have shown that the direct combination of surrogates and Bayesian methods (e.g., Markov chain Monte Carlo, MCMC) may lead to biased estimations when the surrogate cannot emulate the highly nonlinear original system. This problem can be alleviated by implementing MCMC in a two-stage manner, but the computational cost is still high, since a relatively large number of original model simulations are required. In this study, we illustrate the importance of incorporating approximation error in inverse Bayesian modeling. A Gaussian process (GP) is chosen to construct the surrogate for its convenience in approximation-error evaluation. Numerical cases of Bayesian experimental design and parameter estimation for contaminant source identification are used to illustrate this idea. It is shown that, once the surrogate approximation error is well incorporated into the Bayesian framework, promising results can be obtained even when the surrogate is used directly, and no further original model simulations are required.
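
    A minimal sketch of the central idea, assuming a 1-D toy forward model: the GP surrogate's predictive variance is added to the observation variance inside a Gaussian likelihood, so surrogate uncertainty widens the posterior rather than biasing it. The sketch uses scikit-learn; none of the numbers come from the study.

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF

      # Train a GP surrogate of an "expensive" forward model (toy stand-in).
      forward = lambda theta: np.sin(3.0 * theta)
      X = np.linspace(0.0, 2.0, 8).reshape(-1, 1)
      gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5)).fit(X, forward(X.ravel()))

      def log_likelihood(theta, y_obs, sigma_obs=0.05):
          """Gaussian likelihood with the GP predictive variance added to
          the observation variance, incorporating approximation error."""
          mu, std = gp.predict(np.array([[theta]]), return_std=True)
          var = sigma_obs ** 2 + std[0] ** 2
          return -0.5 * ((y_obs - mu[0]) ** 2 / var + np.log(2.0 * np.pi * var))

      print(log_likelihood(0.7, y_obs=forward(0.7)))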

  15. Numerical simulation of a hovering rotor using embedded grids

    NASA Technical Reports Server (NTRS)

    Duque, Earl-Peter N.; Srinivasan, Ganapathi R.

    1992-01-01

    The flow field for a rotor blade in hover was computed by numerically solving the compressible thin-layer Navier-Stokes equations on embedded grids. In this work, three embedded grids were used to discretize the flow field: one for the rotor blade and two to convect the rotor wake. The computations were performed at two hovering test conditions for a two-bladed rectangular rotor of aspect ratio six. The results compare fairly well with experiment and illustrate the use of embedded grids in solving helicopter-type flow fields.

  16. NASA Aeronautics: Research and Technology Program Highlights

    NASA Technical Reports Server (NTRS)

    1990-01-01

    This report contains numerous color illustrations to describe the NASA programs in aeronautics. The basic ideas involved are explained in brief paragraphs. The seven chapters deal with Subsonic aircraft, High-speed transport, High-performance military aircraft, Hypersonic/Transatmospheric vehicles, Critical disciplines, National facilities, and Organizations & installations. Some individual aircraft discussed are: the SR-71 aircraft, aerospace planes, the high-speed civil transport (HSCT), the X-29 forward-swept wing research aircraft, and the X-31 aircraft. Critical disciplines discussed are numerical aerodynamic simulation, computational fluid dynamics, computational structural dynamics, and new experimental testing techniques.

  17. Probabilistic simulation of uncertainties in composite uniaxial strengths

    NASA Technical Reports Server (NTRS)

    Chamis, C. C.; Stock, T. A.

    1990-01-01

    Probabilistic composite micromechanics methods are developed that simulate uncertainties in unidirectional fiber composite strengths. These methods are in the form of computational procedures using composite mechanics with Monte Carlo simulation. The variables for which uncertainties are accounted include constituent strengths and their respective scatter. A graphite/epoxy unidirectional composite (ply) is studied to illustrate the procedure and its effectiveness in formally estimating the probable scatter in the composite uniaxial strengths. The results show that ply longitudinal tensile and compressive, transverse compressive, and intralaminar shear strengths are not sensitive to single-fiber anomalies (breaks, interfacial disbonds, matrix microcracks); however, the ply transverse tensile strength is.
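
    The flavor of such Monte Carlo composite-strength simulation can be conveyed in a few lines of Python: sample constituent strengths with scatter and propagate them through a simple rule-of-mixtures stand-in for the micromechanics relations. The statistics below are invented, not the paper's graphite/epoxy data.

      import numpy as np

      rng = np.random.default_rng(7)
      n = 100000
      # Illustrative constituent strength statistics (MPa), with scatter.
      fiber = rng.normal(3500.0, 300.0, n)
      matrix = rng.normal(80.0, 10.0, n)
      vf = 0.6                                  # fiber volume fraction

      # Rule-of-mixtures stand-in for the micromechanics strength relation.
      s11t = vf * fiber + (1.0 - vf) * matrix
      print("ply longitudinal tensile strength: mean %.0f MPa, std %.0f MPa"
            % (s11t.mean(), s11t.std()))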

  18. Providing scalable system software for high-end simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Greenberg, D.

    1997-12-31

    Detailed, full-system, complex physics simulations have been shown to be feasible on systems containing thousands of processors. In order to manage these computer systems it has been necessary to create scalable system services. In this talk, Sandia's research on scalable systems will be described. The key concepts of low-overhead data movement through portals and of flexible services through multi-partition architectures will be illustrated in detail. The talk will conclude with a discussion of how these techniques can be applied outside of the standard monolithic MPP system.

  19. Simulation of wave propagation inside a human eye: acoustic eye model (AEM)

    NASA Astrophysics Data System (ADS)

    Požar, T.; Halilovič, M.; Horvat, D.; Petkovšek, R.

    2018-02-01

    The design and development of the acoustic eye model (AEM) is reported. The model consists of a computer-based simulation that describes the propagation of mechanical disturbance inside a simplified model of a human eye. The capabilities of the model are illustrated with examples, using different laser-induced initial loading conditions in different geometrical configurations typically occurring in ophthalmic medical procedures. The potential of the AEM is to predict the mechanical response of the treated eye tissue in advance, thus complementing other preliminary procedures preceding medical treatments.

  20. Temperature for a dynamic spin ensemble

    NASA Astrophysics Data System (ADS)

    Ma, Pui-Wai; Dudarev, S. L.; Semenov, A. A.; Woo, C. H.

    2010-09-01

    In molecular dynamics simulations, temperature is evaluated, via the equipartition principle, by computing the mean kinetic energy of atoms. There is no similar recipe yet for evaluating temperature of a dynamic system of interacting spins. By solving semiclassical Langevin spin-dynamics equations, and applying the fluctuation-dissipation theorem, we derive an equation for the temperature of a spin ensemble, expressed in terms of dynamic spin variables. The fact that definitions for the kinetic and spin temperatures are fully consistent is illustrated using large-scale spin dynamics and spin-lattice dynamics simulations.
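
    The familiar kinetic recipe the authors contrast against is a one-liner; the Python sketch below evaluates T = 2*KE/(3*N*kB) for a Maxwellian velocity sample (illustrative masses and temperature). The paper's contribution, the analogous expression in dynamic spin variables, is not reproduced here.

      import numpy as np

      kB = 1.380649e-23  # Boltzmann constant, J/K

      def kinetic_temperature(masses, velocities):
          """Equipartition estimate for N atoms in 3D:
          <KE> = (3/2) N kB T, so T = 2*KE_total / (3*N*kB)."""
          ke = 0.5 * np.sum(masses[:, None] * velocities ** 2)
          return 2.0 * ke / (3.0 * len(masses) * kB)

      rng = np.random.default_rng(3)
      m = np.full(1000, 1.66e-26)                 # ~ one atomic mass unit (kg)
      v = rng.normal(0.0, np.sqrt(kB * 300.0 / m[0]), (1000, 3))  # ~300 K
      print("T ~", kinetic_temperature(m, v), "K")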

  1. Advances in the computation of transonic separated flows over finite wings

    NASA Technical Reports Server (NTRS)

    Kaynak, Unver; Flores, Jolen

    1989-01-01

    Problems encountered in numerical simulations of transonic wind-tunnel experiments with low-aspect-ratio wings are surveyed and illustrated. The focus is on the zonal Euler/Navier-Stokes program developed by Holst et al. (1985) and its application to shock-induced separation. The physical basis and numerical implementation of the method are reviewed, and results are presented from studies of the effects of artificial dissipation, boundary conditions, grid refinement, the turbulence model, and geometry representation on the simulation accuracy. Extensive graphs and diagrams and typical flow visualizations are provided.

  2. Marketing percolation

    NASA Astrophysics Data System (ADS)

    Goldenberg, J.; Libai, B.; Solomon, S.; Jan, N.; Stauffer, D.

    2000-09-01

    A percolation model is presented, with computer simulations for illustrations, to show how the sales of a new product may penetrate the consumer market. We review the traditional approach in the marketing literature, which is based on differential or difference equations similar to the logistic equation (Bass, Manage. Sci. 15 (1969) 215). This mean-field approach is contrasted with the discrete percolation on a lattice, with simulations of "social percolation" (Solomon et al., Physica A 277 (2000) 239) in two to five dimensions giving power laws instead of exponential growth, and strong fluctuations right at the percolation threshold.
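
    A minimal sketch of lattice percolation applied to adoption, under assumptions not taken from the paper: each consumer is "receptive" with probability p, adoption is seeded at one edge, and it spreads through nearest-neighbor receptive sites. Near the percolation threshold the penetrated fraction fluctuates strongly from run to run, echoing the fluctuations noted in the abstract.

      import numpy as np
      from collections import deque

      rng = np.random.default_rng(5)
      L, p = 200, 0.6                  # lattice size, fraction of receptive consumers
      receptive = rng.random((L, L)) < p

      # Adoption spreads from the left edge through receptive neighbors (BFS).
      adopted = np.zeros((L, L), dtype=bool)
      queue = deque((i, 0) for i in range(L) if receptive[i, 0])
      for i, j in queue:
          adopted[i, j] = True
      while queue:
          i, j = queue.popleft()
          for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
              a, b = i + di, j + dj
              if 0 <= a < L and 0 <= b < L and receptive[a, b] and not adopted[a, b]:
                  adopted[a, b] = True
                  queue.append((a, b))

      print("market penetration:", adopted.mean())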

  3. Tetrahedral and polyhedral mesh evaluation for cerebral hemodynamic simulation--a comparison.

    PubMed

    Spiegel, Martin; Redel, Thomas; Zhang, Y; Struffert, Tobias; Hornegger, Joachim; Grossman, Robert G; Doerfler, Arnd; Karmonik, Christof

    2009-01-01

    Computational fluid dynamics (CFD) based on patient-specific medical imaging data has found widespread use for visualizing and quantifying hemodynamics in cerebrovascular diseases such as cerebral aneurysms or stenotic vessels. This paper focuses on optimizing mesh parameters for CFD simulation of cerebral aneurysms. Valid blood flow simulations strongly depend on mesh quality: meshes with a coarse spatial resolution may lead to an inaccurate flow pattern, while meshes with a large number of elements result in unnecessarily high computation times, which is undesirable should CFD be used for planning in the interventional setting. Most CFD simulations reported for these vascular pathologies have used tetrahedral meshes. We illustrate the use of polyhedral volume elements in comparison to tetrahedral meshing on two different geometries, a sidewall aneurysm of the internal carotid artery and a basilar bifurcation aneurysm. The spatial mesh resolution ranges between 5,119 and 228,118 volume elements. The evaluation of the different meshes was based on the wall shear stress, previously identified as one possible parameter for assessing aneurysm growth. Polyhedral meshes showed better accuracy, lower memory demand, shorter computation times, and faster convergence behavior (on average, 369 fewer iterations).

  4. Interpretive computer simulator for the NASA Standard Spacecraft Computer-2 (NSSC-2)

    NASA Technical Reports Server (NTRS)

    Smith, R. S.; Noland, M. S.

    1979-01-01

    An Interpretive Computer Simulator (ICS) for the NASA Standard Spacecraft Computer-II (NSSC-II) was developed as a code verification and testing tool for the Annular Suspension and Pointing System (ASPS) project. The simulator is written in the higher-level language PASCAL and implemented on the CDC CYBER series computer system. It is supported by a meta-assembler, a linkage loader for the NSSC-II, and a utility library to meet the application requirements. The architectural design of the NSSC-II is that of an IBM System/360 (S/360), and it supports all but four instructions of the S/360 standard instruction set. The structural design of the ICS is described with emphasis on the design differences between it and the NSSC-II hardware. The program flow is diagrammed, with the function of each procedure being defined; the instruction implementation is discussed in broad terms; and the instruction timings used in the ICS are listed. An example of the steps required to process an assembly-level language program on the ICS is included. The example illustrates the control cards necessary to assemble, load, and execute assembly language code; the sample program to be executed; the executable load module produced by the loader; and the resulting output produced by the ICS.

  5. An adaptive importance sampling algorithm for Bayesian inversion with multimodal distributions

    DOE PAGES

    Li, Weixuan; Lin, Guang

    2015-03-21

    Parametric uncertainties are encountered in the simulations of many physical systems, and may be reduced by an inverse modeling procedure that calibrates the simulation results to observations on the real system being simulated. Following Bayes' rule, a general approach for inverse modeling problems is to sample from the posterior distribution of the uncertain model parameters given the observations. However, the large number of repetitive forward simulations required in the sampling process could pose a prohibitive computational burden. This difficulty is particularly challenging when the posterior is multimodal. We present in this paper an adaptive importance sampling algorithm to tackle these challenges. Two essential ingredients of the algorithm are: (1) a Gaussian mixture (GM) model adaptively constructed as the proposal distribution to approximate the possibly multimodal target posterior, and (2) a mixture of polynomial chaos (PC) expansions, built according to the GM proposal, as a surrogate model to alleviate the computational burden caused by computationally demanding forward model evaluations. In three illustrative examples, the proposed adaptive importance sampling algorithm demonstrates its capability of automatically finding a GM proposal with an appropriate number of modes for the specific problem under study, and of obtaining a sample that accurately and efficiently represents the posterior with a limited number of forward simulations.
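
    A stripped-down, non-adaptive sketch of the importance-sampling ingredient, in Python: draw from a fixed two-component Gaussian-mixture proposal and reweight by target/proposal. The bimodal target is a toy stand-in for the posterior; in the paper the mixture is constructed adaptively and the target is evaluated through a PC surrogate, both omitted here.

      import numpy as np
      from scipy.stats import norm

      rng = np.random.default_rng(11)

      # Bimodal toy posterior standing in for the expensive calibrated model.
      target_pdf = lambda x: 0.5 * norm.pdf(x, -2.0, 0.5) + 0.5 * norm.pdf(x, 2.0, 0.5)

      # Fixed Gaussian-mixture proposal roughly covering both modes.
      means, sds, wts = np.array([-1.5, 1.5]), np.array([1.0, 1.0]), np.array([0.5, 0.5])
      proposal_pdf = lambda x: np.sum(wts * norm.pdf(x[:, None], means, sds), axis=1)

      comp = rng.choice(2, size=20000, p=wts)   # pick mixture components
      x = rng.normal(means[comp], sds[comp])    # sample from the proposal
      w = target_pdf(x) / proposal_pdf(x)       # importance weights
      w /= w.sum()
      print("posterior mean estimate:", np.sum(w * x))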

  6. An adaptive importance sampling algorithm for Bayesian inversion with multimodal distributions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Weixuan; Lin, Guang, E-mail: guanglin@purdue.edu

    2015-08-01

    Parametric uncertainties are encountered in the simulations of many physical systems, and may be reduced by an inverse modeling procedure that calibrates the simulation results to observations on the real system being simulated. Following Bayes' rule, a general approach for inverse modeling problems is to sample from the posterior distribution of the uncertain model parameters given the observations. However, the large number of repetitive forward simulations required in the sampling process could pose a prohibitive computational burden. This difficulty is particularly challenging when the posterior is multimodal. We present in this paper an adaptive importance sampling algorithm to tackle these challenges. Two essential ingredients of the algorithm are: (1) a Gaussian mixture (GM) model adaptively constructed as the proposal distribution to approximate the possibly multimodal target posterior, and (2) a mixture of polynomial chaos (PC) expansions, built according to the GM proposal, as a surrogate model to alleviate the computational burden caused by computationally demanding forward model evaluations. In three illustrative examples, the proposed adaptive importance sampling algorithm demonstrates its capability of automatically finding a GM proposal with an appropriate number of modes for the specific problem under study, and of obtaining a sample that accurately and efficiently represents the posterior with a limited number of forward simulations.

  7. Use of three-dimensional computer graphic animation to illustrate cleft lip and palate surgery.

    PubMed

    Cutting, C; Oliker, A; Haring, J; Dayan, J; Smith, D

    2002-01-01

    Three-dimensional (3D) computer animation is not commonly used to illustrate surgical techniques. This article describes the surgery-specific processes that were required to produce animations to teach cleft lip and palate surgery. Three-dimensional models were created using CT scans of two Chinese children with unrepaired clefts (one unilateral and one bilateral). We programmed several custom software tools, including an incision tool, a forceps tool, and a fat tool. Three-dimensional animation was found to be particularly useful for illustrating surgical concepts. Positioning the virtual "camera" made it possible to view the anatomy from angles that are impossible to obtain with a real camera. Transparency allows the underlying anatomy to be seen during surgical repair while maintaining a view of the overlaying tissue relationships. Finally, the representation of motion allows modeling of anatomical mechanics that cannot be done with static illustrations. The animations presented in this article can be viewed on-line at http://www.smiletrain.org/programs/virtual_surgery2.htm. Sophisticated surgical procedures are clarified with the use of 3D animation software and customized software tools. The next step in the development of this technology is the creation of interactive simulators that recreate the experience of surgery in a safe, digital environment. Copyright 2003 Wiley-Liss, Inc.

  8. Theory and Simulation of Multicomponent Osmotic Systems

    PubMed Central

    Karunaweera, Sadish; Gee, Moon Bae; Weerasinghe, Samantha; Smith, Paul E.

    2012-01-01

    Most cellular processes occur in systems containing a variety of components many of which are open to material exchange. However, computer simulations of biological systems are almost exclusively performed in systems closed to material exchange. In principle, the behavior of biomolecules in open and closed systems will be different. Here, we provide a rigorous framework for the analysis of experimental and simulation data concerning open and closed multicomponent systems using the Kirkwood-Buff (KB) theory of solutions. The results are illustrated using computer simulations for various concentrations of the solutes Gly, Gly2 and Gly3 in both open and closed systems, and in the absence or presence of NaCl as a cosolvent. In addition, KB theory is used to help rationalize the aggregation properties of the solutes. Here one observes that the picture of solute association described by the KB integrals, which are directly related to the solution thermodynamics, and that provided by more physical clustering approaches are different. It is argued that the combination of KB theory and simulation data provides a simple and powerful tool for the analysis of complex multicomponent open and closed systems. PMID:23329894

  9. Down to the roughness scale assessment of piston-ring/liner contacts

    NASA Astrophysics Data System (ADS)

    Checo, H. M.; Jaramillo, A.; Ausas, R. F.; Jai, M.; Buscaglia, G. C.

    2017-02-01

    The effects of surface roughness in hydrodynamic bearings have been accounted for through several approaches, the most widely used being averaging or stochastic techniques. With these, the surface is not treated "as it is" but by means of an assumed probability distribution for the roughness. The so-called direct (deterministic, or measured-surface) simulations solve the lubrication problem with realistic surfaces down to the roughness scale. This leads to computationally expensive problems. Most researchers have tackled this problem by considering non-moving surfaces and neglecting the ring dynamics to reduce the computational burden. What is proposed here is to solve the fully deterministic simulation both in space and in time, so that the actual movement of the surfaces and the ring dynamics are taken into account. This simulation is much more complex than previous ones, as it is intrinsically transient. The feasibility of these fully deterministic simulations is illustrated in two cases: simulation of liner surfaces with diverse finishes (honed and coated bores), first with constant piston velocity and load on the ring, and then under real engine conditions.

  10. Constant-pH Molecular Dynamics Simulations for Large Biomolecular Systems

    DOE PAGES

    Radak, Brian K.; Chipot, Christophe; Suh, Donghyuk; ...

    2017-11-07

    We report that an increasingly important endeavor is to develop computational strategies that enable molecular dynamics (MD) simulations of biomolecular systems with spontaneous changes in protonation states under conditions of constant pH. The present work describes our efforts to implement the powerful constant-pH MD simulation method, based on a hybrid nonequilibrium MD/Monte Carlo (neMD/MC) technique, within the highly scalable program NAMD. The constant-pH hybrid neMD/MC method has several appealing features: it samples the correct semigrand canonical ensemble rigorously, the computational cost increases linearly with the number of titratable sites, and it is applicable to explicit solvent simulations. The present implementation of the constant-pH hybrid neMD/MC in NAMD is designed to handle a wide range of biomolecular systems with no constraints on the choice of force field. Furthermore, the sampling efficiency can be adaptively improved on the fly by adjusting algorithmic parameters during the simulation. Finally, illustrative examples emphasizing medium- and large-scale applications on next-generation supercomputing architectures are provided.

  11. Constant-pH Molecular Dynamics Simulations for Large Biomolecular Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Radak, Brian K.; Chipot, Christophe; Suh, Donghyuk

    We report that an increasingly important endeavor is to develop computational strategies that enable molecular dynamics (MD) simulations of biomolecular systems with spontaneous changes in protonation states under conditions of constant pH. The present work describes our efforts to implement the powerful constant-pH MD simulation method, based on a hybrid nonequilibrium MD/Monte Carlo (neMD/MC) technique, within the highly scalable program NAMD. The constant-pH hybrid neMD/MC method has several appealing features: it samples the correct semigrand canonical ensemble rigorously, the computational cost increases linearly with the number of titratable sites, and it is applicable to explicit solvent simulations. The present implementation of the constant-pH hybrid neMD/MC in NAMD is designed to handle a wide range of biomolecular systems with no constraints on the choice of force field. Furthermore, the sampling efficiency can be adaptively improved on the fly by adjusting algorithmic parameters during the simulation. Finally, illustrative examples emphasizing medium- and large-scale applications on next-generation supercomputing architectures are provided.

  12. Hierarchical nonlinear behavior of hot composite structures

    NASA Technical Reports Server (NTRS)

    Murthy, P. L. N.; Chamis, C. C.; Singhal, S. N.

    1993-01-01

    Hierarchical computational procedures are described to simulate the multiple scale thermal/mechanical behavior of high temperature metal matrix composites (HT-MMC) in the following three broad areas: (1) behavior of HT-MMC's from micromechanics to laminate via METCAN (Metal Matrix Composite Analyzer), (2) tailoring of HT-MMC behavior for optimum specific performance via MMLT (Metal Matrix Laminate Tailoring), and (3) HT-MMC structural response for hot structural components via HITCAN (High Temperature Composite Analyzer). Representative results from each area are presented to illustrate the effectiveness of computational simulation procedures and accompanying computer codes. The sample case results show that METCAN can be used to simulate material behavior such as the entire creep span; MMLT can be used to concurrently tailor the fabrication process and the interphase layer for optimum performance such as minimum residual stresses; and HITCAN can be used to predict the structural behavior such as the deformed shape due to component fabrication. These codes constitute virtual portable desk-top test laboratories for characterizing HT-MMC laminates, tailoring the fabrication process, and qualifying structural components made from them.

  13. A software tool for modeling and simulation of numerical P systems.

    PubMed

    Buiu, Catalin; Arsene, Octavian; Cipu, Corina; Patrascu, Monica

    2011-03-01

    A P system represents a distributed and parallel bio-inspired computing model in which basic data structures are multi-sets or strings. Numerical P systems have been recently introduced and they use numerical variables and local programs (or evolution rules), usually in a deterministic way. They may find interesting applications in areas such as computational biology, process control or robotics. The first simulator of numerical P systems (SNUPS) has been designed, implemented and made available to the scientific community by the authors of this paper. SNUPS allows a wide range of applications, from modeling and simulation of ordinary differential equations, to the use of membrane systems as computational blocks of cognitive architectures, and as controllers for autonomous mobile robots. This paper describes the functioning of a numerical P system and presents an overview of SNUPS capabilities together with an illustrative example. SNUPS is freely available to researchers as a standalone application and may be downloaded from a dedicated website, http://snups.ics.pub.ro/, which includes a user manual and sample membrane structures. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
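
    A minimal sketch of one evolution step of a numerical P system, following the usual semantics (a program evaluates its production function on the membrane's variables, the contributing variables are reset, and the produced value is distributed in proportion to repartition coefficients); the program and values are illustrative and not a SNUPS input.

      def step(variables, prog_vars, production, repartition):
          """One evolution step: evaluate the production function, reset
          the contributing variables to zero, then distribute the produced
          value proportionally to the repartition coefficients."""
          total = production(variables)
          for name in prog_vars:
              variables[name] = 0.0
          coeff_sum = sum(c for c, _ in repartition)
          for c, name in repartition:
              variables[name] += total * c / coeff_sum
          return variables

      v = {"x1": 2.0, "x2": 3.0}
      # Program: 2*x1 + x2 -> 1|x1 + 3|x2  (illustrative rule).
      v = step(v, ["x1", "x2"], lambda u: 2.0 * u["x1"] + u["x2"],
               [(1, "x1"), (3, "x2")])
      print(v)   # production 7.0 split 1:3 between x1 and x2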

  14. Two-dimensional nonsteady viscous flow simulation on the Navier-Stokes computer miniNode

    NASA Technical Reports Server (NTRS)

    Nosenchuck, Daniel M.; Littman, Michael G.; Flannery, William

    1986-01-01

    The needs of large-scale scientific computation are outpacing the growth in performance of mainframe supercomputers. In particular, problems in fluid mechanics involving complex flow simulations require far more speed and capacity than that provided by current and proposed Class VI supercomputers. To address this concern, the Navier-Stokes Computer (NSC) was developed. The NSC is a parallel-processing machine, comprised of individual Nodes, each comparable in performance to current supercomputers. The global architecture is that of a hypercube, and a 128-Node NSC has been designed. New architectural features, such as a reconfigurable many-function ALU pipeline and a multifunction memory-ALU switch, have provided the capability to efficiently implement a wide range of algorithms. Efficient algorithms typically involve numerically intensive tasks, which often include conditional operations. These operations may be efficiently implemented on the NSC without, in general, sacrificing vector-processing speed. To illustrate the architecture, programming, and several of the capabilities of the NSC, the simulation of two-dimensional, nonsteady viscous flows on a prototype Node, called the miniNode, is presented.

  15. Channel Training for Analog FDD Repeaters: Optimal Estimators and Cramér-Rao Bounds

    NASA Astrophysics Data System (ADS)

    Wesemann, Stefan; Marzetta, Thomas L.

    2017-12-01

    For frequency division duplex channels, a simple pilot loop-back procedure has been proposed that allows the estimation of the UL & DL channels at an antenna array without relying on any digital signal processing at the terminal side. For this scheme, we derive the maximum likelihood (ML) estimators for the UL & DL channel subspaces, formulate the corresponding Cramér-Rao bounds and show the asymptotic efficiency of both (SVD-based) estimators by means of Monte Carlo simulations. In addition, we illustrate how to compute the underlying (rank-1) SVD with quadratic time complexity by employing the power iteration method. To enable power control for the data transmission, knowledge of the channel gains is needed. Assuming that the UL & DL channels have on average the same gain, we formulate the ML estimator for the channel norm, and illustrate its robustness against strong noise by means of simulations.
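
    The rank-1 SVD via power iteration mentioned by the authors can be sketched in a few lines: iterate v <- A^H(A v) with normalization, then recover the dominant singular value and left vector. The matrix below is random test data; each iteration costs one multiply by A and one by A^H, hence the quadratic time complexity.

      import numpy as np

      def rank1_svd_power(A, iters=50):
          """Dominant singular triplet by power iteration on A^H A."""
          rng = np.random.default_rng(0)
          v = rng.standard_normal(A.shape[1]) + 1j * rng.standard_normal(A.shape[1])
          v /= np.linalg.norm(v)
          for _ in range(iters):
              v = A.conj().T @ (A @ v)      # one multiply by A, one by A^H
              v /= np.linalg.norm(v)
          u = A @ v
          s = np.linalg.norm(u)
          return u / s, s, v

      A = (np.random.default_rng(2).standard_normal((8, 4))
           + 1j * np.random.default_rng(3).standard_normal((8, 4)))
      u, s, v = rank1_svd_power(A)
      print("sigma_max:", s, "vs", np.linalg.svd(A, compute_uv=False)[0])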

  16. Wind-tunnel based definition of the AFE aerothermodynamic environment. [Aeroassist Flight Experiment

    NASA Technical Reports Server (NTRS)

    Miller, Charles G.; Wells, W. L.

    1992-01-01

    The Aeroassist Flight Experiment (AFE), scheduled to be performed in 1994, will serve as a precursor for aeroassisted space transfer vehicles (ASTV's) and is representative of entry concepts being considered for missions to Mars. Rationale for the AFE is reviewed briefly as are the various experiments carried aboard the vehicle. The approach used to determine hypersonic aerodynamic and aerothermodynamic characteristics over a wide range of simulation parameters in ground-based facilities is presented. Facilities, instrumentation and test procedures employed in the establishment of the data base are discussed. Measurements illustrating the effects of hypersonic simulation parameters, particularly normal-shock density ratio (an important parameter for hypersonic blunt bodies), and attitude on aerodynamic and aerothermodynamic characteristics are presented, and predictions from computational fluid dynamic (CFD) computer codes are compared with measurement.

  17. TOPICAL REVIEW: Advances and challenges in computational plasma science

    NASA Astrophysics Data System (ADS)

    Tang, W. M.; Chan, V. S.

    2005-02-01

    Scientific simulation, which provides a natural bridge between theory and experiment, is an essential tool for understanding complex plasma behaviour. Recent advances in simulations of magnetically confined plasmas are reviewed in this paper, with illustrative examples, chosen from associated research areas such as microturbulence, magnetohydrodynamics and other topics. Progress has been stimulated, in particular, by the exponential growth of computer speed along with significant improvements in computer technology. The advances in both particle and fluid simulations of fine-scale turbulence and large-scale dynamics have produced increasingly good agreement between experimental observations and computational modelling. This was enabled by two key factors: (a) innovative advances in analytic and computational methods for developing reduced descriptions of physics phenomena spanning widely disparate temporal and spatial scales and (b) access to powerful new computational resources. Excellent progress has been made in developing codes for which computer run-time and problem-size scale well with the number of processors on massively parallel processors (MPPs). Examples include the effective usage of the full power of multi-teraflop (multi-trillion floating point computations per second) MPPs to produce three-dimensional, general geometry, nonlinear particle simulations that have accelerated advances in understanding the nature of turbulence self-regulation by zonal flows. These calculations, which typically utilized billions of particles for thousands of time-steps, would not have been possible without access to powerful present generation MPP computers and the associated diagnostic and visualization capabilities. In looking towards the future, the current results from advanced simulations provide great encouragement for being able to include increasingly realistic dynamics to enable deeper physics insights into plasmas in both natural and laboratory environments. This should produce the scientific excitement which will help to (a) stimulate enhanced cross-cutting collaborations with other fields and (b) attract the bright young talent needed for the future health of the field of plasma science.

  18. Advances and challenges in computational plasma science

    NASA Astrophysics Data System (ADS)

    Tang, W. M.

    2005-02-01

    Scientific simulation, which provides a natural bridge between theory and experiment, is an essential tool for understanding complex plasma behaviour. Recent advances in simulations of magnetically confined plasmas are reviewed in this paper, with illustrative examples, chosen from associated research areas such as microturbulence, magnetohydrodynamics and other topics. Progress has been stimulated, in particular, by the exponential growth of computer speed along with significant improvements in computer technology. The advances in both particle and fluid simulations of fine-scale turbulence and large-scale dynamics have produced increasingly good agreement between experimental observations and computational modelling. This was enabled by two key factors: (a) innovative advances in analytic and computational methods for developing reduced descriptions of physics phenomena spanning widely disparate temporal and spatial scales and (b) access to powerful new computational resources. Excellent progress has been made in developing codes for which computer run-time and problem-size scale well with the number of processors on massively parallel processors (MPPs). Examples include the effective usage of the full power of multi-teraflop (multi-trillion floating point computations per second) MPPs to produce three-dimensional, general geometry, nonlinear particle simulations that have accelerated advances in understanding the nature of turbulence self-regulation by zonal flows. These calculations, which typically utilized billions of particles for thousands of time-steps, would not have been possible without access to powerful present generation MPP computers and the associated diagnostic and visualization capabilities. In looking towards the future, the current results from advanced simulations provide great encouragement for being able to include increasingly realistic dynamics to enable deeper physics insights into plasmas in both natural and laboratory environments. This should produce the scientific excitement which will help to (a) stimulate enhanced cross-cutting collaborations with other fields and (b) attract the bright young talent needed for the future health of the field of plasma science.

  19. Use of a personal computer for dynamical engineering illustrations in a classroom and over an instructional TV network

    NASA Technical Reports Server (NTRS)

    Watson, V. R.

    1983-01-01

    A personal computer has been used to illustrate physical phenomena and problem solution techniques in engineering classes. According to student evaluations, instruction of concepts was greatly improved through the use of these illustrations. This paper describes the class of phenomena that can be effectively illustrated, the techniques used to create these illustrations, and the techniques used to display the illustrations in regular classrooms and over an instructional TV network. The features of a personal computer required to apply these techniques are listed. The capabilities of some present personal computers are discussed and a forecast of the capabilities of future personal computers is presented.

  20. Simulator for neural networks and action potentials.

    PubMed

    Baxter, Douglas A; Byrne, John H

    2007-01-01

    A key challenge for neuroinformatics is to devise methods for representing, accessing, and integrating vast amounts of diverse and complex data. A useful approach to represent and integrate complex data sets is to develop mathematical models [Arbib (The Handbook of Brain Theory and Neural Networks, pp. 741-745, 2003); Arbib and Grethe (Computing the Brain: A Guide to Neuroinformatics, 2001); Ascoli (Computational Neuroanatomy: Principles and Methods, 2002); Bower and Bolouri (Computational Modeling of Genetic and Biochemical Networks, 2001); Hines et al. (J. Comput. Neurosci. 17, 7-11, 2004); Shepherd et al. (Trends Neurosci. 21, 460-468, 1998); Sivakumaran et al. (Bioinformatics 19, 408-415, 2003); Smolen et al. (Neuron 26, 567-580, 2000); Vadigepalli et al. (OMICS 7, 235-252, 2003)]. Models of neural systems provide quantitative and modifiable frameworks for representing data and analyzing neural function. These models can be developed and solved using neurosimulators. One such neurosimulator is simulator for neural networks and action potentials (SNNAP) [Ziv (J. Neurophysiol. 71, 294-308, 1994)]. SNNAP is a versatile and user-friendly tool for developing and simulating models of neurons and neural networks. SNNAP simulates many features of neuronal function, including ionic currents and their modulation by intracellular ions and/or second messengers, and synaptic transmission and synaptic plasticity. SNNAP is written in Java and runs on most computers. Moreover, SNNAP provides a graphical user interface (GUI) and does not require programming skills. This chapter describes several capabilities of SNNAP and illustrates methods for simulating neurons and neural networks. SNNAP is available at http://snnap.uth.tmc.edu .
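
    To make the class of models concrete, the sketch below integrates a minimal Hodgkin-Huxley-style membrane with the classic squid-axon parameters. It is a Python illustration of the kind of conductance-based dynamics a neurosimulator such as SNNAP solves, not SNNAP code (SNNAP itself is configured through its GUI rather than programmed):

        # Minimal Hodgkin-Huxley action potential (classic squid-axon values).
        # Illustrative of conductance-based neuron models; NOT SNNAP code.
        import numpy as np

        def alpha_n(v): return 0.01 * (v + 55.0) / (1.0 - np.exp(-(v + 55.0) / 10.0))
        def beta_n(v):  return 0.125 * np.exp(-(v + 65.0) / 80.0)
        def alpha_m(v): return 0.1 * (v + 40.0) / (1.0 - np.exp(-(v + 40.0) / 10.0))
        def beta_m(v):  return 4.0 * np.exp(-(v + 65.0) / 18.0)
        def alpha_h(v): return 0.07 * np.exp(-(v + 65.0) / 20.0)
        def beta_h(v):  return 1.0 / (1.0 + np.exp(-(v + 35.0) / 10.0))

        def simulate(t_max=50.0, dt=0.01, i_ext=10.0):
            """Forward-Euler integration; units are mV, ms, uA/cm^2."""
            g_na, g_k, g_l = 120.0, 36.0, 0.3     # peak conductances (mS/cm^2)
            e_na, e_k, e_l = 50.0, -77.0, -54.4   # reversal potentials (mV)
            c_m = 1.0                             # membrane capacitance (uF/cm^2)
            v, n, m, h = -65.0, 0.318, 0.053, 0.596   # resting state
            trace = []
            for _ in range(int(t_max / dt)):
                i_ion = (g_na * m**3 * h * (v - e_na) + g_k * n**4 * (v - e_k)
                         + g_l * (v - e_l))
                v += dt * (i_ext - i_ion) / c_m
                n += dt * (alpha_n(v) * (1.0 - n) - beta_n(v) * n)
                m += dt * (alpha_m(v) * (1.0 - m) - beta_m(v) * m)
                h += dt * (alpha_h(v) * (1.0 - h) - beta_h(v) * h)
                trace.append(v)
            return np.array(trace)

        print("peak membrane potential: %.1f mV" % simulate().max())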

  1. Advanced Computation in Plasma Physics

    NASA Astrophysics Data System (ADS)

    Tang, William

    2001-10-01

    Scientific simulation in tandem with theory and experiment is an essential tool for understanding complex plasma behavior. This talk will review recent progress and future directions for advanced simulations in magnetically-confined plasmas with illustrative examples chosen from areas such as microturbulence, magnetohydrodynamics, magnetic reconnection, and others. Significant recent progress has been made in both particle and fluid simulations of fine-scale turbulence and large-scale dynamics, giving increasingly good agreement between experimental observations and computational modeling. This was made possible by innovative advances in analytic and computational methods for developing reduced descriptions of physics phenomena spanning widely disparate temporal and spatial scales together with access to powerful new computational resources. In particular, the fusion energy science community has made excellent progress in developing advanced codes for which computer run-time and problem size scale well with the number of processors on massively parallel machines (MPP's). A good example is the effective usage of the full power of multi-teraflop MPP's to produce 3-dimensional, general geometry, nonlinear particle simulations which have accelerated progress in understanding the nature of turbulence self-regulation by zonal flows. It should be emphasized that these calculations, which typically utilized billions of particles for tens of thousands time-steps, would not have been possible without access to powerful present generation MPP computers and the associated diagnostic and visualization capabilities. In general, results from advanced simulations provide great encouragement for being able to include increasingly realistic dynamics to enable deeper physics insights into plasmas in both natural and laboratory environments. The associated scientific excitement should serve to stimulate improved cross-cutting collaborations with other fields and also to help attract bright young talent to plasma science.

  2. Computable General Equilibrium Model Fiscal Year 2013 Capability Development Report - April 2014

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Edwards, Brian Keith; Rivera, Michael K.; Boero, Riccardo

    2014-04-01

    This report documents progress made on continued developments of the National Infrastructure Simulation and Analysis Center (NISAC) Computable General Equilibrium Model (NCGEM), developed in fiscal year 2012. In fiscal year 2013, NISAC the treatment of the labor market and tests performed with the model to examine the properties of the solutions computed by the model. To examine these, developers conducted a series of 20 simulations for 20 U.S. States. Each of these simulations compared an economic baseline simulation with an alternative simulation that assumed a 20-percent reduction in overall factor productivity in the manufacturing industries of each State. Differences inmore » the simulation results between the baseline and alternative simulations capture the economic impact of the reduction in factor productivity. While not every State is affected in precisely the same way, the reduction in manufacturing industry productivity negatively affects the manufacturing industries in each State to an extent proportional to the reduction in overall factor productivity. Moreover, overall economic activity decreases when manufacturing sector productivity is reduced. Developers ran two additional simulations: (1) a version of the model for the State of Michigan, with manufacturing divided into two sub-industries (automobile and other vehicle manufacturing as one sub-industry and the rest of manufacturing as the other subindustry); and (2) a version of the model for the United States, divided into 30 industries. NISAC conducted these simulations to illustrate the flexibility of industry definitions in NCGEM and to examine the simulation properties of in more detail.« less

  3. A Big Data and Learning Analytics Approach to Process-Level Feedback in Cognitive Simulations.

    PubMed

    Pecaric, Martin; Boutis, Kathy; Beckstead, Jason; Pusic, Martin

    2017-02-01

    Collecting and analyzing large amounts of process data for the purposes of education can be considered a big data/learning analytics (BD/LA) approach to improving learning. However, in the education of health care professionals, the application of BD/LA is limited to date. The authors discuss the potential advantages of the BD/LA approach for the process of learning via cognitive simulations. Using the lens of a cognitive model of radiograph interpretation with four phases (orientation, searching/scanning, feature detection, and decision making), they reanalyzed process data from a cognitive simulation of pediatric ankle radiography where 46 practitioners from three expertise levels classified 234 cases online. To illustrate the big data component, they highlight the data available in a digital environment (time-stamped, click-level process data). Learning analytics were illustrated using algorithmic computer-enabled approaches to process-level feedback.For each phase, the authors were able to identify examples of potentially useful BD/LA measures. For orientation, the trackable behavior of re-reviewing the clinical history was associated with increased diagnostic accuracy. For searching/scanning, evidence of skipping views was associated with an increased false-negative rate. For feature detection, heat maps overlaid on the radiograph can provide a metacognitive visualization of common novice errors. For decision making, the measured influence of sequence effects can reflect susceptibility to bias, whereas computer-generated path maps can provide insights into learners' diagnostic strategies.In conclusion, the augmented collection and dynamic analysis of learning process data within a cognitive simulation can improve feedback and prompt more precise reflection on a novice clinician's skill development.
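
    A minimal sketch of such process-level analytics, using a hypothetical event schema (case, phase, start/end timestamps, diagnostic outcome) rather than the study's actual data format:

        # Per-phase dwell times from time-stamped, click-level process data,
        # split by diagnostic outcome -- a candidate process-level feedback
        # measure. The event schema here is hypothetical.
        import pandas as pd

        events = pd.DataFrame({
            "case_id": [1, 1, 1, 1, 2, 2, 2, 2],
            "phase":   ["orientation", "search", "feature", "decision"] * 2,
            "t_start": [0.0, 4.2, 11.0, 19.5, 0.0, 2.1, 9.8, 15.0],
            "t_end":   [4.2, 11.0, 19.5, 22.0, 2.1, 9.8, 15.0, 18.3],
            "correct": [True] * 4 + [False] * 4,
        })
        events["dwell"] = events["t_end"] - events["t_start"]
        print(events.groupby(["phase", "correct"])["dwell"].mean().unstack())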

  4. Characterization of the free-energy landscapes of proteins by NMR-guided metadynamics

    PubMed Central

    Granata, Daniele; Camilloni, Carlo; Vendruscolo, Michele; Laio, Alessandro

    2013-01-01

    The use of free-energy landscapes rationalizes a wide range of aspects of protein behavior by providing a clear illustration of the different states accessible to these molecules, as well as of their populations and pathways of interconversion. The determination of the free-energy landscapes of proteins by computational methods is, however, very challenging as it requires an extensive sampling of their conformational spaces. We describe here a technique to achieve this goal with relatively limited computational resources by incorporating nuclear magnetic resonance (NMR) chemical shifts as collective variables in metadynamics simulations. As in this approach the chemical shifts are not used as structural restraints, the resulting free-energy landscapes correspond to the force fields used in the simulations. We illustrate this approach in the case of the third Ig-binding domain of protein G from streptococcal bacteria (GB3). Our calculations reveal the existence of a folding intermediate of GB3 with nonnative structural elements. Furthermore, the availability of the free-energy landscape enables the folding mechanism of GB3 to be elucidated by analyzing the conformational ensembles corresponding to the native, intermediate, and unfolded states, as well as the transition states between them. Taken together, these results show that, by incorporating experimental data as collective variables in metadynamics simulations, it is possible to enhance the sampling efficiency by two or more orders of magnitude with respect to standard molecular dynamics simulations, and thus to estimate free-energy differences among the different states of a protein with a kBT accuracy by generating trajectories of just a few microseconds. PMID:23572592
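
    The following toy illustrates the metadynamics ingredient of the method on a one-dimensional double well, with the particle position standing in for the chemical-shift-derived collective variable used in the paper. Gaussians deposited along the coordinate progressively fill the wells and accelerate barrier crossing:

        # Schematic 1-D metadynamics with overdamped Langevin dynamics.
        # The collective variable s is simply the particle position here.
        import numpy as np

        rng = np.random.default_rng(0)
        beta, dt, gamma = 4.0, 1e-3, 1.0      # inverse temperature, step, friction
        w, sigma, stride = 0.1, 0.1, 500      # Gaussian height, width, deposit interval

        def grad_potential(x):                # double well V(x) = (x^2 - 1)^2
            return 4.0 * x * (x * x - 1.0)

        centers = []                          # deposited Gaussian centers

        def grad_bias(x):                     # derivative of the history-dependent bias
            if not centers:
                return 0.0
            c = np.asarray(centers)
            return np.sum(-w * (x - c) / sigma**2 * np.exp(-(x - c) ** 2 / (2 * sigma**2)))

        x, crossings = -1.0, 0
        for step in range(100_000):
            force = -(grad_potential(x) + grad_bias(x))
            x_new = x + dt * force / gamma \
                    + np.sqrt(2.0 * dt / (beta * gamma)) * rng.standard_normal()
            if x < 0.0 <= x_new:
                crossings += 1                # transitions speed up as the bias fills the well
            x = x_new
            if step % stride == 0:
                centers.append(x)

        print("barrier crossings:", crossings)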

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rao, N.S.V.

    The classical Nadaraya-Watson estimator is shown to solve a generic sensor fusion problem where the underlying sensor error densities are not known but a sample is available. By employing Haar kernels this estimator is shown to yield finite sample guarantees and also to be efficiently computable. Two simulation examples, and a robotics example involving the detection of a door using arrays of ultrasonic and infrared sensors, are presented to illustrate the performance.
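
    A minimal sketch of the estimator on synthetic data; with the Haar (box) kernel, the estimate reduces to a local average of the samples that fall in a window around the query point:

        # Nadaraya-Watson regression with a Haar (box) kernel.
        # Synthetic data stand in for the sensor training sample.
        import numpy as np

        rng = np.random.default_rng(1)
        x_train = rng.uniform(0.0, 1.0, 200)
        y_train = np.sin(2 * np.pi * x_train) + 0.3 * rng.standard_normal(200)

        def nadaraya_watson(x_query, h=0.05):
            """m(x) = sum_i K((x - x_i)/h) y_i / sum_i K((x - x_i)/h), box kernel K."""
            weights = (np.abs(x_query - x_train) <= h).astype(float)
            if weights.sum() == 0.0:
                return np.nan                 # no samples fall in the window
            return weights @ y_train / weights.sum()

        print(nadaraya_watson(0.25))          # approximately sin(pi/2) = 1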

  6. A sensitivity equation approach to shape optimization in fluid flows

    NASA Technical Reports Server (NTRS)

    Borggaard, Jeff; Burns, John

    1994-01-01

    A sensitivity equation method to shape optimization problems is applied. An algorithm is developed and tested on a problem of designing optimal forebody simulators for a 2D, inviscid supersonic flow. The algorithm uses a BFGS/Trust Region optimization scheme with sensitivities computed by numerically approximating the linear partial differential equations that determine the flow sensitivities. Numerical examples are presented to illustrate the method.

  7. Design for inadvertent damage in composite laminates

    NASA Technical Reports Server (NTRS)

    Singhal, Surendra N.; Chamis, Christos C.

    1992-01-01

    Simplified predictive methods and models to computationally simulate durability and damage in polymer matrix composite materials/structures are described. The models include (1) progressive fracture, (2) progressively damaged structural behavior, (3) progressive fracture in aggressive environments, (4) stress concentrations, and (5) impact resistance. Several examples are included to illustrate applications of the models and to identify significant parameters and sensitivities. Comparisons with limited experimental data are made.

  8. A Comparison of the CHILD and Landlab Computational Landscape Evolution Models and Examples of Best Practices in Numerical Modeling of Surface Processes

    NASA Astrophysics Data System (ADS)

    Gasparini, N. M.; Hobley, D. E. J.; Tucker, G. E.; Istanbulluoglu, E.; Adams, J. M.; Nudurupati, S. S.; Hutton, E. W. H.

    2014-12-01

    Computational models are important tools that can be used to quantitatively understand the evolution of real landscapes. Commonalities exist among most landscape evolution models, although they are also idiosyncratic, in that they are coded in different languages, require different input values, and are designed to tackle a unique set of questions. These differences can make applying a landscape evolution model challenging, especially for novice programmers. In this study, we compare and contrast two landscape evolution models that are designed to tackle similar questions, but whose designs are quite different. The first model, CHILD, is over a decade old and is relatively well-tested, well-developed, and well-used. It is coded in C++, operates on an irregular grid, and was designed with function rather than user experience in mind. In contrast, the second model, Landlab, is relatively new and was designed to be accessible to a wide range of scientists, including those who have not previously used or developed a numerical model. Landlab is coded in Python, a relatively easy language for the non-proficient programmer, and can model landscapes described on both regular and irregular grids. We present landscape simulations from both modeling platforms. Our goal is to illustrate best practices for implementing a new process module in a landscape evolution model, and therefore the simulations are applicable regardless of the modeling platform. We contrast differences and highlight similarities between the use of the two models, including setting up the model and input file for different evolutionary scenarios, computational time, and model output. Whenever possible, we compare model output with analytical solutions and illustrate the effects, or lack thereof, of a uniform vs. non-uniform grid. Our simulations focus on implementing a single process, including detachment-limited or transport-limited fluvial bedrock incision and linear or non-linear diffusion of material on hillslopes. We also illustrate the steps necessary to couple processes together, for example, detachment-limited fluvial bedrock incision with linear diffusion on hillslopes. Trade-offs exist between the two modeling platforms, primarily in speed and ease of use.
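
    As a model-neutral illustration of the coupled processes both platforms implement, the sketch below evolves a one-dimensional profile under detachment-limited stream-power incision plus linear hillslope diffusion in plain NumPy. It is not CHILD or Landlab code, and all parameter values are arbitrary:

        # 1-D landscape evolution: uplift + stream-power incision + diffusion.
        # Schematic only; not CHILD or Landlab code.
        import numpy as np

        n_nodes, dx, dt = 100, 100.0, 1000.0      # grid spacing (m), time step (yr)
        K_sp, m_sp, n_sp = 2e-4, 0.5, 1.0         # stream-power erodibility, exponents
        D, uplift = 0.01, 1e-3                    # diffusivity (m^2/yr), uplift (m/yr)

        x = np.arange(n_nodes) * dx
        z = np.linspace(0.0, 50.0, n_nodes)       # initial ramp; node 0 is the outlet
        area = x[::-1] + dx                       # crude proxy: area grows downstream

        for _ in range(10_000):
            slope = np.maximum(np.diff(z) / dx, 0.0)             # downstream slopes
            incision = K_sp * area[1:] ** m_sp * slope ** n_sp   # detachment-limited
            curvature = np.zeros(n_nodes)
            curvature[1:-1] = (z[2:] - 2.0 * z[1:-1] + z[:-2]) / dx**2
            z[1:] += dt * (uplift - incision + D * curvature[1:])
            z[0] = 0.0                                           # fixed base level

        print("steady-state relief: %.0f m" % (z.max() - z.min()))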

  9. HYDRA-II: A hydrothermal analysis computer code: Volume 3, Verification/validation assessments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCann, R.A.; Lowery, P.S.

    1987-10-01

    HYDRA-II is a hydrothermal computer code capable of three-dimensional analysis of coupled conduction, convection, and thermal radiation problems. This code is especially appropriate for simulating the steady-state performance of spent fuel storage systems. The code has been evaluated for this application for the US Department of Energy's Commercial Spent Fuel Management Program. HYDRA-II provides a finite difference solution in Cartesian coordinates to the equations governing the conservation of mass, momentum, and energy. A cylindrical coordinate system may also be used to enclose the Cartesian coordinate system. This exterior coordinate system is useful for modeling cylindrical cask bodies. The difference equations for conservation of momentum are enhanced by the incorporation of directional porosities and permeabilities that aid in modeling solid structures whose dimensions may be smaller than the computational mesh. The equation for conservation of energy permits modeling of orthotropic physical properties and film resistances. Several automated procedures are available to model radiation transfer within enclosures and from fuel rod to fuel rod. The documentation of HYDRA-II is presented in three separate volumes. Volume I - Equations and Numerics describes the basic differential equations, illustrates how the difference equations are formulated, and gives the solution procedures employed. Volume II - User's Manual contains code flow charts, discusses the code structure, provides detailed instructions for preparing an input file, and illustrates the operation of the code by means of a model problem. This volume, Volume III - Verification/Validation Assessments, provides a comparison between the analytical solution and the numerical simulation for problems with a known solution. This volume also documents comparisons between the results of simulations of single- and multiassembly storage systems and actual experimental data. 11 refs., 55 figs., 13 tabs.

  10. Nexus: a modular workflow management system for quantum simulation codes

    DOE PAGES

    Krogel, Jaron T.

    2015-08-24

    The management of simulation workflows is a significant task for the individual computational researcher. Automation of the required tasks involved in simulation work can decrease the overall time to solution and reduce sources of human error. A new simulation workflow management system, Nexus, is presented to address these issues. Nexus is capable of automated job management on workstations and resources at several major supercomputing centers. Its modular design allows many quantum simulation codes to be supported within the same framework. Current support includes quantum Monte Carlo calculations with QMCPACK, density functional theory calculations with Quantum Espresso or VASP, and quantum chemical calculations with GAMESS. Users can compose workflows through a transparent, text-based interface, resembling the input file of a typical simulation code. A usage example is provided to illustrate the process.

  11. Using Python as a first programming environment for computational physics in developing countries

    NASA Astrophysics Data System (ADS)

    Akpojotor, Godfrey; Ehwerhemuepha, Louis; Echenim, Myron; Akpojotor, Famous

    2011-03-01

    Python's unique features, such as its interpreted, multiplatform, and object-oriented nature, as well as its free and open-source licensing, create the possibility that any user connected to the internet can download the entire package onto any platform, install it, and immediately begin to use it. Python is thus gaining a reputation as a preferred environment for introducing students and new beginners to programming. The Python African Tour project has therefore been launched in Africa, and we are coordinating its use in computational science. We examine here the challenges and prospects of using Python for computational physics (CP) education in developing countries (DC). We then present our project on using Python to simulate and aid the learning of laboratory experiments, illustrated here by a model of the simple pendulum, and to visualize phenomena in physics, illustrated here by a demonstration of the wave motion of a particle in a varying potential. This project, which trains both teachers and students in CP using Python, can easily be adopted in other DC.
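
    A sketch in the spirit of that teaching project (the actual curriculum materials are not reproduced here): the simple pendulum integrated with the beginner-friendly Euler-Cromer scheme and plotted with matplotlib:

        # Simple pendulum with the Euler-Cromer scheme -- a common first
        # computational physics exercise. Parameters are illustrative.
        from math import sin
        import matplotlib.pyplot as plt

        g, L = 9.81, 1.0            # gravity (m/s^2), pendulum length (m)
        dt, t_max = 0.001, 10.0
        theta, omega = 0.3, 0.0     # initial angle (rad) and angular velocity

        ts, thetas = [], []
        for step in range(int(t_max / dt)):
            omega += -(g / L) * sin(theta) * dt   # update velocity first...
            theta += omega * dt                   # ...then position (good energy behavior)
            ts.append(step * dt)
            thetas.append(theta)

        plt.plot(ts, thetas)
        plt.xlabel("time (s)")
        plt.ylabel("angle (rad)")
        plt.title("Simple pendulum, Euler-Cromer integration")
        plt.show()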

  12. A modular finite-element model (MODFE) for areal and axisymmetric ground-water-flow problems, Part 3: Design philosophy and programming details

    USGS Publications Warehouse

    Torak, L.J.

    1993-01-01

    A MODular Finite-Element, digital-computer program (MODFE) was developed to simulate steady or unsteady-state, two-dimensional or axisymmetric ground-water-flow. The modular structure of MODFE places the computationally independent tasks that are performed routinely by digital-computer programs simulating ground-water flow into separate subroutines, which are executed from the main program by control statements. Each subroutine consists of complete sets of computations, or modules, which are identified by comment statements, and can be modified by the user without affecting unrelated computations elsewhere in the program. Simulation capabilities can be added or modified by either adding or modifying subroutines that perform specific computational tasks, and the modular-program structure allows the user to create versions of MODFE that contain only the simulation capabilities that pertain to the ground-water problem of interest. MODFE is written in a Fortran programming language that makes it virtually device independent and compatible with desk-top personal computers and large mainframes. MODFE uses computer storage and execution time efficiently by taking advantage of symmetry and sparseness within the coefficient matrices of the finite-element equations. Parts of the matrix coefficients are computed and stored as single-subscripted variables, which are assembled into a complete coefficient just prior to solution. Computer storage is reused during simulation to decrease storage requirements. Descriptions of subroutines that execute the computational steps of the modular-program structure are given in tables that cross reference the subroutines with particular versions of MODFE. Programming details of linear and nonlinear hydrologic terms are provided. Structure diagrams for the main programs show the order in which subroutines are executed for each version and illustrate some of the linear and nonlinear versions of MODFE that are possible. Computational aspects of changing stresses and boundary conditions with time and of mass-balance and error terms are given for each hydrologic feature. Program variables are listed and defined according to their occurrence in the main programs and in subroutines. Listings of the main programs and subroutines are given.

  13. Adaptive Core Simulation Employing Discrete Inverse Theory - Part I: Theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abdel-Khalik, Hany S.; Turinsky, Paul J.

    2005-07-15

    Use of adaptive simulation is intended to improve the fidelity and robustness of important core attribute predictions such as core power distribution, thermal margins, and core reactivity. Adaptive simulation utilizes a selected set of past and current reactor measurements of reactor observables, i.e., in-core instrumentation readings, to adapt the simulation in a meaningful way. A meaningful adaption will result in high-fidelity and robust adapted core simulator models. To perform adaption, we propose an inverse theory approach in which the multitudes of input data to core simulators, i.e., reactor physics and thermal-hydraulic data, are to be adjusted to improve agreement with measured observables while keeping core simulator models unadapted. At first glance, devising such adaption for typical core simulators with millions of input and observables data would spawn not only several prohibitive challenges but also numerous disparaging concerns. The challenges include the computational burdens of the sensitivity-type calculations required to construct Jacobian operators for the core simulator models. Also, the computational burdens of the uncertainty-type calculations required to estimate the uncertainty information of core simulator input data present a demanding challenge. The concerns however are mainly related to the reliability of the adjusted input data. The methodologies of adaptive simulation are well established in the literature of data adjustment. We adopt the same general framework for data adjustment; however, we refrain from solving the fundamental adjustment equations in a conventional manner. We demonstrate the use of our so-called Efficient Subspace Methods (ESMs) to overcome the computational and storage burdens associated with the core adaption problem. We illustrate the successful use of ESM-based adaptive techniques for a typical boiling water reactor core simulator adaption problem.

  14. FastBit: Interactively Searching Massive Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Kesheng; Ahern, Sean; Bethel, E. Wes

    2009-06-23

    As scientific instruments and computer simulations produce more and more data, the task of locating the essential information to gain insight becomes increasingly difficult. FastBit is an efficient software tool to address this challenge. In this article, we present a summary of the key underlying technologies, namely bitmap compression, encoding, and binning. Together these techniques enable FastBit to answer structured (SQL) queries orders of magnitude faster than popular database systems. To illustrate how FastBit is used in applications, we present three examples involving a high-energy physics experiment, a combustion simulation, and an accelerator simulation. In each case, FastBit significantly reduces the response time and enables interactive exploration on terabytes of data.
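
    A toy rendition of the core idea: one bitmap per value bin answers range queries with fast bitwise operations. The real library additionally compresses the bitmaps (word-aligned hybrid encoding) and supports multi-level encodings, none of which is modeled here:

        # Bitmap index over binned values answering a range query.
        # Simplified illustration; not FastBit itself.
        import numpy as np

        rng = np.random.default_rng(2)
        energy = rng.exponential(scale=10.0, size=1_000_000)   # one data column

        # One bitmap per bin (equality encoding over binned values).
        edges = np.array([0.0, 1.0, 5.0, 10.0, 50.0, np.inf])
        bin_ids = np.digitize(energy, edges) - 1
        bitmaps = [bin_ids == b for b in range(len(edges) - 1)]

        # Query "5 <= energy < 50": OR the bitmaps of the covered bins.
        # (Here the bins exactly cover the range, so no edge-bin check is needed.)
        hits = bitmaps[2] | bitmaps[3]
        print("selected rows:", int(hits.sum()))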

  15. Time-accurate simulations of a shear layer forced at a single frequency

    NASA Technical Reports Server (NTRS)

    Claus, R. W.; Huang, P. G.; Macinnes, J. M.

    1988-01-01

    Calculations are presented for the forced shear layer studied experimentally by Oster and Wygnanski, and Weisbrot. Two different computational approaches are examined: Direct Numerical Simulation (DNS) and Large Eddy Simulation (LES). The DNS approach solves the full three dimensional Navier-Stokes equations for a temporally evolving mixing layer, while the LES approach solves the two dimensional Navier-Stokes equations with a subgrid scale turbulence model. While the comparison between these calculations and experimental data was hampered by a lack of information on the inflow boundary conditions, the calculations are shown to qualitatively agree with several aspects of the experiment. The sensitivity of these calculations to factors such as mesh refinement and Reynolds number is illustrated.

  16. Formulation and implementation of a practical algorithm for parameter estimation with process and measurement noise

    NASA Technical Reports Server (NTRS)

    Maine, R. E.; Iliff, K. W.

    1980-01-01

    A new formulation is proposed for the problem of parameter estimation of dynamic systems with both process and measurement noise. The formulation gives estimates that are maximum likelihood asymptotically in time. The means used to overcome the difficulties encountered by previous formulations are discussed. It is then shown how the proposed formulation can be efficiently implemented in a computer program. A computer program using the proposed formulation is available in a form suitable for routine application. Examples with simulated and real data are given to illustrate that the program works well.

  17. Computational Fluid Dynamics Technology for Hypersonic Applications

    NASA Technical Reports Server (NTRS)

    Gnoffo, Peter A.

    2003-01-01

    Several current challenges in computational fluid dynamics and aerothermodynamics for hypersonic vehicle applications are discussed. Example simulations are presented from code validation and code benchmarking efforts to illustrate capabilities and limitations. Opportunities to advance the state of the art in algorithms, grid generation and adaptation, and code validation are identified. Highlights of diverse efforts to address these challenges are then discussed. One such effort to re-engineer and synthesize the existing analysis capability in LAURA, VULCAN, and FUN3D will provide context for these discussions. The critical (and evolving) role of agile software engineering practice in the capability enhancement process is also noted.

  18. Enhancing Lecture Presentations in Introductory Biology with Computer-Based Multimedia.

    ERIC Educational Resources Information Center

    Fifield, Steve; Peifer, Rick

    1994-01-01

    Uses illustrations and text to discuss convenient ways to organize and present computer-based multimedia to students in lecture classes. Includes the following topics: (1) Effects of illustrations on learning; (2) Using computer-based illustrations in lecture; (3) MacPresents-Multimedia Presentation Software; (4) Advantages of computer-based…

  19. Real-Time Multiprocessor Programming Language (RTMPL) user's manual

    NASA Technical Reports Server (NTRS)

    Arpasi, D. J.

    1985-01-01

    A real-time multiprocessor programming language (RTMPL) has been developed to provide for high-order programming of real-time simulations on systems of distributed computers. RTMPL is a structured, engineering-oriented language. The RTMPL utility supports a variety of multiprocessor configurations and types by generating assembly language programs according to user-specified targeting information. Many programming functions are assumed by the utility (e.g., data transfer and scaling) to reduce the programming chore. This manual describes RTMPL from a user's viewpoint. Source generation, applications, utility operation, and utility output are detailed. An example simulation is generated to illustrate many RTMPL features.

  20. Posterior error probability in the Mu-2 Sequential Ranging System

    NASA Technical Reports Server (NTRS)

    Coyle, C. W.

    1981-01-01

    An expression is derived for the posterior error probability in the Mu-2 Sequential Ranging System. An algorithm is developed which closely bounds the exact answer and can be implemented in the machine software. A computer simulation is provided to illustrate the improved level of confidence in a ranging acquisition using this figure of merit as compared to that using only the prior probabilities. In a simulation of 20,000 acquisitions with an experimentally determined threshold setting, the algorithm detected 90% of the actual errors and made false indication of errors on 0.2% of the acquisitions.

  1. Dancing with Black Holes

    NASA Astrophysics Data System (ADS)

    Aarseth, S. J.

    2008-05-01

    We describe efforts over the last six years to implement regularization methods suitable for studying one or more interacting black holes by direct N-body simulations. Three different methods have been adapted to large-N systems: (i) Time-Transformed Leapfrog, (ii) Wheel-Spoke, and (iii) Algorithmic Regularization. These methods have been tried out with some success on GRAPE-type computers. Special emphasis has also been devoted to including post-Newtonian terms, with application to moderately massive black holes in stellar clusters. Some examples of simulations leading to coalescence by gravitational radiation will be presented to illustrate the practical usefulness of such methods.
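
    For orientation, the sketch below shows the plain (untransformed) leapfrog scheme on a two-body orbit, the baseline that time-transformed and regularized variants extend so as to survive close encounters. It is not the authors' N-body code; units are chosen so that G*M = 1:

        # Kick-drift-kick leapfrog for a two-body Kepler orbit (G*M = 1).
        # Baseline scheme only; regularization is what the paper adds.
        import numpy as np

        def accel(r):
            return -r / np.linalg.norm(r) ** 3   # point-mass gravity

        r = np.array([1.0, 0.0])                 # initial position
        v = np.array([0.0, 0.9])                 # slightly sub-circular speed
        dt = 1e-3
        energy0 = 0.5 * v @ v - 1.0 / np.linalg.norm(r)

        a = accel(r)
        for _ in range(100_000):
            v += 0.5 * dt * a                    # half kick
            r += dt * v                          # drift
            a = accel(r)
            v += 0.5 * dt * a                    # half kick

        energy = 0.5 * v @ v - 1.0 / np.linalg.norm(r)
        print("relative energy drift: %.2e" % abs((energy - energy0) / energy0))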

  2. Deterministic Local Sensitivity Analysis of Augmented Systems - II: Applications to the QUENCH-04 Experiment Using the RELAP5/MOD3.2 Code System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ionescu-Bujor, Mihaela; Jin Xuezhou; Cacuci, Dan G.

    2005-09-15

    The adjoint sensitivity analysis procedure for augmented systems for application to the RELAP5/MOD3.2 code system is illustrated. Specifically, the adjoint sensitivity model corresponding to the heat structure models in RELAP5/MOD3.2 is derived and subsequently augmented to the two-fluid adjoint sensitivity model (ASM-REL/TF). The end product, called ASM-REL/TFH, comprises the complete adjoint sensitivity model for the coupled fluid dynamics/heat structure packages of the large-scale simulation code RELAP5/MOD3.2. The ASM-REL/TFH model is validated by computing sensitivities to the initial conditions for various time-dependent temperatures in the test bundle of the Quench-04 reactor safety experiment. This experiment simulates the reflooding with water of uncovered, degraded fuel rods, clad with material (Zircaloy-4) that has the same composition and size as that used in typical pressurized water reactors. The most important response for the Quench-04 experiment is the time evolution of the cladding temperature of heated fuel rods. The ASM-REL/TFH model is subsequently used to perform an illustrative sensitivity analysis of this and other time-dependent temperatures within the bundle. The results computed by using the augmented adjoint sensitivity system, ASM-REL/TFH, highlight the reliability, efficiency, and usefulness of the adjoint sensitivity analysis procedure for computing time-dependent sensitivities.

  3. Modernization and optimization of a legacy open-source CFD code for high-performance computing architectures

    NASA Astrophysics Data System (ADS)

    Gel, Aytekin; Hu, Jonathan; Ould-Ahmed-Vall, ElMoustapha; Kalinkin, Alexander A.

    2017-02-01

    Legacy codes remain a crucial element of today's simulation-based engineering ecosystem due to the extensive validation process and investment in such software. The rapid evolution of high-performance computing architectures necessitates the modernization of these codes. One approach to modernization is a complete overhaul of the code. However, this could require extensive investments, such as rewriting in modern languages, new data constructs, etc., which will necessitate systematic verification and validation to re-establish the credibility of the computational models. The current study advocates a more incremental approach and is a culmination of several modernization efforts of the legacy code MFIX, which is an open-source computational fluid dynamics code that has evolved over several decades, widely used in multiphase flows and still being developed by the National Energy Technology Laboratory. Two different modernization approaches, 'bottom-up' and 'top-down', are illustrated. Preliminary results show that up to an 8.5x improvement at the selected kernel level was achieved with the first approach, and up to a 50% improvement in total simulated time with the latter, for the demonstration cases and target HPC systems employed.

  4. Multiscale Mechanics of Articular Cartilage: Potentials and Challenges of Coupling Musculoskeletal, Joint, and Microscale Computational Models

    PubMed Central

    Halloran, J. P.; Sibole, S.; van Donkelaar, C. C.; van Turnhout, M. C.; Oomens, C. W. J.; Weiss, J. A.; Guilak, F.; Erdemir, A.

    2012-01-01

    Articular cartilage experiences significant mechanical loads during daily activities. Healthy cartilage provides the capacity for load bearing and regulates the mechanobiological processes for tissue development, maintenance, and repair. Experimental studies at multiple scales have provided a fundamental understanding of macroscopic mechanical function, evaluation of the micromechanical environment of chondrocytes, and the foundations for mechanobiological response. In addition, computational models of cartilage have offered a concise description of experimental data at many spatial levels under healthy and diseased conditions, and have served to generate hypotheses for the mechanical and biological function. Further, modeling and simulation provides a platform for predictive risk assessment, management of dysfunction, as well as a means to relate multiple spatial scales. Simulation-based investigation of cartilage comes with many challenges including both the computational burden and often insufficient availability of data for model development and validation. This review outlines recent modeling and simulation approaches to understand cartilage function from a mechanical systems perspective, and illustrates pathways to associate mechanics with biological function. Computational representations at single scales are provided from the body down to the microstructure, along with attempts to explore multiscale mechanisms of load sharing that dictate the mechanical environment of the cartilage and chondrocytes. PMID:22648577

  5. Computational neurobiology is a useful tool in translational neurology: the example of ataxia

    PubMed Central

    Brown, Sherry-Ann; McCullough, Louise D.; Loew, Leslie M.

    2014-01-01

    Hereditary ataxia, or motor incoordination, affects approximately 150,000 Americans and hundreds of thousands of individuals worldwide with onset from as early as mid-childhood. Affected individuals exhibit dysarthria, dysmetria, action tremor, and diadochokinesia. In this review, we consider an array of computational studies derived from experimental observations relevant to human neuropathology. A survey of related studies illustrates the impact of integrating clinical evidence with data from mouse models and computational simulations. Results from these studies may help explain findings in mice, and after extensive laboratory study, may ultimately be translated to ataxic individuals. This inquiry lays a foundation for using computation to understand neurobiochemical and electrophysiological pathophysiology of spinocerebellar ataxias and may contribute to development of therapeutics. The interdisciplinary analysis suggests that computational neurobiology can be an important tool for translational neurology. PMID:25653585

  6. New Challenges in Computational Thermal Hydraulics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yadigaroglu, George; Lakehal, Djamel

    New needs and opportunities drive the development of novel computational methods for the design and safety analysis of light water reactors (LWRs). Some new methods are likely to be three dimensional. Coupling is expected between system codes, computational fluid dynamics (CFD) modules, and cascades of computations at scales ranging from the macro- or system scale to the micro- or turbulence scales, with the various levels continuously exchanging information back and forth. The ISP-42/PANDA and the international SETH project provide opportunities for testing applications of single-phase CFD methods to LWR safety problems. Although industrial single-phase CFD applications are commonplace, computational multifluid dynamics is still under development. However, first applications are appearing; the state of the art and its potential uses are discussed. The case study of condensation of steam/air mixtures injected from a downward-facing vent into a pool of water is a perfect illustration of a simulation cascade: At the top of the hierarchy of scales, system behavior can be modeled with a system code; at the central level, the volume-of-fluid method can be applied to predict large-scale bubbling behavior; at the bottom of the cascade, direct-contact condensation can be treated with direct numerical simulation, in which turbulent flow (in both the gas and the liquid), interfacial dynamics, and heat/mass transfer are directly simulated without resorting to models.

  7. Using Model Replication to Improve the Reliability of Agent-Based Models

    NASA Astrophysics Data System (ADS)

    Zhong, Wei; Kim, Yushim

    The basic presupposition of model replication activities for a computational model such as an agent-based model (ABM) is that, as a robust and reliable tool, it must be replicable in other computing settings. This assumption has recently gained attention in the community of artificial society and simulation due to the challenges of model verification and validation. Illustrating the replication, in NetLogo and by a different author, of an ABM of fraudulent behavior in a public service delivery system originally developed in the Java-based MASON toolkit, this paper exemplifies how model replication exercises provide unique opportunities for the model verification and validation process. At the same time, replication helps accumulate best practices and patterns of model replication and contributes to the agenda of developing a standard methodological protocol for agent-based social simulation.

  8. An equation-free approach to agent-based computation: Bifurcation analysis and control of stationary states

    NASA Astrophysics Data System (ADS)

    Siettos, C. I.; Gear, C. W.; Kevrekidis, I. G.

    2012-08-01

    We show how the equation-free approach can be exploited to enable agent-based simulators to perform system-level computations such as bifurcation, stability analysis and controller design. We illustrate these tasks through an event-driven agent-based model describing the dynamic behaviour of many interacting investors in the presence of mimesis. Using short bursts of appropriately initialized runs of the detailed, agent-based simulator, we construct the coarse-grained bifurcation diagram of the (expected) density of agents and investigate the stability of its multiple solution branches. When the mimetic coupling between agents becomes strong enough, the stable stationary state loses its stability at a coarse turning point bifurcation. We also demonstrate how the framework can be used to design a wash-out dynamic controller that stabilizes open-loop unstable stationary states even under model uncertainty.
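
    A stripped-down version of the workflow, with a toy stochastic birth-death process standing in for the investor model: the coarse time-stepper lifts a density to agent states, runs a short microscopic burst, and restricts back to a density; a damped Newton iteration on the coarse map then locates the stationary state without ever writing down a macroscopic equation:

        # Equation-free coarse fixed-point computation around a toy
        # microscopic simulator (NOT the investor ABM of the paper).
        import numpy as np

        rng = np.random.default_rng(3)
        N = 10_000                               # number of agents

        def micro_burst(rho, t_steps=50, p_birth=0.02, p_death=0.01):
            """Lift -> short burst of microscopic rules -> restrict."""
            state = rng.random(N) < rho          # lift: random agents 'active'
            for _ in range(t_steps):
                frac = state.mean()
                birth = (~state) & (rng.random(N) < p_birth * (1.0 - frac) * 4)
                death = state & (rng.random(N) < p_death)
                state = (state | birth) & ~death
            return state.mean()                  # restrict to a density

        def coarse_map(rho, reps=8):             # average over replica bursts
            return np.mean([micro_burst(rho) for _ in range(reps)])

        rho = 0.2
        for _ in range(20):                      # damped Newton on F(rho) = Phi(rho) - rho
            f = coarse_map(rho) - rho
            df = (coarse_map(rho + 0.05) - rho - 0.05 - f) / 0.05
            rho = np.clip(rho - 0.5 * f / df, 0.01, 0.99)

        print("coarse stationary density: %.3f" % rho)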

  9. The calculation of electromagnetic fields in the Fresnel and Fraunhofer regions using numerical integration methods

    NASA Technical Reports Server (NTRS)

    Schmidt, R. F.

    1971-01-01

    Some results obtained with a digital computer program written at Goddard Space Flight Center to obtain electromagnetic fields scattered by perfectly reflecting surfaces are presented. For purposes of illustration a paraboloidal reflector was illuminated at radio frequencies in the simulation for both receiving and transmitting modes of operation. Fields were computed in the Fresnel and Fraunhofer regions. A dual-reflector system (Cassegrain) was also simulated for the transmitting case, and fields were computed in the Fraunhofer region. Appended results include derivations which show that the vector Kirchhoff-Kottler formulation has an equivalent form requiring only incident magnetic fields as a driving function. Satisfaction of the radiation conditions at infinity by the equivalent form is demonstrated by a conversion from Cartesian to spherical vector operators. A subsequent development presents the formulation by which Fresnel or Fraunhofer patterns are obtainable for dual-reflector systems. A discussion of the time-average Poynting vector is also appended.

  10. Model reduction for agent-based social simulation: coarse-graining a civil violence model.

    PubMed

    Zou, Yu; Fonoberov, Vladimir A; Fonoberova, Maria; Mezic, Igor; Kevrekidis, Ioannis G

    2012-06-01

    Agent-based modeling (ABM) constitutes a powerful computational tool for the exploration of phenomena involving emergent dynamic behavior in the social sciences. This paper demonstrates a computer-assisted approach that bridges the significant gap between the single-agent microscopic level and the macroscopic (coarse-grained population) level, where fundamental questions must be rationally answered and policies guiding the emergent dynamics devised. Our approach will be illustrated through an agent-based model of civil violence. This spatiotemporally varying ABM incorporates interactions between a heterogeneous population of citizens [active (insurgent), inactive, or jailed] and a population of police officers. Detailed simulations exhibit an equilibrium punctuated by periods of social upheavals. We show how to effectively reduce the agent-based dynamics to a stochastic model with only two coarse-grained degrees of freedom: the number of jailed citizens and the number of active ones. The coarse-grained model captures the ABM dynamics while drastically reducing the computation time (by a factor of approximately 20).

  11. Model reduction for agent-based social simulation: Coarse-graining a civil violence model

    NASA Astrophysics Data System (ADS)

    Zou, Yu; Fonoberov, Vladimir A.; Fonoberova, Maria; Mezic, Igor; Kevrekidis, Ioannis G.

    2012-06-01

    Agent-based modeling (ABM) constitutes a powerful computational tool for the exploration of phenomena involving emergent dynamic behavior in the social sciences. This paper demonstrates a computer-assisted approach that bridges the significant gap between the single-agent microscopic level and the macroscopic (coarse-grained population) level, where fundamental questions must be rationally answered and policies guiding the emergent dynamics devised. Our approach will be illustrated through an agent-based model of civil violence. This spatiotemporally varying ABM incorporates interactions between a heterogeneous population of citizens [active (insurgent), inactive, or jailed] and a population of police officers. Detailed simulations exhibit an equilibrium punctuated by periods of social upheavals. We show how to effectively reduce the agent-based dynamics to a stochastic model with only two coarse-grained degrees of freedom: the number of jailed citizens and the number of active ones. The coarse-grained model captures the ABM dynamics while drastically reducing the computation time (by a factor of approximately 20).

  12. Free Energy Calculations using a Swarm-Enhanced Sampling Molecular Dynamics Approach.

    PubMed

    Burusco, Kepa K; Bruce, Neil J; Alibay, Irfan; Bryce, Richard A

    2015-10-26

    Free energy simulations are an established computational tool in modelling chemical change in the condensed phase. However, sampling of kinetically distinct substates remains a challenge to these approaches. As a route to addressing this, we link the methods of thermodynamic integration (TI) and swarm-enhanced sampling molecular dynamics (sesMD), where simulation replicas interact cooperatively to aid transitions over energy barriers. We illustrate the approach by using alchemical alkane transformations in solution, comparing them with the multiple independent trajectory TI (IT-TI) method. Free energy changes for transitions computed by using IT-TI grew increasingly inaccurate as the intramolecular barrier was heightened. By contrast, swarm-enhanced sampling TI (sesTI) calculations showed clear improvements in sampling efficiency, leading to more accurate computed free energy differences, even in the case of the highest barrier height. The sesTI approach, therefore, has potential in addressing chemical change in systems where conformations exist in slow exchange. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
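
    For contrast with the swarm-enhanced variants, here is a bare-bones independent-trajectory TI on a toy alchemical change (morphing the stiffness of a one-dimensional harmonic well), where the exact answer 0.5*kT*ln(k1/k0) is available as a check. This is plain TI, not sesMD:

        # Thermodynamic integration: dF = integral_0^1 <dU/dlam> dlam,
        # sampled by Metropolis MC and integrated with Gauss-Legendre nodes.
        import numpy as np

        rng = np.random.default_rng(4)
        kT, k0, k1 = 1.0, 1.0, 16.0

        def mean_dU_dlam(lam, n_steps=100_000):
            """U(x; lam) = 0.5*((1-lam)*k0 + lam*k1)*x^2, so
            <dU/dlam> = 0.5*(k1 - k0)*<x^2>."""
            k = (1.0 - lam) * k0 + lam * k1
            x, samples = 0.0, []
            for _ in range(n_steps):
                x_new = x + 0.5 * rng.standard_normal()
                if rng.random() < np.exp(-0.5 * k * (x_new * x_new - x * x) / kT):
                    x = x_new
                samples.append(x * x)
            return 0.5 * (k1 - k0) * np.mean(samples[n_steps // 10:])  # drop burn-in

        nodes, weights = np.polynomial.legendre.leggauss(8)   # map [-1,1] -> [0,1]
        dF = 0.5 * sum(w * mean_dU_dlam(0.5 * (z + 1.0)) for z, w in zip(nodes, weights))
        print("TI estimate: %.3f   analytic: %.3f" % (dF, 0.5 * kT * np.log(k1 / k0)))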

  13. Combining Experiments and Simulation of Gas Absorption for Teaching Mass Transfer Fundamentals: Removing CO2 from Air Using Water and NaOH

    ERIC Educational Resources Information Center

    Clark, William M.; Jackson, Yaminah Z.; Morin, Michael T.; Ferraro, Giacomo P.

    2011-01-01

    Laboratory experiments and computer models for studying the mass transfer process of removing CO2 from air using water or dilute NaOH solution as absorbent are presented. Models tie experiment to theory and give a visual representation of concentration profiles and also illustrate the two-film theory and the relative importance of various…

  14. The envelope of ballistic trajectories and elliptic orbits

    NASA Astrophysics Data System (ADS)

    Butikov, Eugene I.

    2015-11-01

    Simple geometric derivations are given for the shape of the "safety domain" boundary for the family of Keplerian orbits of equal energy in a central gravitational field and for projectile trajectories in a uniform field. Examples of practical uses of the envelope of the family of orbits are discussed and illustrated by computer simulations. This material is appropriate for physics teachers and undergraduate students studying classical mechanics and orbital motions.
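
    The uniform-field case has a compact closed form; a standard derivation consistent with the abstract reads:

        % Trajectory of a projectile launched at speed v and angle \alpha
        % in a uniform gravitational field g, written with u = \tan\alpha:
        \[
          y(x; u) = x\,u - \frac{g x^{2}}{2 v^{2}}\,(1 + u^{2}).
        \]
        % The envelope of the family follows from \partial y / \partial u = 0,
        % which gives u = v^{2}/(g x); substituting back yields
        \[
          y_{\text{env}}(x) = \frac{v^{2}}{2g} - \frac{g x^{2}}{2 v^{2}},
        \]
        % the "safety parabola": points below it can be hit by some launch
        % angle, points above it are unreachable at speed v.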

  15. A Computer-Based Undergraduate Exercise Using Internet-Accessible Simulation Software for the Study of Retention Behavior and Optimization of Separation Conditions in Ion Chromatography

    ERIC Educational Resources Information Center

    Haddad, Paul R.; Shaw, Matthew J.; Madden, John E.; Dicinoski, Greg W.

    2004-01-01

    The ability to scan retention data over a wide range of eluent composition opens up the possibility of a computerized selection of the optimal separation conditions. The major characteristics of retention behavior, peak-shape effects and pH effects evident in ion chromatography (IC) using common stationary phases and eluents are illustrated.

  16. A Decade-Long European-Scale Convection-Resolving Climate Simulation on GPUs

    NASA Astrophysics Data System (ADS)

    Leutwyler, D.; Fuhrer, O.; Ban, N.; Lapillonne, X.; Lüthi, D.; Schar, C.

    2016-12-01

    Convection-resolving models have proven to be very useful tools in numerical weather prediction and in climate research. However, due to their extremely demanding computational requirements, they have so far been limited to short simulations and/or small computational domains. Innovations in the supercomputing domain have led to new supercomputer designs that involve conventional multi-core CPUs and accelerators such as graphics processing units (GPUs). One of the first atmospheric models that has been fully ported to GPUs is the Consortium for Small-Scale Modeling weather and climate model COSMO. This new version allows us to expand the size of the simulation domain to areas spanning continents and the time period up to one decade. We present results from a decade-long, convection-resolving climate simulation over Europe using the GPU-enabled COSMO version on a computational domain with 1536x1536x60 gridpoints. The simulation is driven by the ERA-Interim reanalysis. The results illustrate how the approach allows for the representation of interactions between synoptic-scale and meso-scale atmospheric circulations at scales ranging from 1000 to 10 km. We discuss some of the advantages and prospects of using GPUs, and focus on the performance of the convection-resolving modeling approach on the European scale. Specifically, we investigate the organization of convective clouds and validate hourly rainfall distributions against various high-resolution data sets.

  17. Simulation techniques in hyperthermia treatment planning

    PubMed Central

    Paulides, MM; Stauffer, PR; Neufeld, E; Maccarini, P; Kyriakou, A; Canters, RAM; Diederich, C; Bakker, JF; Van Rhoon, GC

    2013-01-01

    Clinical trials have shown that hyperthermia (HT), i.e. an increase of tissue temperature to 39-44°C, significantly enhances radiotherapy and chemotherapy effectiveness (1). Driven by the developments in computational techniques and computing power, personalized hyperthermia treatment planning (HTP) has matured and has become a powerful tool for optimizing treatment quality. Electromagnetic, ultrasound, and thermal simulations using realistic clinical setups are now being performed to achieve patient-specific treatment optimization. In addition, extensive studies aimed at properly implementing novel HT tools and techniques, and at assessing the quality of HT, are becoming more common. In this paper, we review the simulation tools and techniques developed for clinical hyperthermia, and evaluate their current status on the path from “model” to “clinic”. In addition, we illustrate the major techniques employed for validation and optimization. HTP has become an essential tool for improvement, control, and assessment of HT treatment quality. As such, it plays a pivotal role in the quest to establish HT as an efficacious addition to multi-modality treatment of cancer. PMID:23672453

  18. Elastic constants from microscopic strain fluctuations

    PubMed

    Sengupta; Nielaba; Rao; Binder

    2000-02-01

    Fluctuations of the instantaneous local Lagrangian strain epsilon(ij)(r,t), measured with respect to a static "reference" lattice, are used to obtain accurate estimates of the elastic constants of model solids from atomistic computer simulations. The measured strains are systematically coarse-grained by averaging them within subsystems (of size L(b)) of a system (of total size L) in the canonical ensemble. Using a simple finite size scaling theory we predict the behavior of the fluctuations as a function of L(b)/L and extract elastic constants of the system in the thermodynamic limit at nonzero temperature. Our method is simple to implement, efficient, and general enough to be able to handle a wide class of model systems, including those with singular potentials without any essential modification. We illustrate the technique by computing isothermal elastic constants of "hard" and "soft" disk triangular solids in two dimensions from Monte Carlo and molecular dynamics simulations. We compare our results with those from earlier simulations and theory.

  19. Overcoming free energy barriers using unconstrained molecular dynamics simulations

    NASA Astrophysics Data System (ADS)

    Hénin, Jérôme; Chipot, Christophe

    2004-08-01

    Association of unconstrained molecular dynamics (MD) and the formalisms of thermodynamic integration and average force [Darve and Pohorille, J. Chem. Phys. 115, 9169 (2001)] have been employed to determine potentials of mean force. When implemented in a general MD code, the additional computational effort, compared to other standard, unconstrained simulations, is marginal. The force acting along a chosen reaction coordinate ξ is estimated from the individual forces exerted on the chemical system and accumulated as the simulation progresses. The estimated free energy derivative computed for small intervals of ξ is canceled by an adaptive bias to overcome the barriers of the free energy landscape. Evolution of the system along the reaction coordinate is, thus, limited by its sole self-diffusion properties. The illustrative examples of the reversible unfolding of deca-L-alanine, the association of acetate and guanidinium ions in water, the dimerization of methane in water, and its transfer across the water liquid-vapor interface are examined to probe the efficiency of the method.
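
    A toy overdamped-Langevin rendition of the adaptive-bias idea on a double well (not the authors' MD implementation): the running mean of the instantaneous force along the coordinate is accumulated in bins and subtracted, flattening the effective landscape so the system diffuses over the barrier:

        # Adaptive biasing force in one dimension: oppose the running mean
        # force per bin; -f_sum/count estimates dA/dx when converged.
        import numpy as np

        rng = np.random.default_rng(5)
        dt, beta, gamma = 1e-3, 4.0, 1.0
        edges = np.linspace(-1.6, 1.6, 33)        # 32 bins along the coordinate
        f_sum, count = np.zeros(32), np.zeros(32)

        def sys_force(x):                          # V(x) = (x^2 - 1)^2
            return -4.0 * x * (x * x - 1.0)

        x, crossings = -1.0, 0
        for _ in range(400_000):
            b = min(max(np.searchsorted(edges, x) - 1, 0), 31)
            f = sys_force(x)
            f_sum[b] += f
            count[b] += 1.0
            # apply the opposite of the mean force once the bin is sampled
            bias = -f_sum[b] / count[b] if count[b] > 50 else 0.0
            x_new = x + dt * (f + bias) / gamma \
                    + np.sqrt(2.0 * dt / (beta * gamma)) * rng.standard_normal()
            if x < 0.0 <= x_new:
                crossings += 1
            x = x_new

        print("barrier crossings with adaptive bias:", crossings)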

  20. Overcoming free energy barriers using unconstrained molecular dynamics simulations.

    PubMed

    Hénin, Jérôme; Chipot, Christophe

    2004-08-15

    Association of unconstrained molecular dynamics (MD) and the formalisms of thermodynamic integration and average force [Darve and Pohorille, J. Chem. Phys. 115, 9169 (2001)] have been employed to determine potentials of mean force. When implemented in a general MD code, the additional computational effort, compared to other standard, unconstrained simulations, is marginal. The force acting along a chosen reaction coordinate xi is estimated from the individual forces exerted on the chemical system and accumulated as the simulation progresses. The estimated free energy derivative computed for small intervals of xi is canceled by an adaptive bias to overcome the barriers of the free energy landscape. Evolution of the system along the reaction coordinate is, thus, limited by its sole self-diffusion properties. The illustrative examples of the reversible unfolding of deca-L-alanine, the association of acetate and guanidinium ions in water, the dimerization of methane in water, and its transfer across the water liquid-vapor interface are examined to probe the efficiency of the method. (c) 2004 American Institute of Physics.

  1. DEM GPU studies of industrial scale particle simulations for granular flow civil engineering applications

    NASA Astrophysics Data System (ADS)

    Pizette, Patrick; Govender, Nicolin; Wilke, Daniel N.; Abriak, Nor-Edine

    2017-06-01

    The use of the Discrete Element Method (DEM) for industrial civil engineering applications is currently limited due to the computational demands when large numbers of particles are considered. The graphics processing unit (GPU), with its highly parallelized hardware architecture, shows potential to enable solution of civil engineering problems using discrete granular approaches. We demonstrate in this study the practical utility of a validated GPU-enabled DEM modeling environment for simulating industrial-scale granular problems. As an illustration, the flow discharge of storage silos using 8 and 17 million particles is considered. DEM simulations have been performed to investigate the influence of particle size (equivalent size for the 20/40-mesh gravel) and induced shear stress for two hopper shapes. The preliminary results indicate that the shape of the hopper significantly influences the discharge rates for the same material. Specifically, this work shows that GPU-enabled DEM modeling environments can model industrial-scale problems on a single portable computer within a day for 30 seconds of process time.
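
    The elementary ingredient underneath such simulations is the per-contact force law, which GPU codes evaluate for millions of contacts in parallel behind a neighbour search. The sketch below integrates a single linear spring-dashpot normal contact between two spheres, a standard DEM choice; the paper's actual contact model and particle shapes may differ, and the parameters are arbitrary:

        # One DEM normal contact: linear spring-dashpot, explicit integration.
        # Illustrative parameters; not the paper's GPU code.
        import numpy as np

        kn, cn = 1e5, 5.0                 # normal stiffness (N/m), damping (N s/m)
        r, m = 0.01, 0.008                # particle radius (m) and mass (kg)
        dt = 1e-6

        x = np.array([0.0, 0.021])        # 1-D positions of the sphere centres
        v = np.array([1.0, -1.0])         # approaching at 2 m/s relative speed

        for _ in range(2000):
            gap = x[1] - x[0] - 2 * r     # negative gap => overlap => contact
            if gap < 0.0:
                fn = -kn * gap - cn * (v[1] - v[0])   # repulsion + viscous damping
            else:
                fn = 0.0
            a = np.array([-fn, fn]) / m
            v += dt * a                   # semi-implicit Euler: velocity first,
            x += dt * v                   # then position

        print("post-collision velocities:", v)   # inelastic rebound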

  2. Development of capability for microtopography-resolving simulations of hydrologic processes in permafrost affected regions

    NASA Astrophysics Data System (ADS)

    Painter, S.; Moulton, J. D.; Berndt, M.; Coon, E.; Garimella, R.; Lewis, K. C.; Manzini, G.; Mishra, P.; Travis, B. J.; Wilson, C. J.

    2012-12-01

    The frozen soils of the Arctic and subarctic regions contain vast amounts of stored organic carbon. This carbon is vulnerable to release to the atmosphere as temperatures warm and permafrost degrades. Understanding the response of the subsurface and surface hydrologic system to degrading permafrost is key to understanding the rate, timing, and chemical form of potential carbon releases to the atmosphere. Simulating the hydrologic system in degrading permafrost regions is challenging because of the potential for topographic evolution and associated drainage network reorganization as permafrost thaws and massive ground ice melts. The critical process models required for simulating hydrology include subsurface thermal hydrology of freezing/thawing soils, thermal processes within ice wedges, mechanical deformation processes, overland flow, and surface energy balances including snow dynamics. A new simulation tool, the Arctic Terrestrial Simulator (ATS), is being developed to simulate these coupled processes. The computational infrastructure must accommodate fully unstructured grids that track evolving topography, allow accurate solutions on distorted grids, provide robust and efficient solutions on highly parallel computer architectures, and enable flexibility in the strategies for coupling among the various processes. The ATS is based on Amanzi (Moulton et al. 2012), an object-oriented multi-process simulator written in C++ that provides much of the necessary computational infrastructure. Status and plans for the ATS including major hydrologic process models and validation strategies will be presented. Highly parallel simulations of overland flow using high-resolution digital elevation maps of polygonal patterned ground landscapes demonstrate the feasibility of the approach. Simulations coupling three-phase subsurface thermal hydrology with a simple thaw-induced subsidence model illustrate the strong feedbacks among the processes. D. Moulton, M. Berndt, M. Day, J. Meza, et al., High-Level Design of Amanzi, the Multi-Process High Performance Computing Simulator, Technical Report ASCEM-HPC-2011-03-1, DOE Environmental Management, 2012.

  3. Monte Carlo simulations in X-ray imaging

    NASA Astrophysics Data System (ADS)

    Giersch, Jürgen; Durst, Jürgen

    2008-06-01

    Monte Carlo simulations have become crucial tools in many fields of X-ray imaging. They help to understand the influence of physical effects such as absorption, scattering and fluorescence of photons in different detector materials on image quality parameters. They allow studying new imaging concepts like photon counting, energy weighting or material reconstruction. Additionally, they can be applied to the field of nuclear medicine to define virtual setups for studying new geometries or image reconstruction algorithms. Furthermore, an implementation of the propagation physics of electrons and photons allows studying the behavior of (novel) X-ray generation concepts. This versatility of Monte Carlo simulations is illustrated with examples produced with the Monte Carlo simulation tool ROSI. An overview of the structure of ROSI is given as an example of a modern, well-proven, object-oriented, parallel-computing Monte Carlo simulation for X-ray imaging.

  4. Rapid high performance liquid chromatography method development with high prediction accuracy, using 5 cm long narrow-bore columns packed with sub-2 μm particles and Design Space computer modeling.

    PubMed

    Fekete, Szabolcs; Fekete, Jeno; Molnár, Imre; Ganzler, Katalin

    2009-11-06

    Many different strategies of reversed phase high performance liquid chromatographic (RP-HPLC) method development are used today. This paper describes a strategy for the systematic development of ultrahigh-pressure liquid chromatographic (UHPLC or UPLC) methods using 5 cm × 2.1 mm columns packed with sub-2 μm particles and computer simulation (DryLab® package). Data for the accuracy of computer modeling in the Design Space under ultrahigh-pressure conditions are reported. An acceptable accuracy for these predictions of the computer models is presented. This work illustrates a method development strategy, focusing on a time reduction by a factor of 3-5 compared to conventional HPLC method development, and exhibits parts of the Design Space elaboration as requested by the FDA and ICH Q8(R1). Furthermore, this paper demonstrates the accuracy of retention time prediction at elevated pressure (enhanced flow-rate) and shows that computer-assisted simulation can be applied with sufficient precision for UHPLC applications (p > 400 bar). Examples of fast and effective method development in pharmaceutical analysis, for both gradient and isocratic separations, are presented.

  5. Wavelet-based surrogate time series for multiscale simulation of heterogeneous catalysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Savara, Aditya Ashi; Daw, C. Stuart; Xiong, Qingang

    We propose a wavelet-based scheme that encodes the essential dynamics of discrete microscale surface reactions in a form that can be coupled with continuum macroscale flow simulations with high computational efficiency. This makes it possible to simulate the dynamic behavior of reactor-scale heterogeneous catalysis without requiring detailed concurrent simulations at both the surface and continuum scales using different models. Our scheme is based on the application of wavelet-based surrogate time series that encode the essential temporal and/or spatial fine-scale dynamics at the catalyst surface. The encoded dynamics are then used to generate statistically equivalent, randomized surrogate time series, which can be linked to the continuum scale simulation. We illustrate an application of this approach using two different kinetic Monte Carlo simulations with different characteristic behaviors typical for heterogeneous chemical reactions.
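
    As a rough illustration of what a wavelet-based surrogate series is (assuming the PyWavelets package; this simple level-wise shuffle is not the authors' encoding scheme, only the basic idea of a statistically equivalent, randomized series):

        import numpy as np
        import pywt  # PyWavelets

        rng = np.random.default_rng(7)

        def wavelet_surrogate(x, wavelet="db4", level=4):
            """Randomize detail coefficients within each level, keeping the
            coarse approximation, then reconstruct the series."""
            coeffs = pywt.wavedec(x, wavelet, level=level)
            surrogate = [coeffs[0]]                   # keep the approximation
            for d in coeffs[1:]:
                surrogate.append(rng.permutation(d))  # shuffle each detail level
            return pywt.waverec(surrogate, wavelet)

        t = np.linspace(0.0, 10.0, 1024)
        signal = np.sin(2 * np.pi * t) + 0.3 * rng.normal(size=t.size)
        print(np.var(signal), np.var(wavelet_surrogate(signal)))  # similar

    Because the per-level energy is preserved, the surrogate reproduces the scale-by-scale statistics of the original series while randomizing its detailed history.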

  6. Large-Scale Brain Simulation and Disorders of Consciousness. Mapping Technical and Conceptual Issues.

    PubMed

    Farisco, Michele; Kotaleski, Jeanette H; Evers, Kathinka

    2018-01-01

    Modeling and simulations have gained a leading position in contemporary attempts to describe, explain, and quantitatively predict the human brain's operations. Computer models are highly sophisticated tools developed to achieve an integrated knowledge of the brain, with the aim of overcoming the present fragmentation resulting from different neuroscientific approaches. In this paper we investigate the plausibility of simulation technologies for emulation of consciousness and the potential clinical impact of large-scale brain simulation on the assessment and care of disorders of consciousness (DOCs), e.g., Coma, Vegetative State/Unresponsive Wakefulness Syndrome, Minimally Conscious State. Notwithstanding their technical limitations, we suggest that simulation technologies may offer new solutions to old practical problems, particularly in clinical contexts. We take DOCs as an illustrative case, arguing that the simulation of neural correlates of consciousness is potentially useful for improving treatments of patients with DOCs.

  7. Large-Scale Brain Simulation and Disorders of Consciousness. Mapping Technical and Conceptual Issues

    PubMed Central

    Farisco, Michele; Kotaleski, Jeanette H.; Evers, Kathinka

    2018-01-01

    Modeling and simulations have gained a leading position in contemporary attempts to describe, explain, and quantitatively predict the human brain’s operations. Computer models are highly sophisticated tools developed to achieve an integrated knowledge of the brain, with the aim of overcoming the present fragmentation resulting from different neuroscientific approaches. In this paper we investigate the plausibility of simulation technologies for emulation of consciousness and the potential clinical impact of large-scale brain simulation on the assessment and care of disorders of consciousness (DOCs), e.g., Coma, Vegetative State/Unresponsive Wakefulness Syndrome, Minimally Conscious State. Notwithstanding their technical limitations, we suggest that simulation technologies may offer new solutions to old practical problems, particularly in clinical contexts. We take DOCs as an illustrative case, arguing that the simulation of neural correlates of consciousness is potentially useful for improving treatments of patients with DOCs. PMID:29740372

  8. Wavelet-based surrogate time series for multiscale simulation of heterogeneous catalysis

    DOE PAGES

    Savara, Aditya Ashi; Daw, C. Stuart; Xiong, Qingang; ...

    2016-01-28

    We propose a wavelet-based scheme that encodes the essential dynamics of discrete microscale surface reactions in a form that can be coupled with continuum macroscale flow simulations with high computational efficiency. This makes it possible to simulate the dynamic behavior of reactor-scale heterogeneous catalysis without requiring detailed concurrent simulations at both the surface and continuum scales using different models. Our scheme is based on the application of wavelet-based surrogate time series that encode the essential temporal and/or spatial fine-scale dynamics at the catalyst surface. The encoded dynamics are then used to generate statistically equivalent, randomized surrogate time series, which can be linked to the continuum scale simulation. We illustrate an application of this approach using two different kinetic Monte Carlo simulations with different characteristic behaviors typical for heterogeneous chemical reactions.

  9. Addressing the challenges of standalone multi-core simulations in molecular dynamics

    NASA Astrophysics Data System (ADS)

    Ocaya, R. O.; Terblans, J. J.

    2017-07-01

    Computational modelling in materials science involves mathematical abstractions of force fields between particles with the aim of postulating, developing and understanding materials by simulation. The aggregated pairwise interactions of the material's particles lead to a deduction of its macroscopic behaviours. For practically meaningful macroscopic scales, a large amount of data is generated, leading to vast execution times. Simulation times of hours, days or weeks for moderately sized problems are not uncommon. The reduction of simulation times, improved result accuracy and the associated software and hardware engineering challenges are the main motivations for much of the ongoing research in the computational sciences. This contribution is concerned mainly with simulations that can be performed on a "standalone" computer using Message Passing Interface (MPI) parallel code running on hardware platforms with wide specifications, such as single/multi-processor, multi-core machines with minimal reconfiguration for upward scaling of computational power. The widely available, documented and standardized MPI library provides this functionality through the MPI_Comm_size(), MPI_Comm_rank() and MPI_Reduce() functions. A survey of the literature shows that relatively little has been written on the efficient extraction of the inherent computational power in a cluster. In this work, we discuss the main avenues available to tap into this extra power without compromising computational accuracy. We also present methods to overcome the high inertia encountered in single-node-based computational molecular dynamics. We begin by surveying the current state of the art and discuss what it takes to achieve parallelism, efficiency and enhanced computational accuracy through program threads and message passing interfaces. Several code illustrations are given. The pros and cons of writing raw code as opposed to using heuristic, third-party code are also discussed. The growing trend towards graphical processor units and virtual computing clouds for high-performance computing is also discussed. Finally, we present the comparative results of vacancy formation energy calculations using our own parallelized standalone code called Verlet-Stormer velocity (VSV) operating on 30,000 copper atoms. The code is based on the Sutton-Chen implementation of the Finnis-Sinclair pairwise embedded atom potential. A link to the code is also given.
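
    The three MPI calls named above are the core of the pattern the paper describes: discover the communicator size, identify each rank, and combine partial results. A minimal sketch using the mpi4py bindings (an assumption for illustration; the paper's own code is written against the C MPI API):

        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        size = comm.Get_size()   # analogue of MPI_Comm_size()
        rank = comm.Get_rank()   # analogue of MPI_Comm_rank()

        # Each rank sums a strided slice of "pair energies"; the arithmetic
        # here is a placeholder for a real pair-potential loop.
        local_energy = sum(1.0 / (i + 1) for i in range(rank, 100000, size))

        # Analogue of MPI_Reduce(): combine the partial sums on rank 0.
        total = comm.reduce(local_energy, op=MPI.SUM, root=0)
        if rank == 0:
            print(f"total over {size} ranks: {total:.6f}")

    Run with, e.g., mpirun -np 4 python script.py; decomposing the pair loop over ranks in this way is exactly where the extra power of a multi-core standalone machine is extracted.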

  10. Numerical Zooming Between a NPSS Engine System Simulation and a One-Dimensional High Compressor Analysis Code

    NASA Technical Reports Server (NTRS)

    Follen, Gregory; auBuchon, M.

    2000-01-01

    Within NASA's High Performance Computing and Communication (HPCC) program, NASA Glenn Research Center is developing an environment for the analysis/design of aircraft engines called the Numerical Propulsion System Simulation (NPSS). NPSS focuses on the integration of multiple disciplines such as aerodynamics, structures, and heat transfer, along with the concept of numerical zooming between zero-dimensional and one-, two-, and three-dimensional component engine codes. In addition, the NPSS is refining the computing and communication technologies necessary to capture complex physical processes in a timely and cost-effective manner. The vision for NPSS is to create a "numerical test cell" enabling full engine simulations overnight on cost-effective computing platforms. Of the different technology areas that contribute to the development of the NPSS environment, the subject of this paper is numerical zooming between an NPSS engine simulation and higher-fidelity representations of the engine components (fan, compressor, burner, turbines, etc.). What follows is a description of successfully zooming one-dimensional (row-by-row) high-pressure compressor analysis results back to a zero-dimensional NPSS engine simulation, and a discussion of the results illustrated using an advanced data visualization tool. This type of high-fidelity system-level analysis, made possible by the zooming capability of the NPSS, will greatly improve the capability of the engine system simulation and increase the level of virtual testing conducted prior to committing the design to hardware.

  11. Analysis of real-time numerical integration methods applied to dynamic clamp experiments.

    PubMed

    Butera, Robert J; McCarthy, Maeve L

    2004-12-01

    Real-time systems are frequently used as an experimental tool, whereby simulated models interact in real time with neurophysiological experiments. The most demanding of these techniques is known as the dynamic clamp, where simulated ion channel conductances are artificially injected into a neuron via intracellular electrodes for measurement and stimulation. Methodologies for implementing the numerical integration of the gating variables in real time typically employ first-order numerical methods, either Euler or exponential Euler (EE). EE is often used for rapidly integrating ion channel gating variables. We find via simulation studies that for small time steps, both methods are comparable, but at larger time steps, EE performs worse than Euler. We derive error bounds for both methods, and find that the error can be characterized in terms of two ratios: time step over time constant, and voltage measurement error over the slope factor of the steady-state activation curve of the voltage-dependent gating variable. These ratios reliably bound the simulation error and yield results consistent with the simulation analysis. Our bounds quantitatively illustrate how measurement error restricts the accuracy that can be obtained by using smaller step sizes. Finally, we demonstrate that Euler can be computed with the same computational efficiency as EE.
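
    The two integrators are easy to state for a single gating variable obeying dx/dt = (x_inf - x)/tau. With the voltage (and hence x_inf and tau) frozen over a step, EE is exact, which is why it is popular; the paper's point concerns how voltage measurement error changes this picture at larger steps. A minimal Python sketch with illustrative constants:

        import numpy as np

        x_inf, tau = 0.8, 5.0      # steady-state value and time constant (ms)
        dt, T, x0 = 1.0, 20.0, 0.1

        def euler_step(x):
            return x + dt * (x_inf - x) / tau

        def exp_euler_step(x):
            # exact update when x_inf and tau are constant over the step
            return x_inf + (x - x_inf) * np.exp(-dt / tau)

        x_e = x_ee = x0
        for _ in range(int(T / dt)):
            x_e, x_ee = euler_step(x_e), exp_euler_step(x_ee)

        exact = x_inf + (x0 - x_inf) * np.exp(-T / tau)
        print(f"Euler {x_e:.4f}  EE {x_ee:.4f}  exact {exact:.4f}")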

  12. Neuronify: An Educational Simulator for Neural Circuits.

    PubMed

    Dragly, Svenn-Arne; Hobbi Mobarhan, Milad; Våvang Solbrå, Andreas; Tennøe, Simen; Hafreager, Anders; Malthe-Sørenssen, Anders; Fyhn, Marianne; Hafting, Torkel; Einevoll, Gaute T

    2017-01-01

    Educational software (apps) can improve science education by providing an interactive way of learning about complicated topics that are hard to explain with text and static illustrations. However, few educational apps are available for simulation of neural networks. Here, we describe an educational app, Neuronify, allowing the user to easily create and explore neural networks in a plug-and-play simulation environment. The user can pick network elements with adjustable parameters from a menu, i.e., synaptically connected neurons modelled as integrate-and-fire neurons, various stimulators (current sources, spike generators, visual, and touch), and recording devices (voltmeter, spike detector, and loudspeaker). We aim to provide a low entry point to simulation-based neuroscience by allowing students with no programming experience to create and simulate neural networks. To facilitate the use of Neuronify in teaching, a set of premade common network motifs is provided, performing functions such as input summation, gain control by inhibition, and detection of the direction of stimulus movement. Neuronify is developed in C++ and QML using the cross-platform application framework Qt and runs on smart phones (Android, iOS) and tablet computers as well as on personal computers (Windows, Mac, Linux).

  13. Efficient Simulation of Tropical Cyclone Pathways with Stochastic Perturbations

    NASA Astrophysics Data System (ADS)

    Webber, R.; Plotkin, D. A.; Abbot, D. S.; Weare, J.

    2017-12-01

    Global Climate Models (GCMs) are known to statistically underpredict intense tropical cyclones (TCs) because they fail to capture the rapid intensification and high wind speeds characteristic of the most destructive TCs. Stochastic parametrization schemes have the potential to improve the accuracy of GCMs. However, current analysis of these schemes through direct sampling is limited by the computational expense of simulating a rare weather event at fine spatial resolution. The present work introduces a stochastically perturbed parametrization tendency (SPPT) scheme to increase the simulated intensity of TCs. We adapt the Weighted Ensemble algorithm to simulate the distribution of TCs at a fraction of the computational effort required in direct sampling. We illustrate the efficiency of the SPPT scheme by comparing simulations at different spatial resolutions and stochastic parameter regimes. Stochastic parametrization and rare event sampling strategies have great potential to improve TC prediction and aid understanding of tropical cyclogenesis. Since rising sea surface temperatures are postulated to increase the intensity of TCs, these strategies can also improve predictions about climate change-related weather patterns. The rare event sampling strategies used in the current work are not only a novel tool for studying TCs; they may also be applied to sampling a wide range of extreme weather events.

  14. Neuronify: An Educational Simulator for Neural Circuits

    PubMed Central

    Hafreager, Anders; Malthe-Sørenssen, Anders; Fyhn, Marianne

    2017-01-01

    Educational software (apps) can improve science education by providing an interactive way of learning about complicated topics that are hard to explain with text and static illustrations. However, few educational apps are available for simulation of neural networks. Here, we describe an educational app, Neuronify, allowing the user to easily create and explore neural networks in a plug-and-play simulation environment. The user can pick network elements with adjustable parameters from a menu, i.e., synaptically connected neurons modelled as integrate-and-fire neurons, various stimulators (current sources, spike generators, visual, and touch), and recording devices (voltmeter, spike detector, and loudspeaker). We aim to provide a low entry point to simulation-based neuroscience by allowing students with no programming experience to create and simulate neural networks. To facilitate the use of Neuronify in teaching, a set of premade common network motifs is provided, performing functions such as input summation, gain control by inhibition, and detection of the direction of stimulus movement. Neuronify is developed in C++ and QML using the cross-platform application framework Qt and runs on smart phones (Android, iOS) and tablet computers as well as on personal computers (Windows, Mac, Linux). PMID:28321440

  15. Numerical Experiments with a Turbulent Single-Mode Rayleigh-Taylor Instability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cloutman, L.D.

    2000-04-01

    Direct numerical simulation is a powerful tool for studying turbulent flows. Unfortunately, it is also computationally expensive and often beyond the reach of the largest, fastest computers. Consequently, a variety of turbulence models have been devised to allow tractable and affordable simulations of averaged flow fields. However, these present a variety of practical difficulties, including the incorporation of varying degrees of empiricism and phenomenology, which leads to a lack of universality. This unsatisfactory state of affairs has led to the speculation that one can avoid the expense and bother of using a turbulence model by relying on the grid and numerical diffusion of the computational fluid dynamics algorithm to introduce a spectral cutoff on the flow field and to provide dissipation at the grid scale, thereby mimicking two main effects of a large eddy simulation model. This paper shows numerical examples of a single-mode Rayleigh-Taylor instability in which this procedure produces questionable results. We then show a dramatic improvement when two simple subgrid-scale models are employed. This study also illustrates the extreme sensitivity to initial conditions that is a common feature of turbulent flows.

  16. An Examination of Parameters Affecting Large Eddy Simulations of Flow Past a Square Cylinder

    NASA Technical Reports Server (NTRS)

    Mankbadi, M. R.; Georgiadis, N. J.

    2014-01-01

    Separated flow over a bluff body is analyzed via large eddy simulations. The turbulent flow around a square cylinder features a variety of complex flow phenomena such as highly unsteady vortical structures, reverse flow in the near-wall region, and wake turbulence. The formation of spanwise vortices is often artificially suppressed in computations by either insufficient depth or a coarse spanwise resolution. As the resolution is refined and the domain extended, the artificial turbulent energy exchange between spanwise and streamwise turbulence is eliminated within the wake region. A parametric study is performed highlighting the effects of spanwise vortices, in which the spanwise computational domain's resolution and depth are varied. For Re = 22,000, the mean and turbulent statistics computed from the numerical large eddy simulations (NLES) are in good agreement with experimental data. Von Karman shedding is observed in the wake of the cylinder. Mesh independence is illustrated by comparing mesh resolutions of 2 million and 16 million cells. Sensitivities to time stepping were minimized, and no sensitivity to sampling frequency was observed. While increasing the spanwise depth and resolution can be costly, this practice was found to be necessary to eliminate the artificial turbulent energy exchange.

  17. Finite Element Simulation of Articular Contact Mechanics with Quadratic Tetrahedral Elements

    PubMed Central

    Maas, Steve A.; Ellis, Benjamin J.; Rawlins, David S.; Weiss, Jeffrey A.

    2016-01-01

    Although it is easier to generate finite element discretizations with tetrahedral elements, trilinear hexahedral (HEX8) elements are more often used in simulations of articular contact mechanics. This is due to numerical shortcomings of linear tetrahedral (TET4) elements, limited availability of quadratic tetrahedron elements in combination with effective contact algorithms, and the perceived increased computational expense of quadratic finite elements. In this study we implemented both ten-node (TET10) and fifteen-node (TET15) quadratic tetrahedral elements in FEBio (www.febio.org) and compared their accuracy, robustness in terms of convergence behavior and computational cost for simulations relevant to articular contact mechanics. Suitable volume integration and surface integration rules were determined by comparing the results of several benchmark contact problems. The results demonstrated that the surface integration rule used to evaluate the contact integrals for quadratic elements affected both convergence behavior and accuracy of predicted stresses. The computational expense and robustness of both quadratic tetrahedral formulations compared favorably to the HEX8 models. Of note, the TET15 element demonstrated superior convergence behavior and lower computational cost than both the TET10 and HEX8 elements for meshes with similar numbers of degrees of freedom in the contact problems that we examined. Finally, the excellent accuracy and relative efficiency of these quadratic tetrahedral elements was illustrated by comparing their predictions with those for a HEX8 mesh for simulation of articular contact in a fully validated model of the hip. These results demonstrate that TET10 and TET15 elements provide viable alternatives to HEX8 elements for simulation of articular contact mechanics. PMID:26900037

  18. Random walk on lattices: Graph-theoretic approach to simulating long-range diffusion-attachment growth models

    NASA Astrophysics Data System (ADS)

    Limkumnerd, Surachate

    2014-03-01

    Interest in thin-film fabrication for industrial applications has driven both theoretical and computational aspects of modeling its growth. One of the earliest attempts toward understanding the morphological structure of a film's surface is through a class of solid-on-solid limited-mobility growth models such as the Family, Wolf-Villain, or Das Sarma-Tamborenea models, which have produced fascinating surface roughening behaviors. These models, however, restrict the motion of an incident atom to the neighborhood of its landing site, which renders them ill-suited to simulating long-distance surface diffusion such as that observed in thin-film growth using a molecular-beam epitaxy technique. Naive extension of these models by repeatedly applying the local diffusion rules for each hop to simulate large diffusion lengths can be computationally very costly when certain statistical aspects are demanded. We present a graph-theoretic approach to simulating a long-range diffusion-attachment growth model. Using the Markovian assumption and given a local diffusion bias, we derive the transition probabilities for a random walker to traverse from one lattice site to the others after a large, possibly infinite, number of steps. Only computation with linear-time complexity is required for the surface morphology calculation, without other probabilistic measures. The formalism is applied, as an illustration, to simulate surface growth on a two-dimensional flat substrate and around a screw dislocation under the modified Wolf-Villain diffusion rule. A rectangular spiral ridge is observed in the latter case, with a smooth front feature similar to that obtained from simulations using the well-known multiple registration technique. An algorithm for computing the inverse of a class of substochastic matrices is derived as a corollary.
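
    The core graph-theoretic idea can be shown on a toy chain: collect the one-hop probabilities among mobile (transient) sites in Q and those into attachment (absorbing) sites in R; the distribution of final attachment points after arbitrarily many hops is then B = (I - Q)^(-1) R, one linear solve instead of many simulated hops. This 1D sketch with a biased walker is illustrative only; the paper develops the formalism for 2D substrates and screw dislocations.

        import numpy as np

        n, p_right = 6, 0.6                    # transient sites, diffusion bias
        Q = np.zeros((n, n))                   # transient -> transient hops
        for i in range(n):
            if i + 1 < n:
                Q[i, i + 1] = p_right
            if i - 1 >= 0:
                Q[i, i - 1] = 1.0 - p_right
        R = np.zeros((n, 2))                   # transient -> absorbing hops
        R[0, 0] = 1.0 - p_right                # attach at the left boundary
        R[n - 1, 1] = p_right                  # attach at the right boundary

        B = np.linalg.solve(np.eye(n) - Q, R)  # absorption probabilities
        print(B)  # row i: P(attach left), P(attach right) from start site i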

  19. Least significant qubit algorithm for quantum images

    NASA Astrophysics Data System (ADS)

    Sang, Jianzhi; Wang, Shen; Li, Qiong

    2016-11-01

    To study the feasibility of the classical image least significant bit (LSB) information hiding algorithm on a quantum computer, a least significant qubit (LSQb) information hiding algorithm for quantum images is proposed. In this paper, we focus on a novel quantum representation for color digital images (NCQI). Firstly, by designing the three-qubit comparator and unitary operators, the rationale and feasibility of LSQb based on NCQI are presented. Then, the concrete LSQb information hiding algorithm is proposed, which embeds the secret qubits into the least significant qubits of the RGB channels of the quantum cover image. A quantum circuit for the LSQb information hiding algorithm is also illustrated. Furthermore, the secret-extracting algorithm and circuit are illustrated, utilizing controlled-swap gates. The two merits of our algorithm are: (1) it is absolutely blind, and (2) when extracting secret binary qubits, it does not need any quantum measurement operation or any other help from a classical computer. Finally, simulation and comparative analysis demonstrate the performance of our algorithm.

  20. Applications of the similarity relations in radiative transfer to remote sensing implementation and flux simulation

    NASA Astrophysics Data System (ADS)

    Yang, P.; Ding, J.; Tang, G.; King, M. D.; Platnick, S. E.; Meyer, K.; Mlawer, E. J.

    2017-12-01

    Van de Hulst (1974) identified several quasi-invariant quantities in radiative transfer involving multiple scattering. Recently, we illustrated that these quasi-invariant quantities are useful in remote sensing of ice cloud properties from spaceborne radiometric observations (Ding et al. 2017). Specifically, the overall performance of an ice cloud optical property model can be estimated without carrying out a detailed retrieval implementation. In this presentation, we will review the radiative transfer similarity relations and some recent results, including the study by Ding et al. (2017). Furthermore, we will illustrate an application of the similarity relations to improving broadband radiative flux computations. For example, the Rapid Radiative Transfer Model (RRTM; Mlawer et al., 1999) does not consider multiple scattering in the longwave spectral regime (RRTMG-LW) ("G" indicates a version suitable for GCM applications). We show that the similarity relations can be used to effectively improve the accuracy of RRTMG-LW without increasing computational effort.
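
    For reference, the similarity relations at issue are the standard van de Hulst scaling (notation assumed here: τ is the optical thickness, ϖ the single-scattering albedo, and g the asymmetry factor), which maps a strongly forward-scattering layer onto a less anisotropic one with nearly invariant reflection and transmission:

        \tilde{\tau} = (1 - \varpi g)\,\tau,
        \qquad
        \tilde{\varpi} = \frac{(1 - g)\,\varpi}{1 - \varpi g}

    Applying such a scaling inside a non-scattering solver like RRTMG-LW is one way to mimic the effect of multiple scattering without the cost of a multiple-scattering computation.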

  1. Diabat Interpolation for Polymorph Free-Energy Differences.

    PubMed

    Kamat, Kartik; Peters, Baron

    2017-02-02

    Existing methods to compute free-energy differences between polymorphs use harmonic approximations, advanced non-Boltzmann bias sampling techniques, and/or multistage free-energy perturbations. This work demonstrates how Bennett's diabat interpolation method (J. Comput. Phys. 1976, 22, 245) can be combined with energy gaps from lattice-switch Monte Carlo techniques (Phys. Rev. E 2000, 61, 906) to swiftly estimate polymorph free-energy differences. The new method requires only two unbiased molecular dynamics simulations, one for each polymorph. To illustrate the new method, we compute the free-energy difference between face-centered cubic and body-centered cubic polymorphs for a Gaussian core solid. We discuss the justification for parabolic models of the free-energy diabats and similarities to methods that have been used in studies of electron transfer.

  2. Accurate Bit Error Rate Calculation for Asynchronous Chaos-Based DS-CDMA over Multipath Channel

    NASA Astrophysics Data System (ADS)

    Kaddoum, Georges; Roviras, Daniel; Chargé, Pascal; Fournier-Prunaret, Daniele

    2009-12-01

    An accurate approach to computing the bit error rate expression for multiuser chaos-based DS-CDMA systems is presented in this paper. For a more realistic communication system, a slow-fading multipath channel is considered, together with a simple RAKE receiver structure. Based on the bit energy distribution, this approach gives accurate results with a low computational load compared to other computation methods in the literature. Perfect estimation of the channel coefficients with the associated delays, as well as chaos synchronization, is assumed. The bit error rate is derived in terms of the bit energy distribution, the number of paths, the noise variance, and the number of users. Results are illustrated by theoretical calculations and numerical simulations, which point out the accuracy of our approach.

  3. Building an infrastructure at PICKSC for the educational use of kinetic software tools

    NASA Astrophysics Data System (ADS)

    Mori, W. B.; Decyk, V. K.; Tableman, A.; Fonseca, R. A.; Tsung, F. S.; Hu, Q.; Winjum, B. J.; Amorim, L. D.; An, W.; Dalichaouch, T. N.; Davidson, A.; Joglekar, A.; Li, F.; May, J.; Touati, M.; Xu, X. L.; Yu, P.

    2016-10-01

    One aim of the Particle-In-Cell and Kinetic Simulation Center (PICKSC) at UCLA is to coordinate community development of educational software for undergraduate and graduate courses in plasma physics and computer science. The rich array of physical behaviors exhibited by plasmas can be difficult for students to grasp. If they are given the ability to quickly and easily explore plasma physics through kinetic simulations, and to make illustrative visualizations of plasma waves, particle motion in electromagnetic fields, instabilities, or other phenomena, then they can be equipped with first-hand experiences that inform and contextualize conventional texts and lectures. We are developing an infrastructure for any interested person to take our kinetic codes, run them without any prerequisite knowledge, and explore desired scenarios. Furthermore, we are actively interested in ideas and input from other plasma physicists. This poster aims to illustrate what we have developed and to gather a community of interested users and developers. Supported by NSF under Grant ACI-1339893.

  4. Generalization of Clustering Coefficients to Signed Correlation Networks

    PubMed Central

    Costantini, Giulio; Perugini, Marco

    2014-01-01

    The recent interest in network analysis applications in personality psychology and psychopathology has put forward new methodological challenges. Personality and psychopathology networks are typically based on correlation matrices and therefore include both positive and negative edge signs. However, some applications of network analysis disregard negative edges, such as computing clustering coefficients. In this contribution, we illustrate the importance of the distinction between positive and negative edges in networks based on correlation matrices. The clustering coefficient is generalized to signed correlation networks: three new indices are introduced that take edge signs into account, each derived from an existing and widely used formula. The performance of the new indices is illustrated and compared with that of the unsigned indices, both on a simulated signed network and on a signed network based on actual personality psychology data. The results show that the new indices are more resistant to sample variations in correlation networks and therefore have higher convergence than the unsigned indices, both in simulated networks and with real data. PMID:24586367
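
    As a hedged illustration of the idea (this is one Zhang-Horvath-style variant with signed weights, not necessarily any of the three indices defined in the paper): keep the signed products of edge weights around each triangle in the numerator and their absolute values in the denominator, so that "unbalanced" triangles pull the coefficient negative.

        import numpy as np

        def signed_clustering(W):
            """One possible signed generalization of a weighted clustering
            coefficient; illustrative only."""
            W = W.copy()
            np.fill_diagonal(W, 0.0)
            A = np.abs(W)
            num = np.diagonal(W @ W @ W)   # signed weight of 3-cycles per node
            den = np.diagonal(A @ A @ A)   # unsigned weight of the same cycles
            den_safe = np.where(den > 0, den, 1.0)
            return np.where(den > 0, num / den_safe, 0.0)

        W = np.array([[0.0, 0.5, -0.4],
                      [0.5, 0.0, 0.6],
                      [-0.4, 0.6, 0.0]])   # toy signed correlation network
        print(signed_clustering(W))        # negative: the triangle is unbalanced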

  5. Multiphase, multi-electrode Joule heat computations for glass melter and in situ vitrification simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lowery, P.S.; Lessor, D.L.

    Waste glass melter and in situ vitrification (ISV) processes represent the combination of electrical, thermal, and fluid flow phenomena to produce a stable waste-form product. Computational modeling of the thermal and fluid flow aspects of these processes provides a useful tool for assessing the potential performance of proposed system designs. These computations can be performed at a fraction of the cost of experiments. Consequently, computational modeling of vitrification systems can also provide an economical means for assessing the suitability of a proposed process application. The computational model described in this paper employs finite difference representations of the basic continuum conservation laws governing the thermal, fluid flow, and electrical aspects of the vitrification process -- i.e., conservation of mass, momentum, energy, and electrical charge. The resulting code is a member of the TEMPEST family of codes developed at the Pacific Northwest Laboratory (operated by Battelle for the US Department of Energy). This paper provides an overview of the numerical approach employed in TEMPEST. In addition, results from several TEMPEST simulations of sample waste glass melter and ISV processes are provided to illustrate the insights to be gained from computational modeling of these processes. 3 refs., 13 figs.

  6. Real time evolution at finite temperatures with operator space matrix product states

    NASA Astrophysics Data System (ADS)

    Pižorn, Iztok; Eisler, Viktor; Andergassen, Sabine; Troyer, Matthias

    2014-07-01

    We propose a method to simulate the real time evolution of one-dimensional quantum many-body systems at finite temperature by expressing both the density matrices and the observables as matrix product states. This allows the calculation of expectation values and correlation functions as scalar products in operator space. The simulations of density matrices in inverse temperature and of the local operators in the Heisenberg picture are independent and result in a grid of expectation values for all intermediate temperatures and times. Simulations can be performed using real arithmetic, with only polynomial growth of computational resources in inverse temperature and time for integrable systems. The method is illustrated for the XXZ model and the single impurity Anderson model.

  7. Modeling laser velocimeter signals as triply stochastic Poisson processes

    NASA Technical Reports Server (NTRS)

    Mayo, W. T., Jr.

    1976-01-01

    Previous models of laser Doppler velocimeter (LDV) systems have not adequately described dual-scatter signals in a manner useful for analysis and simulation of low-level photon-limited signals. At low photon rates, an LDV signal at the output of a photomultiplier tube is a compound nonhomogeneous filtered Poisson process, whose intensity function is another (slower) Poisson process with the nonstationary rate and frequency parameters controlled by a random flow (slowest) process. In the present paper, generalized Poisson shot noise models are developed for low-level LDV signals. Theoretical results useful in detection error analysis and simulation are presented, along with measurements of burst amplitude statistics. Computer-generated simulations illustrate the difference between Gaussian and Poisson models of low-level signals.
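
    A nonhomogeneous Poisson burst of this kind is straightforward to sample with the Lewis-Shedler thinning algorithm: generate candidate events at the peak rate and accept each with probability rate(t)/rate_max. The Gaussian-envelope rate below is a stand-in for a single Doppler burst's photon intensity; all parameters are illustrative.

        import numpy as np

        rng = np.random.default_rng(0)

        def rate(t, t0=5.0, width=1.0, peak=200.0):
            """Time-varying photon rate: a Gaussian burst envelope."""
            return peak * np.exp(-0.5 * ((t - t0) / width) ** 2)

        def thinned_poisson(T=10.0, rate_max=200.0):
            t, events = 0.0, []
            while True:
                t += rng.exponential(1.0 / rate_max)    # candidate arrivals
                if t > T:
                    return np.array(events)
                if rng.uniform() < rate(t) / rate_max:  # thinning step
                    events.append(t)

        print(f"{thinned_poisson().size} photon events in one simulated burst")

    Layering randomness on the envelope parameters themselves (burst arrival times, flow-dependent Doppler frequency) reproduces the "triply stochastic" structure described in the abstract.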

  8. On Babinet's principle and diffraction associated with an arbitrary particle.

    PubMed

    Sun, Bingqiang; Yang, Ping; Kattawar, George W; Mishchenko, Michael I

    2017-12-01

    Babinet's principle is widely used to compute the diffraction by a particle. However, the diffraction by a 3-D object is not exactly the same as that obtained with Babinet's principle. This Letter uses a surface integral equation to exactly formulate the diffraction by an arbitrary particle and to illustrate the conditions under which Babinet's principle applies. The present results may serve to close the debate on the diffraction formalism.

  9. Simulation based planning of surgical interventions in pediatric cardiology

    NASA Astrophysics Data System (ADS)

    Marsden, Alison L.

    2013-10-01

    Hemodynamics plays an essential role in the progression and treatment of cardiovascular disease. However, while medical imaging provides increasingly detailed anatomical information, clinicians often have limited access to hemodynamic data that may be crucial to patient risk assessment and treatment planning. Computational simulations can now provide detailed hemodynamic data to augment clinical knowledge in both adult and pediatric applications. There is a particular need for simulation tools in pediatric cardiology, due to the wide variation in anatomy and physiology in congenital heart disease patients, necessitating individualized treatment plans. Despite great strides in medical imaging, enabling extraction of flow information from magnetic resonance and ultrasound imaging, simulations offer predictive capabilities that imaging alone cannot provide. Patient specific simulations can be used for in silico testing of new surgical designs, treatment planning, device testing, and patient risk stratification. Furthermore, simulations can be performed at no direct risk to the patient. In this paper, we outline the current state of the art in methods for cardiovascular blood flow simulation and virtual surgery. We then step through pressing challenges in the field, including multiscale modeling, boundary condition selection, optimization, and uncertainty quantification. Finally, we summarize simulation results of two representative examples from pediatric cardiology: single ventricle physiology, and coronary aneurysms caused by Kawasaki disease. These examples illustrate the potential impact of computational modeling tools in the clinical setting.

  10. A Decentralized Eigenvalue Computation Method for Spectrum Sensing Based on Average Consensus

    NASA Astrophysics Data System (ADS)

    Mohammadi, Jafar; Limmer, Steffen; Stańczak, Sławomir

    2016-07-01

    This paper considers eigenvalue estimation for the decentralized inference problem in spectrum sensing. We propose a decentralized eigenvalue computation algorithm based on the power method, referred to as the generalized power method (GPM); it is capable of estimating the eigenvalues of a given covariance matrix under certain conditions. Furthermore, we have developed a decentralized implementation of GPM by splitting the iterative operations into local and global computation tasks. The global tasks require data exchange among the nodes. For this task, we apply an average consensus algorithm to efficiently perform the global computations. As a special case, we consider a structured graph that is a tree with clusters of nodes at its leaves. For an accelerated distributed implementation, we propose to use computation over the multiple access channel (CoMAC) as a building block of the algorithm. Numerical simulations are provided to illustrate the performance of the two algorithms.
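
    For orientation, the centralized power method that GPM decentralizes is only a few lines: repeatedly multiply a vector by the covariance matrix and normalize, converging to the dominant eigenpair. In the decentralized setting, the matrix-vector product is the "global task" assembled via average consensus; the sketch below is the centralized reference, with illustrative data.

        import numpy as np

        rng = np.random.default_rng(1)
        X = rng.normal(size=(500, 8))      # 500 snapshots from 8 sensors
        C = X.T @ X / len(X)               # sample covariance matrix

        v = rng.normal(size=8)
        for _ in range(100):               # power iteration
            w = C @ v
            v = w / np.linalg.norm(w)
        lam = v @ C @ v                    # Rayleigh-quotient eigenvalue estimate

        print(f"power method {lam:.4f} vs numpy {np.linalg.eigvalsh(C)[-1]:.4f}")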

  11. DMG-α--a computational geometry library for multimolecular systems.

    PubMed

    Szczelina, Robert; Murzyn, Krzysztof

    2014-11-24

    The DMG-α library grants researchers in the fields of computational biology, chemistry, and biophysics access to open-source, easy-to-use, and intuitive software for performing fine-grained geometric analysis of molecular systems. The library is capable of computing power diagrams (weighted Voronoi diagrams) in three dimensions with 3D periodic boundary conditions, computing approximate projective 2D Voronoi diagrams on arbitrarily defined surfaces, performing shape-property recognition using α-shape theory, and performing exact solvent-accessible surface area (SASA) computations. The software is written mainly as a template-based C++ library for greater performance, but a rich Python interface (pydmga) is provided as a convenient way to manipulate the DMG-α routines. To illustrate possible applications of the DMG-α library, we present results of sample analyses which allowed us to determine nontrivial geometric properties of two Escherichia coli-specific lipids emerging from molecular dynamics simulations of relevant model bilayers.

  12. A frequentist approach to computer model calibration

    DOE PAGES

    Wong, Raymond K. W.; Storlie, Curtis Byron; Lee, Thomas C. M.

    2016-05-05

    The paper considers the computer model calibration problem and provides a general frequentist solution. Under the framework proposed, the data model is semiparametric with a nonparametric discrepancy function which accounts for any discrepancy between physical reality and the computer model. In an attempt to solve a fundamentally important (but often ignored) identifiability issue between the computer model parameters and the discrepancy function, the paper proposes a new and identifiable parameterization of the calibration problem. It also develops a two-step procedure for estimating all the relevant quantities under the new parameterization. This estimation procedure is shown to enjoy excellent rates of convergence and can be straightforwardly implemented with existing software. For uncertainty quantification, bootstrapping is adopted to construct confidence regions for the quantities of interest. Finally, the practical performance of the methodology is illustrated through simulation examples and an application to a computational fluid dynamics model.
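
    The semiparametric data model referred to above is, in the usual Kennedy-O'Hagan-style notation (assumed here, not quoted from the paper), a computer model plus a nonparametric discrepancy plus noise:

        y_i = f(x_i, \theta) + \delta(x_i) + \epsilon_i,
        \qquad \epsilon_i \sim N(0, \sigma^2),

    where f is the computer model with calibration parameter θ and δ is the discrepancy function; the identifiability problem is that θ and δ can trade off against each other, which the paper's reparameterization addresses.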

  13. A High Performance Cloud-Based Protein-Ligand Docking Prediction Algorithm

    PubMed Central

    Chen, Jui-Le; Yang, Chu-Sing

    2013-01-01

    The potential of predicting druggability for a particular disease by integrating biological and computer science technologies has witnessed success in recent years. Although computer science technologies can be used to reduce the costs of pharmaceutical research, the computation time of structure-based protein-ligand docking prediction remains unsatisfactory. Hence, in this paper, a novel docking prediction algorithm, named the fast cloud-based protein-ligand docking prediction algorithm (FCPLDPA), is presented to accelerate docking prediction. The proposed algorithm works by leveraging two high-performance operators: (1) the novel migration (information exchange) operator is designed specially for cloud-based environments to reduce the computation time; (2) the efficient operator is aimed at filtering out the worst search directions. Our simulation results illustrate that the proposed method outperforms the other docking algorithms compared in this paper in terms of both the computation time and the quality of the end result. PMID:23762864

  14. Characterizing representational learning: A combined simulation and tutorial on perturbation theory

    NASA Astrophysics Data System (ADS)

    Kohnle, Antje; Passante, Gina

    2017-12-01

    Analyzing, constructing, and translating between graphical, pictorial, and mathematical representations of physics ideas and reasoning flexibly through them ("representational competence") is a key characteristic of expertise in physics but is a challenge for learners to develop. Interactive computer simulations and University of Washington style tutorials both have affordances to support representational learning. This article describes work to characterize students' spontaneous use of representations before and after working with a combined simulation and tutorial on first-order energy corrections in the context of quantum-mechanical time-independent perturbation theory. Data were collected from two institutions using pre-, mid-, and post-tests to assess short- and long-term gains. A representational competence level framework was adapted to devise level descriptors for the assessment items. The results indicate an increase in the number of representations used by students and the consistency between them following the combined simulation tutorial. The distributions of representational competence levels suggest a shift from perceptual to semantic use of representations based on their underlying meaning. In terms of activity design, this study illustrates the need to support students in making sense of the representations shown in a simulation and in learning to choose the most appropriate representation for a given task. In terms of characterizing representational abilities, this study illustrates the usefulness of a framework focusing on perceptual, syntactic, and semantic use of representations.

  15. Conducting Simulation Studies in the R Programming Environment.

    PubMed

    Hallgren, Kevin A

    2013-10-12

    Simulation studies allow researchers to answer specific questions about data analysis, statistical power, and best-practices for obtaining accurate results in empirical research. Despite the benefits that simulation research can provide, many researchers are unfamiliar with available tools for conducting their own simulation studies. The use of simulation studies need not be restricted to researchers with advanced skills in statistics and computer programming, and such methods can be implemented by researchers with a variety of abilities and interests. The present paper provides an introduction to methods used for running simulation studies using the R statistical programming environment and is written for individuals with minimal experience running simulation studies or using R. The paper describes the rationale and benefits of using simulations and introduces R functions relevant for many simulation studies. Three examples illustrate different applications for simulation studies, including (a) the use of simulations to answer a novel question about statistical analysis, (b) the use of simulations to estimate statistical power, and (c) the use of simulations to obtain confidence intervals of parameter estimates through bootstrapping. Results and fully annotated syntax from these examples are provided.
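
    A flavor of example (b), power estimation by simulation, written in Python with SciPy for consistency with the other sketches in this collection (the paper itself works in R); the effect size and sample size are illustrative:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(42)

        def power_two_sample_t(n=30, effect=0.5, n_sims=5000, alpha=0.05):
            """Fraction of simulated experiments whose t-test rejects H0."""
            hits = 0
            for _ in range(n_sims):
                a = rng.normal(0.0, 1.0, n)
                b = rng.normal(effect, 1.0, n)
                if stats.ttest_ind(a, b).pvalue < alpha:
                    hits += 1
            return hits / n_sims

        print(f"estimated power: {power_two_sample_t():.3f}")  # roughly 0.48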

  16. A fast, parallel algorithm for distant-dependent calculation of crystal properties

    NASA Astrophysics Data System (ADS)

    Stein, Matthew

    2017-12-01

    A fast, parallel algorithm for distant-dependent calculation and simulation of crystal properties is presented, along with speedup results and methods of application. An illustrative example is used to compute the Lennard-Jones lattice constants to 32 significant figures for 4 ≤ p ≤ 30 in the simple cubic, face-centered cubic, body-centered cubic, hexagonal close-packed, and diamond lattices. In most cases, the known precision of these constants is more than doubled, and in some cases previously published figures are corrected. The tools and strategies that make this computation possible are detailed, along with application to other potentials, including those that model defects.
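
    The quantity being computed is a bare lattice sum: for a lattice with sites r and exponent p, the constant is the sum of |r|^(-p) over all nonzero sites. The truncated double-precision loop below (simple cubic case, illustrative cutoff) only shows the definition; the paper's contribution is evaluating such sums in parallel to dozens of digits.

        from itertools import product

        def lj_lattice_sum_sc(p, n_max=30):
            """Truncated simple-cubic lattice sum of |r|^(-p) over r != 0."""
            total = 0.0
            for i, j, k in product(range(-n_max, n_max + 1), repeat=3):
                if i == j == k == 0:
                    continue
                total += (i * i + j * j + k * k) ** (-p / 2.0)
            return total

        # Truncation at n_max=30 limits the accuracy to a few decimals for p=6.
        print(f"simple cubic, p=6: {lj_lattice_sum_sc(6):.3f}")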

  17. The focal plane reception pattern calculation for a paraboloidal antenna with a nearby fence

    NASA Technical Reports Server (NTRS)

    Schmidt, Richard F.; Cheng, Hwai-Soon; Kao, Michael W.

    1987-01-01

    A computer simulation program is described which is used to estimate the effects of a proximate diffraction fence on the performance of paraboloid antennas. The computer program is written in FORTRAN. The physical problem, mathematical formulation, and coordinate references are described. The main control structure of the program and the function of the individual subroutines are discussed. The Job Control Language setup and program instructions are provided in the user's instructions to help users execute the program. A sample problem with a representative output listing is provided to illustrate the usage of the program.

  18. Fast Boundary Element Method for acoustics with the Sparse Cardinal Sine Decomposition

    NASA Astrophysics Data System (ADS)

    Alouges, François; Aussal, Matthieu; Parolin, Emile

    2017-07-01

    This paper presents the newly proposed Sparse Cardinal Sine Decomposition method, which allows fast convolution on unstructured grids. We focus on its use when coupled with finite element techniques to solve acoustic problems with the (compressed) Boundary Element Method. In addition, we also compare the computational performance of two equivalent Matlab® and Python implementations of the method. We show validation test cases in order to assess the precision of the approach. Finally, the performance of the method is illustrated by the computation of the acoustic target strength of a realistic submarine from the Benchmark Target Strength Simulation international workshop.

  19. Graphical Models for Ordinal Data

    PubMed Central

    Guo, Jian; Levina, Elizaveta; Michailidis, George; Zhu, Ji

    2014-01-01

    A graphical model for ordinal variables is considered, where it is assumed that the data are generated by discretizing the marginal distributions of a latent multivariate Gaussian distribution. The relationships between these ordinal variables are then described by the underlying Gaussian graphical model and can be inferred by estimating the corresponding concentration matrix. Direct estimation of the model is computationally expensive, but an approximate EM-like algorithm is developed to provide an accurate estimate of the parameters at a fraction of the computational cost. Numerical evidence based on simulation studies shows the strong performance of the algorithm, which is also illustrated on data sets on movie ratings and an educational survey. PMID:26120267

  20. Unsteady viscous calculations of supersonic flows past deep and shallow three-dimensional cavities

    NASA Technical Reports Server (NTRS)

    Baysal, O.; Srinivasan, S.; Stallings, R. L.

    1988-01-01

    Computational simulations were performed for supersonic, turbulent flows over deep and shallow three-dimensional cavities. The width and the depth of these cavities were fixed at 2.5 in. and 0.5 in., respectively. The length-to-depth ratio of the deep cavity was 6, and that of the shallow cavity was 16. Freestream values of Mach number and Reynolds number were 1.50 and 2.0 × 10^6/ft, respectively, at a total temperature of 585 °R. The thickness of the turbulent boundary layer at the front lip of the cavity was 0.2 in. Simulations of these oscillatory flows were generated through time-accurate solutions of the Reynolds-averaged full Navier-Stokes equations using the explicit MacCormack scheme. The solutions are validated through comparisons with experimental data. The features of open and closed cavity flows and the effects of the third dimension are illustrated through computational graphics.

  1. Impact of the Columbia Supercomputer on NASA Space and Exploration Mission

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Kwak, Dochan; Kiris, Cetin; Lawrence, Scott

    2006-01-01

    NASA's 10,240-processor Columbia supercomputer gained worldwide recognition in 2004 for increasing the space agency's computing capability ten-fold, and enabling U.S. scientists and engineers to perform significant, breakthrough simulations. Columbia has amply demonstrated its capability to accelerate NASA's key missions, including space operations, exploration systems, science, and aeronautics. Columbia is part of an integrated high-end computing (HEC) environment comprised of massive storage and archive systems, high-speed networking, high-fidelity modeling and simulation tools, application performance optimization, and advanced data analysis and visualization. In this paper, we illustrate the impact Columbia is having on NASA's numerous space and exploration applications, such as the development of the Crew Exploration and Launch Vehicles (CEV/CLV), effects of long-duration human presence in space, and damage assessment and repair recommendations for remaining shuttle flights. We conclude by discussing HEC challenges that must be overcome to solve space-related science problems in the future.

  2. Lattice Boltzmann simulation of antiplane shear loading of a stationary crack

    NASA Astrophysics Data System (ADS)

    Schlüter, Alexander; Kuhn, Charlotte; Müller, Ralf

    2018-01-01

    In this work, the lattice Boltzmann method is applied to study the dynamic behaviour of linear elastic solids under antiplane shear deformation. In this case, the governing set of partial differential equations reduces to a scalar wave equation for the out-of-plane displacement in a two-dimensional domain. The lattice Boltzmann approach developed by Guangwu (J Comput Phys 161(1):61-69, 2000) is used to solve the problem numerically. Some aspects of the scheme are highlighted, including the treatment of the boundary conditions. Subsequently, the performance of the lattice Boltzmann scheme is tested for a stationary crack problem for which an analytic solution exists. The treatment of cracks is new compared to the examples discussed in Guangwu's work. Furthermore, the lattice Boltzmann simulations are compared to finite element computations. Finally, the influence of the lattice Boltzmann relaxation parameter on the stability of the scheme is illustrated.
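
    For concreteness, the reduction mentioned above is the textbook one: in antiplane shear only the out-of-plane displacement u_3(x_1, x_2, t) is nonzero, and linear elastodynamics collapses to a 2D scalar wave equation (μ is the shear modulus, ρ the mass density):

        \rho \, \frac{\partial^2 u_3}{\partial t^2}
          = \mu \left( \frac{\partial^2 u_3}{\partial x_1^2}
                     + \frac{\partial^2 u_3}{\partial x_2^2} \right),
        \qquad c = \sqrt{\mu / \rho},

    so the lattice Boltzmann scheme only needs to reproduce scalar wave propagation at the shear wave speed c.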

  3. Computational models simulating notochordal cell extinction during early ageing of an intervertebral disc.

    PubMed

    Louman-Gardiner, K M; Coombe, D; Hunter, C J

    2011-12-01

    Lower back pain due to intervertebral disc (IVD) degeneration is a prevalent problem which drastically affects the quality of life of millions of sufferers. Healthy IVDs begin with high populations of notochordal cells in the nucleus pulposus, while by the second stage of degeneration these cells have been replaced by chondrocyte-like cells. Because the IVD is avascular, these cells rely on passive diffusion of nutrients to survive. It is thought that this transition in cell phenotype causes the shift in the IVD's physical properties, which impedes the flow of nutrients. Our computational model of the IVD illustrates its ability to simulate the evolving chemical and mechanical environments occurring during the early ageing process. We demonstrate that, due to the insufficient nutrient supply and accompanying changes in the physical properties of the IVD, there is a resultant exponential decay in the number of notochordal cells over time.

  4. A New Low Complexity Angle of Arrival Algorithm for 1D and 2D Direction Estimation in MIMO Smart Antenna Systems

    PubMed Central

    Al-Sadoon, Mohammed A. G.; Zuid, Abdulkareim; Jones, Stephen M. R.; Noras, James M.

    2017-01-01

    This paper proposes a new low-complexity angle of arrival (AOA) method for signal direction estimation in multi-element smart wireless communication systems. The new method estimates the AOAs of the received signals directly from the received signals, with significantly reduced complexity, since it does not need to construct the correlation matrix, invert the matrix, or apply eigen-decomposition, which are computationally expensive. A mathematical model of the proposed method is illustrated and then verified using extensive computer simulations. Both linear and circular sensor arrays are studied using various numerical examples. The method is systematically compared with other common and recently introduced AOA methods over a wide range of scenarios. The simulated results show that the new method has several advantages in terms of reduced complexity and improved accuracy under the assumptions of correlated signals and limited numbers of snapshots. PMID:29140313

  5. A New Low Complexity Angle of Arrival Algorithm for 1D and 2D Direction Estimation in MIMO Smart Antenna Systems.

    PubMed

    Al-Sadoon, Mohammed A G; Ali, Nazar T; Dama, Yousf; Zuid, Abdulkareim; Jones, Stephen M R; Abd-Alhameed, Raed A; Noras, James M

    2017-11-15

    This paper proposes a new low-complexity angle of arrival (AOA) method for signal direction estimation in multi-element smart wireless communication systems. The new method estimates the AOAs of the received signals directly from the received signals, with significantly reduced complexity, since it does not need to construct the correlation matrix, invert the matrix, or apply eigen-decomposition, which are computationally expensive. A mathematical model of the proposed method is illustrated and then verified using extensive computer simulations. Both linear and circular sensor arrays are studied using various numerical examples. The method is systematically compared with other common and recently introduced AOA methods over a wide range of scenarios. The simulated results show that the new method has several advantages in terms of reduced complexity and improved accuracy under the assumptions of correlated signals and limited numbers of snapshots.

  6. Incorporating variability in simulations of seasonally forced phenology using integral projection models

    DOE PAGES

    Goodsman, Devin W.; Aukema, Brian H.; McDowell, Nate G.; ...

    2017-11-26

    Phenology models are becoming increasingly important tools to accurately predict how climate change will impact the life histories of organisms. We propose a class of integral projection phenology models derived from stochastic individual-based models of insect development and demography. Our derivation, which is based on the rate summation concept, produces integral projection models that capture the effect of phenotypic rate variability on insect phenology, but which are typically more computationally frugal than equivalent individual-based phenology models. We demonstrate our approach using a temperature-dependent model of the demography of the mountain pine beetle (Dendroctonus ponderosae Hopkins), an insect that kills mature pine trees. This work illustrates how a wide range of stochastic phenology models can be reformulated as integral projection models. Due to their computational efficiency, these integral projection models are suitable for deployment in large-scale simulations, such as studies of altered pest distributions under climate change.
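
    The rate-summation idea at the heart of the derivation can be sketched in a few lines of Python: development accumulates at a temperature-dependent rate, each individual carries a random rate multiplier, and a life stage completes when accumulated development reaches one. The rate curve, seasonal forcing and gamma-distributed variability below are illustrative assumptions, not the paper's beetle model.

        import numpy as np

        # Rate-summation sketch with phenotypic rate variability (illustrative).
        rng = np.random.default_rng(1)
        days = 120
        temps = 12.0 + 10.0 * np.sin(2 * np.pi * np.arange(days) / 365.0)  # seasonal forcing
        base_rate = np.clip((temps - 8.0) * 0.004, 0.0, None)    # linear rate above 8 C
        multipliers = rng.gamma(shape=20.0, scale=0.05, size=5000)  # individual variability
        development = np.cumsum(base_rate)[:, None] * multipliers[None, :]
        emergence_day = (development >= 1.0).argmax(axis=0)      # first day development >= 1
        emerged = development[-1] >= 1.0                         # guard against non-emergers
        print(f"median emergence day: {np.median(emergence_day[emerged]):.0f}")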

  7. Mechanics of airflow in the human nasal airways.

    PubMed

    Doorly, D J; Taylor, D J; Schroter, R C

    2008-11-30

    The mechanics of airflow in the human nasal airways is reviewed, drawing on the findings of experimental and computational model studies. Modelling inevitably requires simplifications and assumptions, particularly given the complexity of the nasal airways. The processes entailed in modelling the nasal airways (from defining the model, to its production and, finally, validating the results) are critically examined, both for physical models and for computational simulations. Uncertainty still surrounds the appropriateness of the various assumptions made in modelling, particularly with regard to the nature of flow. New results are presented in which high-speed particle image velocimetry (PIV) and direct numerical simulation are applied to investigate the development of flow instability in the nasal cavity. These illustrate some of the improved capabilities afforded by technological developments for future model studies. The need for further improvements in characterising airway geometry and flow, together with promising new methods, is briefly discussed.

  8. Incorporating variability in simulations of seasonally forced phenology using integral projection models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goodsman, Devin W.; Aukema, Brian H.; McDowell, Nate G.

    Phenology models are becoming increasingly important tools to accurately predict how climate change will impact the life histories of organisms. We propose a class of integral projection phenology models derived from stochastic individual-based models of insect development and demography. Our derivation, which is based on the rate summation concept, produces integral projection models that capture the effect of phenotypic rate variability on insect phenology, but which are typically more computationally frugal than equivalent individual-based phenology models. We demonstrate our approach using a temperature-dependent model of the demography of the mountain pine beetle (Dendroctonus ponderosae Hopkins), an insect that kills mature pine trees. This work illustrates how a wide range of stochastic phenology models can be reformulated as integral projection models. Due to their computational efficiency, these integral projection models are suitable for deployment in large-scale simulations, such as studies of altered pest distributions under climate change.

  9. Efficient field-theoretic simulation of polymer solutions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Villet, Michael C.; Fredrickson, Glenn H., E-mail: ghf@mrl.ucsb.edu; Department of Materials, University of California, Santa Barbara, California 93106

    2014-12-14

    We present several developments that facilitate the efficient field-theoretic simulation of polymers by complex Langevin sampling. A regularization scheme using finite Gaussian excluded volume interactions is used to derive a polymer solution model that appears free of ultraviolet divergences and hence is well-suited for lattice-discretized field theoretic simulation. We show that such models can exhibit ultraviolet sensitivity, a numerical pathology that dramatically increases sampling error in the continuum lattice limit, and further show that this pathology can be eliminated by appropriate model reformulation by variable transformation. We present an exponential time differencing algorithm for integrating complex Langevin equations for field theoretic simulation, and show that the algorithm exhibits excellent accuracy and stability properties for our regularized polymer model. These developments collectively enable substantially more efficient field-theoretic simulation of polymers, and illustrate the importance of simultaneously addressing analytical and numerical pathologies when implementing such computations.
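
    As a toy illustration of the exponential time differencing (ETD) idea mentioned above, the Python sketch below integrates a scalar Langevin equation dw/dt = c w + N(w) + noise by treating the stiff linear term exactly. The scalar equation and all coefficients are hypothetical; the paper applies ETD to lattice field equations.

        import numpy as np

        # First-order ETD step: w <- exp(c h) w + (exp(c h) - 1)/c * N(w) + noise.
        rng = np.random.default_rng(2)
        c, g, h, steps = -10.0, -1.0, 0.05, 2000   # linear coeff., nonlinearity, step, count
        ech = np.exp(c * h)
        w = 1.0 + 0.0j
        for _ in range(steps):
            nonlinear = g * w ** 3                  # toy nonlinear force
            noise = np.sqrt(h) * rng.standard_normal()
            w = ech * w + (ech - 1.0) / c * nonlinear + noise
        print(f"late-time sample: {w:.4f}")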

  10. Probabilistic Composite Design

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.

    1997-01-01

    Probabilistic composite design is described in terms of a computational simulation. This simulation tracks probabilistically the composite design evolution from constituent materials and fabrication process, through composite mechanics, to structural components. Comparisons with experimental data are provided to illustrate selection of probabilistic design allowables, test methods/specimen guidelines, and identification of in situ versus pristine strength. For example, results show that: in situ fiber tensile strength is 90% of its pristine strength; flat-wise long-tapered specimens are most suitable for setting ply tensile strength allowables; a composite radome can be designed with a reliability of 0.999999; and laminate fatigue exhibits wide-spread scatter at 90% cyclic-stress to static-strength ratios.

  11. Computer-generated, calligraphic, full-spectrum color system for visual simulation landing approach maneuvers

    NASA Technical Reports Server (NTRS)

    Chase, W. D.

    1975-01-01

    The calligraphic chromatic projector described here was developed to improve the perceived realism of visual scene simulation ('out-the-window' visuals). The optical arrangement of the projector is illustrated and discussed. The device permits drawing 2000 vectors in as many as 500 colors, all above critical flicker frequencies, with scene resolution and brightness at levels acceptable to the pilot and maximum system capabilities of 1000 lines and 1000 fL. The method for generating the colors is discussed, along with an experiment conducted to demonstrate potential improvements in performance and pilot opinion. Current research work and future research plans are noted.

  12. RARtool: A MATLAB Software Package for Designing Response-Adaptive Randomized Clinical Trials with Time-to-Event Outcomes.

    PubMed

    Ryeznik, Yevgen; Sverdlov, Oleksandr; Wong, Weng Kee

    2015-08-01

    Response-adaptive randomization designs are becoming increasingly popular in clinical trial practice. In this paper, we present RARtool, a user-interface software package developed in MATLAB for designing response-adaptive randomized comparative clinical trials with censored time-to-event outcomes. The RARtool software can compute different types of optimal treatment allocation designs, and it can simulate response-adaptive randomization procedures targeting selected optimal allocations. Through simulations, an investigator can assess design characteristics under a variety of experimental scenarios and select the best procedure for practical implementation. We illustrate the utility of our RARtool software by redesigning a survival trial from the literature.
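
    To make the simulation side concrete, a Python sketch of one generic response-adaptive loop follows. RARtool itself is MATLAB and handles censored time-to-event outcomes; the binary-outcome rule and response rates below are simplifying assumptions, with allocation probabilities steered toward Neyman allocation as responses accumulate.

        import numpy as np

        # Generic response-adaptive randomization sketch for two arms with
        # binary outcomes; not RARtool's interface or its survival designs.
        rng = np.random.default_rng(3)
        p_true = {"A": 0.35, "B": 0.55}                 # hypothetical response rates
        successes = {"A": 1, "B": 1}
        counts = {"A": 2, "B": 2}                       # prior pseudo-counts
        assigned = []
        for patient in range(200):
            pa = successes["A"] / counts["A"]
            pb = successes["B"] / counts["B"]
            sa, sb = np.sqrt(pa * (1 - pa)), np.sqrt(pb * (1 - pb))
            prob_a = sa / (sa + sb) if sa + sb > 0 else 0.5   # Neyman target
            arm = "A" if rng.random() < prob_a else "B"
            counts[arm] += 1
            successes[arm] += int(rng.random() < p_true[arm])
            assigned.append(arm)
        print(f"allocated to A: {assigned.count('A')}, to B: {assigned.count('B')}")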

  13. X-ray solution scattering combined with computation characterizing protein folds and multiple conformational states : computation and application.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, S.; Park, S.; Makowski, L.

    Small angle X-ray scattering (SAXS) is an increasingly powerful technique to characterize the structure of biomolecules in solution. We present a computational method for accurately and efficiently computing the solution scattering curve from a protein with dynamical fluctuations. The method is built upon a coarse-grained (CG) representation of the protein. This CG approach takes advantage of the low-resolution character of solution scattering. It allows rapid determination of the scattering pattern from conformations extracted from CG simulations to obtain scattering characterization of the protein conformational landscapes. Important elements incorporated in the method include an effective residue-based structure factor for each amino acid, an explicit treatment of the hydration layer at the surface of the protein, and an ensemble average of scattering from all accessible conformations to account for macromolecular flexibility. The CG model is calibrated and shown to accurately reproduce the experimental scattering curve of hen egg white lysozyme. We then illustrate the computational method by calculating the solution scattering pattern of several representative protein folds and multiple conformational states. The results suggest that solution scattering data, when combined with a reliable computational method, have great potential for a better structural description of multi-domain complexes in different functional states, and for recognizing structural folds when sequence similarity to a protein of known structure is low. Possible applications of the method are discussed.
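
    The core of such calculations is a Debye-type sum over coarse-grained sites, I(q) = sum_ij f_i f_j sin(q r_ij)/(q r_ij); the Python sketch below evaluates it for a random blob of beads. The bead coordinates and uniform form factors are placeholders for the paper's residue-based structure factors and hydration layer, which are not reproduced here.

        import numpy as np

        # Debye formula for a coarse-grained bead model (illustrative inputs).
        rng = np.random.default_rng(4)
        coords = 15.0 * rng.standard_normal((50, 3))    # 50 beads, Angstrom scale
        f = np.ones(len(coords))                        # uniform form factors
        r = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
        ff = np.outer(f, f)
        q_values = np.linspace(0.01, 0.5, 60)           # 1/Angstrom
        intensity = []
        for q in q_values:
            qr = q * r
            sinc = np.where(qr > 1e-12, np.sin(qr) / np.maximum(qr, 1e-12), 1.0)
            intensity.append(np.sum(ff * sinc))
        print(f"I(q) falls by a factor {intensity[0] / intensity[-1]:.1f} across the range")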

  14. HYDRA-II: A hydrothermal analysis computer code: Volume 2, User's manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCann, R.A.; Lowery, P.S.; Lessor, D.L.

    1987-09-01

    HYDRA-II is a hydrothermal computer code capable of three-dimensional analysis of coupled conduction, convection, and thermal radiation problems. This code is especially appropriate for simulating the steady-state performance of spent fuel storage systems. The code has been evaluated for this application for the US Department of Energy's Commercial Spent Fuel Management Program. HYDRA-II provides a finite-difference solution in cartesian coordinates to the equations governing the conservation of mass, momentum, and energy. A cylindrical coordinate system may also be used to enclose the cartesian coordinate system. This exterior coordinate system is useful for modeling cylindrical cask bodies. The difference equations for conservation of momentum incorporate directional porosities and permeabilities that are available to model solid structures whose dimensions may be smaller than the computational mesh. The equation for conservation of energy permits modeling of orthotropic physical properties and film resistances. Several automated methods are available to model radiation transfer within enclosures and from fuel rod to fuel rod. The documentation of HYDRA-II is presented in three separate volumes. Volume 1 - Equations and Numerics describes the basic differential equations, illustrates how the difference equations are formulated, and gives the solution procedures employed. This volume, Volume 2 - User's Manual, contains code flow charts, discusses the code structure, provides detailed instructions for preparing an input file, and illustrates the operation of the code by means of a sample problem. The final volume, Volume 3 - Verification/Validation Assessments, provides a comparison between the analytical solution and the numerical simulation for problems with a known solution. 6 refs.
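
    In the same spirit as the code's conservation-of-energy solution, though vastly simpler, the Python sketch below solves steady one-dimensional conduction with a volumetric source by finite differences. Grid size, conductivity, source strength and boundary temperatures are arbitrary illustrative values.

        import numpy as np

        # Steady 1D conduction: k T'' + q = 0 with fixed end temperatures,
        # discretized as T[i-1] - 2 T[i] + T[i+1] = -q dx^2 / k.
        n, L, k, q = 50, 1.0, 10.0, 500.0     # nodes, length (m), W/m-K, W/m^3
        dx = L / (n - 1)
        A = np.zeros((n, n))
        b = np.full(n, -q * dx ** 2 / k)
        for i in range(1, n - 1):
            A[i, i - 1], A[i, i], A[i, i + 1] = 1.0, -2.0, 1.0
        A[0, 0] = A[-1, -1] = 1.0
        b[0], b[-1] = 300.0, 350.0            # boundary temperatures (K)
        T = np.linalg.solve(A, b)
        print(f"peak temperature: {T.max():.1f} K at x = {T.argmax() * dx:.2f} m")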

  15. Can one trust quantum simulators?

    PubMed

    Hauke, Philipp; Cucchietti, Fernando M; Tagliacozzo, Luca; Deutsch, Ivan; Lewenstein, Maciej

    2012-08-01

    Various fundamental phenomena of strongly correlated quantum systems such as high-Tc superconductivity, the fractional quantum-Hall effect and quark confinement are still awaiting a universally accepted explanation. The main obstacle is the computational complexity of solving even the most simplified theoretical models which are designed to capture the relevant quantum correlations of the many-body system of interest. In his seminal 1982 paper (Feynman 1982 Int. J. Theor. Phys. 21 467), Richard Feynman suggested that such models might be solved by 'simulation' with a new type of computer whose constituent parts are effectively governed by a desired quantum many-body dynamics. Measurements on this engineered machine, now known as a 'quantum simulator,' would reveal some unknown or difficult to compute properties of a model of interest. We argue that a useful quantum simulator must satisfy four conditions: relevance, controllability, reliability and efficiency. We review the current state of the art of digital and analog quantum simulators. Whereas so far the majority of the focus, both theoretically and experimentally, has been on controllability of relevant models, we emphasize here the need for a careful analysis of reliability and efficiency in the presence of imperfections. We discuss how disorder and noise can impact these conditions, and illustrate our concerns with novel numerical simulations of a paradigmatic example: a disordered quantum spin chain governed by the Ising model in a transverse magnetic field. We find that disorder can decrease the reliability of an analog quantum simulator of this model, although large errors in local observables are introduced only for strong levels of disorder. We conclude that the answer to the question 'Can we trust quantum simulators?' is … to some extent.
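
    The paradigmatic model named above is small enough to diagonalize exactly on a classical computer for short chains; the Python sketch below builds a disordered transverse-field Ising chain and computes its ground state. Chain length, coupling and disorder width are illustrative, and none of the paper's reliability analysis is reproduced.

        import numpy as np

        # Exact diagonalization of H = -J sum sz_i sz_{i+1} - sum h_i sx_i
        # with random transverse fields h_i (open chain, N small).
        N, J, h0, W = 8, 1.0, 1.0, 0.5
        rng = np.random.default_rng(5)
        sx = np.array([[0.0, 1.0], [1.0, 0.0]])
        sz = np.array([[1.0, 0.0], [0.0, -1.0]])

        def site_op(op, i):
            """Embed a single-site operator at site i of the 2^N-dim chain."""
            out = np.array([[1.0]])
            for j in range(N):
                out = np.kron(out, op if j == i else np.eye(2))
            return out

        fields = h0 + W * (2.0 * rng.random(N) - 1.0)   # disordered fields
        H = sum(-fields[i] * site_op(sx, i) for i in range(N))
        for i in range(N - 1):
            H += -J * site_op(sz, i) @ site_op(sz, i + 1)
        energies, states = np.linalg.eigh(H)
        gs = states[:, 0]
        m_x = np.mean([gs @ site_op(sx, i) @ gs for i in range(N)])
        print(f"ground energy {energies[0]:.4f}, mean <sx> {m_x:.4f}")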

  16. Biomechanical testing simulation of a cadaver spine specimen: development and evaluation study.

    PubMed

    Ahn, Hyung Soo; DiAngelo, Denis J

    2007-05-15

    This article describes a computer model of a cadaver cervical spine specimen and virtual biomechanical testing. The goals were to develop a graphics-oriented, multibody model of a cadaver cervical spine and to build a virtual laboratory simulator for biomechanical testing using physics-based dynamic simulation techniques. Physics-based computer simulations apply the laws of physics to solid bodies with defined material properties. This technique can be used to create a virtual simulator for the biomechanical testing of a human cadaver spine. An accurate virtual model and simulation would complement tissue-based in vitro studies by providing a consistent test bed with minimal variability and by reducing cost. The geometry of the cervical vertebrae was created from computed tomography images. Joints linking adjacent vertebrae were modeled as a triple-joint complex, comprising intervertebral disc joints in the anterior region, 2 facet joints in the posterior region, and the surrounding ligament structure. A virtual laboratory simulation of an in vitro testing protocol was performed to evaluate the model responses during flexion, extension, and lateral bending. For kinematic evaluation, the rotation of each motion segment unit, coupling behaviors, and 3-dimensional helical axes of motion were analyzed. The simulation results correlated well with the findings of in vitro tests and published data. For kinetic evaluation, the forces of the intervertebral discs and facet joints of each segment were determined and visually animated. This methodology produced a realistic visualization of the in vitro experiment and allowed for analyses of the kinematics and kinetics of the cadaver cervical spine. With graphical illustrations and animation features, this modeling technique provides vivid and intuitive information.

  17. Mathematical Analysis of an Epidemic System in the Presence of Optimal Control and Population Dispersal

    NASA Astrophysics Data System (ADS)

    Nandi, Swapan Kumar; Jana, Soovoojeet; Mandal, Manotosh; Kar, T. K.

    In this paper, we propose and analyze a susceptible-infected-recovered (SIR) type epidemic model to investigate the effect of transport-related infectious diseases, namely tuberculosis, measles, rubella, influenza, sexually transmitted diseases, etc. The existence and stability criteria of both the disease-free equilibrium point and the endemic equilibrium point are established, and the threshold parametric condition for which the system passes through a transcritical bifurcation is also obtained. An optimal control strategy for the control parameters is formulated and solved both theoretically and numerically. Lastly, we not only illustrate our theoretical results graphically, but also use computer simulation to show that our model would be a good model to study the SARS epidemic of 2003.
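
    For readers who want to experiment, the Python sketch below integrates a plain SIR system with a constant control effort; it is a stripped-down stand-in for the paper's model, which adds population dispersal and a properly optimized control, and all rate values here are hypothetical.

        # Forward-Euler SIR with a constant control u removing susceptibles
        # (e.g., vaccination); illustrative rates, not fitted to SARS data.
        beta, gamma, u = 0.4, 0.1, 0.05
        S, I, R = 0.99, 0.01, 0.0
        dt, days = 0.1, 200
        peak_prevalence = I
        for _ in range(int(days / dt)):
            new_infections = beta * S * I
            dS = -new_infections - u * S
            dI = new_infections - gamma * I
            dR = gamma * I + u * S
            S, I, R = S + dS * dt, I + dI * dt, R + dR * dt
            peak_prevalence = max(peak_prevalence, I)
        print(f"epidemic peak prevalence: {peak_prevalence:.3f}, final S: {S:.3f}")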

  18. Simulation of existing gas-fuelled conventional steam power plant using Cycle Tempo

    NASA Astrophysics Data System (ADS)

    Jamel, M. S.; Abd Rahman, A.; Shamsuddin, A. H.

    2013-06-01

    Simulation of a 200 MW gas-fuelled conventional steam power plant located in Basra, Iraq, was carried out. The thermodynamic performance of the power plant is estimated by a system simulation using the flow-sheet computer program "Cycle-Tempo". The plant components and piping systems were considered and described in detail. The simulation results were verified against data gathered from the log sheets obtained from the station during its operating hours, and good agreement was obtained. Operational factors such as the stack exhaust temperature and excess air percentage were studied and discussed, as were environmental factors such as ambient air temperature and water inlet temperature. In addition, detailed exergy losses are presented, along with the temperature profiles for the main plant components. The results prompted many suggestions for improving the plant performance.

  19. eLoom and Flatland: specification, simulation and visualization engines for the study of arbitrary hierarchical neural architectures.

    PubMed

    Caudell, Thomas P; Xiao, Yunhai; Healy, Michael J

    2003-01-01

    eLoom is an open-source graph simulation software tool, developed at the University of New Mexico (UNM), that enables users to specify and simulate neural network models. Its specification language and libraries enable users to construct and simulate arbitrary, potentially hierarchical network structures on serial and parallel processing systems. In addition, eLoom is integrated with UNM's Flatland, an open-source virtual environments development tool, to provide real-time visualizations of the network structure and activity. Visualization is a useful method for understanding both learning and computation in artificial neural networks. Through animated 3D pictorial representations of the state and flow of information in the network, a better understanding of network functionality is achieved. ART-1, LAPART-II, MLP, and SOM neural networks are presented to illustrate eLoom and Flatland's capabilities.

  20. GTKDynamo: a PyMOL plug-in for QC/MM hybrid potential simulations

    PubMed Central

    Bachega, José Fernando R.; Timmers, Luís Fernando S.M.; Assirati, Lucas; Bachega, Leonardo R.; Field, Martin J.; Wymore, Troy

    2014-01-01

    Hybrid quantum chemical (QC)/molecular mechanical (MM) potentials are very powerful tools for molecular simulation. They are especially useful for studying processes in condensed phase systems, such as chemical reactions, that involve a relatively localized change in electronic structure and where the surrounding environment contributes to these changes but can be represented with more computationally efficient functional forms. Despite their utility, however, these potentials are not always straightforward to apply since the extent of significant electronic structure changes occurring in the condensed phase process may not be intuitively obvious. To facilitate their use we have developed an open-source graphical plug-in, GTKDynamo, that links the PyMOL visualization program and the pDynamo QC/MM simulation library. This article describes the implementation of GTKDynamo and its capabilities and illustrates its application to QC/MM simulations. PMID:24137667

  1. Multi-Dimensional Simulation of LWR Fuel Behavior in the BISON Fuel Performance Code

    NASA Astrophysics Data System (ADS)

    Williamson, R. L.; Capps, N. A.; Liu, W.; Rashid, Y. R.; Wirth, B. D.

    2016-11-01

    Nuclear fuel operates in an extreme environment that induces complex multiphysics phenomena occurring over distances ranging from inter-atomic spacing to meters, and time scales ranging from microseconds to years. To simulate this behavior requires a wide variety of material models that are often complex and nonlinear. The recently developed BISON code represents a powerful fuel performance simulation tool based on its material and physical behavior capabilities, finite-element versatility of spatial representation, and use of parallel computing. The code can operate in full three-dimensional (3D) mode, as well as in reduced two-dimensional (2D) modes, e.g., axisymmetric radial-axial (R-Z) or plane radial-circumferential (R-θ), to suit the application and to allow treatment of global and local effects. A BISON case study was used to illustrate analysis of Pellet Clad Mechanical Interaction failures from manufacturing defects using combined 2D and 3D analyses. The analysis involved commercial fuel rods and demonstrated successful computation of metrics of interest to fuel failures, including cladding peak hoop stress and strain energy density. In comparison with a failure threshold derived from power ramp tests, results corroborate industry analyses of the root cause of the pellet-clad interaction failures and illustrate the importance of modeling 3D local effects around fuel pellet defects, which can produce complex effects including cold spots in the cladding, stress concentrations, and hot spots in the fuel that can lead to enhanced cladding degradation such as hydriding, oxidation, CRUD formation, and stress corrosion cracking.

  2. Multi-Dimensional Simulation of LWR Fuel Behavior in the BISON Fuel Performance Code

    DOE PAGES

    Williamson, R. L.; Capps, N. A.; Liu, W.; ...

    2016-09-27

    Nuclear fuel operates in an extreme environment that induces complex multiphysics phenomena occurring over distances ranging from inter-atomic spacing to meters, and time scales ranging from microseconds to years. To simulate this behavior requires a wide variety of material models that are often complex and nonlinear. The recently developed BISON code represents a powerful fuel performance simulation tool based on its material and physical behavior capabilities, finite-element versatility of spatial representation, and use of parallel computing. The code can operate in full three-dimensional (3D) mode, as well as in reduced two-dimensional (2D) modes, e.g., axisymmetric radial-axial (R-Z) or plane radial-circumferential (R-θ), to suit the application and to allow treatment of global and local effects. A BISON case study was used in this paper to illustrate analysis of Pellet Clad Mechanical Interaction failures from manufacturing defects using combined 2D and 3D analyses. The analysis involved commercial fuel rods and demonstrated successful computation of metrics of interest to fuel failures, including cladding peak hoop stress and strain energy density. Finally, in comparison with a failure threshold derived from power ramp tests, results corroborate industry analyses of the root cause of the pellet-clad interaction failures and illustrate the importance of modeling 3D local effects around fuel pellet defects, which can produce complex effects including cold spots in the cladding, stress concentrations, and hot spots in the fuel that can lead to enhanced cladding degradation such as hydriding, oxidation, CRUD formation, and stress corrosion cracking.

  3. Expert knowledge elicitation using computer simulation: the organization of frail elderly case management as an illustration.

    PubMed

    Chiêm, Jean-Christophe; Van Durme, Thérèse; Vandendorpe, Florence; Schmitz, Olivier; Speybroeck, Niko; Cès, Sophie; Macq, Jean

    2014-08-01

    Various elderly case management projects have been implemented in Belgium. This type of long-term health care intervention involves contextual factors and human interactions. These underlying complex mechanisms can be usefully informed by field experts' knowledge, which is hard to make explicit. However, computer simulation has been suggested as one possible method of overcoming the difficulty of articulating such elicited qualitative views. A simulation model of case management was designed using an agent-based methodology, based on the initial qualitative research material. Variables and rules of interaction were formulated into a simple conceptual framework. This model has been implemented and was used as a support for a structured discussion with experts in case management. The rigorous formulation provided by the agent-based methodology clarified the descriptions of the interventions and the problems encountered regarding: the diverse network topologies of health care actors in the project; the adaptation time required by the intervention; the communication between the health care actors; the institutional context; the organization of the care; and the role of the case manager and his or her personal ability to interpret the informal demands of the frail older person. The simulation model should be seen primarily as a tool for thinking and learning. A number of insights were gained as part of a valuable cognitive process. Computer simulation supporting field experts' elicitation can lead to better-informed decisions in the organization of complex health care interventions. © 2013 John Wiley & Sons, Ltd.

  4. Real-world hydrologic assessment of a fully-distributed hydrological model in a parallel computing environment

    NASA Astrophysics Data System (ADS)

    Vivoni, Enrique R.; Mascaro, Giuseppe; Mniszewski, Susan; Fasel, Patricia; Springer, Everett P.; Ivanov, Valeriy Y.; Bras, Rafael L.

    2011-10-01

    A major challenge in the use of fully-distributed hydrologic models has been the lack of computational capabilities for high-resolution, long-term simulations in large river basins. In this study, we present the parallel model implementation and real-world hydrologic assessment of the Triangulated Irregular Network (TIN)-based Real-time Integrated Basin Simulator (tRIBS). Our parallelization approach is based on the decomposition of a complex watershed using the channel network as a directed graph. The resulting sub-basin partitioning divides effort among processors and handles hydrologic exchanges across boundaries. Through numerical experiments in a set of nested basins, we quantify parallel performance relative to serial runs for a range of processors, simulation complexities and lengths, and sub-basin partitioning methods, while accounting for inter-run variability on a parallel computing system. In contrast to serial simulations, the parallel model speed-up depends on the variability of hydrologic processes. Load balancing significantly improves parallel speed-up with proportionally faster runs as simulation complexity (domain resolution and channel network extent) increases. The best strategy for large river basins is to combine a balanced partitioning with an extended channel network, with potential savings through a lower TIN resolution. Based on these advances, a wider range of applications for fully-distributed hydrologic models are now possible. This is illustrated through a set of ensemble forecasts that account for precipitation uncertainty derived from a statistical downscaling model.
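
    The decomposition idea, treating the channel network as a directed graph and detaching upstream subtrees of roughly balanced workload, can be sketched in a few lines of Python. The toy network and the greedy size-threshold cut below are illustrative simplifications, not tRIBS's partitioner.

        # Channel network as a tree: children[node] lists upstream reaches.
        children = {0: [1, 2], 1: [3, 4], 2: [5], 3: [], 4: [6, 7],
                    5: [], 6: [], 7: []}

        def subtree_size(node):
            """Number of reaches draining through `node`, itself included."""
            return 1 + sum(subtree_size(c) for c in children[node])

        def cut_subbasins(root, target):
            """Detach upstream subtrees of at least `target` reaches."""
            cuts = []
            for child in list(children[root]):
                if subtree_size(child) >= target:
                    cuts.append(child)
                    children[root].remove(child)   # child becomes its own sub-basin
                else:
                    cuts.extend(cut_subbasins(child, target))
            return cuts

        outlets = cut_subbasins(0, target=3) + [0]
        print("sub-basin outlets:", outlets)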

  5. A computer simulation approach to quantify the true area and true area compressibility modulus of biological membranes.

    PubMed

    Chacón, Enrique; Tarazona, Pedro; Bresme, Fernando

    2015-07-21

    We present a new computational approach to quantify the area per lipid and the area compressibility modulus of biological membranes. Our method relies on the analysis of the membrane fluctuations using our recently introduced coupled undulatory (CU) mode [Tarazona et al., J. Chem. Phys. 139, 094902 (2013)], which provides excellent estimates of the bending modulus of model membranes. Unlike the projected area, widely used in computer simulations of membranes, the CU area is thermodynamically consistent. This new area definition makes it possible to accurately estimate the area of the undulating bilayer, and the area per lipid, by excluding any contributions related to the phospholipid protrusions. We find that the area per phospholipid and the area compressibility modulus feature a negligible dependence on system size, making possible their computation using truly small bilayers, involving a few hundred lipids. The area compressibility modulus obtained from the analysis of the CU area fluctuations is fully consistent with the Hooke's law route. Unlike existing methods, our approach relies on a single simulation, and no a priori knowledge of the bending modulus is required. We illustrate our method by analyzing 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphocholine bilayers using the coarse-grained MARTINI force field. The area per lipid and area compressibility modulus obtained with our method and the MARTINI force field are consistent with previous studies of these bilayers.
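
    The fluctuation route used above reduces to the textbook relation K_A = kB T <A> / var(A); the Python sketch below applies it to synthetic area samples standing in for the CU-area time series a simulation would provide. All numbers are arbitrary stand-ins.

        import numpy as np

        # Area compressibility modulus from area fluctuations (synthetic data).
        rng = np.random.default_rng(6)
        kB_T = 2.494                                           # kJ/mol at ~300 K
        areas = rng.normal(loc=40.0, scale=0.12, size=20000)   # nm^2, stand-in samples
        K_A = kB_T * areas.mean() / areas.var()
        print(f"K_A ~ {K_A:.0f} kJ/mol/nm^2")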

  6. A computer simulation approach to quantify the true area and true area compressibility modulus of biological membranes

    NASA Astrophysics Data System (ADS)

    Chacón, Enrique; Tarazona, Pedro; Bresme, Fernando

    2015-07-01

    We present a new computational approach to quantify the area per lipid and the area compressibility modulus of biological membranes. Our method relies on the analysis of the membrane fluctuations using our recently introduced coupled undulatory (CU) mode [Tarazona et al., J. Chem. Phys. 139, 094902 (2013)], which provides excellent estimates of the bending modulus of model membranes. Unlike the projected area, widely used in computer simulations of membranes, the CU area is thermodynamically consistent. This new area definition makes it possible to accurately estimate the area of the undulating bilayer, and the area per lipid, by excluding any contributions related to the phospholipid protrusions. We find that the area per phospholipid and the area compressibility modulus feature a negligible dependence on system size, making possible their computation using truly small bilayers, involving a few hundred lipids. The area compressibility modulus obtained from the analysis of the CU area fluctuations is fully consistent with the Hooke's law route. Unlike existing methods, our approach relies on a single simulation, and no a priori knowledge of the bending modulus is required. We illustrate our method by analyzing 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphocholine bilayers using the coarse-grained MARTINI force field. The area per lipid and area compressibility modulus obtained with our method and the MARTINI force field are consistent with previous studies of these bilayers.

  7. Ab initio density-functional calculations in materials science: from quasicrystals over microporous catalysts to spintronics.

    PubMed

    Hafner, Jürgen

    2010-09-29

    During the last 20 years computer simulations based on a quantum-mechanical description of the interactions between electrons and atomic nuclei have had an increasingly important impact on materials science, not only promoting a deeper understanding of the fundamental physical phenomena, but also enabling the computer-assisted design of materials for future technologies. The backbone of atomic-scale computational materials science is density-functional theory (DFT), which allows us to cast the intractable complexity of electron-electron interactions into the form of an effective single-particle equation determined by the exchange-correlation functional. Progress in DFT-based calculations of the properties of materials and of simulations of processes in materials depends on: (1) the development of improved exchange-correlation functionals and advanced post-DFT methods and their implementation in highly efficient computer codes, (2) the development of methods allowing us to bridge the gaps in the temperature, pressure, time and length scales between the ab initio calculations and real-world experiments and (3) the extension of the functionality of these codes, permitting us to treat additional properties and new processes. In this paper we discuss the current status of techniques for performing quantum-based simulations on materials and present some illustrative examples of applications to complex quasiperiodic alloys, cluster-support interactions in microporous acid catalysts and magnetic nanostructures.

  8. Where next for the reproducibility agenda in computational biology?

    PubMed

    Lewis, Joanna; Breeze, Charles E; Charlesworth, Jane; Maclaren, Oliver J; Cooper, Jonathan

    2016-07-15

    The concept of reproducibility is a foundation of the scientific method. With the arrival of fast and powerful computers over the last few decades, there has been an explosion of results based on complex computational analyses and simulations. The reproducibility of these results has been addressed mainly in terms of exact replicability or numerical equivalence, ignoring the wider issue of the reproducibility of conclusions through equivalent, extended or alternative methods. We use case studies from our own research experience to illustrate how concepts of reproducibility might be applied in computational biology. Several fields have developed 'minimum information' checklists to support the full reporting of computational simulations, analyses and results, and standardised data formats and model description languages can facilitate the use of multiple systems to address the same research question. We note the importance of defining the key features of a result to be reproduced, and the expected agreement between original and subsequent results. Dynamic, updatable tools for publishing methods and results are becoming increasingly common, but sometimes come at the cost of clear communication. In general, the reproducibility of computational research is improving but would benefit from additional resources and incentives. We conclude with a series of linked recommendations for improving reproducibility in computational biology through communication, policy, education and research practice. More reproducible research will lead to higher quality conclusions, deeper understanding and more valuable knowledge.

  9. Multiple Exposure of Rendezvous Docking Simulator - Gemini Program

    NASA Image and Video Library

    1964-02-07

    Multiple exposure of Rendezvous Docking Simulator. Francis B. Smith described the simulator as follows: The rendezvous and docking operation of the Gemini spacecraft with the Agena and of the Apollo Command Module with the Lunar Excursion Module have been the subject of simulator studies for several years. This figure illustrates the Gemini-Agena rendezvous docking simulator at Langley. The Gemini spacecraft was supported in a gimbal system by an overhead crane and gantry arrangement which provided 6 degrees of freedom - roll, pitch, yaw, and translation in any direction - all controllable by the astronaut in the spacecraft. Here again the controls fed into a computer which in turn provided an input to the servos driving the spacecraft so that it responded to control motions in a manner which accurately simulated the Gemini spacecraft. -- Published in Barton C. Hacker and James M. Grimwood, On the Shoulders of Titans: A History of Project Gemini, NASA SP-4203 Francis B. Smith, Simulators for Manned Space Research, Paper presented at the 1966 IEEE International convention, March 21-25, 1966.

  10. Phase behavior and orientational ordering in block copolymers doped with anisotropic nanoparticles

    NASA Astrophysics Data System (ADS)

    Osipov, M. A.; Gorkunov, M. V.; Berezkin, A. V.; Kudryavtsev, Y. V.

    2018-04-01

    A molecular field theory and coarse-grained computer simulations with dissipative particle dynamics have been used to study the spontaneous orientational ordering of anisotropic nanoparticles in the lamellar and hexagonal phases of diblock copolymers and the effect of nanoparticles on the phase behavior of these systems. Both the molecular theory and computer simulations indicate that strongly anisotropic nanoparticles are ordered orientationally mainly in the boundary region between the domains, and the nematic order parameter possesses opposite signs in adjacent domains. The orientational order is induced by the boundary and by the interaction between nanoparticles and the monomer units in different domains. In simulations, sufficiently long and strongly selective nanoparticles are ordered also inside the domains. The nematic order parameter and local concentration profiles of nanoparticles have been calculated numerically using the model of a nanoparticle with two interaction centers, and also determined using the results of computer simulations. A number of phase diagrams have been obtained which illustrate the effect of nanoparticle selectivity and molar fraction on the stability ranges of the various phases. Different morphologies have been identified by analyzing the static structure factor, and a phase diagram has been constructed in the coordinates of nanoparticle concentration versus copolymer composition. Orientational ordering of even a small fraction of nanoparticles may result in a significant increase of the dielectric anisotropy of a polymer nanocomposite, which is important for various applications.
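
    The orientational measure referred to above is the standard nematic order parameter S = <(3 cos^2 theta - 1)/2>; the Python sketch below evaluates it for synthetic nanoparticle axes biased toward the lamellar normal. The bias is an arbitrary stand-in for orientations extracted from dissipative-particle-dynamics snapshots.

        import numpy as np

        # Nematic order parameter of unit axes relative to the z direction.
        rng = np.random.default_rng(7)
        axes = rng.standard_normal((1000, 3)) + np.array([0.0, 0.0, 1.5])
        axes /= np.linalg.norm(axes, axis=1, keepdims=True)
        cos_theta = axes[:, 2]
        S = np.mean(1.5 * cos_theta ** 2 - 0.5)
        print(f"nematic order parameter S = {S:.2f}")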

  11. When does a physical system compute?

    PubMed

    Horsman, Clare; Stepney, Susan; Wagner, Rob C; Kendon, Viv

    2014-09-08

    Computing is a high-level process of a physical system. Recent interest in non-standard computing systems, including quantum and biological computers, has brought this physical basis of computing to the forefront. There has been, however, no consensus on how to tell if a given physical system is acting as a computer or not, leading to confusion over novel computational devices, and even claims that every physical event is a computation. In this paper, we introduce a formal framework that can be used to determine whether a physical system is performing a computation. We demonstrate how the abstract computational level interacts with the physical device level, in comparison with the use of mathematical models in experimental science. This powerful formulation allows a precise description of experiments, technology, computation and simulation, giving our central conclusion: physical computing is the use of a physical system to predict the outcome of an abstract evolution. We give conditions for computing, illustrated using a range of non-standard computing scenarios. The framework also covers broader computing contexts, where there is no obvious human computer user. We introduce the notion of a 'computational entity', and its critical role in defining when computing is taking place in physical systems.

  12. When does a physical system compute?

    PubMed Central

    Horsman, Clare; Stepney, Susan; Wagner, Rob C.; Kendon, Viv

    2014-01-01

    Computing is a high-level process of a physical system. Recent interest in non-standard computing systems, including quantum and biological computers, has brought this physical basis of computing to the forefront. There has been, however, no consensus on how to tell if a given physical system is acting as a computer or not, leading to confusion over novel computational devices, and even claims that every physical event is a computation. In this paper, we introduce a formal framework that can be used to determine whether a physical system is performing a computation. We demonstrate how the abstract computational level interacts with the physical device level, in comparison with the use of mathematical models in experimental science. This powerful formulation allows a precise description of experiments, technology, computation and simulation, giving our central conclusion: physical computing is the use of a physical system to predict the outcome of an abstract evolution. We give conditions for computing, illustrated using a range of non-standard computing scenarios. The framework also covers broader computing contexts, where there is no obvious human computer user. We introduce the notion of a ‘computational entity’, and its critical role in defining when computing is taking place in physical systems. PMID:25197245

  13. Computational Simulation of Acoustic Modes in Rocket Combustors

    NASA Technical Reports Server (NTRS)

    Harper, Brent (Technical Monitor); Merkle, C. L.; Sankaran, V.; Ellis, M.

    2004-01-01

    A combination of computational fluid dynamic analysis and analytical solutions is being used to characterize the dominant modes in liquid rocket engines in conjunction with laboratory experiments. The analytical solutions are based on simplified geometries and flow conditions and are used for careful validation of the numerical formulation. The validated computational model is then extended to realistic geometries and flow conditions to test the effects of various parameters on chamber modes, to guide and interpret companion laboratory experiments in simplified combustors, and to scale the measurements to engine operating conditions. In turn, the experiments are used to validate and improve the model. The present paper gives an overview of the numerical and analytical techniques along with comparisons illustrating the accuracy of the computations as a function of grid resolution. A representative parametric study of the effect of combustor mean flow Mach number and combustor aspect ratio on the chamber modes is then presented for both transverse and longitudinal modes. The results show that higher mean flow Mach numbers drive the modes to lower frequencies. Estimates of transverse wave mechanics in a high aspect ratio combustor are then contrasted with longitudinal modes in a long and narrow combustor to provide understanding of potential experimental simulations.
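
    The reported trend that a higher mean-flow Mach number lowers the mode frequencies already appears in the textbook closed-closed duct estimate f_n = n c (1 - M^2) / (2 L), evaluated below in Python for illustrative chamber values; the paper's computations of course go far beyond this one-dimensional formula.

        # Longitudinal chamber modes with uniform mean flow (textbook estimate).
        c, L = 1000.0, 0.5                  # sound speed (m/s), chamber length (m)
        for M in (0.0, 0.2, 0.4):
            freqs = [n * c * (1.0 - M ** 2) / (2.0 * L) for n in (1, 2, 3)]
            print(f"M={M:.1f}: " + ", ".join(f"{f:.0f} Hz" for f in freqs))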

  14. Modernization and optimization of a legacy open-source CFD code for high-performance computing architectures

    DOE PAGES

    Gel, Aytekin; Hu, Jonathan; Ould-Ahmed-Vall, ElMoustapha; ...

    2017-03-20

    Legacy codes remain a crucial element of today's simulation-based engineering ecosystem due to the extensive validation process and investment in such software. The rapid evolution of high-performance computing architectures necessitates the modernization of these codes. One approach to modernization is a complete overhaul of the code. However, this could require extensive investments, such as rewriting in modern languages, new data constructs, etc., which will necessitate systematic verification and validation to re-establish the credibility of the computational models. The current study advocates using a more incremental approach and is a culmination of several modernization efforts of the legacy code MFIX, which is an open-source computational fluid dynamics code that has evolved over several decades, widely used in multiphase flows and still being developed by the National Energy Technology Laboratory. Two different modernization approaches, 'bottom-up' and 'top-down', are illustrated. Preliminary results show that up to an 8.5x improvement at the selected kernel level was achieved with the first approach, and up to a 50% improvement in total simulated time with the latter, for the demonstration cases and target HPC systems employed.

  15. Modernization and optimization of a legacy open-source CFD code for high-performance computing architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gel, Aytekin; Hu, Jonathan; Ould-Ahmed-Vall, ElMoustapha

    Legacy codes remain a crucial element of today's simulation-based engineering ecosystem due to the extensive validation process and investment in such software. The rapid evolution of high-performance computing architectures necessitates the modernization of these codes. One approach to modernization is a complete overhaul of the code. However, this could require extensive investments, such as rewriting in modern languages, new data constructs, etc., which will necessitate systematic verification and validation to re-establish the credibility of the computational models. The current study advocates using a more incremental approach and is a culmination of several modernization efforts of the legacy code MFIX, which is an open-source computational fluid dynamics code that has evolved over several decades, widely used in multiphase flows and still being developed by the National Energy Technology Laboratory. Two different modernization approaches, 'bottom-up' and 'top-down', are illustrated. Preliminary results show that up to an 8.5x improvement at the selected kernel level was achieved with the first approach, and up to a 50% improvement in total simulated time with the latter, for the demonstration cases and target HPC systems employed.

  16. Semivariogram modeling by weighted least squares

    USGS Publications Warehouse

    Jian, X.; Olea, R.A.; Yu, Y.-S.

    1996-01-01

    Permissible semivariogram models are fundamental for geostatistical estimation and simulation of attributes having a continuous spatiotemporal variation. The usual practice is to fit those models manually to experimental semivariograms. Fitting by weighted least squares produces results comparable to manual fitting, in less time and systematically, and provides an Akaike information criterion for the proper comparison of alternative models. We illustrate the application of a computer program with examples showing the fitting of simple and nested models. Copyright © 1996 Elsevier Science Ltd.
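
    A weighted least-squares fit of this kind is easy to reproduce; the Python sketch below fits a spherical semivariogram model with weights growing with the number of pairs per lag. The synthetic experimental values and the 1/sqrt(pairs) weighting are illustrative assumptions, not the cited program's exact scheme.

        import numpy as np
        from scipy.optimize import curve_fit

        def spherical(h, nugget, sill, a):
            """Spherical semivariogram model with range a."""
            return np.where(h < a,
                            nugget + sill * (1.5 * h / a - 0.5 * (h / a) ** 3),
                            nugget + sill)

        lags = np.linspace(5.0, 100.0, 20)
        rng = np.random.default_rng(8)
        gamma_exp = spherical(lags, 0.1, 1.0, 60.0) + 0.05 * rng.standard_normal(20)
        pairs = np.linspace(200, 50, 20)                # pairs per lag class
        sigma = 1.0 / np.sqrt(pairs)                    # more pairs, smaller sigma
        popt, _ = curve_fit(spherical, lags, gamma_exp, p0=[0.05, 1.0, 50.0],
                            sigma=sigma, bounds=([0.0, 0.0, 1.0], [np.inf, np.inf, 1e3]))
        print(f"nugget={popt[0]:.2f}, sill={popt[1]:.2f}, range={popt[2]:.1f}")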

  17. Europa Scene: Plume, Galileo, Magnetic Field (Artist's Concept)

    NASA Image and Video Library

    2018-05-14

    Artist's illustration of Jupiter and Europa (in the foreground) with the Galileo spacecraft after its pass through a plume erupting from Europa's surface. A new computer simulation gives us an idea of how the magnetic field interacted with a plume. The magnetic field lines (depicted in blue) show how the plume interacts with the ambient flow of Jovian plasma. The red colors on the lines show more dense areas of plasma. https://photojournal.jpl.nasa.gov/catalog/PIA21922

  18. Detecting aircraft with a low-resolution infrared sensor.

    PubMed

    Jakubowicz, Jérémie; Lefebvre, Sidonie; Maire, Florian; Moulines, Eric

    2012-06-01

    Existing computer simulations of aircraft infrared signature (IRS) do not account for dispersion induced by uncertainty on input data, such as aircraft aspect angles and meteorological conditions. As a result, they are of little use in estimating the detection performance of IR optronic systems: in this case, the scenario encompasses many possible situations that must indeed be addressed, but cannot be simulated one by one. In this paper, we focus on low-resolution infrared sensors and propose a methodological approach for predicting the simulated IRS dispersion of poorly known aircraft and performing aircraft detection on the resulting set of low-resolution infrared images. It is based on a sensitivity analysis, which identifies inputs that have negligible influence on the computed IRS and can be set at a constant value, on a quasi-Monte Carlo survey of the code output dispersion, and on a new detection test taking advantage of level set estimation. This method is illustrated in a typical scenario, i.e., a daylight air-to-ground full-frontal attack by a generic combat aircraft flying at low altitude, over a database of 90,000 simulated aircraft images. Assuming a white noise or a fractional Brownian background model, detection performances are very promising.
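
    The survey step lends itself to a short sketch: scrambled Sobol' points cover the retained input space far more evenly than pseudo-random draws. In the Python fragment below the two inputs and the closed-form response are stand-ins for the study's aspect angles, meteorological inputs and IRS code.

        import numpy as np
        from scipy.stats import qmc

        # Quasi-Monte Carlo survey over two illustrative inputs.
        sampler = qmc.Sobol(d=2, scramble=True, seed=9)
        unit = sampler.random_base2(m=8)                       # 2^8 = 256 points
        points = qmc.scale(unit, [0.0, 250.0], [90.0, 310.0])  # deg, K (stand-ins)
        response = np.cos(np.radians(points[:, 0])) * points[:, 1] / 300.0  # toy proxy
        print(f"proxy output spread: {response.min():.2f} .. {response.max():.2f}")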

  19. Predicting low-temperature free energy landscapes with flat-histogram Monte Carlo methods

    NASA Astrophysics Data System (ADS)

    Mahynski, Nathan A.; Blanco, Marco A.; Errington, Jeffrey R.; Shen, Vincent K.

    2017-02-01

    We present a method for predicting the free energy landscape of fluids at low temperatures from flat-histogram grand canonical Monte Carlo simulations performed at higher ones. We illustrate our approach for both pure and multicomponent systems using two different sampling methods as a demonstration. This allows us to predict the thermodynamic behavior of systems which undergo both first order and continuous phase transitions upon cooling using simulations performed only at higher temperatures. After surveying a variety of different systems, we identify a range of temperature differences over which the extrapolation of high temperature simulations tends to quantitatively predict the thermodynamic properties of fluids at lower ones. Beyond this range, extrapolation still provides a reasonably well-informed estimate of the free energy landscape; this prediction then requires less computational effort to refine with an additional simulation at the desired temperature than reconstruction of the surface without any initial estimate. In either case, this method significantly increases the computational efficiency of these flat-histogram methods when investigating thermodynamic properties of fluids over a wide range of temperatures. For example, we demonstrate how a binary fluid phase diagram may be quantitatively predicted for many temperatures using only information obtained from a single supercritical state.
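
    A relative of the extrapolation idea can be shown in a few lines of Python: samples collected at one inverse temperature are reweighted to estimate an average at a nearby one. The Gaussian energies below are synthetic stand-ins for flat-histogram output, and the paper extrapolates full free-energy landscapes rather than a single average.

        import numpy as np

        # Reweight energies sampled at beta0 to a nearby beta.
        rng = np.random.default_rng(10)
        beta0, beta = 1.0, 1.1
        E = rng.normal(loc=-100.0, scale=5.0, size=100000)   # samples at beta0
        logw = -(beta - beta0) * E
        logw -= logw.max()                                   # stabilize exponentials
        w = np.exp(logw)
        E_beta = np.sum(w * E) / np.sum(w)
        print(f"<E> at beta0: {E.mean():.1f}, reweighted to beta: {E_beta:.1f}")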

  20. An Accurate and Computationally Efficient Model for Membrane-Type Circular-Symmetric Micro-Hotplates

    PubMed Central

    Khan, Usman; Falconi, Christian

    2014-01-01

    Ideally, the design of high-performance micro-hotplates would require a large number of simulations because of the existence of many important design parameters as well as the possibly crucial effects of both spread and drift. However, the computational cost of FEM simulations, which are the only available tool for accurately predicting the temperature in micro-hotplates, is very high. As a result, micro-hotplate designers generally have no effective simulation tools for the optimization. In order to circumvent these issues, here, we propose a model for practical circular-symmetric micro-hotplates which takes advantage of modified Bessel functions, a computationally efficient matrix approach for considering the relevant boundary conditions, Taylor linearization for modeling the Joule heating and radiation losses, and an external-region-segmentation strategy in order to accurately take into account radiation losses in the entire micro-hotplate. The proposed model is almost as accurate as FEM simulations and two to three orders of magnitude more computationally efficient (e.g., 45 s versus more than 8 h). The residual errors, which are mainly associated with the undesired heating in the electrical contacts, are small (e.g., a few degrees Celsius for an 800 °C operating temperature) and, for important analyses, almost constant. Therefore, we also introduce a computationally easy single-FEM-compensation strategy in order to reduce the residual errors to about 1 °C. As illustrative examples of the power of our approach, we report the systematic investigation of a spread in the membrane thermal conductivity and of combined variations of both ambient and bulk temperatures. Our model enables a much faster characterization of micro-hotplates and, thus, a much more effective optimization prior to fabrication. PMID:24763214
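
    The Bessel-function step can be illustrated compactly: in an annular membrane region the fin equation T'' + T'/r - m^2 T = 0 has the solution T(r) = A I0(m r) + B K0(m r), with A and B fixed by the rim temperatures. The geometry and loss coefficient in the Python sketch below are illustrative assumptions, not the paper's calibrated model.

        import numpy as np
        from scipy.special import iv, kv

        # Temperature rise across an annular membrane via modified Bessel functions.
        m = 800.0                            # 1/m, lumped loss coefficient (assumed)
        a, b = 100e-6, 500e-6                # heater edge and rim radii (m)
        Ta, Tb = 500.0, 0.0                  # temperature rise at r=a and r=b (K)
        M = np.array([[iv(0, m * a), kv(0, m * a)],
                      [iv(0, m * b), kv(0, m * b)]])
        A, B = np.linalg.solve(M, [Ta, Tb])
        r = np.linspace(a, b, 5)
        print(np.round(A * iv(0, m * r) + B * kv(0, m * r), 1))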

  1. Real-time dynamics of lattice gauge theories with a few-qubit quantum computer

    NASA Astrophysics Data System (ADS)

    Martinez, Esteban A.; Muschik, Christine A.; Schindler, Philipp; Nigg, Daniel; Erhard, Alexander; Heyl, Markus; Hauke, Philipp; Dalmonte, Marcello; Monz, Thomas; Zoller, Peter; Blatt, Rainer

    2016-06-01

    Gauge theories are fundamental to our understanding of interactions between the elementary constituents of matter as mediated by gauge bosons. However, computing the real-time dynamics in gauge theories is a notorious challenge for classical computational methods. This has recently stimulated theoretical effort, using Feynman’s idea of a quantum simulator, to devise schemes for simulating such theories on engineered quantum-mechanical devices, with the difficulty that gauge invariance and the associated local conservation laws (Gauss laws) need to be implemented. Here we report the experimental demonstration of a digital quantum simulation of a lattice gauge theory, by realizing (1 + 1)-dimensional quantum electrodynamics (the Schwinger model) on a few-qubit trapped-ion quantum computer. We are interested in the real-time evolution of the Schwinger mechanism, describing the instability of the bare vacuum due to quantum fluctuations, which manifests itself in the spontaneous creation of electron-positron pairs. To make efficient use of our quantum resources, we map the original problem to a spin model by eliminating the gauge fields in favour of exotic long-range interactions, which can be directly and efficiently implemented on an ion trap architecture. We explore the Schwinger mechanism of particle-antiparticle generation by monitoring the mass production and the vacuum persistence amplitude. Moreover, we track the real-time evolution of entanglement in the system, which illustrates how particle creation and entanglement generation are directly related. Our work represents a first step towards quantum simulation of high-energy theories using atomic physics experiments—the long-term intention is to extend this approach to real-time quantum simulations of non-Abelian lattice gauge theories.

  2. Visualization of Octree Adaptive Mesh Refinement (AMR) in Astrophysical Simulations

    NASA Astrophysics Data System (ADS)

    Labadens, M.; Chapon, D.; Pomaréde, D.; Teyssier, R.

    2012-09-01

    Computer simulations are important in current cosmological research. Those simulations run in parallel on thousands of processors and produce huge amounts of data. Adaptive mesh refinement is used to reduce the computing cost while keeping good numerical accuracy in regions of interest. RAMSES is a cosmological code developed by the Commissariat à l'énergie atomique et aux énergies alternatives (English: Atomic Energy and Alternative Energies Commission) which uses octree adaptive mesh refinement. Compared to grid-based AMR, octree AMR has the advantage of fitting the adaptive resolution of the grid very precisely to the local problem complexity. However, this specific octree data type needs specific software to be visualized, as generic visualization tools work on Cartesian grid data types. This is why the PYMSES software has also been developed by our team. It relies on the Python scripting language to ensure modular and easy access for exploring these specific data. In order to take advantage of the High Performance Computer which runs the RAMSES simulation, it also uses MPI and multiprocessing to run some parallel code. We would like to present our PYMSES software in more detail, with some performance benchmarks. PYMSES currently has two visualization techniques which work directly on the AMR. The first is a splatting technique, and the second is a custom ray-tracing technique. Both have their own advantages and drawbacks. We have also compared two parallel programming techniques: the Python multiprocessing library versus the use of an MPI run. The load-balancing strategy has to be defined carefully in order to achieve a good speed-up in our computation. Results obtained with this software are illustrated in the context of a massive, 9000-processor parallel simulation of a Milky Way-like galaxy.
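
    The octree data type itself is simple to sketch in Python: a cell splits into eight children wherever a refinement criterion flags it, so resolution follows the local complexity. The near-origin criterion below is an arbitrary stand-in for a density threshold, and nothing here reflects RAMSES or PYMSES internals.

        # Minimal octree: refine a cell into eight children when flagged.
        class OctreeNode:
            def __init__(self, center, half, level=0):
                self.center, self.half, self.level = center, half, level
                self.children = []

            def refine(self, needs_refining, max_level=4):
                if self.level >= max_level or not needs_refining(self):
                    return
                h = self.half / 2.0
                for dx in (-h, h):
                    for dy in (-h, h):
                        for dz in (-h, h):
                            c = (self.center[0] + dx, self.center[1] + dy,
                                 self.center[2] + dz)
                            child = OctreeNode(c, h, self.level + 1)
                            child.refine(needs_refining, max_level)
                            self.children.append(child)

            def leaves(self):
                if not self.children:
                    return 1
                return sum(child.leaves() for child in self.children)

        # Arbitrary criterion: refine cells whose center lies near the origin.
        near_origin = lambda node: sum(x * x for x in node.center) ** 0.5 < 2 * node.half
        root = OctreeNode((0.0, 0.0, 0.0), 1.0)
        root.refine(near_origin)
        print("leaf cells:", root.leaves())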

  3. Finite element simulation of articular contact mechanics with quadratic tetrahedral elements.

    PubMed

    Maas, Steve A; Ellis, Benjamin J; Rawlins, David S; Weiss, Jeffrey A

    2016-03-21

    Although it is easier to generate finite element discretizations with tetrahedral elements, trilinear hexahedral (HEX8) elements are more often used in simulations of articular contact mechanics. This is due to numerical shortcomings of linear tetrahedral (TET4) elements, limited availability of quadratic tetrahedron elements in combination with effective contact algorithms, and the perceived increased computational expense of quadratic finite elements. In this study we implemented both ten-node (TET10) and fifteen-node (TET15) quadratic tetrahedral elements in FEBio (www.febio.org) and compared their accuracy, robustness in terms of convergence behavior and computational cost for simulations relevant to articular contact mechanics. Suitable volume integration and surface integration rules were determined by comparing the results of several benchmark contact problems. The results demonstrated that the surface integration rule used to evaluate the contact integrals for quadratic elements affected both convergence behavior and accuracy of predicted stresses. The computational expense and robustness of both quadratic tetrahedral formulations compared favorably to the HEX8 models. Of note, the TET15 element demonstrated superior convergence behavior and lower computational cost than both the TET10 and HEX8 elements for meshes with similar numbers of degrees of freedom in the contact problems that we examined. Finally, the excellent accuracy and relative efficiency of these quadratic tetrahedral elements was illustrated by comparing their predictions with those for a HEX8 mesh for simulation of articular contact in a fully validated model of the hip. These results demonstrate that TET10 and TET15 elements provide viable alternatives to HEX8 elements for simulation of articular contact mechanics. Copyright © 2016 Elsevier Ltd. All rights reserved.

  4. Towards Application of NASA Standard for Models and Simulations in Aeronautical Design Process

    NASA Astrophysics Data System (ADS)

    Vincent, Luc; Dunyach, Jean-Claude; Huet, Sandrine; Pelissier, Guillaume; Merlet, Joseph

    2012-08-01

    Even powerful computational techniques like simulation have limited validity domains. Consequently, using simulation models requires caution to avoid making biased design decisions for new aeronautical products on the basis of inadequate simulation results. The fidelity, accuracy and validity of simulation models shall therefore be monitored in context all along the design phases to build confidence that the goals of modelling and simulation are achieved. In the CRESCENDO project, we adapt the Credibility Assessment Scale method from the NASA standard for models and simulations, originally developed for the space programme, to aircraft design in order to assess the quality of simulations. The proposed eight quality assurance metrics aggregate information to indicate the level of confidence in results. They are displayed in a management dashboard and can secure design trade-off decisions at programme milestones. The application of this technique is illustrated in an aircraft design context with a specific thermal Finite Element Analysis. This use case shows how to judge the fitness-for-purpose of simulation as a virtual testing means and then green-light the continuation of the Simulation Lifecycle Management (SLM) process.

  5. Runway exit designs for capacity improvement demonstrations. Phase 2: Computer model development

    NASA Technical Reports Server (NTRS)

    Trani, A. A.; Hobeika, A. G.; Kim, B. J.; Nunna, V.; Zhong, C.

    1992-01-01

    The development of a computer simulation/optimization model is described to: (1) estimate the optimal locations of existing and proposed runway turnoffs; and (2) estimate the geometric design requirements associated with newly developed high-speed turnoffs. The model described, named REDIM 2.0, is a stand-alone application to be used by airport planners, designers, and researchers alike to estimate optimal turnoff locations. The main procedures implemented in the software package are described in detail, and possible applications are illustrated using six major runway scenarios. The main output of the computer program is the estimated weighted average runway occupancy time for a user-defined aircraft population. The location and geometric characteristics of each turnoff are also provided to the user.
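
    The model's main output, the weighted average runway occupancy time, can be written schematically as follows (the symbols are assumed notation for illustration, not the report's):

    ```latex
    % Weighted average runway occupancy time for a user-defined mix:
    % p_i = share of aircraft class i in the population, T_i = its expected
    % occupancy time given candidate turnoff locations l_1 .. l_n.
    \bar{T}_{\mathrm{ROT}} \;=\; \sum_{i} p_i \, T_i(\ell_1, \dots, \ell_n)
    ```

    REDIM's optimization then amounts to choosing the turnoff locations that minimize this population-weighted average.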

  6. On multigrid methods for the Navier-Stokes Computer

    NASA Technical Reports Server (NTRS)

    Nosenchuck, D. M.; Krist, S. E.; Zang, T. A.

    1988-01-01

    The overall architecture of the multipurpose parallel-processing Navier-Stokes Computer (NSC) being developed by Princeton and NASA Langley (Nosenchuck et al., 1986) is described and illustrated with extensive diagrams, and the NSC implementation of an elementary multigrid algorithm for simulating isotropic turbulence (based on solution of the incompressible time-dependent Navier-Stokes equations with constant viscosity) is characterized in detail. The present NSC design concept calls for 64 nodes, each with the performance of a class VI supercomputer, linked together by a fiber-optic hypercube network and joined to a front-end computer by a global bus. In this configuration, the NSC would have a storage capacity of over 32 Gword and a peak speed of over 40 Gflops. The multigrid Navier-Stokes code discussed would give sustained operation rates of about 25 Gflops.

  7. Ceramic matrix composite behavior -- Computational simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chamis, C.C.; Murthy, P.L.N.; Mital, S.K.

    Development of analytical modeling and computational capabilities for the prediction of high temperature ceramic matrix composite behavior has been an ongoing research activity at NASA-Lewis Research Center. These research activities have resulted in the development of micromechanics based methodologies to evaluate different aspects of ceramic matrix composite behavior. The basis of the approach is micromechanics together with a unique fiber substructuring concept. In this new concept the conventional unit cell (the smallest representative volume element of the composite) of micromechanics approach has been modified by substructuring the unit cell into several slices and developing the micromechanics based equations at themore » slice level. Main advantage of this technique is that it can provide a much greater detail in the response of composite behavior as compared to a conventional micromechanics based analysis and still maintains a very high computational efficiency. This methodology has recently been extended to model plain weave ceramic composites. The objective of the present paper is to describe the important features of the modeling and simulation and illustrate with select examples of laminated as well as woven composites.« less

  8. An experiment on the use of disposable plastics as a reinforcement in concrete beams

    NASA Technical Reports Server (NTRS)

    Chowdhury, Mostafiz R.

    1992-01-01

    Illustrated here is the concept of reinforced concrete structures by the use of computer simulation and an inexpensive hands-on design experiment. The students in our construction management program use disposable plastic as a reinforcement to demonstrate their understanding of reinforced concrete and prestressed concrete beams. The plastics used for such an experiment vary from plastic bottles to steel reinforced auto tires. This experiment will show the extent to which plastic reinforcement increases the strength of a concrete beam. The procedure of using such throw-away plastics in an experiment to explain the interaction between the reinforcement material and concrete, and a comparison of the test results for using different types of waste plastics are discussed. A computer analysis to simulate the structural response is used to compare the test results and to understand the analytical background of reinforced concrete design. This interaction of using computers to analyze structures and to relate the output results with real experimentation is found to be a very useful method for teaching a math-based analytical subject to our non-engineering students.

  9. A PDE Sensitivity Equation Method for Optimal Aerodynamic Design

    NASA Technical Reports Server (NTRS)

    Borggaard, Jeff; Burns, John

    1996-01-01

    The use of gradient based optimization algorithms in inverse design is well established as a practical approach to aerodynamic design. A typical procedure uses a simulation scheme to evaluate the objective function (from the approximate states) and its gradient, then passes this information to an optimization algorithm. Once the simulation scheme (CFD flow solver) has been selected and used to provide approximate function evaluations, there are several possible approaches to the problem of computing gradients. One popular method is to differentiate the simulation scheme and compute design sensitivities that are then used to obtain gradients. Although this black-box approach has many advantages in shape optimization problems, one must compute mesh sensitivities in order to compute the design sensitivity. In this paper, we present an alternative approach using the PDE sensitivity equation to develop algorithms for computing gradients. This approach has the advantage that mesh sensitivities need not be computed. Moreover, when it is possible to use the CFD scheme for both the forward problem and the sensitivity equation, then there are computational advantages. An apparent disadvantage of this approach is that it does not always produce consistent derivatives. However, for a proper combination of discretization schemes, one can show asymptotic consistency under mesh refinement, which is often sufficient to guarantee convergence of the optimal design algorithm. In particular, we show that when asymptotically consistent schemes are combined with a trust-region optimization algorithm, the resulting optimal design method converges. We denote this approach as the sensitivity equation method. The sensitivity equation method is presented, convergence results are given and the approach is illustrated on two optimal design problems involving shocks.

  10. epiDMS: Data Management and Analytics for Decision-Making From Epidemic Spread Simulation Ensembles.

    PubMed

    Liu, Sicong; Poccia, Silvestro; Candan, K Selçuk; Chowell, Gerardo; Sapino, Maria Luisa

    2016-12-01

    Carefully calibrated large-scale computational models of epidemic spread represent a powerful tool to support the decision-making process during epidemic emergencies. Epidemic models are being increasingly used for generating forecasts of the spatial-temporal progression of epidemics at different spatial scales and for assessing the likely impact of different intervention strategies. However, the management and analysis of simulation ensembles stemming from large-scale computational models pose challenges, particularly when dealing with multiple interdependent parameters, spanning multiple layers and geospatial frames, affected by complex dynamic processes operating at different resolutions. We describe and illustrate with examples a novel epidemic simulation data management system, epiDMS, that was developed to address the challenges that arise from the need to generate, search, visualize, and analyze, in a scalable manner, large volumes of epidemic simulation ensembles and observations during the progression of an epidemic. epiDMS is a publicly available system that facilitates management and analysis of large epidemic simulation ensembles. epiDMS aims to fill an important hole in decision-making during healthcare emergencies by enabling critical services with significant economic and health impact. © The Author 2016. Published by Oxford University Press for the Infectious Diseases Society of America. All rights reserved. For permissions, e-mail journals.permissions@oup.com.

  11. 4D pressure MRI: validation through in-vitro experiments and simulations

    NASA Astrophysics Data System (ADS)

    Schiavazzi, Daniele; Amili, Omid; Coletti, Filippo

    2017-11-01

    Advances in MRI scan technology and recently developed acquisition sequences have led to the development of 4D flow MRI, a protocol capable of characterizing in-vivo hemodynamics in patients. Thus, the availability of phase-averaged time-resolved three-dimensional blood velocities has opened new opportunities for computing a wide spectrum of totally non-invasive hemodynamic indicators. In this regard, relative pressures play a particularly important role, as they are routinely employed in the clinic to detect cardiovascular abnormalities (e.g., in peripheral artery disease, valve stenosis, hypertension, etc.). In the first part of the talk, we discuss how the relative pressures can be robustly computed through the solution of a pressure Poisson equation and how noise in the velocities affects their estimate. Routine clinical application of these techniques requires, however, thorough validation on multiple patients/anatomies and systematic comparison with in-vitro and simulated representations. Thus, the second part of the talk illustrates the use of numerical simulation and in-vitro experimental protocols to validate these indicators with reference to aortic and cerebral vascular anatomies.
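
    For reference, relative pressure recovery from 4D flow data commonly starts from the pressure Poisson equation below, obtained by taking the divergence of the incompressible Navier-Stokes momentum equation; this is the standard formulation, not necessarily the authors' exact discretization.

    ```latex
    % Pressure Poisson equation for an incompressible Newtonian fluid with
    % density rho and dynamic viscosity mu; v is the measured velocity field.
    \nabla^{2} p \;=\; \nabla \cdot \left[\, -\rho \left( \frac{\partial \mathbf{v}}{\partial t}
        + (\mathbf{v} \cdot \nabla)\,\mathbf{v} \right) + \mu \nabla^{2} \mathbf{v} \,\right]
    ```

    Because the right-hand side is built from derivatives of the measured velocities, acquisition noise is amplified, which is why the noise analysis discussed in the talk matters.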

  12. Crystallographic Lattice Boltzmann Method

    PubMed Central

    Namburi, Manjusha; Krithivasan, Siddharth; Ansumali, Santosh

    2016-01-01

    Current approaches to Direct Numerical Simulation (DNS) are computationally quite expensive for most realistic scientific and engineering applications of Fluid Dynamics such as automobiles or atmospheric flows. The Lattice Boltzmann Method (LBM), with its simplified kinetic descriptions, has emerged as an important tool for simulating hydrodynamics. In a heterogeneous computing environment, it is often preferred due to its flexibility and better parallel scaling. However, direct simulation of realistic applications, without the use of turbulence models, remains a distant dream even with highly efficient methods such as LBM. In LBM, a fictitious lattice with suitable isotropy in the velocity space is considered to recover Navier-Stokes hydrodynamics in the macroscopic limit. The same lattice is mapped onto a Cartesian grid for spatial discretization of the kinetic equation. In this paper, we present an inverted argument of the LBM, by making spatial discretization the central theme. We argue that the optimal spatial discretization for LBM is a Body Centered Cubic (BCC) arrangement of grid points. We illustrate an order-of-magnitude gain in efficiency for LBM and thus significant progress towards the feasibility of DNS for realistic flows. PMID:27251098
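
    The geometric point is easy to state in code: a body-centered cubic point set is just two interleaved simple-cubic lattices, the second offset by half a cell diagonal. A minimal numpy sketch (illustrative, not the authors' implementation):

    ```python
    # Sketch: a BCC point set is two interleaved simple-cubic lattices,
    # the second offset by half the cell diagonal.
    import numpy as np

    def bcc_grid(n, spacing=1.0):
        """Return BCC lattice sites filling an n x n x n box."""
        ax = np.arange(n) * spacing
        corner = np.stack(np.meshgrid(ax, ax, ax, indexing="ij"),
                          axis=-1).reshape(-1, 3)
        center = corner + 0.5 * spacing          # body-centre copy
        return np.concatenate([corner, center])  # 2 n^3 sites in total

    pts = bcc_grid(8)
    print(pts.shape)  # (1024, 3)
    ```

    The BCC lattice is known to be the optimal sampling arrangement for isotropic band-limited functions in 3D, which underlies the reported efficiency gain.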

  13. Accelerating the design of solar thermal fuel materials through high throughput simulations.

    PubMed

    Liu, Yun; Grossman, Jeffrey C

    2014-12-10

    Solar thermal fuels (STF) store the energy of sunlight, which can then be released later in the form of heat, offering an emission-free and renewable solution for both solar energy conversion and storage. However, this approach is currently limited by the lack of low-cost materials with high energy density and high stability. In this Letter, we present an ab initio high-throughput computational approach to accelerate the design process and allow for searches over a broad class of materials. The high-throughput screening platform we have developed can run through large numbers of molecules composed of earth-abundant elements and identifies possible metastable structures of a given material. Corresponding isomerization enthalpies associated with the metastable structures are then computed. Using this high-throughput simulation approach, we have discovered molecular structures with high isomerization enthalpies that have the potential to be new candidates for high-energy density STF. We have also discovered physical principles to guide further STF materials design through structural analysis. More broadly, our results illustrate the potential of using high-throughput ab initio simulations to design materials that undergo targeted structural transitions.

  14. Enzymatic Kinetic Isotope Effects from Path-Integral Free Energy Perturbation Theory.

    PubMed

    Gao, J

    2016-01-01

    Path-integral free energy perturbation (PI-FEP) theory is presented to directly determine the ratio of quantum mechanical partition functions of different isotopologs in a single simulation. Furthermore, a double averaging strategy is used to carry out the practical simulation, separating the quantum mechanical path integral exactly into two separate calculations, one corresponding to a classical molecular dynamics simulation of the centroid coordinates, and another involving free-particle path-integral sampling over the classical, centroid positions. An integrated centroid path-integral free energy perturbation and umbrella sampling (PI-FEP/UM, or simply PI-FEP) method, along with bisection sampling, is summarized; it provides an accurate and rapidly convergent method for computing kinetic isotope effects for chemical reactions in solution and in enzymes. The PI-FEP method is illustrated by a number of applications that highlight its computational precision and accuracy, the rule of the geometric mean in kinetic isotope effects, enhanced nuclear quantum effects in enzyme catalysis, and the influence of protein dynamics on the temperature dependence of kinetic isotope effects. © 2016 Elsevier Inc. All rights reserved.
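
    Schematically, the single-simulation ratio rests on the free energy perturbation identity between isotopologs, shown below in its standard Zwanzig form; the paper's double averaging over centroid and free-particle paths is not reproduced here.

    ```latex
    % FEP identity between light (L) and heavy (H) isotopologs, and the
    % kinetic isotope effect from the resulting free energy differences.
    \frac{Q_{\mathrm{H}}}{Q_{\mathrm{L}}}
      = \left\langle e^{-\beta\,(U_{\mathrm{H}} - U_{\mathrm{L}})} \right\rangle_{\mathrm{L}},
    \qquad
    \mathrm{KIE} = \frac{k_{\mathrm{L}}}{k_{\mathrm{H}}}
      = e^{-\beta\left(\Delta F^{\ddagger}_{\mathrm{L}} - \Delta F^{\ddagger}_{\mathrm{H}}\right)}
    ```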

  15. Life's attractors : understanding developmental systems through reverse engineering and in silico evolution.

    PubMed

    Jaeger, Johannes; Crombach, Anton

    2012-01-01

    We propose an approach to evolutionary systems biology which is based on reverse engineering of gene regulatory networks and in silico evolutionary simulations. We infer regulatory parameters for gene networks by fitting computational models to quantitative expression data. This allows us to characterize the regulatory structure and dynamical repertoire of evolving gene regulatory networks with a reasonable amount of experimental and computational effort. We use the resulting network models to identify those regulatory interactions that are conserved, and those that have diverged between different species. Moreover, we use the models obtained by data fitting as starting points for simulations of evolutionary transitions between species. These simulations enable us to investigate whether such transitions are random, or whether they show stereotypical series of regulatory changes which depend on the structure and dynamical repertoire of an evolving network. Finally, we present a case study-the gap gene network in dipterans (flies, midges, and mosquitoes)-to illustrate the practical application of the proposed methodology, and to highlight the kind of biological insights that can be gained by this approach.

  16. User's manual for a parameter identification technique. [with options for model simulation for fixed input forcing functions and identification from wind tunnel and flight measurements

    NASA Technical Reports Server (NTRS)

    Kanning, G.

    1975-01-01

    A digital computer program written in FORTRAN is presented that implements the system identification theory for deterministic systems using input-output measurements. The user supplies programs simulating the mathematical model of the physical plant whose parameters are to be identified. The user may choose any one of three options. The first option allows for a complete model simulation for fixed input forcing functions. The second option identifies up to 36 parameters of the model from wind tunnel or flight measurements. The third option performs a sensitivity analysis for up to 36 parameters. The use of each option is illustrated with an example using input-output measurements for a helicopter rotor tested in a wind tunnel.

  17. ATK-ForceField: a new generation molecular dynamics software package

    NASA Astrophysics Data System (ADS)

    Schneider, Julian; Hamaekers, Jan; Chill, Samuel T.; Smidstrup, Søren; Bulin, Johannes; Thesen, Ralph; Blom, Anders; Stokbro, Kurt

    2017-12-01

    ATK-ForceField is a software package for atomistic simulations using classical interatomic potentials. It is implemented as a part of the Atomistix ToolKit (ATK), which is a Python programming environment that makes it easy to create and analyze both standard and highly customized simulations. This paper will focus on the atomic interaction potentials, molecular dynamics, and geometry optimization features of the software; however, many more advanced modeling features are available. The implementation details of these algorithms and their computational performance will be shown. We present three illustrative examples of the types of calculations that are possible with ATK-ForceField: modeling thermal transport properties in a silicon germanium crystal, vapor deposition of selenium molecules on a selenium surface, and a simulation of creep in a copper polycrystal.

  18. Coding considerations for standalone molecular dynamics simulations of atomistic structures

    NASA Astrophysics Data System (ADS)

    Ocaya, R. O.; Terblans, J. J.

    2017-10-01

    The laws of Newtonian mechanics allow ab-initio molecular dynamics to model and simulate particle trajectories in materials science by defining a differentiable potential function. This paper discusses some considerations for the coding of ab-initio programs for simulation on a standalone computer and illustrates the approach with C language codes in the context of embedded metallic atoms in the face-centred cubic structure. The algorithms use velocity-time integration to determine particle parameter evolution for up to several thousand particles in a thermodynamical ensemble. Such functions are reusable and can be placed in a redistributable header library file. While there are both commercial and free packages available, their heuristic nature prevents dissection. In addition, developing one's own codes has the obvious advantage of teaching techniques applicable to new problems.
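
    The velocity-time integration the paper codes in C is, at its core, the velocity Verlet scheme. A compact Python sketch for a small Lennard-Jones cluster follows (an illustrative stand-in for the paper's embedded-atom potential and C implementation):

    ```python
    # Velocity Verlet integration of a small Lennard-Jones cluster.
    # A pair potential stands in for the paper's embedded-atom method.
    import numpy as np

    def lj_forces(pos, eps=1.0, sigma=1.0):
        """Pairwise Lennard-Jones forces (no cutoff, no periodic box)."""
        d = pos[:, None, :] - pos[None, :, :]           # r_i - r_j
        r2 = (d ** 2).sum(-1)
        np.fill_diagonal(r2, np.inf)                    # no self-interaction
        inv6 = (sigma ** 2 / r2) ** 3
        fmag = 24.0 * eps * (2.0 * inv6 ** 2 - inv6) / r2
        return (fmag[..., None] * d).sum(axis=1)

    def velocity_verlet(pos, vel, dt=1e-3, steps=2000, mass=1.0):
        f = lj_forces(pos)
        for _ in range(steps):
            vel += 0.5 * dt * f / mass     # half kick
            pos += dt * vel                # drift
            f = lj_forces(pos)             # forces at new positions
            vel += 0.5 * dt * f / mass     # second half kick
        return pos, vel

    # eight atoms on a loose cube, relaxing toward a compact cluster
    pos = np.array([[i, j, k] for i in range(2)
                    for j in range(2) for k in range(2)], float) * 1.2
    vel = np.zeros_like(pos)
    pos, vel = velocity_verlet(pos, vel)
    print("kinetic energy:", 0.5 * (vel ** 2).sum())
    ```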

  19. MPPhys—A many-particle simulation package for computational physics education

    NASA Astrophysics Data System (ADS)

    Müller, Thomas

    2014-03-01

    In a first course to classical mechanics elementary physical processes like elastic two-body collisions, the mass-spring model, or the gravitational two-body problem are discussed in detail. The continuation to many-body systems, however, is deferred to graduate courses although the underlying equations of motion are essentially the same and although there is a strong motivation for high-school students in particular because of the use of particle systems in computer games. The missing link between the simple and the more complex problem is a basic introduction to solve the equations of motion numerically which could be illustrated, however, by means of the Euler method. The many-particle physics simulation package MPPhys offers a platform to experiment with simple particle simulations. The aim is to give a principle idea how to implement many-particle simulations and how simulation and visualization can be combined for interactive visual explorations. Catalogue identifier: AERR_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AERR_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 111327 No. of bytes in distributed program, including test data, etc.: 608411 Distribution format: tar.gz Programming language: C++, OpenGL, GLSL, OpenCL. Computer: Linux and Windows platforms with OpenGL support. Operating system: Linux and Windows. RAM: Source Code 4.5 MB Complete package 242 MB Classification: 14, 16.9. External routines: OpenGL, OpenCL Nature of problem: Integrate N-body simulations, mass-spring models Solution method: Numerical integration of N-body-simulations, 3D-Rendering via OpenGL. Running time: Problem dependent

  20. Simulation methods to estimate design power: an overview for applied research.

    PubMed

    Arnold, Benjamin F; Hogan, Daniel R; Colford, John M; Hubbard, Alan E

    2011-06-20

    Estimating the required sample size and statistical power for a study is an integral part of study design. For standard designs, power equations provide an efficient solution to the problem, but they are unavailable for many complex study designs that arise in practice. For such complex study designs, computer simulation is a useful alternative for estimating study power. Although this approach is well known among statisticians, in our experience many epidemiologists and social scientists are unfamiliar with the technique. This article aims to address this knowledge gap. We review an approach to estimate study power for individual- or cluster-randomized designs using computer simulation. This flexible approach arises naturally from the model used to derive conventional power equations, but extends those methods to accommodate arbitrarily complex designs. The method is universally applicable to a broad range of designs and outcomes, and we present the material in a way that is approachable for quantitative, applied researchers. We illustrate the method using two examples (one simple, one complex) based on sanitation and nutritional interventions to improve child growth. We first show how simulation reproduces conventional power estimates for simple randomized designs over a broad range of sample scenarios to familiarize the reader with the approach. We then demonstrate how to extend the simulation approach to more complex designs. Finally, we discuss extensions to the examples in the article, and provide computer code to efficiently run the example simulations in both R and Stata. Simulation methods offer a flexible option to estimate statistical power for standard and non-traditional study designs and parameters of interest. The approach we have described is universally applicable for evaluating study designs used in epidemiologic and social science research.
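
    The article supplies code in R and Stata; the flavor of the approach fits in a few lines of Python as well. All parameters below (effect size, SD, alpha, sample sizes) are illustrative assumptions, not values from the article:

    ```python
    # Minimal Monte Carlo power estimate for a two-arm individually
    # randomized trial with a continuous outcome, analysed with a
    # two-sample t-test. Parameters are illustrative assumptions.
    import numpy as np
    from scipy import stats

    def simulated_power(n_per_arm, effect=0.3, sd=1.0,
                        alpha=0.05, reps=2000, seed=1):
        rng = np.random.default_rng(seed)
        hits = 0
        for _ in range(reps):
            control = rng.normal(0.0, sd, n_per_arm)
            treated = rng.normal(effect, sd, n_per_arm)
            p = stats.ttest_ind(treated, control).pvalue
            hits += p < alpha  # count rejections of the null
        return hits / reps    # empirical power

    for n in (100, 175, 250):
        print(n, simulated_power(n))
    ```

    The same loop extends to cluster-randomized designs by adding a cluster-level random effect to the data-generating step and analysing accordingly, which is how the approach reaches designs without closed-form power equations.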

  1. Simulation methods to estimate design power: an overview for applied research

    PubMed Central

    2011-01-01

    Background: Estimating the required sample size and statistical power for a study is an integral part of study design. For standard designs, power equations provide an efficient solution to the problem, but they are unavailable for many complex study designs that arise in practice. For such complex study designs, computer simulation is a useful alternative for estimating study power. Although this approach is well known among statisticians, in our experience many epidemiologists and social scientists are unfamiliar with the technique. This article aims to address this knowledge gap.
    Methods: We review an approach to estimate study power for individual- or cluster-randomized designs using computer simulation. This flexible approach arises naturally from the model used to derive conventional power equations, but extends those methods to accommodate arbitrarily complex designs. The method is universally applicable to a broad range of designs and outcomes, and we present the material in a way that is approachable for quantitative, applied researchers. We illustrate the method using two examples (one simple, one complex) based on sanitation and nutritional interventions to improve child growth.
    Results: We first show how simulation reproduces conventional power estimates for simple randomized designs over a broad range of sample scenarios to familiarize the reader with the approach. We then demonstrate how to extend the simulation approach to more complex designs. Finally, we discuss extensions to the examples in the article, and provide computer code to efficiently run the example simulations in both R and Stata.
    Conclusions: Simulation methods offer a flexible option to estimate statistical power for standard and non-traditional study designs and parameters of interest. The approach we have described is universally applicable for evaluating study designs used in epidemiologic and social science research. PMID:21689447

  2. Can one trust quantum simulators?

    NASA Astrophysics Data System (ADS)

    Hauke, Philipp; Cucchietti, Fernando M.; Tagliacozzo, Luca; Deutsch, Ivan; Lewenstein, Maciej

    2012-08-01

    Various fundamental phenomena of strongly correlated quantum systems such as high-Tc superconductivity, the fractional quantum-Hall effect and quark confinement are still awaiting a universally accepted explanation. The main obstacle is the computational complexity of solving even the most simplified theoretical models which are designed to capture the relevant quantum correlations of the many-body system of interest. In his seminal 1982 paper (Feynman 1982 Int. J. Theor. Phys. 21 467), Richard Feynman suggested that such models might be solved by ‘simulation’ with a new type of computer whose constituent parts are effectively governed by a desired quantum many-body dynamics. Measurements on this engineered machine, now known as a ‘quantum simulator,’ would reveal some unknown or difficult to compute properties of a model of interest. We argue that a useful quantum simulator must satisfy four conditions: relevance, controllability, reliability and efficiency. We review the current state of the art of digital and analog quantum simulators. Whereas so far the majority of the focus, both theoretically and experimentally, has been on controllability of relevant models, we emphasize here the need for a careful analysis of reliability and efficiency in the presence of imperfections. We discuss how disorder and noise can impact these conditions, and illustrate our concerns with novel numerical simulations of a paradigmatic example: a disordered quantum spin chain governed by the Ising model in a transverse magnetic field. We find that disorder can decrease the reliability of an analog quantum simulator of this model, although large errors in local observables are introduced only for strong levels of disorder. We conclude that the answer to the question ‘Can we trust quantum simulators?’ is … to some extent.

  3. Metabolic flexibility of mitochondrial respiratory chain disorders predicted by computer modelling.

    PubMed

    Zieliński, Łukasz P; Smith, Anthony C; Smith, Alexander G; Robinson, Alan J

    2016-11-01

    Mitochondrial respiratory chain dysfunction causes a variety of life-threatening diseases affecting about 1 in 4300 adults. These diseases are genetically heterogeneous, but have the same outcome: reduced activity of mitochondrial respiratory chain complexes causing decreased ATP production and potentially toxic accumulation of metabolites. Severity and tissue specificity of these effects varies between patients by unknown mechanisms and treatment options are limited. So far, most research has focused on the complexes themselves, and the impact on overall cellular metabolism is largely unclear. To illustrate how computer modelling can be used to better understand the potential impact of these disorders and inspire new research directions and treatments, we simulated them using a computer model of human cardiomyocyte mitochondrial metabolism containing over 300 characterised reactions and transport steps with experimental parameters taken from the literature. Overall, simulations were consistent with patient symptoms, supporting their biological and medical significance. These simulations predicted: complex I deficiencies could be compensated using multiple pathways; complex II deficiencies had less metabolic flexibility due to impacting both the TCA cycle and the respiratory chain; and complex III and IV deficiencies caused greatest decreases in ATP production with metabolic consequences that parallel hypoxia. Our study demonstrates how results from computer models can be compared to a clinical phenotype and used as a tool for hypothesis generation for subsequent experimental testing. These simulations can enhance understanding of dysfunctional mitochondrial metabolism and suggest new avenues for research into treatment of mitochondrial disease and other areas of mitochondrial dysfunction. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.

  4. MD Simulations of P-Type ATPases in a Lipid Bilayer System.

    PubMed

    Autzen, Henriette Elisabeth; Musgaard, Maria

    2016-01-01

    Molecular dynamics (MD) simulation is a computational method which provides insight on protein dynamics with high resolution in both space and time, in contrast to many experimental techniques. MD simulations can be used as a stand-alone method to study P-type ATPases as well as a complementary method aiding experimental studies. In particular, MD simulations have proved valuable in generating and confirming hypotheses relating to the structure and function of P-type ATPases. In the following, we describe a detailed practical procedure for setting up and running an MD simulation of a P-type ATPase embedded in a lipid bilayer using software that is free for academic use. We emphasize general considerations and problems typically encountered when setting up simulations. While full coverage of all possible procedures is beyond the scope of this chapter, we have chosen to illustrate the MD procedure with the Nanoscale Molecular Dynamics (NAMD) and Visual Molecular Dynamics (VMD) software suites.

  5. The free jet as a simulator of forward velocity effects on jet noise

    NASA Technical Reports Server (NTRS)

    Ahuja, K. K.; Tester, B. J.; Tanna, H. K.

    1978-01-01

    A thorough theoretical and experimental study of the effects of the free-jet shear layer on the transmission of sound from a model jet placed within the free jet to the far-field receiver located outside the free-jet flow was conducted. The validity and accuracy of the free-jet flight simulation technique for forward velocity effects on jet noise was evaluated. Transformation charts and a systematic computational procedure for converting measurements from a free-jet simulation to the corresponding results from a wind-tunnel simulation, and, finally, to the flight case were provided. The effects of simulated forward flight on jet mixing noise, internal noise and shock-associated noise from model-scale unheated and heated jets were established experimentally in a free-jet facility. It was illustrated that the existing anomalies between full-scale flight data and model-scale flight simulation data projected to the flight case, could well be due to the contamination of flight data by engine internal noise.

  6. Lower bound on the time complexity of local adiabatic evolution

    NASA Astrophysics Data System (ADS)

    Chen, Zhenghao; Koh, Pang Wei; Zhao, Yan

    2006-11-01

    The adiabatic theorem of quantum physics has been, in recent times, utilized in the design of local search quantum algorithms, and has been proven to be equivalent to standard quantum computation, that is, the use of unitary operators [D. Aharonov in Proceedings of the 45th Annual Symposium on the Foundations of Computer Science, 2004, Rome, Italy (IEEE Computer Society Press, New York, 2004), pp. 42-51]. Hence, the study of the time complexity of adiabatic evolution algorithms gives insight into the computational power of quantum algorithms. In this paper, we present two different approaches of evaluating the time complexity for local adiabatic evolution using time-independent parameters, thus providing effective tests (not requiring the evaluation of the entire time-dependent gap function) for the time complexity of newly developed algorithms. We further illustrate our tests by displaying results from the numerical simulation of some problems, viz. specially modified instances of the Hamming weight problem.
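
    For context, the textbook adiabatic criterion from which such runtime analyses start is shown below; the paper's tests are designed precisely to avoid evaluating the full time-dependent gap function g(s) that appears here.

    ```latex
    % Standard adiabatic runtime condition for H(s), s = t/T, with
    % instantaneous eigenstates |E_0(s)>, |E_1(s)> and spectral gap g(s).
    T \;\gg\; \frac{\max_{0 \le s \le 1}
          \left| \langle E_1(s) | \tfrac{dH}{ds} | E_0(s) \rangle \right|}
         {\min_{0 \le s \le 1} \, g(s)^{2}}
    ```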

  7. Information Processing Capacity of Dynamical Systems

    NASA Astrophysics Data System (ADS)

    Dambre, Joni; Verstraeten, David; Schrauwen, Benjamin; Massar, Serge

    2012-07-01

    Many dynamical systems, both natural and artificial, are stimulated by time dependent external signals, somehow processing the information contained therein. We demonstrate how to quantify the different modes in which information can be processed by such systems and combine them to define the computational capacity of a dynamical system. This is bounded by the number of linearly independent state variables of the dynamical system, equaling it if the system obeys the fading memory condition. It can be interpreted as the total number of linearly independent functions of its stimuli the system can compute. Our theory combines concepts from machine learning (reservoir computing), system modeling, stochastic processes, and functional analysis. We illustrate our theory by numerical simulations for the logistic map, a recurrent neural network, and a two-dimensional reaction diffusion system, uncovering universal trade-offs between the non-linearity of the computation and the system's short-term memory.
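
    One concrete slice of this capacity measure is the classical linear memory capacity, which the following numpy sketch estimates for a small echo-state network by regressing delayed inputs from the network states and summing the squared correlations. Network size and parameters are illustrative, not the paper's.

    ```python
    # Sketch: linear short-term memory capacity of a small echo-state
    # network, one slice of the "information processing capacity" idea.
    import numpy as np

    rng = np.random.default_rng(0)
    N, T, warmup = 100, 10_000, 200
    W = rng.normal(0, 1, (N, N))
    W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius 0.9
    w_in = rng.uniform(-0.5, 0.5, N)

    u = rng.uniform(-1, 1, T)                # i.i.d. input signal
    x = np.zeros((T, N))
    for t in range(1, T):
        x[t] = np.tanh(W @ x[t - 1] + w_in * u[t])

    X = x[warmup:]
    capacity = 0.0
    for delay in range(1, 40):
        y = u[warmup - delay:T - delay]      # target: input delayed by `delay`
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        yhat = X @ coef
        capacity += np.corrcoef(y, yhat)[0, 1] ** 2
    print(f"linear memory capacity ~ {capacity:.1f} (bounded by N = {N})")
    ```

    Summing such squared correlations over a complete basis of target functions, not just delayed identities, yields the total capacity that the paper shows is bounded by the number of linearly independent state variables.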

  8. A Computational and Experimental Study of Slit Resonators

    NASA Technical Reports Server (NTRS)

    Tam, C. K. W.; Ju, H.; Jones, M. G.; Watson, W. R.; Parrott, T. L.

    2003-01-01

    Computational and experimental studies are carried out to offer validation of the results obtained from direct numerical simulation (DNS) of the flow and acoustic fields of slit resonators. The test cases include slits with 90-degree corners and slits with 45-degree bevel angle housed inside an acoustic impedance tube. Three slit widths are used. Six frequencies from 0.5 to 3.0 kHz are chosen. Good agreement is found between computed and measured reflection factors. In addition, incident sound waves having white noise spectrum and a prescribed pseudo-random noise spectrum are used in subsequent series of tests. The computed broadband results are again found to agree well with experimental data. It is believed the present results provide strong support that DNS can eventually be a useful and accurate prediction tool for liner aeroacoustics. The usage of DNS as a design tool is discussed and illustrated by a simple example.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wong, Raymond K. W.; Storlie, Curtis Byron; Lee, Thomas C. M.

    The paper considers the computer model calibration problem and provides a general frequentist solution. Under the framework proposed, the data model is semiparametric with a non-parametric discrepancy function which accounts for any discrepancy between physical reality and the computer model. In an attempt to solve a fundamentally important (but often ignored) identifiability issue between the computer model parameters and the discrepancy function, the paper proposes a new and identifiable parameterization of the calibration problem. It also develops a two-step procedure for estimating all the relevant quantities under the new parameterization. This estimation procedure is shown to enjoy excellent rates ofmore » convergence and can be straightforwardly implemented with existing software. For uncertainty quantification, bootstrapping is adopted to construct confidence regions for the quantities of interest. As a result, the practical performance of the methodology is illustrated through simulation examples and an application to a computational fluid dynamics model.« less

  10. Information Processing Capacity of Dynamical Systems

    PubMed Central

    Dambre, Joni; Verstraeten, David; Schrauwen, Benjamin; Massar, Serge

    2012-01-01

    Many dynamical systems, both natural and artificial, are stimulated by time dependent external signals, somehow processing the information contained therein. We demonstrate how to quantify the different modes in which information can be processed by such systems and combine them to define the computational capacity of a dynamical system. This is bounded by the number of linearly independent state variables of the dynamical system, equaling it if the system obeys the fading memory condition. It can be interpreted as the total number of linearly independent functions of its stimuli the system can compute. Our theory combines concepts from machine learning (reservoir computing), system modeling, stochastic processes, and functional analysis. We illustrate our theory by numerical simulations for the logistic map, a recurrent neural network, and a two-dimensional reaction diffusion system, uncovering universal trade-offs between the non-linearity of the computation and the system's short-term memory. PMID:22816038

  11. The ability of non-computer tasks to increase biomechanical exposure variability in computer-intensive office work.

    PubMed

    Barbieri, Dechristian França; Srinivasan, Divya; Mathiassen, Svend Erik; Nogueira, Helen Cristina; Oliveira, Ana Beatriz

    2015-01-01

    Postures and muscle activity in the upper body were recorded from 50 academic office workers during two hours of normal work, categorised by observation into computer work (CW) and three non-computer (NC) tasks (NC seated work, NC standing/walking work, and breaks). NC tasks differed significantly in exposures from CW, with standing/walking NC tasks representing the largest contrasts for most of the exposure variables. For the majority of workers, exposure variability was larger in their present job than in CW alone, as measured by the job variance ratio (JVR), i.e. the ratio between minute-to-minute variabilities in the job and in CW. Calculations of JVRs for simulated jobs containing different proportions of CW showed that variability could, indeed, be increased by redistributing available tasks, but that substantial increases could only be achieved by introducing more vigorous tasks into the job, illustrated here by cleaning.

  12. Dynamics of Numerics & Spurious Behaviors in CFD Computations. Revised

    NASA Technical Reports Server (NTRS)

    Yee, Helen C.; Sweby, Peter K.

    1997-01-01

    The global nonlinear behavior of finite discretizations for constant time steps and fixed or adaptive grid spacings is studied using tools from dynamical systems theory. Detailed analysis of commonly used temporal and spatial discretizations for simple model problems is presented. The role of dynamics in the understanding of long time behavior of numerical integration and the nonlinear stability, convergence, and reliability of using time-marching approaches for obtaining steady-state numerical solutions in computational fluid dynamics (CFD) is explored. The study is complemented with examples of spurious behavior observed in steady and unsteady CFD computations. The CFD examples were chosen to illustrate non-apparent spurious behavior that was difficult to detect without extensive grid and temporal refinement studies and some knowledge from dynamical systems theory. Studies revealed the various possible dangers of misinterpreting numerical simulation of realistic complex flows that are constrained by available computing power. In large scale computations where the physics of the problem under study is not well understood and numerical simulations are the only viable means of solution, extreme care must be taken in both computation and interpretation of the numerical data. The goal of this paper is to explore the important role that dynamical systems theory can play in the understanding of the global nonlinear behavior of numerical algorithms and to aid the identification of the sources of numerical uncertainties in CFD.
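
    A minimal example of the phenomenon the paper studies: explicit Euler applied to the logistic ODE u' = u(1 - u) converges to the true steady state u = 1 for small time steps, but for dt > 2 the discrete map bifurcates to spurious period-2 and then chaotic iterates that are artifacts of the numerics, not solutions of the ODE. This is an illustrative sketch, not one of the paper's CFD cases.

    ```python
    # Sketch: spurious dynamics of explicit Euler on u' = u(1 - u).
    # The fixed point u = 1 of the discrete map loses stability at dt = 2,
    # after which the "steady-state" iteration never settles.
    def euler_attractor(dt, n_steps=2000, keep=8, u0=0.5):
        u = u0
        for _ in range(n_steps):          # discard the transient
            u = u + dt * u * (1.0 - u)
        tail = []
        for _ in range(keep):             # record the long-time behavior
            u = u + dt * u * (1.0 - u)
            tail.append(round(u, 4))
        return tail

    for dt in (0.5, 1.5, 2.2, 2.5):
        print(f"dt={dt}: long-time iterates {euler_attractor(dt)}")
    ```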

  13. From biological neural networks to thinking machines: Transitioning biological organizational principles to computer technology

    NASA Technical Reports Server (NTRS)

    Ross, Muriel D.

    1991-01-01

    The three-dimensional organization of the vestibular macula is under study by computer assisted reconstruction and simulation methods as a model for more complex neural systems. One goal of this research is to transition knowledge of biological neural network architecture and functioning to computer technology, to contribute to the development of thinking computers. Maculas are organized as weighted neural networks for parallel distributed processing of information. The network is characterized by non-linearity of its terminal/receptive fields. Wiring appears to develop through constrained randomness. A further property is the presence of two main circuits, highly channeled and distributed modifying, that are connected through feedforward-feedback collaterals and a biasing subcircuit. Computer simulations demonstrate that differences in the geometry of the feedback (afferent) collaterals affect the timing and the magnitude of voltage changes delivered to the spike initiation zone. Feedforward (efferent) collaterals act as voltage followers and likely inhibit neurons of the distributed modifying circuit. These results illustrate the importance of feedforward-feedback loops, of timing, and of inhibition in refining neural network output. They also suggest that it is the distributed modifying network that is most involved in adaptation, memory, and learning. Tests of macular adaptation, through hyper- and microgravitational studies, support this hypothesis since synapses in the distributed modifying circuit, but not the channeled circuit, are altered. Transitioning knowledge of biological systems to computer technology, however, remains problematical.

  14. Simulating Thin Sheets: Buckling, Wrinkling, Folding and Growth

    NASA Astrophysics Data System (ADS)

    Vetter, Roman; Stoop, Norbert; Wittel, Falk K.; Herrmann, Hans J.

    2014-03-01

    Numerical simulations of thin sheets undergoing large deformations are computationally challenging. Depending on the scenario, they may spontaneously buckle, wrinkle, fold, or crumple. Nature's thin tissues often experience significant anisotropic growth, which can act as the driving force for such instabilities. We use a recently developed finite element model to simulate the rich variety of nonlinear responses of Kirchhoff-Love sheets. The model uses subdivision surface shape functions in order to guarantee convergence of the method, and to allow a finite element description of anisotropically growing sheets in the classical Rayleigh-Ritz formalism. We illustrate the great potential in this approach by simulating the inflation of airbags, the buckling of a stretched cylinder, as well as the formation and scaling of wrinkles at free boundaries of growing sheets. Finally, we compare the folding of spatially confined sheets subject to growth and shrinking confinement to find that the two processes are equivalent.

  15. Simulation of miniature endplate potentials in neuromuscular junctions by using a cellular automaton

    NASA Astrophysics Data System (ADS)

    Avella, Oscar Javier; Muñoz, José Daniel; Fayad, Ramón

    2008-01-01

    Miniature endplate potentials are recorded in the neuromuscular junction when the acetylcholine contents of one or a few synaptic vesicles are spontaneously released into the synaptic cleft. Since their discovery by Fatt and Katz in 1952, they have been among the paradigms in neuroscience. Those potentials are usually simulated by means of numerical approaches, such as Brownian dynamics, finite differences and finite element methods. Here we propose that diffusion cellular automata can be a useful alternative for investigating them. To illustrate this point, we simulate a miniature endplate potential by using experimental parameters. Our model reproduces the potential shape, amplitude and time course. Since our automaton is able to track the history and interactions of each single particle, it is very easy to introduce non-linear effects with little computational effort. This makes cellular automata excellent candidates for simulating biological reaction-diffusion processes, where no other external forces are involved.
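
    The flavor of such a diffusion automaton fits in a short numpy sketch: particles released at the presynaptic side take unbiased lattice steps each tick and are absorbed on first contact with the receptor row, producing a MEPP-like rise and decay in the binding rate. Grid size and particle counts are illustrative, not the paper's parameters.

    ```python
    # Sketch of a diffusion cellular automaton for transmitter particles
    # crossing a 2D synaptic cleft; parameters are illustrative only.
    import numpy as np

    rng = np.random.default_rng(0)
    WIDTH, HEIGHT, N, TICKS = 50, 20, 5000, 400
    xs = rng.integers(0, WIDTH, N)        # release sites spread along x
    ys = np.zeros(N, dtype=int)           # all start at presynaptic side
    alive = np.ones(N, dtype=bool)
    arrivals = np.zeros(TICKS, dtype=int)

    moves = np.array([(1, 0), (-1, 0), (0, 1), (0, -1)])
    for t in range(TICKS):
        step = moves[rng.integers(0, 4, N)]
        xs = (xs + step[:, 0]) % WIDTH            # periodic in x
        ys = np.clip(ys + step[:, 1], 0, HEIGHT)  # reflecting/absorbing in y
        hit = alive & (ys == HEIGHT)              # reached receptor row
        arrivals[t] = hit.sum()
        alive &= ~hit                             # absorbed on binding

    print("peak binding rate at tick", arrivals.argmax())
    ```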

  16. Master of Puppets: Cooperative Multitasking for In Situ Processing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morozov, Dmitriy; Lukic, Zarija

    2016-01-01

    Modern scientific and engineering simulations track the time evolution of billions of elements. For such large runs, storing most time steps for later analysis is not a viable strategy. It is far more efficient to analyze the simulation data while it is still in memory. Here, we present a novel design for running multiple codes in situ: using coroutines and position-independent executables we enable cooperative multitasking between simulation and analysis, allowing the same executables to post-process simulation output, as well as to process it on the fly, both in situ and in transit. We present Henson, an implementation of our design, and illustrate its versatility by tackling analysis tasks with different computational requirements. This design differs significantly from the existing frameworks and offers an efficient and robust approach to integrating multiple codes on modern supercomputers. The techniques we present can also be integrated into other in situ frameworks.

  17. Henson v1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morozov, Dmitriy; Lukic, Zarija

    2016-04-01

    Modern scientific and engineering simulations track the time evolution of billions of elements. For such large runs, storing most time steps for later analysis is not a viable strategy. It is far more efficient to analyze the simulation data while it is still in memory. The developers present a novel design for running multiple codes in situ: using coroutines and position-independent executables they enable cooperative multitasking between simulation and analysis, allowing the same executables to post-process simulation output, as well as to process it on the fly, both in situ and in transit. They present Henson, an implementation of their design, and illustrate its versatility by tackling analysis tasks with different computational requirements. Their design differs significantly from the existing frameworks and offers an efficient and robust approach to integrating multiple codes on modern supercomputers. The presented techniques can also be integrated into other in situ frameworks.

  18. ARC-2012-ACD12-0020-005

    NASA Image and Video Library

    2012-02-10

    Then and Now: These images illustrate the dramatic improvement in NASA computing power over the last 23 years, and its effect on the number of grid points used for flow simulations. At left, an image from the first full-body Navier-Stokes simulation (1988) of an F-16 fighter jet showing pressure on the aircraft body, and fore-body streamlines at Mach 0.90. This steady-state solution took 25 hours using a single Cray X-MP processor to solve the 500,000 grid-point problem. Investigator: Neal Chaderjian, NASA Ames Research Center At right, a 2011 snapshot from a Navier-Stokes simulation of a V-22 Osprey rotorcraft in hover. The blade vortices interact with the smaller turbulent structures. This very detailed simulation used 660 million grid points, and ran on 1536 processors on the Pleiades supercomputer for 180 hours. Investigator: Neal Chaderjian, NASA Ames Research Center; Image: Tim Sandstrom, NASA Ames Research Center

  19. The new program OPAL for molecular dynamics simulations and energy refinements of biological macromolecules.

    PubMed

    Luginbühl, P; Güntert, P; Billeter, M; Wüthrich, K

    1996-09-01

    A new program for molecular dynamics (MD) simulation and energy refinement of biological macromolecules, OPAL, is introduced. Combined with the supporting program TRAJEC for the analysis of MD trajectories, OPAL affords high efficiency and flexibility for work with different force fields, and offers a user-friendly interface and extensive trajectory analysis capabilities. Salient features are computational speeds of up to 1.5 GFlops on vector supercomputers such as the NEC SX-3, ellipsoidal boundaries to reduce the system size for studies in explicit solvents, and natural treatment of the hydrostatic pressure. Practical applications of OPAL are illustrated with MD simulations of pure water, energy minimization of the NMR structure of the mixed disulfide of a mutant E. coli glutaredoxin with glutathione in different solvent models, and MD simulations of a small protein, pheromone Er-2, using either instantaneous or time-averaged NMR restraints, or no restraints.

  20. Earth System Modeling 2.0: A Blueprint for Models That Learn From Observations and Targeted High-Resolution Simulations

    NASA Astrophysics Data System (ADS)

    Schneider, Tapio; Lan, Shiwei; Stuart, Andrew; Teixeira, João.

    2017-12-01

    Climate projections continue to be marred by large uncertainties, which originate in processes that need to be parameterized, such as clouds, convection, and ecosystems. But rapid progress is now within reach. New computational tools and methods from data assimilation and machine learning make it possible to integrate global observations and local high-resolution simulations in an Earth system model (ESM) that systematically learns from both and quantifies uncertainties. Here we propose a blueprint for such an ESM. We outline how parameterization schemes can learn from global observations and targeted high-resolution simulations, for example, of clouds and convection, through matching low-order statistics between ESMs, observations, and high-resolution simulations. We illustrate learning algorithms for ESMs with a simple dynamical system that shares characteristics of the climate system; and we discuss the opportunities the proposed framework presents and the challenges that remain to realize it.
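
    The core idea of learning from statistics rather than trajectories can be demonstrated on a toy system in a few lines: recover a parameter of the chaotic Lorenz-63 model by matching a time-mean statistic instead of fitting the unpredictable trajectory itself. This is an illustrative sketch in the spirit of the paper's simple dynamical system, not its actual algorithm.

    ```python
    # Sketch of statistics matching: recover rho in Lorenz-63 by
    # minimizing the mismatch in a low-order statistic (time-mean of z)
    # between a "truth" run and candidate runs.
    import numpy as np

    def lorenz_stats(rho, T=40.0, dt=0.01, sigma=10.0, beta=8 / 3):
        """Time-mean of z over a Lorenz-63 run (crude Euler integration)."""
        x, y, z = 1.0, 1.0, 1.0
        zs = []
        for _ in range(int(T / dt)):
            dx = sigma * (y - x)
            dy = x * (rho - z) - y
            dz = x * y - beta * z
            x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
            zs.append(z)
        return np.mean(zs[len(zs) // 2:])    # discard spin-up half

    target = lorenz_stats(rho=28.0)          # stands in for "observations"
    grid = np.arange(24.0, 32.0, 0.5)
    errs = [abs(lorenz_stats(r) - target) for r in grid]
    print("recovered rho ~", grid[int(np.argmin(errs))])
    ```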

  1. AMPS data management concepts. [Atmospheric, Magnetospheric and Plasma in Space experiment

    NASA Technical Reports Server (NTRS)

    Metzelaar, P. N.

    1975-01-01

    Five typical AMPS experiments were formulated to allow simulation studies to verify data management concepts. Design studies were conducted to analyze these experiments in terms of the applicable procedures, data processing and displaying functions. Design concepts for the AMPS data management system are presented which permit both automatic repetitive measurement sequences and experimenter-controlled step-by-step procedures. Extensive use is made of a cathode ray tube display, the experimenters' alphanumeric keyboard, and the computer. The types of computer software required by the system and the possible choices of control and display procedures available to the experimenter are described for several examples. An electromagnetic wave transmission experiment illustrates the methods used to analyze data processing requirements.

  2. Automated event generation for loop-induced processes

    DOE PAGES

    Hirschi, Valentin; Mattelaer, Olivier

    2015-10-22

    We present the first fully automated implementation of cross-section computation and event generation for loop-induced processes. This work is integrated in the MadGraph5_aMC@NLO framework. We describe the optimisations implemented at the level of the matrix element evaluation, phase space integration and event generation, allowing for the simulation of large multiplicity loop-induced processes. Along with some selected differential observables, we illustrate our results with a table showing inclusive cross-sections for all loop-induced hadronic scattering processes with up to three final states in the SM as well as for some relevant 2 → 4 processes. Furthermore, many of these are computed here for the first time.

  3. Evolvable social agents for bacterial systems modeling.

    PubMed

    Paton, Ray; Gregory, Richard; Vlachos, Costas; Saunders, Jon; Wu, Henry

    2004-09-01

    We present two approaches to the individual-based modeling (IbM) of bacterial ecologies and evolution using computational tools. The IbM approach is introduced, and its important complementary role to biosystems modeling is discussed. A fine-grained model of bacterial evolution is then presented that is based on networks of interactivity between computational objects representing genes and proteins. This is followed by a coarser grained agent-based model, which is designed to explore the evolvability of adaptive behavioral strategies in artificial bacteria represented by learning classifier systems. The structure and implementation of the two proposed individual-based bacterial models are discussed, and some results from simulation experiments are presented, illustrating their adaptive properties.

  4. Simulation-based Testing of Control Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ozmen, Ozgur; Nutaro, James J.; Sanyal, Jibonananda

    It is impossible to adequately test complex software by examining its operation in a physical prototype of the system monitored. Adequate test coverage can require millions of test cases, and the cost of equipment prototypes combined with the real-time constraints of testing with them makes it infeasible to sample more than a small number of these tests. Model-based testing seeks to avoid this problem by allowing for large numbers of relatively inexpensive virtual prototypes that operate in simulation time at a speed limited only by the available computing resources. In this report, we describe how a computer system emulator can be used as part of a model-based testing environment; specifically, we show that a complete software stack, including operating system and application software, can be deployed within a simulated environment, and that these simulations can proceed as fast as possible. To illustrate this approach to model-based testing, we describe how it is being used to test several building control systems that act to coordinate air conditioning loads for the purpose of reducing peak demand. These tests involve the use of ADEVS (A Discrete Event System Simulator) and QEMU (Quick Emulator) to host the operational software within the simulation, and a building model developed with the MODELICA programming language using the Buildings Library and packaged as an FMU (Functional Mock-up Unit) that serves as the virtual test environment.
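
    A toy version of the workflow, with the emulator and FMU machinery stripped away: control logic is exercised against a cheap virtual plant over many randomized scenarios, with a requirement asserted in each. Everything here (plant model, thresholds, scenario ranges) is an illustrative assumption, not the report's setup.

    ```python
    # Sketch of model-based testing: a trivial thermostat controller is
    # run against a first-order room model over randomized scenarios,
    # asserting a temperature requirement in each virtual prototype.
    import random

    def controller(temp, setpoint=22.0, band=0.5):
        """Bang-bang cooling decision the 'software under test' makes."""
        return temp > setpoint + band   # True = air conditioning on

    def run_scenario(outdoor, t0, steps=2880, dt=30.0):
        temp, worst = t0, t0
        for _ in range(steps):
            cooling = controller(temp)
            # first-order thermal model: leakage toward outdoor, AC pulls down
            dT = 0.001 * (outdoor - temp) - (0.02 if cooling else 0.0)
            temp += dT * dt
            worst = max(worst, temp)
        return worst

    random.seed(7)
    for case in range(1000):            # cheap virtual prototypes
        worst = run_scenario(outdoor=random.uniform(25, 40),
                             t0=random.uniform(18, 30))
        assert worst <= 30.5, f"requirement violated in case {case}"
    print("1000 randomized scenarios passed")
    ```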

  5. Qubit Architecture with High Coherence and Fast Tunable Coupling.

    PubMed

    Chen, Yu; Neill, C; Roushan, P; Leung, N; Fang, M; Barends, R; Kelly, J; Campbell, B; Chen, Z; Chiaro, B; Dunsworth, A; Jeffrey, E; Megrant, A; Mutus, J Y; O'Malley, P J J; Quintana, C M; Sank, D; Vainsencher, A; Wenner, J; White, T C; Geller, Michael R; Cleland, A N; Martinis, John M

    2014-11-28

    We introduce a superconducting qubit architecture that combines high-coherence qubits and tunable qubit-qubit coupling. With the ability to set the coupling to zero, we demonstrate that this architecture is protected from the frequency crowding problems that arise from fixed coupling. More importantly, the coupling can be tuned dynamically with nanosecond resolution, making this architecture a versatile platform with applications ranging from quantum logic gates to quantum simulation. We illustrate the advantages of dynamical coupling by implementing a novel adiabatic controlled-z gate, with a speed approaching that of single-qubit gates. Integrating coherence and scalable control, the introduced qubit architecture provides a promising path towards large-scale quantum computation and simulation.

  6. Simulation of polymer translocation through protein channels

    PubMed Central

    Muthukumar, M.; Kong, C. Y.

    2006-01-01

    A modeling algorithm is presented to compute simultaneously polymer conformations and ionic current, as single polymer molecules undergo translocation through protein channels. The method is based on a combination of Langevin dynamics for coarse-grained models of polymers and the Poisson–Nernst–Planck formalism for ionic current. For the illustrative example of ssDNA passing through the α-hemolysin pore, vivid details of conformational fluctuations of the polymer inside the vestibule and β-barrel compartments of the protein pore, and their consequent effects on the translocation time and extent of blocked ionic current are presented. In addition to yielding insights into several experimentally reported puzzles, our simulations offer experimental strategies to sequence polymers more efficiently. PMID:16567657
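
    The polymer half of such a scheme rests on overdamped Langevin dynamics for a coarse-grained bead-spring chain. The following is a self-contained sketch under simplified assumptions (harmonic bonds only, unit friction and temperature, and no excluded volume, pore geometry, or Poisson-Nernst-Planck coupling), not the authors' actual code.

        import numpy as np

        # One overdamped Langevin step for a bead-spring chain:
        #   dr = (F / zeta) * dt + sqrt(2 * kT * dt / zeta) * N(0, 1)
        # Harmonic bonds only; constants are illustrative.
        def langevin_step(r, dt=1e-3, zeta=1.0, kT=1.0, k_spring=100.0, b0=1.0):
            F = np.zeros_like(r)
            bond = r[1:] - r[:-1]
            d = np.linalg.norm(bond, axis=1, keepdims=True)
            f = k_spring * (d - b0) * bond / d   # tension along each bond
            F[:-1] += f                          # pulls bead i toward bead i+1
            F[1:] -= f                           # and bead i+1 toward bead i
            noise = np.sqrt(2.0 * kT * dt / zeta) * np.random.standard_normal(r.shape)
            return r + F / zeta * dt + noise

        chain = np.cumsum(np.ones((20, 3)), axis=0)   # straight 20-bead chain
        for _ in range(1000):
            chain = langevin_step(chain)
        print("end-to-end distance:", np.linalg.norm(chain[-1] - chain[0]))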

  7. Adaptive-Grid Methods for Phase Field Models of Microstructure Development

    NASA Technical Reports Server (NTRS)

    Provatas, Nikolas; Goldenfeld, Nigel; Dantzig, Jonathan A.

    1999-01-01

    In this work we show how the phase field model can be solved in a computationally efficient manner that opens a new large-scale simulational window on solidification physics. Our method uses a finite element, adaptive-grid formulation, and exploits the fact that the phase and temperature fields vary significantly only near the interface. We illustrate how our method allows efficient simulation of phase-field models in very large systems, and verify the predictions of solvability theory at intermediate undercooling. We then present new results at low undercoolings that suggest that solvability theory may not give the correct tip speed in that regime. We model solidification using the phase-field model of Karma and Rappel.

  8. Investigation of BPF algorithm in cone-beam CT with 2D general trajectories.

    PubMed

    Zou, Jing; Gui, Jianbao; Rong, Junyan; Hu, Zhanli; Zhang, Qiyang; Xia, Dan

    2012-01-01

    A mathematical derivation was conducted to illustrate that exact 3D image reconstruction can be achieved for z-homogeneous phantoms from data acquired with 2D general trajectories using the back-projection filtration (BPF) algorithm. The conclusion was verified by computer simulation and by experimental results with a circular scanning trajectory. Furthermore, the effect of the degree of non-uniformity along the z-axis of the phantoms on the accuracy of 3D reconstruction by the BPF algorithm was investigated by numerical simulation with a gradual phantom and a disk phantom. The preliminary results showed that the performance of the BPF algorithm improves with the z-axis homogeneity of the scanned object.

  9. Integrating surrogate models into subsurface simulation framework allows computation of complex reactive transport scenarios

    NASA Astrophysics Data System (ADS)

    De Lucia, Marco; Kempka, Thomas; Jatnieks, Janis; Kühn, Michael

    2017-04-01

    Reactive transport simulations - where geochemical reactions are coupled with the hydrodynamic transport of reactants - are extremely time consuming and suffer from significant numerical issues. Given the high uncertainties inherently associated with the geochemical models, which also constitute the major computational bottleneck, such computational demands may seem inappropriate and probably constitute the main limitation to the wide application of these simulations. A promising way to ease and speed up such coupled simulations is to employ statistical surrogates instead of "full-physics" geochemical models [1]. Data-driven surrogates are reduced models trained on a set of pre-calculated "full physics" simulations, capturing their principal features while being extremely fast to compute. Model reduction of course comes at the price of a precision loss; however, this appears justified in the presence of large uncertainties regarding the parametrization of geochemical processes. This contribution illustrates the integration of surrogates into the flexible simulation framework currently being developed by the authors' research group [2]. The high-level language of choice for obtaining and dealing with surrogate models is R, which profits from state-of-the-art methods for the statistical analysis of large simulation ensembles. A stand-alone advective mass transport module was furthermore developed in order to add this capability to any multiphase finite volume hydrodynamic simulator within the simulation framework. We present 2D and 3D case studies benchmarking the performance of surrogates and "full physics" chemistry in scenarios pertaining to the assessment of geological subsurface utilization. [1] Jatnieks, J., De Lucia, M., Dransch, D., Sips, M.: "Data-driven surrogate model approach for improving the performance of reactive transport simulations.", Energy Procedia 97, 2016, p. 447-453. [2] Kempka, T., Nakaten, B., De Lucia, M., Nakaten, N., Otto, C., Pohl, M., Chabab [Tillner], E., Kühn, M.: "Flexible Simulation Framework to Couple Processes in Complex 3D Models for Subsurface Utilization Assessment.", Energy Procedia, 97, 2016, p. 494-501.
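
    The surrogate idea can be miniaturized as follows. This Python/scikit-learn sketch (the authors work in R; the toy full_physics function and all numbers are invented stand-ins for a real geochemical solver) trains a regression surrogate on a pre-computed ensemble and then answers queries at negligible cost.

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        # Toy stand-in for an expensive geochemical solver (invented formula).
        def full_physics(c, T):
            return c * np.exp(-0.5 * T) + 0.01 * c**2

        rng = np.random.default_rng(0)
        X = rng.uniform([0.0, 0.0], [1.0, 4.0], size=(500, 2))  # (conc, temperature)
        y = full_physics(X[:, 0], X[:, 1])                      # ensemble of runs

        # Train the data-driven surrogate on the pre-computed ensemble.
        surrogate = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

        # Inside a coupled transport loop, the surrogate replaces the solver call.
        query = np.array([[0.3, 2.0]])
        print(surrogate.predict(query)[0], full_physics(0.3, 2.0))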

  10. Identification of gene regulation models from single-cell data

    NASA Astrophysics Data System (ADS)

    Weber, Lisa; Raymond, William; Munsky, Brian

    2018-09-01

    In quantitative analyses of biological processes, one may use many different scales of models (e.g. spatial or non-spatial, deterministic or stochastic, time-varying or at steady-state) or many different approaches to match models to experimental data (e.g. model fitting or parameter uncertainty/sloppiness quantification with different experiment designs). These different analyses can lead to surprisingly different results, even when applied to the same data and the same model. We use a simplified gene regulation model to illustrate many of these concerns, especially for ODE analyses of deterministic processes, chemical master equation and finite state projection analyses of heterogeneous processes, and stochastic simulations. For each analysis, we employ MATLAB and PYTHON software to consider a time-dependent input signal (e.g. a kinase nuclear translocation) and several model hypotheses, along with simulated single-cell data. We illustrate different approaches (e.g. deterministic and stochastic) to identify the mechanisms and parameters of the same model from the same simulated data. For each approach, we explore how uncertainty in parameter space varies with respect to the chosen analysis approach or specific experiment design. We conclude with a discussion of how our simulated results relate to the integration of experimental and computational investigations to explore signal-activated gene expression models in yeast (Neuert et al 2013 Science 339 584–7) and human cells (Senecal et al 2014 Cell Rep. 8 75–83).
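
    One of the stochastic analyses mentioned, the Gillespie stochastic simulation algorithm, fits in a few lines. A minimal sketch for a birth-death mRNA model; the rates k and gamma are illustrative, not the paper's.

        import numpy as np

        # Gillespie SSA for a minimal gene-expression model: mRNA produced at
        # rate k and degraded at rate gamma * m. Rates are illustrative only.
        def ssa(k=10.0, gamma=1.0, t_end=20.0, seed=1):
            rng = np.random.default_rng(seed)
            t, m = 0.0, 0
            times, counts = [0.0], [0]
            while t < t_end:
                rates = np.array([k, gamma * m])          # production, degradation
                total = rates.sum()
                t += rng.exponential(1.0 / total)         # time to next event
                m += 1 if rng.random() < rates[0] / total else -1
                times.append(t)
                counts.append(m)
            return np.array(times), np.array(counts)

        times, counts = ssa()
        print("mean copy number:", counts.mean())  # ~ k/gamma at steady state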

  11. Simulated Seasonal Spatio-Temporal Patterns of Soil Moisture, Temperature, and Net Radiation in a Deciduous Forest

    NASA Technical Reports Server (NTRS)

    Ballard, Jerrell R., Jr.; Howington, Stacy E.; Cinnella, Pasquale; Smith, James A.

    2011-01-01

    The temperature and moisture regimes in a forest are key components of forest ecosystem dynamics. Observations and studies indicate that the internal temperature distribution and moisture content of a tree influence not only growth and development, but also the onset and cessation of cambial activity [1] and resistance to insect predation [2], and even affect the population dynamics of the insects [3]. Moreover, temperature directly affects the uptake and metabolism of pollutants from the soil into the tree tissue [4]. Additional studies show that soil and atmospheric temperatures are significant parameters that limit the growth of trees and impose treeline elevation limits [5]. Directional thermal infrared radiance effects have long been observed in natural backgrounds [6]. In earlier work, we illustrated the use of physically-based models to simulate directional effects in thermal imaging [7-8]. In this paper, we illustrate the use of physically-based models to simulate directional effects in thermal imaging and net radiation in a deciduous forest using our recently developed three-dimensional, macro-scale computational tool that simulates the heat and mass transfer interactions in a soil-root-stem system (SRSS). The SRSS model couples existing heat and mass transport tools to simulate the diurnal internal and external temperatures, internal fluid flow and moisture distribution, and heat flow in the system.

  12. A basis for solid modeling of gear teeth with application in design and manufacture

    NASA Technical Reports Server (NTRS)

    Huston, Ronald L.; Mavriplis, Dimitrios; Oswald, Fred B.; Liu, Yung Sheng

    1992-01-01

    A new approach to modeling gear tooth surfaces is discussed. A computer graphics solid modeling procedure is used to simulate the tooth fabrication process. This procedure is based on the principles of differential geometry that pertain to envelopes of curves and surfaces. The procedure is illustrated with the modeling of spur, helical, bevel, spiral bevel, and hypoid gear teeth. Applications in design and manufacturing are discussed. Extensions to nonstandard tooth forms, to cams, and to rolling element bearings are proposed.

  13. A Basis for Solid Modeling of Gear Teeth with Application in Design and Manufacture

    NASA Technical Reports Server (NTRS)

    Huston, Ronald L.; Mavriplis, Dimitrios; Oswald, Fred B.; Liu, Yung Sheng

    1994-01-01

    This paper discusses a new approach to modeling gear tooth surfaces. A computer graphics solid modeling procedure is used to simulate the tooth fabrication processes. This procedure is based on the principles of differential geometry that pertain to envelopes of curves and surfaces. The procedure is illustrated with the modeling of spur, helical, bevel, spiral bevel and hypoid gear teeth. Applications in design and manufacturing are discussed. Extensions to nonstandard tooth forms, to cams, and to rolling element bearings are proposed.

  14. A digital controller for variable thrust liquid rocket engines

    NASA Astrophysics Data System (ADS)

    Feng, X.; Zhang, Y. L.; Chen, Q. Z.

    1993-06-01

    The paper describes the design and development of a built-in digital controller (BDC) for the variable thrust liquid rocket engine (VTLRE). Particular attention is given to the functional requirements of the BDC, the hardware and software configuration, and the testing process, as well as to the VTLRE real-time computer simulation system used for the development of the BDC. A diagram of the VTLRE control system is presented, along with block diagrams illustrating the hardware and software configuration of the BDC.

  15. In silico cancer modeling: is it ready for primetime?

    PubMed Central

    Deisboeck, Thomas S; Zhang, Le; Yoon, Jeongah; Costa, Jose

    2011-01-01

    At the dawn of the era of personalized, systems-driven medicine, computational or in silico modeling and the simulation of disease processes is becoming increasingly important for hypothesis generation and data integration in experimental and clinical settings alike. Arguably, this is nowhere more visible than in oncology. To illustrate the field's vast potential as well as its current limitations, we briefly review selected works on modeling malignant brain tumors. Implications for clinical practice, including trial design and outcome prediction, are also discussed. PMID:18852721

  16. Building a Simulation Toolkit for Wireless Mesh Clusters and Evaluating the Suitability of Different Families of Ad Hoc Protocols for the Tactical Network Topology

    DTIC Science & Technology

    2005-03-01

    International Conference on Computers, Communications and Networks, 153-161, Lafayette, LA. Deitel, H.M. and P.J. Deitel. 2003. C++ How to Program. ... of this study is to provide an additional performance evaluation technique for the TNT program of the Naval Postgraduate School. The current approach ... case are the PAMAS and DBTMA protocols. Toh (2002) illustrates how these approaches succeed in solving the problem. In order to address all the ...

  17. Computational model of a vector-mediated epidemic

    NASA Astrophysics Data System (ADS)

    Dickman, Adriana Gomes; Dickman, Ronald

    2015-05-01

    We discuss a lattice model of vector-mediated transmission of a disease to illustrate how simulations can be applied in epidemiology. The population consists of two species, human hosts and vectors, which contract the disease from one another. Hosts are sedentary, while vectors (mosquitoes) diffuse in space. Examples of such diseases are malaria, dengue fever, and Pierce's disease in vineyards. The model exhibits a phase transition between an absorbing (infection free) phase and an active one as parameters such as infection rates and vector density are varied.
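
    A toy version of such a lattice model is easy to write down. In this hedged sketch, sedentary hosts occupy an L x L grid while vectors hop to neighboring sites and exchange infection with the host at their location; all rates are invented for illustration and are not the paper's.

        import numpy as np

        # Sedentary hosts on an L x L grid; vectors hop to neighboring sites
        # and exchange infection with the host at their location. All rates
        # are invented for illustration.
        rng = np.random.default_rng(0)
        L, n_vec, steps = 32, 200, 2000
        host_inf = np.zeros((L, L), dtype=bool)
        host_inf[L // 2, L // 2] = True                     # seed the epidemic
        vec_pos = rng.integers(0, L, size=(n_vec, 2))
        vec_inf = np.zeros(n_vec, dtype=bool)

        for _ in range(steps):
            vec_pos = (vec_pos + rng.integers(-1, 2, size=(n_vec, 2))) % L
            x, y = vec_pos[:, 0], vec_pos[:, 1]
            catch = (~vec_inf) & host_inf[x, y] & (rng.random(n_vec) < 0.3)
            give = vec_inf & (rng.random(n_vec) < 0.3)
            vec_inf |= catch                                # vector picks it up
            host_inf[x[give], y[give]] = True               # host gets infected
            host_inf &= rng.random((L, L)) > 0.01           # host recovery
            vec_inf &= rng.random(n_vec) > 0.05             # vector recovery

        print("infected hosts:", host_inf.sum(), "infected vectors:", vec_inf.sum())

    Sweeping the transmission probabilities or the vector density in such a model exposes the absorbing-state phase transition described above.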

  18. Telescoping Mechanics: A New Paradigm for Composite Behavior Simulation

    NASA Technical Reports Server (NTRS)

    Chamis, C. C.; Murthy, P. L. N.; Gotsis, P. K.; Mital, S. K.

    2004-01-01

    This report reviews the application of telescoping mechanics to composites using recursive laminate theory. The elemental scale is the fiber-matrix slice, whose behavior propagates up to the laminate scale. Results from applications to typical, hybrid, and smart composites and to composite-enhanced reinforced concrete structures illustrate the versatility and generality of telescoping scale mechanics. Comparisons with approximate, single-cell, and two- and three-dimensional finite-element methods demonstrate the accuracy and computational effectiveness of telescoping scale mechanics for predicting complex composite behavior.

  19. Computer modeling of test particle acceleration at oblique shocks

    NASA Technical Reports Server (NTRS)

    Decker, Robert B.

    1988-01-01

    The present evaluation of the basic techniques and illustrative results of charged-particle-modeling numerical codes suitable for particle acceleration at oblique, fast-mode collisionless shocks emphasizes the treatment of ions as test particles, calculating particle dynamics through numerical integration along exact phase-space orbits. Attention is given to the acceleration of particles at planar, infinitesimally thin shocks, as well as to plasma simulations in which low-energy ions are injected and accelerated at quasi-perpendicular shocks with internal structure.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ryu, Jun Hyung; Lee, Soo bin; Hodge, Bri-Mathias

    The energy systems of the process industry face an unprecedented challenge: renewable energy sources should be incorporated, yet no single source can meet the industry's large and highly reliable energy demand. This paper investigates a simulation framework to compute the capacity of multiple energy sources, including solar, wind power, diesel, and batteries. The framework involves generating actual renewable energy supply and demand profiles and matching supply with demand. Eight configurations of different supply options are evaluated to illustrate the applicability of the proposed framework, with some concluding remarks.

  1. Theoretical and experimental analysis of the impacts of removable storage media and antivirus software on viral spread

    NASA Astrophysics Data System (ADS)

    Gan, Chenquan; Yang, Xiaofan

    2015-05-01

    In this paper, a new computer virus propagation model, which incorporates the effects of removable storage media and antivirus software, is proposed and analyzed. The unique equilibrium of the model is globally stable, independent of the system parameters. Numerical simulations not only verify this result, but also illustrate the influence of removable storage media and antivirus software on viral spread. On this basis, some applicable measures for suppressing virus prevalence are suggested.
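
    The flavor of such compartment models can be illustrated with a generic SLBS-type system. This is not the paper's exact set of equations; beta, eps, r, and theta are invented stand-ins for the infection, breaking-out, cure (antivirus), and reinstallation/removable-media rates.

        import numpy as np
        from scipy.integrate import solve_ivp

        # Generic SLBS-type fractions: S susceptible, L latent, B breaking-out.
        # beta: infection, eps: latent -> breaking-out, r: cure (antivirus),
        # theta: return to susceptible (reinstallation). Invented equations.
        def slbs(t, y, beta=0.6, eps=0.3, r=0.4, theta=0.05):
            S, Lat, B = y
            dS = -beta * S * (Lat + B) + r * B + theta * (Lat + B)
            dL = beta * S * (Lat + B) - (eps + theta) * Lat
            dB = eps * Lat - (r + theta) * B
            return [dS, dL, dB]

        sol = solve_ivp(slbs, (0.0, 100.0), [0.99, 0.01, 0.0])
        print("final fractions (S, L, B):", sol.y[:, -1])

    Raising r or theta in this sketch shrinks the endemic level, mirroring the qualitative role antivirus software and reinstallation play in the model above.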

  2. GM(1,N) method for the prediction of anaerobic digestion system and sensitivity analysis of influential factors.

    PubMed

    Ren, Jingzheng

    2018-01-01

    The anaerobic digestion process has been recognized as a promising way to treat waste and recover energy sustainably. Modelling of the anaerobic digestion system is significantly important for effectively and accurately controlling, adjusting, and predicting the system for higher methane yield. The GM(1,N) approach, which requires neither a mechanistic description nor a large number of samples, was employed to model the anaerobic digestion system and predict methane yield. To illustrate the proposed model, an illustrative case of anaerobic digestion of municipal solid waste for methane yield was studied; the results demonstrate that the GM(1,N) model can effectively simulate the anaerobic digestion system in cases of poor information, with little computational expense.
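
    The grey-model mechanics are easiest to see in the univariate GM(1,1) relative of GM(1,N), which adds N-1 driving series to the right-hand side. A sketch with made-up yield data:

        import numpy as np

        # Univariate GM(1,1): accumulate the series, fit the grey parameters
        # (a, b) by least squares, predict on the accumulated scale, then
        # difference back. The data are made up.
        def gm11(x0, horizon=3):
            x1 = np.cumsum(x0)                        # accumulated series
            z1 = 0.5 * (x1[1:] + x1[:-1])             # background values
            B = np.column_stack([-z1, np.ones_like(z1)])
            a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
            k = np.arange(len(x0) + horizon)
            x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
            return np.diff(x1_hat, prepend=0.0)       # back to the raw scale

        methane = np.array([102.0, 110.0, 120.0, 128.0, 135.0])  # invented yields
        print(gm11(methane))  # fitted values plus a 3-step-ahead forecast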

  3. The Layer-Oriented Approach to Declarative Languages for Biological Modeling

    PubMed Central

    Raikov, Ivan; De Schutter, Erik

    2012-01-01

    We present a new approach to modeling languages for computational biology, which we call the layer-oriented approach. The approach stems from the observation that many diverse biological phenomena are described using a small set of mathematical formalisms (e.g. differential equations), while at the same time different domains and subdomains of computational biology require that models are structured according to the accepted terminology and classification of that domain. Our approach uses distinct semantic layers to represent the domain-specific biological concepts and the underlying mathematical formalisms. Additional functionality can be transparently added to the language by adding more layers. This approach is specifically concerned with declarative languages, and throughout the paper we note some of the limitations inherent to declarative approaches. The layer-oriented approach is a way to specify explicitly how high-level biological modeling concepts are mapped to a computational representation, while abstracting away details of particular programming languages and simulation environments. To illustrate this process, we define an example language for describing models of ionic currents, and use a general mathematical notation for semantic transformations to show how to generate model simulation code for various simulation environments. We use the example language to describe a Purkinje neuron model and demonstrate how the layer-oriented approach can be used for solving several practical issues of computational neuroscience model development. We discuss the advantages and limitations of the approach in comparison with other modeling language efforts in the domain of computational biology and outline some principles for extensible, flexible modeling language design. We conclude by describing in detail the semantic transformations defined for our language. PMID:22615554

  4. The layer-oriented approach to declarative languages for biological modeling.

    PubMed

    Raikov, Ivan; De Schutter, Erik

    2012-01-01

    We present a new approach to modeling languages for computational biology, which we call the layer-oriented approach. The approach stems from the observation that many diverse biological phenomena are described using a small set of mathematical formalisms (e.g. differential equations), while at the same time different domains and subdomains of computational biology require that models are structured according to the accepted terminology and classification of that domain. Our approach uses distinct semantic layers to represent the domain-specific biological concepts and the underlying mathematical formalisms. Additional functionality can be transparently added to the language by adding more layers. This approach is specifically concerned with declarative languages, and throughout the paper we note some of the limitations inherent to declarative approaches. The layer-oriented approach is a way to specify explicitly how high-level biological modeling concepts are mapped to a computational representation, while abstracting away details of particular programming languages and simulation environments. To illustrate this process, we define an example language for describing models of ionic currents, and use a general mathematical notation for semantic transformations to show how to generate model simulation code for various simulation environments. We use the example language to describe a Purkinje neuron model and demonstrate how the layer-oriented approach can be used for solving several practical issues of computational neuroscience model development. We discuss the advantages and limitations of the approach in comparison with other modeling language efforts in the domain of computational biology and outline some principles for extensible, flexible modeling language design. We conclude by describing in detail the semantic transformations defined for our language.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duncan, Andrew, E-mail: a.duncan@imperial.ac.uk; Erban, Radek, E-mail: erban@maths.ox.ac.uk; Zygalakis, Konstantinos, E-mail: k.zygalakis@ed.ac.uk

    Stochasticity plays a fundamental role in various biochemical processes, such as cell regulatory networks and enzyme cascades. Isothermal, well-mixed systems can be modelled as Markov processes, typically simulated using the Gillespie Stochastic Simulation Algorithm (SSA) [25]. While easy to implement and exact, the computational cost of using the Gillespie SSA to simulate such systems can become prohibitive as the frequency of reaction events increases. This has motivated numerous coarse-grained schemes, where the “fast” reactions are approximated either using Langevin dynamics or deterministically. While such approaches provide a good approximation when all reactants are abundant, the approximation breaks down when one or more species exist only in small concentrations and the fluctuations arising from the discrete nature of the reactions become significant. This is particularly problematic when using such methods to compute statistics of extinction times for chemical species, as well as when simulating non-equilibrium systems such as cell-cycle models in which a single species can cycle between abundance and scarcity. In this paper, a hybrid jump-diffusion model for simulating well-mixed stochastic kinetics is derived. It acts as a bridge between the Gillespie SSA and the chemical Langevin equation. For reactions involving species at low copy numbers the underlying behaviour is purely discrete, while it is purely diffusive when the concentrations of all species are large, with the two different behaviours coexisting in the intermediate region. A bound on the weak error in the classical large volume scaling limit is obtained, and three different numerical discretisations of the jump-diffusion model are described. The benefits of such a formalism are illustrated using computational examples.
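
    The diffusive limb of such a hybrid scheme is the chemical Langevin equation; the jump limb is the SSA, and the paper's contribution is the bridge between them. Here is a hedged Euler-Maruyama sketch of the CLE alone, for a birth-death process with illustrative rates (the hybrid switching rule is not reproduced).

        import numpy as np

        # Euler-Maruyama discretisation of the chemical Langevin equation for
        # a birth-death process:
        #   dX = (k - gamma*X) dt + sqrt(k) dW1 - sqrt(gamma*X) dW2
        # Parameters are illustrative; the paper's jump/diffusion switching
        # logic is omitted.
        def cle_path(k=50.0, gamma=1.0, x0=50.0, dt=1e-3, t_end=10.0, seed=2):
            rng = np.random.default_rng(seed)
            n = int(t_end / dt)
            x = np.empty(n + 1)
            x[0] = x0
            for i in range(n):
                a1, a2 = k, gamma * max(x[i], 0.0)     # propensities
                dW = rng.standard_normal(2) * np.sqrt(dt)
                x[i + 1] = x[i] + (a1 - a2) * dt + np.sqrt(a1) * dW[0] - np.sqrt(a2) * dW[1]
            return x

        path = cle_path()
        print("mean:", path.mean(), "variance:", path.var())  # both ~ k/gamma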

  6. The Landlab v1.0 OverlandFlow component: a Python tool for computing shallow-water flow across watersheds

    NASA Astrophysics Data System (ADS)

    Adams, Jordan M.; Gasparini, Nicole M.; Hobley, Daniel E. J.; Tucker, Gregory E.; Hutton, Eric W. H.; Nudurupati, Sai S.; Istanbulluoglu, Erkan

    2017-04-01

    Representation of flowing water in landscape evolution models (LEMs) is often simplified compared to hydrodynamic models, as LEMs make assumptions reducing physical complexity in favor of computational efficiency. The Landlab modeling framework can be used to bridge the divide between complex runoff models and more traditional LEMs, creating a new type of framework not commonly used in the geomorphology or hydrology communities. Landlab is a Python-language library that includes tools and process components that can be used to create models of Earth-surface dynamics over a range of temporal and spatial scales. The Landlab OverlandFlow component is based on a simplified inertial approximation of the shallow water equations, following the solution of de Almeida et al. (2012). This explicit two-dimensional hydrodynamic algorithm simulates a flood wave across a model domain, where water discharge and flow depth are calculated at all locations within a structured (raster) grid. Here, we illustrate how the OverlandFlow component contained within Landlab can be applied as a simplified event-based runoff model and how to couple the runoff model with an incision model operating on decadal timescales. Examples of flow routing on both real and synthetic landscapes are shown. Hydrographs from a single storm at multiple locations in the Spring Creek watershed, Colorado, USA, are illustrated, along with a map of shear stress applied on the land surface by flowing water. The OverlandFlow component can also be coupled with the Landlab DetachmentLtdErosion component to illustrate how the non-steady flow routing regime impacts incision across a watershed. The hydrograph and incision results are compared to simulations driven by steady-state runoff. Results from the coupled runoff and incision model indicate that runoff dynamics can impact landscape relief and channel concavity, suggesting that, on landscape evolution timescales, the OverlandFlow model may lead to differences in simulated topography in comparison with traditional methods. The exploratory test cases described within demonstrate how the OverlandFlow component can be used in both hydrologic and geomorphic applications.
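
    At the heart of this family of solvers is the explicit simplified-inertia discharge update of Bates et al. (2010) / de Almeida et al. (2012). Below is a one-dimensional NumPy sketch of that update; Landlab's actual OverlandFlow component handles 2D raster grids, adaptive time stepping, and boundary conditions, so everything here is a stand-alone illustration.

        import numpy as np

        # Simplified-inertia discharge update (Bates et al. 2010 / de Almeida
        # et al. 2012 style) on a 1D slope:
        #   q_new = (q - g*h*dt*S) / (1 + g*dt*n^2*|q| / h**(7/3))
        # g: gravity, n_man: Manning roughness, S: water-surface slope.
        g, n_man, dx, dt = 9.81, 0.03, 10.0, 1.0
        z = np.linspace(1.0, 0.0, 50)        # sloping bed elevation (m)
        h = np.full(50, 0.10)                # initial water depth (m)
        q = np.zeros(49)                     # unit discharge at cell faces

        for _ in range(500):
            eta = z + h                                  # water-surface elevation
            S = (eta[1:] - eta[:-1]) / dx                # slope at each face
            h_face = np.maximum(np.maximum(h[:-1], h[1:]), 1e-6)
            q = (q - g * h_face * dt * S) / (
                1.0 + g * dt * n_man**2 * np.abs(q) / h_face**(7.0 / 3.0))
            h[1:-1] += dt / dx * (q[:-1] - q[1:])        # mass balance, interior

        print("depth at mid-domain:", h[25])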

  7. Discriminative Random Field Models for Subsurface Contamination Uncertainty Quantification

    NASA Astrophysics Data System (ADS)

    Arshadi, M.; Abriola, L. M.; Miller, E. L.; De Paolis Kaluza, C.

    2017-12-01

    Application of flow and transport simulators for prediction of the release, entrapment, and persistence of dense non-aqueous phase liquids (DNAPLs) and associated contaminant plumes is a computationally intensive process that requires specification of a large number of material properties and hydrologic/chemical parameters. Given its computational burden, this direct simulation approach is particularly ill-suited for quantifying both the expected performance and uncertainty associated with candidate remediation strategies under real field conditions. Prediction uncertainties primarily arise from limited information about contaminant mass distributions, as well as the spatial distribution of subsurface hydrologic properties. Application of direct simulation to quantify uncertainty would, thus, typically require simulating multiphase flow and transport for a large number of permeability and release scenarios to collect statistics associated with remedial effectiveness, a computationally prohibitive process. The primary objective of this work is to develop and demonstrate a methodology that employs measured field data to produce equi-probable stochastic representations of a subsurface source zone that capture the spatial distribution and uncertainty associated with key features that control remediation performance (i.e., permeability and contamination mass). Here we employ probabilistic models known as discriminative random fields (DRFs) to synthesize stochastic realizations of initial mass distributions consistent with known, and typically limited, site characterization data. Using a limited number of full scale simulations as training data, a statistical model is developed for predicting the distribution of contaminant mass (e.g., DNAPL saturation and aqueous concentration) across a heterogeneous domain. Monte-Carlo sampling methods are then employed, in conjunction with the trained statistical model, to generate realizations conditioned on measured borehole data. Performance of the statistical model is illustrated through comparisons of generated realizations with the 'true' numerical simulations. Finally, we demonstrate how these realizations can be used to determine statistically optimal locations for further interrogation of the subsurface.

  8. Simulation Studies as Designed Experiments: The Comparison of Penalized Regression Models in the “Large p, Small n” Setting

    PubMed Central

    Chaibub Neto, Elias; Bare, J. Christopher; Margolin, Adam A.

    2014-01-01

    New algorithms are continuously proposed in computational biology. Performance evaluation of novel methods is important in practice. Nonetheless, the field experiences a lack of rigorous methodology aimed to systematically and objectively evaluate competing approaches. Simulation studies are frequently used to show that a particular method outperforms another. Oftentimes, however, simulation studies are not well designed, and it is hard to characterize the particular conditions under which different methods perform better. In this paper we propose the adoption of well established techniques in the design of computer and physical experiments for developing effective simulation studies. By following best practices in planning of experiments we are better able to understand the strengths and weaknesses of competing algorithms leading to more informed decisions about which method to use for a particular task. We illustrate the application of our proposed simulation framework with a detailed comparison of the ridge-regression, lasso and elastic-net algorithms in a large scale study investigating the effects on predictive performance of sample size, number of features, true model sparsity, signal-to-noise ratio, and feature correlation, in situations where the number of covariates is usually much larger than sample size. Analysis of data sets containing tens of thousands of features but only a few hundred samples is nowadays routine in computational biology, where “omics” features such as gene expression, copy number variation and sequence data are frequently used in the predictive modeling of complex phenotypes such as anticancer drug response. The penalized regression approaches investigated in this study are popular choices in this setting and our simulations corroborate well established results concerning the conditions under which each one of these methods is expected to perform best while providing several novel insights. PMID:25289666
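
    A miniature version of one cell of such a designed experiment, in Python with scikit-learn. The sample size, feature count, sparsity, and penalty settings below are single illustrative choices; the paper crosses several such factors systematically.

        import numpy as np
        from sklearn.linear_model import Ridge, Lasso, ElasticNet
        from sklearn.model_selection import train_test_split

        # One cell of a "large p, small n" simulation: n=100, p=1000, sparse
        # true model, noise sd 1. A designed experiment would sweep these.
        rng = np.random.default_rng(0)
        n, p, k_true = 100, 1000, 10
        X = rng.standard_normal((n, p))
        beta = np.zeros(p)
        beta[:k_true] = rng.uniform(1.0, 2.0, k_true)     # sparse signal
        y = X @ beta + rng.standard_normal(n)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        for name, model in [("ridge", Ridge(alpha=1.0)),
                            ("lasso", Lasso(alpha=0.1)),
                            ("elastic-net", ElasticNet(alpha=0.1, l1_ratio=0.5))]:
            model.fit(X_tr, y_tr)
            print(name, "test R^2 =", round(model.score(X_te, y_te), 3))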

  9. Python as a federation tool for GENESIS 3.0.

    PubMed

    Cornelis, Hugo; Rodriguez, Armando L; Coop, Allan D; Bower, James M

    2012-01-01

    The GENESIS simulation platform was one of the first broad-scale modeling systems in computational biology to encourage modelers to develop and share model features and components. Supported by a large developer community, it participated in innovative simulator technologies such as benchmarking, parallelization, and declarative model specification and was the first neural simulator to define bindings for the Python scripting language. An important feature of the latest version of GENESIS is that it decomposes into self-contained software components complying with the Computational Biology Initiative federated software architecture. This architecture allows separate scripting bindings to be defined for different necessary components of the simulator, e.g., the mathematical solvers and graphical user interface. Python is a scripting language that provides rich sets of freely available open source libraries. With clean dynamic object-oriented designs, they produce highly readable code and are widely employed in specialized areas of software component integration. We employ a simplified wrapper and interface generator to examine an application programming interface and make it available to a given scripting language. This allows independent software components to be 'glued' together and connected to external libraries and applications from user-defined Python or Perl scripts. We illustrate our approach with three examples of Python scripting. (1) Generate and run a simple single-compartment model neuron connected to a stand-alone mathematical solver. (2) Interface a mathematical solver with GENESIS 3.0 to explore a neuron morphology from either an interactive command-line or graphical user interface. (3) Apply scripting bindings to connect the GENESIS 3.0 simulator to external graphical libraries and an open source three dimensional content creation suite that supports visualization of models based on electron microscopy and their conversion to computational models. Employed in this way, the stand-alone software components of the GENESIS 3.0 simulator provide a framework for progressive federated software development in computational neuroscience.

  10. Python as a Federation Tool for GENESIS 3.0

    PubMed Central

    Cornelis, Hugo; Rodriguez, Armando L.; Coop, Allan D.; Bower, James M.

    2012-01-01

    The GENESIS simulation platform was one of the first broad-scale modeling systems in computational biology to encourage modelers to develop and share model features and components. Supported by a large developer community, it participated in innovative simulator technologies such as benchmarking, parallelization, and declarative model specification and was the first neural simulator to define bindings for the Python scripting language. An important feature of the latest version of GENESIS is that it decomposes into self-contained software components complying with the Computational Biology Initiative federated software architecture. This architecture allows separate scripting bindings to be defined for different necessary components of the simulator, e.g., the mathematical solvers and graphical user interface. Python is a scripting language that provides rich sets of freely available open source libraries. With clean dynamic object-oriented designs, they produce highly readable code and are widely employed in specialized areas of software component integration. We employ a simplified wrapper and interface generator to examine an application programming interface and make it available to a given scripting language. This allows independent software components to be ‘glued’ together and connected to external libraries and applications from user-defined Python or Perl scripts. We illustrate our approach with three examples of Python scripting. (1) Generate and run a simple single-compartment model neuron connected to a stand-alone mathematical solver. (2) Interface a mathematical solver with GENESIS 3.0 to explore a neuron morphology from either an interactive command-line or graphical user interface. (3) Apply scripting bindings to connect the GENESIS 3.0 simulator to external graphical libraries and an open source three dimensional content creation suite that supports visualization of models based on electron microscopy and their conversion to computational models. Employed in this way, the stand-alone software components of the GENESIS 3.0 simulator provide a framework for progressive federated software development in computational neuroscience. PMID:22276101

  11. A computer simulation approach to quantify the true area and true area compressibility modulus of biological membranes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chacón, Enrique, E-mail: echacon@icmm.csic.es; Tarazona, Pedro, E-mail: pedro.tarazona@uam.es; Bresme, Fernando, E-mail: f.bresme@imperial.ac.uk

    We present a new computational approach to quantify the area per lipid and the area compressibility modulus of biological membranes. Our method relies on the analysis of membrane fluctuations using our recently introduced coupled undulatory (CU) mode [Tarazona et al., J. Chem. Phys. 139, 094902 (2013)], which provides excellent estimates of the bending modulus of model membranes. Unlike the projected area, widely used in computer simulations of membranes, the CU area is thermodynamically consistent. This new area definition makes it possible to accurately estimate the area of the undulating bilayer, and the area per lipid, by excluding any contributions related to the phospholipid protrusions. We find that the area per phospholipid and the area compressibility modulus feature a negligible dependence on system size, making possible their computation using truly small bilayers involving a few hundred lipids. The area compressibility modulus obtained from the analysis of the CU area fluctuations is fully consistent with the Hooke’s law route. Unlike existing methods, our approach relies on a single simulation, and no a priori knowledge of the bending modulus is required. We illustrate our method by analyzing 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphocholine bilayers using the coarse-grained MARTINI force field. The area per lipid and area compressibility modulus obtained with our method and the MARTINI force field are consistent with previous studies of these bilayers.
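
    The fluctuation route to the compressibility modulus is compact: K_A = kB*T*<A> / var(A). A sketch on a synthetic area trajectory standing in for the CU-mode areas extracted from a simulation (all numbers are illustrative):

        import numpy as np

        # Area compressibility modulus from equilibrium area fluctuations:
        #   K_A = kB * T * <A> / (<A^2> - <A>^2)
        # The synthetic "area trajectory" stands in for the CU-mode areas
        # extracted from a simulation; the numbers are illustrative.
        kB, T = 1.380649e-23, 300.0                     # J/K, K
        rng = np.random.default_rng(3)
        A = rng.normal(4.0e-17, 8.0e-19, size=50_000)   # sampled areas (m^2)

        K_A = kB * T * A.mean() / A.var()
        print("K_A =", K_A, "N/m")                      # ~0.26 N/m here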

  12. Optimal control strategy for a novel computer virus propagation model on scale-free networks

    NASA Astrophysics Data System (ADS)

    Zhang, Chunming; Huang, Haitao

    2016-06-01

    This paper aims to study the combined impact of system reinstallation and network topology on the spread of computer viruses over the Internet. Based on a scale-free network, this paper proposes a novel computer virus propagation model, the SLBOS model. A systematic analysis of this new model shows that the virus-free equilibrium is globally asymptotically stable when its spreading threshold is less than one; conversely, the viral equilibrium is proved to be permanent if the spreading threshold is greater than one. Then, the impacts of different model parameters on the spreading threshold are analyzed. Next, an optimally controlled SLBOS epidemic model on complex networks is also studied. We prove that an optimal control exists for the control problem. Some numerical simulations are finally given to illustrate the main results.

  13. A novel strategy for load balancing of distributed medical applications.

    PubMed

    Logeswaran, Rajasvaran; Chen, Li-Choo

    2012-04-01

    Current trends in medicine, specifically in the electronic handling of medical applications - ranging from digital imaging, paperless hospital administration and electronic medical records, to telemedicine and computer-aided diagnosis - create a burden on the network. Distributed Service Architectures, such as the Intelligent Network (IN), Telecommunication Information Networking Architecture (TINA) and Open Service Access (OSA), are able to meet this new challenge. Distribution enables computational tasks to be spread among multiple processors; hence, performance is an important issue. This paper proposes a novel approach to load balancing, the Random Sender Initiated Algorithm, for the distribution of tasks among several nodes sharing the same computational object (CO) instances in Distributed Service Architectures. Simulations illustrate that the proposed algorithm produces better network performance than the benchmark load balancing algorithms - the Random Node Selection Algorithm and the Shortest Queue Algorithm - especially under medium and heavily loaded conditions.

  14. Image processing of aerodynamic data

    NASA Technical Reports Server (NTRS)

    Faulcon, N. D.

    1985-01-01

    The use of digital image processing techniques in analyzing and evaluating aerodynamic data is discussed. An image processing system that converts images derived from digital data or from transparent film into black and white, full color, or false color pictures is described. Applications to black and white images of a model wing with a NACA 64-210 section in simulated rain and to computed flow properties for transonic flow past a NACA 0012 airfoil are presented. Image processing techniques are used to visualize the variations of water film thickness on the wing model and to illustrate the contours of computed Mach numbers for the flow past the NACA 0012 airfoil. Since the computed data for the NACA 0012 airfoil are available only at discrete spatial locations, an interpolation method is used to provide values of the Mach number over the entire field.
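
    The interpolation step described in the last sentence can be sketched with SciPy's griddata, which maps scattered computed values onto a regular image grid; the data here are synthetic.

        import numpy as np
        from scipy.interpolate import griddata

        # Map scattered computed values (e.g. Mach number at discrete solver
        # locations) onto a regular image grid. The data are synthetic.
        rng = np.random.default_rng(4)
        pts = rng.uniform(-1.0, 1.0, size=(400, 2))             # scattered (x, y)
        mach = 0.8 + 0.3 * np.exp(-5.0 * (pts**2).sum(axis=1))  # synthetic field

        xi, yi = np.meshgrid(np.linspace(-1, 1, 200), np.linspace(-1, 1, 200))
        field = griddata(pts, mach, (xi, yi), method="cubic")
        print(field.shape, np.nanmin(field), np.nanmax(field))  # NaN outside hull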

  15. Control aspects of quantum computing using pure and mixed states.

    PubMed

    Schulte-Herbrüggen, Thomas; Marx, Raimund; Fahmy, Amr; Kauffman, Louis; Lomonaco, Samuel; Khaneja, Navin; Glaser, Steffen J

    2012-10-13

    Steering quantum dynamics such that the target states solve classically hard problems is paramount to quantum simulation and computation. And beyond, quantum control is also essential to pave the way to quantum technologies. Here, important control techniques are reviewed and presented in a unified frame covering quantum computational gate synthesis and spectroscopic state transfer alike. We emphasize that it does not matter whether the quantum states of interest are pure or not. While pure states underlie the design of quantum circuits, ensemble mixtures of quantum states can be exploited in a more recent class of algorithms: it is illustrated by characterizing the Jones polynomial in order to distinguish between different (classes of) knots. Further applications include Josephson elements, cavity grids, ion traps and nitrogen vacancy centres in scenarios of closed as well as open quantum systems.

  16. Rotary engine performance computer program (RCEMAP and RCEMAPPC): User's guide

    NASA Technical Reports Server (NTRS)

    Bartrand, Timothy A.; Willis, Edward A.

    1993-01-01

    This report is a user's guide for a computer code that simulates the performance of several rotary combustion engine configurations. It is intended to assist prospective users in getting started with RCEMAP and/or RCEMAPPC. RCEMAP (Rotary Combustion Engine performance MAP generating code) is the mainframe version, while RCEMAPPC is a simplified subset designed for the personal computer, or PC, environment. Both versions are based on an open, zero-dimensional combustion system model for the prediction of instantaneous pressures, temperature, chemical composition and other in-chamber thermodynamic properties. Both versions predict overall engine performance and thermal characteristics, including bmep, bsfc, exhaust gas temperature, average material temperatures, and turbocharger operating conditions. Required inputs include engine geometry, materials, constants for use in the combustion heat release model, and turbomachinery maps. Illustrative examples and sample input files for both versions are included.

  17. Control aspects of quantum computing using pure and mixed states

    PubMed Central

    Schulte-Herbrüggen, Thomas; Marx, Raimund; Fahmy, Amr; Kauffman, Louis; Lomonaco, Samuel; Khaneja, Navin; Glaser, Steffen J.

    2012-01-01

    Steering quantum dynamics such that the target states solve classically hard problems is paramount to quantum simulation and computation. And beyond, quantum control is also essential to pave the way to quantum technologies. Here, important control techniques are reviewed and presented in a unified frame covering quantum computational gate synthesis and spectroscopic state transfer alike. We emphasize that it does not matter whether the quantum states of interest are pure or not. While pure states underlie the design of quantum circuits, ensemble mixtures of quantum states can be exploited in a more recent class of algorithms: it is illustrated by characterizing the Jones polynomial in order to distinguish between different (classes of) knots. Further applications include Josephson elements, cavity grids, ion traps and nitrogen vacancy centres in scenarios of closed as well as open quantum systems. PMID:22946034

  18. Keno-21: Fundamental Issues in the Design of Geophysical Simulation Experiments and Resource Allocation in Climate Modelling

    NASA Astrophysics Data System (ADS)

    Smith, L. A.

    2001-05-01

    Many sources of uncertainty come into play when modelling geophysical systems by simulation. These include uncertainty in the initial condition, uncertainty in model parameter values (and in the parameterisations themselves), and error in the model class from which the model(s) was selected. In recent decades, climate simulations have focused resources on reducing the last of these by including more and more details in the model. One can question when this "kitchen sink" approach should be complemented with realistic estimates of the impact of the other uncertainties noted above. Indeed, while the impact of model error can never be fully quantified, as all simulation experiments are interpreted under the rosy scenario which assumes a priori that nothing crucial is missing, the impact of the other uncertainties can be quantified at only the cost of computational power, as illustrated, for example, in ensemble climate modelling experiments like Casino-21. This talk illustrates the interplay of these uncertainties in the context of a trivial nonlinear system and an ensemble of models. The simple systems considered in this small-scale experiment, Keno-21, are meant to illustrate issues of experimental design; they are not intended to provide true climate simulations. The use of simulation models with huge numbers of parameters given limited data is usually justified by an appeal to the laws of physics: the number of free degrees of freedom is much smaller than the number of variables; variables, parameterisations, and parameter values are all constrained by "the physics"; and the resulting simulation yields a realistic reproduction of the entire planet's climate system to within reasonable bounds. But what bounds, exactly? In a single model run under a transient forcing scenario, there are good statistical grounds for considering only large space and time averages; most of these reasons vanish if an ensemble of runs is made. Ensemble runs can quantify the (in)ability of a model to provide insight on regional changes: if a model cannot capture regional variations in the data on which the model was constructed (that is, in-sample), claims that out-of-sample predictions of those same regional averages should be used in policy making are vacuous. While motivated by climate modelling and illustrated on a trivial nonlinear system, these issues have implications across the range of geophysical modelling, including implications for appropriate resource allocation, for the making of science policy, and for the public understanding of science and the role of uncertainty in decision making.

  19. A simulation exercise of a cavity-type solar receiver using the HEAP program

    NASA Technical Reports Server (NTRS)

    Lansing, F. L.

    1979-01-01

    A computer program has been developed at JPL to support advanced studies of solar receivers in high-concentration solar-thermal-electric power plants. This work briefly presents the program methodology, the input data required, the expected output results, and the program's capabilities and limitations. The program was used to simulate an existing 5 kWt experimental receiver of the cavity type. The receiver is located at the focus of a paraboloid dish and is connected to a Stirling engine. Both steady-state and transient performance simulations were carried out. Details of the receiver modeling are also presented to illustrate the procedure followed. Simulated temperature patterns were found to be in good agreement with test data obtained from high-temperature thermocouples. The simulated receiver performance was extrapolated to various operating conditions not attained experimentally. The results of the parameterization study were fitted to a general performance expression to determine the receiver characteristic constants. The latter were used to optimize the receiver operating conditions to obtain the highest overall conversion efficiency.

  20. Is an intuitive convergence definition of molecular dynamics simulations solely based on the root mean square deviation possible?

    PubMed

    Knapp, B; Frantal, S; Cibena, M; Schreiner, W; Bauer, P

    2011-08-01

    Molecular dynamics is a commonly used technique in computational biology. One key issue of each molecular dynamics simulation is: when does the simulation reach the equilibrium state? A widely used way to determine this is the visual and intuitive inspection of root mean square deviation (RMSD) plots of the simulation. Although this technique has been criticized several times, it is still often used. Therefore, we present a study showing that this method is not reliable at all. We conducted a survey in which we showed different RMSD plots to scientists working in the field of molecular dynamics. These plots were randomized and repeated, using a statistical model and different variants of the plots. We show that there is no mutual consent about the point of equilibrium. The decisions are severely biased by different parameters. Therefore, we conclude that scientists should not discuss the equilibration of a molecular dynamics simulation on the basis of an RMSD plot alone.
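
    For reference, the quantity being judged is simply the per-frame RMSD against a reference structure. A sketch on synthetic coordinates (real trajectories come from an MD trajectory reader and should first be superposed, e.g. by the Kabsch algorithm, which is omitted here):

        import numpy as np

        # Per-frame RMSD against the first frame. Coordinates are synthetic;
        # real frames should be superposed (e.g. Kabsch) before computing RMSD.
        def rmsd(frame, ref):
            return np.sqrt(((frame - ref) ** 2).sum(axis=1).mean())

        rng = np.random.default_rng(5)
        ref = rng.standard_normal((500, 3))                             # 500 atoms
        drift = np.cumsum(rng.standard_normal((200, 500, 3)), axis=0) * 0.01
        traj = ref + drift                                              # 200 frames
        series = np.array([rmsd(f, ref) for f in traj])
        print(series[:3], "...", series[-1])   # where would you call this flat?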

  1. CFD Approaches for Simulation of Wing-Body Stage Separation

    NASA Technical Reports Server (NTRS)

    Buning, Pieter G.; Gomez, Reynaldo J.; Scallion, William I.

    2004-01-01

    A collection of computational fluid dynamics tools and techniques are being developed and tested for application to stage separation and abort simulation for next-generation launch vehicles. In this work, an overset grid Navier-Stokes flow solver has been enhanced and demonstrated on a matrix of proximity cases and on a dynamic separation simulation of a belly-to-belly wing-body configuration. Steady cases show excellent agreement between Navier-Stokes results, Cartesian grid Euler solutions, and wind tunnel data at Mach 3. Good agreement has been obtained between Navier-Stokes, Euler, and wind tunnel results at Mach 6. An analysis of a dynamic separation at Mach 3 demonstrates that unsteady aerodynamic effects are not important for this scenario. Results provide an illustration of the relative applicability of Euler and Navier-Stokes methods to these types of problems.

  2. Influence of microscale heterogeneity and microstructure on the tensile behavior of crystalline rocks

    NASA Astrophysics Data System (ADS)

    Mahabadi, O. K.; Tatone, B. S. A.; Grasselli, G.

    2014-07-01

    This study investigates the influence of microscale heterogeneity and microcracks on the failure behavior and mechanical response of a crystalline rock. The thin section analysis for obtaining the microcrack density is presented. Using micro X-ray computed tomography (μCT) scanning of failed laboratory specimens, the influence of heterogeneity and, in particular, biotite grains on the brittle fracture of the specimens is discussed and various failure patterns are characterized. Three groups of numerical simulations are presented, which demonstrate the role of microcracks and the influence of μCT-based and stochastically generated phase distributions. The mechanical response, stress distribution, and fracturing process obtained by the numerical simulations are also discussed. The simulation results illustrate that heterogeneity and microcracks should be considered to accurately predict the tensile strength and failure behavior of the sample.

  3. Multiscale simulations of patchy particle systems combining Molecular Dynamics, Path Sampling and Green's Function Reaction Dynamics

    NASA Astrophysics Data System (ADS)

    Bolhuis, Peter

    Important reaction-diffusion processes, such as biochemical networks in living cells, or self-assembling soft matter, span many orders in length and time scales. In these systems, the reactants' spatial dynamics at mesoscopic length and time scales of microns and seconds is coupled to the reactions between the molecules at microscopic length and time scales of nanometers and milliseconds. This wide range of length and time scales makes these systems notoriously difficult to simulate. While mean-field rate equations cannot describe such processes, the mesoscopic Green's Function Reaction Dynamics (GFRD) method enables efficient simulation at the particle level provided the microscopic dynamics can be integrated out. Yet, many processes exhibit non-trivial microscopic dynamics that can qualitatively change the macroscopic behavior, calling for an atomistic, microscopic description. The recently developed multiscale Molecular Dynamics Green's Function Reaction Dynamics (MD-GFRD) approach combines GFRD for simulating the system at the mesoscopic scale where particles are far apart, with microscopic Molecular (or Brownian) Dynamics, for simulating the system at the microscopic scale where reactants are in close proximity. The association and dissociation of particles are treated with rare event path sampling techniques. I will illustrate the efficiency of this method for patchy particle systems. Replacing the microscopic regime with a Markov State Model avoids the microscopic regime completely. The MSM is then pre-computed using advanced path-sampling techniques such as multistate transition interface sampling. I illustrate this approach on patchy particle systems that show multiple modes of binding. MD-GFRD is generic, and can be used to efficiently simulate reaction-diffusion systems at the particle level, including the orientational dynamics, opening up the possibility for large-scale simulations of e.g. protein signaling networks.

  4. Performance Analysis, Design Considerations, and Applications of Extreme-Scale In Situ Infrastructures

    DOE PAGES

    Ayachit, Utkarsh; Bauer, Andrew; Duque, Earl P. N.; ...

    2016-11-01

    A key trend facing extreme-scale computational science is the widening gap between computational and I/O rates, and the challenge that follows is how best to gain insight from simulation data when it is increasingly impractical to save it to persistent storage for subsequent visual exploration and analysis. One approach to this challenge is centered around the idea of in situ processing, where visualization and analysis processing is performed while data is still resident in memory. Our paper examines several key design and performance issues related to the idea of in situ processing at extreme scale on modern platforms: scalability, overhead, performance measurement and analysis, comparison and contrast with a traditional post hoc approach, and interfacing with simulation codes. We illustrate these principles in practice with studies, conducted on large-scale HPC platforms, that include a miniapplication and multiple science application codes, one of which demonstrates in situ methods in use at greater than 1M-way concurrency.

  5. Vortex generation and mixing in three-dimensional supersonic combustors

    NASA Technical Reports Server (NTRS)

    Riggins, D. W.; Vitt, P. H.

    1993-01-01

    The generation and evolution of the flow vorticity established by in-stream injector ramps in a high-Mach-number/high-enthalpy scramjet combustor flowfield are described in detail for a number of computational cases. Classical fluid dynamic circulation is presented for these cases in order to clarify the spatial distribution and convection of the vorticity. The ability of the simulations to accurately represent Stokes' law of circulation is discussed and demonstrated. In addition, the conservation of swirl (effectively the moment-of-momentum theorem) is presented for these flows. The impact of both turbulent diffusion and the vortex/ramp non-uniformity on the downstream mixing rate is clearly illustrated. A correlation over the length of the combustor between fuel-air mixing and a parameter called the vortex stirring length is demonstrated. Finally, computational results for a representative ramp injector are compared with experimental data. The influence of the stream vorticity on the effective turbulent Prandtl number used in the simulation is discussed.

  6. Investigation for Molecular Attraction Impact Between Contacting Surfaces in Micro-Gears

    NASA Astrophysics Data System (ADS)

    Yang, Ping; Li, Xialong; Zhao, Yanfang; Yang, Haiying; Wang, Shuting; Yang, Jianming

    2013-10-01

    The aim of this research work is to provide a systematic method for analyzing the impact of molecular attraction between contacting surfaces in a micro-gear train. The method is established by integrating involute profile analysis and molecular dynamics simulation. A mathematical computation of the micro-gear involute is presented based on geometrical properties, a Taylor expansion, and the Hamaker assumption. In the meantime, the Morse potential function and a cut-off radius are introduced into a molecular dynamics simulation. A hybrid computational method for the van der Waals force between the contacting faces in a micro-gear train is thus developed. An example is presented to show the performance of this method. The results show that the van der Waals force in a micro-gear train varies nonlinearly with parameters such as the gear module and the number of teeth. The procedure suggests that the van der Waals force could be controlled by adjusting the manufacturing parameters in gear train design.
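
    A minimal sketch of the kind of pair potential used in such a simulation (a Morse potential with a cut-off radius; the parameter values are illustrative, not fitted to any gear material):

        import numpy as np

        def morse_cut(r, D_e=1.0, a=1.2, r_e=1.0, r_cut=2.5):
            """Morse pair potential with a cut-off radius, shifted so the
            energy goes continuously to zero at r_cut. Parameter values are
            illustrative, not fitted to any gear material."""
            def u(s):
                return D_e * (1.0 - np.exp(-a * (s - r_e))) ** 2 - D_e
            return np.where(r < r_cut, u(r) - u(r_cut), 0.0)

        r = np.linspace(0.8, 3.0, 6)
        print(np.round(morse_cut(r), 4))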

  7. Incorporating variability in simulations of seasonally forced phenology using integral projection models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goodsman, Devin W.; Aukema, Brian H.; McDowell, Nate G.

    Phenology models are becoming increasingly important tools to accurately predict how climate change will impact the life histories of organisms. We propose a class of integral projection phenology models derived from stochastic individual-based models of insect development and demography. Our derivation, which is based on the rate-summation concept, produces integral projection models that capture the effect of phenotypic rate variability on insect phenology, but which are typically more computationally frugal than equivalent individual-based phenology models. We demonstrate our approach using a temperature-dependent model of the demography of the mountain pine beetle (Dendroctonus ponderosae Hopkins), an insect that kills mature pine trees. This work illustrates how a wide range of stochastic phenology models can be reformulated as integral projection models. Due to their computational efficiency, these integral projection models are suitable for deployment in large-scale simulations, such as studies of altered pest distributions under climate change.
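
    The rate-summation concept underlying the derivation can be illustrated directly (a toy model with an invented degree-day rate and lognormal phenotypic variability, not the paper's fitted mountain pine beetle model):

        import numpy as np

        def development_time(temps, rate):
            """Rate summation: a life stage completes on the day its summed
            daily development rates first reach 1."""
            total = 0.0
            for day, T in enumerate(temps, start=1):
                total += rate(T)
                if total >= 1.0:
                    return day
            return None   # stage not completed within the record

        # Illustrative degree-day rate; phenotypic variability enters through
        # an individual lognormal rate multiplier m.
        rng = np.random.default_rng(0)
        temps = 15 + 10 * np.sin(np.linspace(0, np.pi, 120))   # synthetic season
        times = [development_time(temps,
                                  lambda T, m=m: m * max(T - 10.0, 0.0) / 300.0)
                 for m in rng.lognormal(0.0, 0.2, size=1000)]
        done = [t for t in times if t is not None]
        print("10/50/90th percentile emergence day:", np.percentile(done, [10, 50, 90]))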

  8. Combining ray tracing and CFD in the thermal analysis of a parabolic dish tubular cavity receiver

    NASA Astrophysics Data System (ADS)

    Craig, Ken J.; Marsberg, Justin; Meyer, Josua P.

    2016-05-01

    This paper describes the numerical evaluation of a tubular receiver used in a dish Brayton cycle. In previous work considering the use of Computational Fluid Dynamics (CFD) to calculate the radiation absorbed from the parabolic dish into the cavity as well as the resulting conjugate heat transfer, it was shown that an axi-symmetric model of the dish and receiver absorbing surfaces was useful in reducing the computational cost required for a full 3-D discrete ordinates solution, but concerns remained about its accuracy. To increase the accuracy, the Monte Carlo ray tracer SolTrace is used to compute the absorbed radiation profile to be used in the conjugate heat transfer CFD simulation. The paper describes an approach for incorporating a complex geometry like a tubular receiver, generated using CFD software, into SolTrace. The results illustrate how the CFD mesh density (which determines the number of surface elements passed to SolTrace) and the number of rays used in the Monte Carlo approach affect the attainment of a resolution-independent solution. The conjugate heat transfer CFD simulation illustrates the effect of applying the SolTrace surface heat flux profile as a volumetric heat source to heat the air inside the tube. Heat losses due to convection and thermal re-radiation are also determined as a function of different tube absorptivities.

  9. Asynchronous Replica Exchange Software for Grid and Heterogeneous Computing.

    PubMed

    Gallicchio, Emilio; Xia, Junchao; Flynn, William F; Zhang, Baofeng; Samlalsingh, Sade; Mentes, Ahmet; Levy, Ronald M

    2015-11-01

    Parallel replica exchange sampling is an extended ensemble technique often used to accelerate the exploration of the conformational ensemble of atomistic molecular simulations of chemical systems. Inter-process communication and coordination requirements have historically discouraged the deployment of replica exchange on distributed and heterogeneous resources. Here we describe the architecture of a software package (named ASyncRE) for performing asynchronous replica exchange molecular simulations on volunteered computing grids and heterogeneous high performance clusters. The asynchronous replica exchange algorithm on which the software is based avoids centralized synchronization steps and the need for direct communication between remote processes. It allows molecular dynamics threads to progress at different rates and enables parameter exchanges among arbitrary sets of replicas independently from other replicas. ASyncRE is written in Python following a modular design conducive to extensions to various replica exchange schemes and molecular dynamics engines. Applications of the software for the modeling of association equilibria of supramolecular and macromolecular complexes on BOINC campus computational grids and on the CPU/MIC heterogeneous hardware of the XSEDE Stampede supercomputer are illustrated. They show the ability of ASyncRE to utilize large grids of desktop computers running the Windows, MacOS, and/or Linux operating systems as well as collections of high performance heterogeneous hardware devices.
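
    The core exchange step can be sketched as a toy single-coordinate Monte Carlo version (the real package couples to MD engines and distributed job queues, which are omitted here; the "idle subset" device below is only a stand-in for asynchrony):

        import numpy as np

        rng = np.random.default_rng(0)
        n_rep = 8
        betas = np.linspace(0.2, 1.0, n_rep)   # inverse temperatures
        x = rng.standard_normal(n_rep)         # one coordinate per replica

        def energy(v):
            return 0.5 * v * v                 # harmonic toy potential

        def mc_segment(i, n_steps=100):
            """Stand-in for an independent MD/MC segment of replica i."""
            for _ in range(n_steps):
                prop = x[i] + rng.normal(0.0, 0.5)
                dE = energy(prop) - energy(x[i])
                if dE <= 0 or rng.random() < np.exp(-betas[i] * dE):
                    x[i] = prop

        for cycle in range(500):
            # asynchronous flavour: only a random "idle" subset advances and
            # exchanges this cycle; no global synchronisation barrier is used
            idle = rng.permutation(n_rep)[: rng.integers(2, n_rep + 1)]
            for i in idle:
                mc_segment(i)
            for i, j in zip(idle[::2], idle[1::2]):
                delta = (betas[i] - betas[j]) * (energy(x[i]) - energy(x[j]))
                if delta >= 0 or rng.random() < np.exp(delta):
                    x[i], x[j] = x[j], x[i]    # exchange configurations

        print("final replica coordinates:", np.round(x, 2))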

  10. User's Guide for TOUGH2-MP - A Massively Parallel Version of the TOUGH2 Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Earth Sciences Division; Zhang, Keni; Zhang, Keni

    TOUGH2-MP is a massively parallel (MP) version of the TOUGH2 code, designed for computationally efficient parallel simulation of isothermal and nonisothermal flows of multicomponent, multiphase fluids in one, two, and three-dimensional porous and fractured media. In recent years, computational requirements have become increasingly intensive in large or highly nonlinear problems for applications in areas such as radioactive waste disposal, CO2 geological sequestration, environmental assessment and remediation, reservoir engineering, and groundwater hydrology. The primary objective of developing the parallel-simulation capability is to significantly improve the computational performance of the TOUGH2 family of codes. The particular goal for the parallel simulator is to achieve orders-of-magnitude improvement in computational time for models with ever-increasing complexity. TOUGH2-MP is designed to perform parallel simulation on multi-CPU computational platforms. An earlier version of TOUGH2-MP (V1.0) was based on TOUGH2 Version 1.4 with the EOS3, EOS9, and T2R3D modules, software previously qualified for applications in the Yucca Mountain project, and was designed for execution on CRAY T3E and IBM SP supercomputers. The current version of TOUGH2-MP (V2.0) includes all fluid property modules of the standard version TOUGH2 V2.0. It provides computationally efficient capabilities using supercomputers, Linux clusters, or multi-core PCs, and also offers many user-friendly features. The parallel simulator inherits all process capabilities from V2.0 together with additional capabilities for handling fractured media from V1.4. This report provides a quick-start guide on how to set up and run the TOUGH2-MP program for users with a basic knowledge of running the (standard) TOUGH2 code. The report also gives a brief technical description of the code, including a discussion of the parallel methodology, code structure, and the mathematical and numerical methods used. To familiarize users with the parallel code, illustrative sample problems are presented.

  11. Understanding G Protein-Coupled Receptor Allostery via Molecular Dynamics Simulations: Implications for Drug Discovery.

    PubMed

    Basith, Shaherin; Lee, Yoonji; Choi, Sun

    2018-01-01

    Unraveling the mystery of protein allostery has been one of the greatest challenges in both structural and computational biology. However, recent advances in computational methods, particularly molecular dynamics (MD) simulations, have led to its utility as a powerful and popular tool for the study of protein allostery. By capturing the motions of a protein's constituent atoms, simulations can enable the discovery of allosteric hot spots and the determination of the mechanistic basis for allostery. These structural and dynamic studies can provide a foundation for a wide range of applications, including rational drug design and protein engineering. In our laboratory, the use of MD simulations and network analysis assisted in the elucidation of the allosteric hot spots and intracellular signal transduction of G protein-coupled receptors (GPCRs), primarily on one of the adenosine receptor subtypes, the A2A adenosine receptor (A2AAR). In this chapter, we describe a method for calculating the map of allosteric signal flow in different GPCR conformational states and illustrate how these concepts have been utilized in understanding the mechanism of GPCR allostery. These structural studies will provide valuable insights into the allosteric and orthosteric modulations that would be of great help in designing novel drugs targeting GPCRs in pathological states.

  12. Finite element analysis simulations for ultrasonic array NDE inspections

    NASA Astrophysics Data System (ADS)

    Dobson, Jeff; Tweedie, Andrew; Harvey, Gerald; O'Leary, Richard; Mulholland, Anthony; Tant, Katherine; Gachagan, Anthony

    2016-02-01

    Advances in manufacturing techniques and materials have led to an increase in the demand for reliable and robust inspection techniques to maintain safety critical features. The application of modelling methods to develop and evaluate inspections is becoming an essential tool for the NDE community. Current analytical methods are inadequate for simulation of arbitrary components and heterogeneous materials, such as anisotropic welds or composite structures. Finite element analysis (FEA) software, such as PZFlex, can simulate the inspection of these arrangements, making it possible to economically prototype and evaluate improved NDE methods. FEA is often seen as computationally expensive for ultrasound problems; however, advances in computing power have made it a more viable tool. This paper aims to illustrate the capability of appropriate FEA to produce accurate simulations of ultrasonic array inspections - minimizing the requirement for expensive test-piece fabrication. Validation is afforded via corroboration of the FE-derived and experimentally generated data sets for a test-block comprising 1D and 2D defects. The modelling approach is extended to consider the more troublesome aspects of heterogeneous materials, where defect dimensions can be of the same length scale as the grain structure. The model is used to facilitate the implementation of new ultrasonic array inspection methods for such materials. This is exemplified by considering the simulation of ultrasonic NDE in a weld structure in order to assess new approaches to imaging such structures.

  13. NETIMIS: Dynamic Simulation of Health Economics Outcomes Using Big Data.

    PubMed

    Johnson, Owen A; Hall, Peter S; Hulme, Claire

    2016-02-01

    Many healthcare organizations are now making good use of electronic health record (EHR) systems to record clinical information about their patients and the details of their healthcare. Electronic data in EHRs are generated by people engaged in complex processes within complex environments, and their human input, albeit shaped by computer systems, is compromised by many human factors. These data are potentially valuable to health economists and outcomes researchers, but are sufficiently large and complex to be considered part of the new frontier of 'big data'. This paper describes emerging methods that draw together data mining, process modelling, activity-based costing and dynamic simulation models. Our research infrastructure includes safe links to Leeds hospital's EHRs with 3 million secondary and tertiary care patients. We created a multidisciplinary team of health economists, clinical specialists, and data and computer scientists, and developed a dynamic simulation tool called NETIMIS (Network Tools for Intervention Modelling with Intelligent Simulation; http://www.netimis.com ) suitable for visualization of both human-designed and data-mined processes, which can then be used for 'what-if' analysis by stakeholders interested in costing, designing and evaluating healthcare interventions. We present two examples of model development to illustrate how dynamic simulation can be informed by big data from an EHR. We found the tool provided a focal point for multidisciplinary team work, helping them iteratively and collaboratively 'deep dive' into big data.

  14. Dynamic partitioning for hybrid simulation of the bistable HIV-1 transactivation network.

    PubMed

    Griffith, Mark; Courtney, Tod; Peccoud, Jean; Sanders, William H

    2006-11-15

    The stochastic kinetics of a well-mixed chemical system, governed by the chemical master equation, can be simulated using the exact methods of Gillespie. However, these methods do not scale well as systems become more complex and larger models are built to include reactions with widely varying rates, since the computational burden of simulation increases with the number of reaction events. Continuous models may provide an approximate solution and are computationally less costly, but they fail to capture the stochastic behavior of small populations of macromolecules. In this article we present a hybrid simulation algorithm that dynamically partitions the system into subsets of continuous and discrete reactions, approximates the continuous reactions deterministically as a system of ordinary differential equations (ODEs), and uses a Monte Carlo method for generating discrete reaction events according to a time-dependent propensity. Our partitioning approach is improved in that the system of reactions is repartitioned dynamically, based on a threshold relative to the distribution of propensities in the discrete subset. We have implemented the hybrid algorithm in an extensible framework, utilizing two rigorous ODE solvers to approximate the continuous reactions, and use an example model to illustrate the accuracy and potential speedup of the algorithm when compared with exact stochastic simulation. Software and benchmark models used for this publication can be made available upon request from the authors.
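
    The threshold-based partitioning step can be sketched on its own (an illustrative criterion; the paper's exact threshold rule may differ):

        import numpy as np

        def partition(propensities, factor=100.0):
            """Split reactions into 'continuous' (fast) and 'discrete' (slow)
            subsets with a threshold taken relative to the distribution of
            propensities in the discrete subset (illustrative criterion)."""
            a = np.asarray(propensities, dtype=float)
            fast = np.zeros(len(a), dtype=bool)
            while True:
                slow = ~fast
                ref = np.median(a[slow]) if slow.any() else 0.0
                new_fast = a > factor * ref
                if np.array_equal(new_fast, fast):
                    break
                fast = new_fast
            return np.where(fast)[0], np.where(~fast)[0]

        cont, disc = partition([1e5, 2e4, 3.0, 0.5, 12.0])
        print("ODE (continuous) subset:", cont, "  SSA (discrete) subset:", disc)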

  15. Impact of pharmacy automation on patient waiting time: an application of computer simulation.

    PubMed

    Tan, Woan Shin; Chua, Siang Li; Yong, Keng Woh; Wu, Tuck Seng

    2009-06-01

    This paper aims to illustrate the use of computer simulation in evaluating the impact of a prototype automated dispensing system on waiting time in an outpatient pharmacy, and the potential of simulation as a routine tool in pharmacy management. A discrete event simulation model was developed to investigate the impact of a prototype automated dispensing system on operational efficiency and service standards in an outpatient pharmacy. The simulation results suggest that automating the prescription-filling function using a prototype that picks and packs at 20 seconds per item will not assist the pharmacy in achieving the waiting time target of 30 minutes for all patients. Regardless of the state of automation, to meet the waiting time target, 2 additional pharmacists are needed to overcome the process bottleneck at the point of medication dispensing. However, if automated dispensing is the preferred option, the speed of the system needs to be twice as fast as the current configuration to facilitate the reduction of the 95th percentile patient waiting time to below 30 minutes. The faster processing speed will concomitantly allow the pharmacy to reduce the number of pharmacy technicians from 11 to 8. Simulation was found to be a useful and low-cost method that allows an otherwise expensive and resource intensive evaluation of new work processes and technology to be completed within a short time.
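
    The kind of discrete event model used here can be sketched compactly (a first-come-first-served multi-server queue with invented rates, not the pharmacy's measured data):

        import numpy as np

        def pharmacy_wait(n_patients=20000, arrival_rate=2.0, service_rate=0.25,
                          n_staff=10, seed=0):
            """FCFS multi-server queue: each prescription goes to whichever of
            the n_staff dispensing stations frees up first. Rates are
            illustrative (per minute), not the paper's data."""
            rng = np.random.default_rng(seed)
            arrivals = np.cumsum(rng.exponential(1.0 / arrival_rate, n_patients))
            free_at = np.zeros(n_staff)         # time each station becomes free
            waits = np.empty(n_patients)
            for k, t in enumerate(arrivals):
                s = free_at.argmin()
                start = max(t, free_at[s])
                waits[k] = start - t
                free_at[s] = start + rng.exponential(1.0 / service_rate)
            return waits

        w = pharmacy_wait()
        print("mean wait %.1f min, 95th percentile %.1f min"
              % (w.mean(), np.percentile(w, 95)))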

  16. On the Fidelity of Semi-distributed Hydrologic Model Simulations for Large Scale Catchment Applications

    NASA Astrophysics Data System (ADS)

    Ajami, H.; Sharma, A.; Lakshmi, V.

    2017-12-01

    Application of semi-distributed hydrologic modeling frameworks is a viable alternative to fully distributed hyper-resolution hydrologic models, because they are computationally efficient while still resolving the fine-scale spatial structure of hydrologic fluxes and states. However, the fidelity of semi-distributed model simulations is impacted by (1) the formulation of hydrologic response units (HRUs), and (2) the aggregation of catchment properties for formulating simulation elements. Here, we evaluate the performance of a recently developed Soil Moisture and Runoff simulation Toolkit (SMART) for large catchment scale simulations. In SMART, topologically connected HRUs are delineated using thresholds obtained from topographic and geomorphic analysis of a catchment, and simulation elements are equivalent cross sections (ECSs) representative of a hillslope in first-order sub-basins. Earlier investigations have shown that formulation of ECSs at the scale of a first-order sub-basin reduces computational time significantly without compromising simulation accuracy. However, the implementation of this approach has not been fully explored for catchment scale simulations. To assess SMART performance, we set up the model over the Little Washita watershed in Oklahoma. Model evaluations using in-situ soil moisture observations show satisfactory model performance. In addition, we evaluated the performance of a number of soil moisture disaggregation schemes recently developed to provide spatially explicit soil moisture outputs at fine-scale resolution. Our results illustrate that the statistical disaggregation scheme performs significantly better than the methods based on topographic data. Future work is focused on assessing the performance of SMART using remotely sensed soil moisture observations and spatially based model evaluation metrics.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Do, Hainam, E-mail: h.do@nottingham.ac.uk, E-mail: richard.wheatley@nottingham.ac.uk; Wheatley, Richard J., E-mail: h.do@nottingham.ac.uk, E-mail: richard.wheatley@nottingham.ac.uk

    A robust and model-free Monte Carlo simulation method is proposed to address the challenge of computing the classical density of states and partition function of solids. Starting from the minimum configurational energy, the algorithm partitions the entire energy range in the increasing energy direction (“upward”) into subdivisions whose integrated density of states is known. When combined with the density of states computed from the “downward” energy partitioning approach [H. Do, J. D. Hirst, and R. J. Wheatley, J. Chem. Phys. 135, 174105 (2011)], the equilibrium thermodynamic properties can be evaluated at any temperature and in any phase. The method is illustrated in the context of the Lennard-Jones system and can readily be extended to other molecular systems and clusters for which the structures are known.

  18. Positive dwell time algorithm with minimum equal extra material removal in deterministic optical surfacing technology.

    PubMed

    Li, Longxiang; Xue, Donglin; Deng, Weijie; Wang, Xu; Bai, Yang; Zhang, Feng; Zhang, Xuejun

    2017-11-10

    In deterministic computer-controlled optical surfacing, accurate dwell time execution by computer numerical control machines is crucial for guaranteeing a high convergence ratio of the optical surface error. It is necessary to consider machine dynamics limitations in numerical dwell time algorithms. In this paper, these constraints on the dwell time distribution are analyzed, and a model of equal extra material removal is established. A positive dwell time algorithm with minimum equal extra material removal is developed. Results of simulations based on deterministic magnetorheological finishing demonstrate the necessity of considering machine dynamics performance and illustrate the validity of the proposed algorithm. Indeed, the algorithm effectively facilitates the determinacy of sub-aperture optical surfacing processes.

  19. Hyper-Systolic Processing on APE100/QUADRICS: n²-Loop Computations

    NASA Astrophysics Data System (ADS)

    Lippert, Thomas; Ritzenhöfer, Gero; Glaessner, Uwe; Hoeber, Henning; Seyfried, Armin; Schilling, Klaus

    We investigate the performance gains from hyper-systolic implementations of n²-loop problems on the massively parallel computer Quadrics, exploiting its three-dimensional interprocessor connectivity. For illustration we study the communication aspects of an exact molecular dynamics simulation of n particles with Coulomb (or gravitational) interactions. We compare the interprocessor communication costs of the standard-systolic and hyper-systolic approaches for various granularities. We predict gain factors as large as three on the Q4 and eight on the QH4, and measure actual performances on these machine configurations. We conclude that it appears feasible to investigate the thermodynamics of a full gravitating n-body problem with O(16,000) particles using the new method on a QH4 system.

  20. Application of CFD codes to the design and development of propulsion systems

    NASA Technical Reports Server (NTRS)

    Lord, W. K.; Pickett, G. F.; Sturgess, G. J.; Weingold, H. D.

    1987-01-01

    The internal flows of aerospace propulsion engines have certain common features that are amenable to analysis through Computational Fluid Dynamics (CFD) computer codes. Although the application of CFD to engineering problems in engines was delayed by the complexities associated with internal flows, many codes with different capabilities are now being used as routine design tools. This is illustrated by examples of aircraft gas turbine engine flows calculated with potential-flow, Euler, parabolized Navier-Stokes, and Navier-Stokes codes. Likely future directions of CFD applied to engine flows are described, and current barriers to continued progress are highlighted. The potential importance of the Numerical Aerodynamic Simulator (NAS) to the resolution of these difficulties is suggested.

  1. How to decompose arbitrary continuous-variable quantum operations.

    PubMed

    Sefi, Seckin; van Loock, Peter

    2011-10-21

    We present a general, systematic, and efficient method for decomposing any given exponential operator of bosonic mode operators, describing an arbitrary multimode Hamiltonian evolution, into a set of universal unitary gates. Although our approach is mainly oriented towards continuous-variable quantum computation, it may be used more generally whenever quantum states are to be transformed deterministically, e.g., in quantum control, discrete-variable quantum computation, or Hamiltonian simulation. We illustrate our scheme by presenting decompositions for various nonlinear Hamiltonians including quartic Kerr interactions. Finally, we conclude with two potential experiments utilizing offline-prepared optical cubic states and homodyne detections, in which quantum information is processed optically or in an atomic memory using quadratic light-atom interactions. © 2011 American Physical Society

  2. Stochastic Analysis of Reaction–Diffusion Processes

    PubMed Central

    Hu, Jifeng; Kang, Hye-Won

    2013-01-01

    Reaction and diffusion processes are used to model chemical and biological processes over a wide range of spatial and temporal scales. Several routes to the diffusion process at various levels of description in time and space are discussed and the master equation for spatially discretized systems involving reaction and diffusion is developed. We discuss an estimator for the appropriate compartment size for simulating reaction–diffusion systems and introduce a measure of fluctuations in a discretized system. We then describe a new computational algorithm for implementing a modified Gillespie method for compartmental systems in which reactions are aggregated into equivalence classes and computational cells are searched via an optimized tree structure. Finally, we discuss several examples that illustrate the issues that have to be addressed in general systems. PMID:23719732
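
    The compartment-based treatment of diffusion can be sketched as hop reactions simulated with the direct Gillespie method (a toy 1D version; the paper's reaction equivalence classes and optimized tree search are noted only in a comment):

        import numpy as np

        def rdme_hops(n0, D, h, t_end, seed=0):
            """Diffusion as first-order 'hop' reactions between adjacent
            compartments of size h, each molecule hopping at rate d = D/h**2,
            simulated with the direct Gillespie method (1D, reflecting ends)."""
            rng = np.random.default_rng(seed)
            n = np.array(n0, dtype=int)
            d, m, t = D / h**2, len(n0) - 1, 0.0
            while True:
                a = np.concatenate([d * n[:-1], d * n[1:]])  # right / left hops
                a0 = a.sum()
                if a0 == 0.0:
                    break
                t += rng.exponential(1.0 / a0)
                if t > t_end:
                    break
                r = rng.choice(len(a), p=a / a0)
                # a production code would aggregate reactions into equivalence
                # classes and sample via an optimized tree, not rng.choice
                if r < m:                    # hop right out of compartment r
                    n[r] -= 1; n[r + 1] += 1
                else:                        # hop left out of compartment r-m+1
                    n[r - m + 1] -= 1; n[r - m] += 1
            return n

        print(rdme_hops([500, 0, 0, 0, 0], D=1.0, h=0.5, t_end=0.5))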

  3. Hemispherical reflectance model for passive images in an outdoor environment.

    PubMed

    Kim, Charles C; Thai, Bea; Yamaoka, Neil; Aboutalib, Omar

    2015-05-01

    We present a hemispherical reflectance model for simulating passive images in an outdoor environment where illumination is provided by natural sources such as the sun and the clouds. While the bidirectional reflectance distribution function (BRDF) accurately produces radiance from any objects after the illumination, using the BRDF in calculating radiance requires double integration. Replacing the BRDF by hemispherical reflectance under the natural sources transforms the double integration into a multiplication. This reduces both storage space and computation time. We present the formalism for the radiance of the scene using hemispherical reflectance instead of BRDF. This enables us to generate passive images in an outdoor environment taking advantage of the computational and storage efficiencies. We show some examples for illustration.
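
    As a worked form of the simplification described above (written here for a single waveband and a diffuse-like surface), the full BRDF evaluation

        L_r(\omega_o) = \int_{2\pi} f_r(\omega_i, \omega_o) \, L_i(\omega_i) \cos\theta_i \, \mathrm{d}\omega_i

    collapses, when f_r is replaced by a hemispherical reflectance \rho_h under the natural-source illumination, to the multiplication

        L_r \approx \frac{\rho_h}{\pi} E, \qquad E = \int_{2\pi} L_i(\omega_i) \cos\theta_i \, \mathrm{d}\omega_i

    which is the storage- and time-saving replacement of the hemispherical double integral by a product.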

  4. Fast and Adaptive Sparse Precision Matrix Estimation in High Dimensions

    PubMed Central

    Liu, Weidong; Luo, Xi

    2014-01-01

    This paper proposes a new method for estimating sparse precision matrices in the high dimensional setting. It has been popular to study fast computation and adaptive procedures for this problem. We propose a novel approach, called Sparse Column-wise Inverse Operator, to address these two issues. We analyze an adaptive procedure based on cross validation, and establish its convergence rate under the Frobenius norm. The convergence rates under other matrix norms are also established. This method also enjoys the advantage of fast computation for large-scale problems, via a coordinate descent algorithm. Numerical merits are illustrated using both simulated and real datasets. In particular, it performs favorably on an HIV brain tissue dataset and an ADHD resting-state fMRI dataset. PMID:25750463

  5. Biological materials by design.

    PubMed

    Qin, Zhao; Dimas, Leon; Adler, David; Bratzel, Graham; Buehler, Markus J

    2014-02-19

    In this topical review we discuss recent advances in the use of physical insight into the way biological materials function, to design novel engineered materials 'from scratch', or from the level of fundamental building blocks upwards and by using computational multiscale methods that link chemistry to material function. We present studies that connect advances in multiscale hierarchical material structuring with material synthesis and testing, review case studies of wood and other biological materials, and illustrate how engineered fiber composites and bulk materials are designed, modeled, and then synthesized and tested experimentally. The integration of experiment and simulation in multiscale design opens new avenues to explore the physics of materials from a fundamental perspective, and using complementary strengths from models and empirical techniques. Recent developments in this field illustrate a new paradigm by which complex material functionality is achieved through hierarchical structuring in spite of simple material constituents.

  6. Simulations & Measurements of Airframe Noise: A BANC Workshops Perspective

    NASA Technical Reports Server (NTRS)

    Choudhari, Meelan; Lockard, David

    2016-01-01

    Airframe noise corresponds to the acoustic radiation due to turbulent flow in the vicinity of airframe components such as high-lift devices and landing gears. Since 2010, the American Institute of Aeronautics and Astronautics has organized an ongoing series of workshops devoted to Benchmark Problems for Airframe Noise Computations (BANC). The BANC workshops are aimed at enabling systematic progress in the understanding and high-fidelity prediction of airframe noise via collaborative investigations that integrate computational fluid dynamics, computational aeroacoustics, and in-depth measurements targeting a selected set of canonical yet realistic configurations that advance the current state of the art in multiple respects. Unique features of the BANC workshops include: an intrinsically multi-disciplinary focus involving both fluid dynamics and aeroacoustics; a holistic rather than predictive emphasis; concurrent, long-term evolution of experiments and simulations with a powerful interplay between the two; and a strongly integrative nature by virtue of multi-team, multi-facility, multiple-entry measurements. This paper illustrates these features in the context of the BANC problem categories and outlines some of the challenges involved and how they were addressed. A brief summary of the BANC effort, including its technical objectives, strategy, and selected outcomes thus far, is also included.

  7. Density functional theory in the solid state

    PubMed Central

    Hasnip, Philip J.; Refson, Keith; Probert, Matt I. J.; Yates, Jonathan R.; Clark, Stewart J.; Pickard, Chris J.

    2014-01-01

    Density functional theory (DFT) has been used in many fields of the physical sciences, but none so successfully as in the solid state. From its origins in condensed matter physics, it has expanded into materials science, high-pressure physics and mineralogy, solid-state chemistry and more, powering entire computational subdisciplines. Modern DFT simulation codes can calculate a vast range of structural, chemical, optical, spectroscopic, elastic, vibrational and thermodynamic phenomena. The ability to predict structure–property relationships has revolutionized experimental fields, such as vibrational and solid-state NMR spectroscopy, where it is the primary method to analyse and interpret experimental spectra. In semiconductor physics, great progress has been made in the electronic structure of bulk and defect states despite the severe challenges presented by the description of excited states. Studies are no longer restricted to known crystallographic structures. DFT is increasingly used as an exploratory tool for materials discovery and computational experiments, culminating in ex nihilo crystal structure prediction, which addresses the long-standing difficult problem of how to predict crystal structure polymorphs from nothing but a specified chemical composition. We present an overview of the capabilities of solid-state DFT simulations in all of these topics, illustrated with recent examples using the CASTEP computer program. PMID:24516184

  8. Target-type probability combining algorithms for multisensor tracking

    NASA Astrophysics Data System (ADS)

    Wigren, Torbjorn

    2001-08-01

    Algorithms for the handling of target type information in an operational multi-sensor tracking system are presented. The paper discusses recursive target type estimation, computation of crosses from passive data (strobe track triangulation), and the computation of the quality of the crosses for deghosting purposes. The focus is on Bayesian algorithms that operate in the discrete target type probability space, and on the approximations introduced for computational complexity reduction. The centralized algorithms are able to fuse discrete data from a variety of sensors and information sources, including IFF equipment, ESMs, IRSTs, as well as flight envelopes estimated from track data. All algorithms are asynchronous and can be tuned to handle clutter, erroneous associations, and missed and erroneous detections. A key to obtaining this ability is the inclusion of data forgetting, via a procedure for propagating target type probability states between measurement time instances. Other important properties of the algorithms are their ability to handle ambiguous data and scenarios. The above aspects are illustrated in a simulation study. The simulation setup includes 46 air targets of 6 different types that are tracked by 5 airborne sensor platforms using ESMs and IRSTs as data sources.

  9. Drag Reduction of an Airfoil Using Deep Learning

    NASA Astrophysics Data System (ADS)

    Jiang, Chiyu; Sun, Anzhu; Marcus, Philip

    2017-11-01

    We reduced the drag of a 2D airfoil, starting from a NACA-0012 profile, using deep learning methods. We created a database consisting of simulations of 2D external flow over randomly generated shapes. We then developed a machine learning framework for inferring the external flow field given an input shape. Past work that utilized machine learning in Computational Fluid Dynamics focused on estimating specific flow parameters; this work is novel in inferring entire flow fields. We further showed that learned flow patterns are transferable to cases that share certain similarities. This study illustrates the prospects of deeper integration of data-based modeling into current CFD simulation frameworks for faster flow inference and more accurate flow modeling.

  10. Global exponential periodicity and stability of discrete-time complex-valued recurrent neural networks with time-delays.

    PubMed

    Hu, Jin; Wang, Jun

    2015-06-01

    In recent years, complex-valued recurrent neural networks have been developed and analysed in depth, in view of their good modelling performance for some applications involving complex-valued elements. In implementing continuous-time dynamical systems for simulation or computational purposes, it is necessary to utilize a discrete-time model that is an analogue of the continuous-time system. In this paper, we analyse a discrete-time complex-valued recurrent neural network model and obtain sufficient conditions for its global exponential periodicity and exponential stability. Simulation results for several numerical examples are presented to illustrate the theoretical results, and an application to associative memory is also given. Copyright © 2015 Elsevier Ltd. All rights reserved.

  11. A Parametric k-Means Algorithm

    PubMed Central

    Tarpey, Thaddeus

    2007-01-01

    The k points that optimally represent a distribution (usually in terms of a squared error loss) are called the k principal points. This paper presents a computationally intensive method that automatically determines the principal points of a parametric distribution. Cluster means from the k-means algorithm are nonparametric estimators of principal points. A parametric k-means approach is introduced for estimating principal points by running the k-means algorithm on a very large simulated data set from a distribution whose parameters are estimated using maximum likelihood. Theoretical and simulation results are presented comparing the parametric k-means algorithm to the usual k-means algorithm, and an example on determining sizes of gas masks is used to illustrate the parametric k-means algorithm. PMID:17917692
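
    The parametric k-means recipe is short enough to sketch end to end (univariate normal case; a hand-rolled Lloyd iteration is used in place of a library k-means):

        import numpy as np

        def kmeans_1d(data, k, iters=100, seed=0):
            """Plain Lloyd iteration, hand-rolled so the sketch depends only
            on numpy."""
            rng = np.random.default_rng(seed)
            centers = data[rng.choice(len(data), k, replace=False)]
            for _ in range(iters):
                labels = np.abs(data[:, None] - centers[None, :]).argmin(axis=1)
                centers = np.array([data[labels == j].mean() for j in range(k)])
            return np.sort(centers)

        rng = np.random.default_rng(1)
        sample = rng.normal(10.0, 2.0, size=500)      # the observed data
        mu, sigma = sample.mean(), sample.std()       # normal MLE
        big = rng.normal(mu, sigma, size=100_000)     # large simulated sample
        print("estimated principal points:", kmeans_1d(big, k=2))
        # check: for k = 2 the principal points of N(mu, sigma^2) are known
        # to be mu +/- sigma*sqrt(2/pi)
        print("theoretical:", mu - sigma * np.sqrt(2 / np.pi),
              mu + sigma * np.sqrt(2 / np.pi))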

  12. CFD Modeling of a CFB Riser Using Improved Inlet Boundary Conditions

    NASA Astrophysics Data System (ADS)

    Peng, B. T.; Zhang, C.; Zhu, J. X.; Qi, X. B.

    2010-03-01

    A computational fluid dynamics (CFD) model based on Eulerian-Eulerian approach coupled with granular kinetics theory was adopted to investigate the hydrodynamics and flow structures in a circulating fluidized bed (CFB) riser column. A new approach to specify the inlet boundary conditions was proposed in this study to simulate gas-solids flow in CFB risers more accurately. Simulation results were compared with the experimental data, and good agreement between the numerical results and experimental data was observed under different operating conditions, which indicates the effectiveness and accuracy of the CFD model with the proposed inlet boundary conditions. The results also illustrate a clear core annulus structure in the CFB riser under all operating conditions both experimentally and numerically.

  13. Simulation on Natural Convection of a Nanofluid along an Isothermal Inclined Plate

    NASA Astrophysics Data System (ADS)

    Mitra, Asish

    2017-08-01

    A numerical algorithm is presented for studying laminar natural convection flow of a nanofluid along an isothermal inclined plate. By means of a similarity transformation, the original nonlinear partial differential equations of the flow are transformed into a set of nonlinear ordinary differential equations. Subsequently they are reduced to a first-order system and integrated using Newton-Raphson and adaptive Runge-Kutta methods. Computer codes for this numerical analysis are developed in the MATLAB environment. Dimensionless velocity and temperature profiles and the nanoparticle concentration for various angles of inclination are illustrated graphically. The effects of the Prandtl number, the Brownian motion parameter, and the thermophoresis parameter on the Nusselt number are also discussed. The results of the present simulation are then compared with previous results available in the literature, with good agreement.
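
    The shooting approach described above can be illustrated on a classical stand-in problem (the Blasius boundary layer rather than the paper's nanofluid equations, and bracketing root finding in place of Newton-Raphson for robustness):

        import numpy as np
        from scipy.integrate import solve_ivp
        from scipy.optimize import brentq

        # Blasius-type similarity problem: f''' + 0.5*f*f'' = 0,
        # f(0) = f'(0) = 0, f'(inf) = 1. Shoot on the unknown f''(0).
        def rhs(eta, y):                      # y = (f, f', f'')
            return [y[1], y[2], -0.5 * y[0] * y[2]]

        def residual(s, eta_max=10.0):
            """Mismatch of the far-field condition for a guessed f''(0) = s,
            integrated with an adaptive Runge-Kutta scheme (RK45)."""
            sol = solve_ivp(rhs, [0.0, eta_max], [0.0, 0.0, s],
                            rtol=1e-8, atol=1e-10)
            return sol.y[1, -1] - 1.0

        s = brentq(residual, 0.1, 1.0)        # bracketed root for f''(0)
        print("f''(0) = %.5f (classical Blasius value ~0.33206)" % s)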

  14. A Multi-Agent Approach to the Simulation of Robotized Manufacturing Systems

    NASA Astrophysics Data System (ADS)

    Foit, K.; Gwiazda, A.; Banaś, W.

    2016-08-01

    The recent years of eventful industry development have brought many competing products addressed to the same market segment. Shortening the development cycle has become a necessity for a company that wants to remain competitive. With the switch to the Intelligent Manufacturing model, industry is searching for new scheduling algorithms, as the traditional ones no longer meet current requirements. The agent-based approach has been considered by many researchers as an important direction in the evolution of modern manufacturing systems. Due to the properties of multi-agent systems, this methodology is very helpful in creating models of production systems, allowing both the processing and informational parts to be depicted. The complexity of such an approach makes analysis impossible without computer assistance. Computer simulation still uses a mathematical model to recreate a real situation, but nowadays 2D or 3D virtual environments, or even virtual reality, are used for realistic illustration of the considered systems. This paper focuses on robotized manufacturing systems and presents one possible approach to the simulation of such systems. The selection of the multi-agent approach is motivated by the flexibility of this solution, which offers modularity, robustness and autonomy.

  15. Accelerating the Design of Solar Thermal Fuel Materials through High Throughput Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Y; Grossman, JC

    2014-12-01

    Solar thermal fuels (STF) store the energy of sunlight, which can then be released later in the form of heat, offering an emission-free and renewable solution for both solar energy conversion and storage. However, this approach is currently limited by the lack of low-cost materials with high energy density and high stability. In this Letter, we present an ab initio high-throughput computational approach to accelerate the design process and allow for searches over a broad class of materials. The high-throughput screening platform we have developed can run through large numbers of molecules composed of earth-abundant elements and identifies possible metastable structures of a given material. Corresponding isomerization enthalpies associated with the metastable structures are then computed. Using this high-throughput simulation approach, we have discovered molecular structures with high isomerization enthalpies that have the potential to be new candidates for high-energy density STF. We have also discovered physical principles to guide further STF materials design through structural analysis. More broadly, our results illustrate the potential of using high-throughput ab initio simulations to design materials that undergo targeted structural transitions.

  16. Detecting vapour bubbles in simulations of metastable water

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    González, Miguel A.; Abascal, Jose L. F.; Valeriani, Chantal, E-mail: christoph.dellago@univie.ac.at, E-mail: cvaleriani@quim.ucm.es

    2014-11-14

    The investigation of cavitation in metastable liquids with molecular simulations requires an appropriate definition of the volume of the vapour bubble forming within the metastable liquid phase. Commonly used approaches for bubble detection exhibit two significant flaws: first, when applied to water they often identify the voids within the hydrogen bond network as bubbles, thus masking the signature of emerging bubbles, and second, they lack thermodynamic consistency. Here, we present two grid-based methods, the M-method and the V-method, to detect bubbles in metastable water, specifically designed to address these shortcomings. The M-method incorporates information about neighbouring grid cells to distinguish between liquid-like and vapour-like cells, which allows for very sensitive detection of small bubbles and high spatial resolution of the detected bubbles. The V-method is calibrated such that its estimates for the bubble volume correspond to the average change in system volume and are thus thermodynamically consistent. Both methods are computationally inexpensive, such that they can be used in molecular dynamics and Monte Carlo simulations of cavitation. We illustrate them by computing the free energy barrier and the size of the critical bubble for cavitation in water at negative pressure.

  17. Modelling cell motility and chemotaxis with evolving surface finite elements

    PubMed Central

    Elliott, Charles M.; Stinner, Björn; Venkataraman, Chandrasekhar

    2012-01-01

    We present a mathematical and a computational framework for the modelling of cell motility. The cell membrane is represented by an evolving surface, with the movement of the cell determined by the interaction of various forces that act normal to the surface. We consider external forces such as those that may arise owing to inhomogeneities in the medium and a pressure that constrains the enclosed volume, as well as internal forces that arise from the reaction of the cells' surface to stretching and bending. We also consider a protrusive force associated with a reaction–diffusion system (RDS) posed on the cell membrane, with cell polarization modelled by this surface RDS. The computational method is based on an evolving surface finite-element method. The general method can account for the large deformations that arise in cell motility and allows the simulation of cell migration in three dimensions. We illustrate applications of the proposed modelling framework and numerical method by reporting on numerical simulations of a model for eukaryotic chemotaxis and a model for the persistent movement of keratocytes in two and three space dimensions. Movies of the simulated cells can be obtained from http://homepages.warwick.ac.uk/∼maskae/CV_Warwick/Chemotaxis.html. PMID:22675164

  18. Pulsed field gradients in simulations of one- and two-dimensional NMR spectra.

    PubMed

    Meresi, G H; Cuperlovic, M; Palke, W E; Gerig, J T

    1999-03-01

    A method for the inclusion of the effects of z-axis pulsed field gradients in computer simulations of an arbitrary pulsed NMR experiment with spin-1/2 nuclei is described. Recognizing that the phase acquired by a coherence following the application of a z-axis pulsed field gradient bears a fixed relation to its order and the spatial position of the spins in the sample tube, the sample is regarded as a collection of volume elements, each phase-encoded by a characteristic, spatially dependent precession frequency. The evolution of the sample's density matrix is thus obtained by computing the evolution of the density matrix for each volume element. Following the last gradient pulse, these density matrices are combined to form a composite density matrix which evolves through the rest of the experiment to yield the observable signal. This approach is implemented in a program which includes capabilities for rigorous inclusion of spin relaxation by dipole-dipole, chemical shift anisotropy, and random field mechanisms, plus the effects of arbitrary RF fields. Mathematical procedures for accelerating these calculations are described. The approach is illustrated by simulations of representative one- and two-dimensional NMR experiments. Copyright 1999 Academic Press.
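
    The volume-element phase-encoding idea can be demonstrated numerically (a toy calculation with an invented gradient area and sample size; relaxation and RF effects are omitted):

        import numpy as np

        # The sample is a stack of volume elements along z; a z-gradient pulse
        # of "area" k = gamma*G*delta gives a coherence of order p in each
        # element the position-dependent phase p*k*z. Summing the elements
        # shows which coherence pathways survive the gradients.
        k_area = 5000.0                       # illustrative gradient area (rad/m)
        z = np.linspace(-0.01, 0.01, 201)     # volume-element positions (m)

        def apply_gradient(amps, p, k):
            """Phase-encode a coherence of order p in every volume element."""
            return amps * np.exp(1j * p * k * z)

        m = np.ones_like(z, dtype=complex)    # uniform coherence, order p = +1
        m = apply_gradient(m, p=+1, k=k_area)
        print("after one gradient   |sum| = %.3f" % abs(m.mean()))  # dephased
        m = apply_gradient(m, p=-1, k=k_area) # order inverted by a 180 pulse
        print("after refocusing     |sum| = %.3f" % abs(m.mean()))  # restored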

  19. Modeling RF-induced Plasma-Surface Interactions with VSim

    NASA Astrophysics Data System (ADS)

    Jenkins, Thomas G.; Smithe, David N.; Pankin, Alexei Y.; Roark, Christine M.; Stoltz, Peter H.; Zhou, Sean C.-D.; Kruger, Scott E.

    2014-10-01

    An overview of ongoing enhancements to the Plasma Discharge (PD) module of Tech-X's VSim software tool is presented. A sub-grid kinetic sheath model, developed for the accurate computation of sheath potentials near metal and dielectric-coated walls, enables the physical effects of DC and RF sheath dynamics to be included in macroscopic-scale plasma simulations that need not explicitly resolve sheath scale lengths. Sheath potential evolution, together with particle behavior near the sheath (e.g. sputtering), can thus be simulated in complex, experimentally relevant geometries. Simulations of RF sheath-enhanced impurity production near surfaces of the C-Mod field-aligned ICRF antenna are presented to illustrate the model; impurity mitigation techniques are also explored. Model extensions to capture the physics of secondary electron emission and of multispecies plasmas are summarized, together with a discussion of improved tools for plasma chemistry and IEDF/EEDF visualization and modeling. The latter tools are also highly relevant for commercial plasma processing applications. Ultimately, we aim to establish VSimPD as a robust, efficient computational tool for modeling fusion and industrial plasma processes. Supported by U.S. DoE SBIR Phase I/II Award DE-SC0009501.

  20. Gemini Rendezvous Docking Simulator

    NASA Image and Video Library

    1964-05-11

    Gemini Rendezvous Docking Simulator suspended from the roof of the Langley Research Center's aircraft hangar. Francis B. Smith wrote: The rendezvous and docking operation of the Gemini spacecraft with the Agena and of the Apollo Command Module with the Lunar Excursion Module have been the subject of simulator studies for several years. This figure illustrates the Gemini-Agena rendezvous docking simulator at Langley. The Gemini spacecraft was supported in a gimbal system by an overhead crane and gantry arrangement which provided 6 degrees of freedom - roll, pitch, yaw, and translation in any direction - all controllable by the astronaut in the spacecraft. Here again the controls fed into a computer which in turn provided an input to the servos driving the spacecraft so that it responded to control motions in a manner which accurately simulated the Gemini spacecraft. -- Published in Barton C. Hacker and James M. Grimwood, On the Shoulders of Titans: A History of Project Gemini, NASA SP-4203 Francis B. Smith, Simulators for Manned Space Research, Paper presented at the 1966 IEEE International convention, March 21-25, 1966.

  1. GADEN: A 3D Gas Dispersion Simulator for Mobile Robot Olfaction in Realistic Environments.

    PubMed

    Monroy, Javier; Hernandez-Bennets, Victor; Fan, Han; Lilienthal, Achim; Gonzalez-Jimenez, Javier

    2017-06-23

    This work presents a simulation framework developed under the widely used Robot Operating System (ROS) to enable the validation of robotics systems and gas sensing algorithms under realistic environments. The framework is rooted in the principles of computational fluid dynamics and filament dispersion theory, modeling wind flow and gas dispersion in 3D real-world scenarios (i.e., accounting for walls, furniture, etc.). Moreover, it integrates the simulation of different environmental sensors, such as metal oxide gas sensors, photo ionization detectors, or anemometers. We illustrate the potential and applicability of the proposed tool by presenting a simulation case in a complex and realistic office-like environment where gas leaks of different chemicals occur simultaneously. Furthermore, we accomplish quantitative and qualitative validation by comparing our simulated results against real-world data recorded inside a wind tunnel where methane was released under different wind flow profiles. Based on these results, we conclude that our simulation framework can provide a good approximation to real world measurements when advective airflows are present in the environment.

  2. GADEN: A 3D Gas Dispersion Simulator for Mobile Robot Olfaction in Realistic Environments

    PubMed Central

    Hernandez-Bennetts, Victor; Fan, Han; Lilienthal, Achim; Gonzalez-Jimenez, Javier

    2017-01-01

    This work presents a simulation framework developed under the widely used Robot Operating System (ROS) to enable the validation of robotics systems and gas sensing algorithms under realistic environments. The framework is rooted in the principles of computational fluid dynamics and filament dispersion theory, modeling wind flow and gas dispersion in 3D real-world scenarios (i.e., accounting for walls, furniture, etc.). Moreover, it integrates the simulation of different environmental sensors, such as metal oxide gas sensors, photo ionization detectors, or anemometers. We illustrate the potential and applicability of the proposed tool by presenting a simulation case in a complex and realistic office-like environment where gas leaks of different chemicals occur simultaneously. Furthermore, we accomplish quantitative and qualitative validation by comparing our simulated results against real-world data recorded inside a wind tunnel where methane was released under different wind flow profiles. Based on these results, we conclude that our simulation framework can provide a good approximation to real world measurements when advective airflows are present in the environment. PMID:28644375

  3. Design, Generation and Tooth Contact Analysis (TCA) of Asymmetric Face Gear Drive With Modified Geometry

    NASA Technical Reports Server (NTRS)

    Litvin, Faydor L.; Fuentes, Alfonso; Hawkins, J. M.; Handschuh, Robert F.

    2001-01-01

    A new type of face gear drive for application in transmissions, particularly in helicopters, has been developed. The new geometry differs from the existing geometry by the application of asymmetric profiles and a double-crowned pinion in the face gear mesh. The paper describes the computerized design, simulation of meshing and contact, and stress analysis by the finite element method. Special purpose computer codes have been developed to conduct the analysis. The analysis of this new type of face gear is illustrated with a numerical example.

  4. One-dimensional continuum electronic structure with the density-matrix renormalization group and its implications for density-functional theory.

    PubMed

    Stoudenmire, E M; Wagner, Lucas O; White, Steven R; Burke, Kieron

    2012-08-03

    We extend the density matrix renormalization group to compute exact ground states of continuum many-electron systems in one dimension with long-range interactions. We find the exact ground state of a chain of 100 strongly correlated artificial hydrogen atoms. The method can be used to simulate 1D cold atom systems and to study density-functional theory in an exact setting. To illustrate, we find an interacting, extended system which is an insulator but whose Kohn-Sham system is metallic.

  5. Relative entropy and optimization-driven coarse-graining methods in VOTCA

    DOE PAGES

    Mashayak, S. Y.; Jochum, Mara N.; Koschke, Konstantin; ...

    2015-07-20

    We discuss recent advances of the VOTCA package for systematic coarse-graining. Two methods have been implemented, namely the downhill simplex optimization and the relative entropy minimization. We illustrate the new methods by coarse-graining SPC/E bulk water and more complex water-methanol mixture systems. The CG potentials obtained from both methods are then evaluated by comparing the pair distributions from the coarse-grained to the reference atomistic simulations. We have also added a parallel analysis framework to improve the computational efficiency of the coarse-graining process.

  6. Enhancing Privacy in Participatory Sensing Applications with Multidimensional Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Forrest, Stephanie; He, Wenbo; Groat, Michael

    2013-01-01

    Participatory sensing applications rely on individuals to share personal data to produce aggregated models and knowledge. In this setting, privacy concerns can discourage widespread adoption of new applications. We present a privacy-preserving participatory sensing scheme based on negative surveys for both continuous and multivariate categorical data. Without relying on encryption, our algorithms enhance the privacy of sensed data in an energy- and computation-efficient manner. Simulations and an implementation on Android smart phones illustrate how multidimensional data can be aggregated in a useful and privacy-enhancing manner.
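
    The negative-survey mechanism the scheme builds on can be sketched for categorical data (a minimal reconstruction demo; the paper's continuous and multidimensional extensions are not shown):

        import numpy as np

        def negative_survey(true_cats, t, rng):
            """Each participant reports, uniformly at random, one category
            that is NOT theirs -- the essence of a negative survey."""
            reports = np.empty(len(true_cats), dtype=int)
            for i, c in enumerate(true_cats):
                r = rng.integers(t - 1)
                reports[i] = r if r < c else r + 1   # skip the true category
            return np.bincount(reports, minlength=t)

        rng = np.random.default_rng(0)
        t = 4
        counts = np.array([500, 300, 150, 50])   # hidden true histogram
        truth = np.repeat(np.arange(t), counts)
        y = negative_survey(truth, t, rng)
        n_hat = len(truth) - (t - 1) * y          # unbiased reconstruction
        print("estimated:", n_hat, " true:", counts)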

  7. Synthetic battery cycling techniques

    NASA Technical Reports Server (NTRS)

    Leibecki, H. F.; Thaller, L. H.

    1982-01-01

    Synthetic battery cycling makes use of the fast-growing capability of computer graphics to illustrate some of the basic operating characteristics of individual electrodes within an operating electrochemical cell. It can also simulate the operation of an entire string of cells used as the energy storage subsystem of a power system. The group of techniques referred to collectively as synthetic battery cycling was developed in part to bridge the gap in understanding between single-cell characteristics and battery-system behavior.

  8. Communication: An exact bound on the bridge function in integral equation theories.

    PubMed

    Kast, Stefan M; Tomazic, Daniel

    2012-11-07

    We show that the formal solution of the general closure relation occurring in Ornstein-Zernike-type integral equation theories in terms of the Lambert W function leads to an exact relation between the bridge function and correlation functions, most notably to an inequality that bounds possible bridge values. The analytical results are illustrated on the example of the Lennard-Jones fluid for which the exact bridge function is known from computer simulations under various conditions. The inequality has consequences for the development of bridge function models and rationalizes numerical convergence issues.
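
    For context (standard integral-equation notation, not the paper's full Lambert-W derivation): the general closure writes the pair correlation function as

        g(r) = \exp\left[ -\beta u(r) + \gamma(r) + B(r) \right], \qquad \gamma(r) = h(r) - c(r),

    so the bridge function follows formally from the correlation functions as

        B(r) = \ln g(r) + \beta u(r) - \gamma(r).

    The paper's Lambert-W analysis goes further, turning this formal solution into an inequality that bounds admissible bridge values.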

  9. Implementing secure laptop-based testing in an undergraduate nursing program: a case study.

    PubMed

    Tao, Jinyuan; Lorentz, B Chris; Hawes, Stacey; Rugless, Fely; Preston, Janice

    2012-07-01

    This article presents the implementation of secure laptop-based testing in an undergraduate nursing program. Details on how to design, develop, implement, and secure tests are discussed. The laptop-based testing model is also compared with the computer-laboratory-based testing model. Five elements of the laptop-based testing model are illustrated: (1) it simulates the national board examination, (2) security is achievable, (3) it is convenient for both instructors and students, (4) it provides students with hands-on practice, and (5) continuous technical support is key.

  10. A novel joint-processing adaptive nonlinear equalizer using a modular recurrent neural network for chaotic communication systems.

    PubMed

    Zhao, Haiquan; Zeng, Xiangping; Zhang, Jiashu; Liu, Yangguang; Wang, Xiaomin; Li, Tianrui

    2011-01-01

    To eliminate nonlinear channel distortion in chaotic communication systems, a novel joint-processing adaptive nonlinear equalizer based on a pipelined recurrent neural network (JPRNN) is proposed, using a modified real-time recurrent learning (RTRL) algorithm. Furthermore, an adaptive amplitude RTRL algorithm is adopted to overcome the deteriorating effect introduced by the nesting process. Computer simulations illustrate that the proposed equalizer outperforms the pipelined recurrent neural network (PRNN) and recurrent neural network (RNN) equalizers. Copyright © 2010 Elsevier Ltd. All rights reserved.
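
    As a point of reference for the learning rule, the sketch below implements plain RTRL for one small fully recurrent network acting as a channel equalizer. It omits the pipelined modular structure and the adaptive-amplitude modification that are the paper's contributions; the channel model and all hyperparameters are hypothetical.

      # Plain RTRL equalizer sketch (generic, not the paper's JPRNN).
      import numpy as np

      rng = np.random.default_rng(1)
      H = 4                                      # recurrent units; unit 0 is output
      W = 0.1 * rng.standard_normal((H, H + 2))  # columns: H states, input, bias
      P = np.zeros((H, H, H + 2))                # P[k,i,j] = dy_k / dW[i,j]
      y = np.zeros(H)
      eta = 0.05

      # Hypothetical channel: one-tap intersymbol interference plus a cubic term.
      symbols = rng.choice([-1.0, 1.0], size=2000)
      v = 0.8 * symbols + 0.4 * np.concatenate(([0.0], symbols[:-1]))
      received = v - 0.1 * v ** 3

      for t in range(symbols.size):
          z = np.concatenate([y, [received[t], 1.0]])
          s = W @ z
          y_new = np.tanh(s)
          dfds = 1.0 - y_new ** 2
          # RTRL recursion: P'[k,i,j] = f'(s_k)(delta_ki z_j + sum_l W[k,l] P[l,i,j])
          kron = np.zeros_like(P)
          for i in range(H):
              kron[i, i, :] = z
          P = dfds[:, None, None] * (kron + np.einsum("kl,lij->kij", W[:, :H], P))
          y = y_new
          err = symbols[t] - y[0]                # train unit 0 to recover the symbol
          W += eta * err * P[0]                  # instantaneous gradient step

      print("final instantaneous error:", err)

    RTRL carries the full sensitivity tensor P forward in time, which is exactly the cost that pipelined and modular variants such as the paper's JPRNN are designed to reduce.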

  11. Middleware enabling computational self-reflection: exploring the need for and some costs of selfreflecting networks with application to homeland defense

    NASA Astrophysics Data System (ADS)

    Kramer, Michael J.; Bellman, Kirstie L.; Landauer, Christopher

    2002-07-01

    This paper reviews and examines the definitions of Self-Reflection and Active Middleware. It then presents a conceptual framework for understanding and enumerating the costs of Self-Reflection and Active Middleware at increasing levels of application, and reviews some applications of Self-Reflection and Active Middleware to simulations. Finally, it considers the application, and the additional kinds of costs, of applying Self-Reflection and Active Middleware to sharing information among the organizations expected to participate in Homeland Defense.

  12. State trajectories used to observe and control dc-to-dc converters

    NASA Technical Reports Server (NTRS)

    Burns, W. W., III; Wilson, T. G.

    1976-01-01

    State-plane analysis techniques are employed to study the voltage step-up energy-storage dc-to-dc converter. Within this framework, an example converter operating under the influence of a constant on-time controller and a constant-frequency controller is examined. Qualitative insight gained through this approach is used to develop a conceptual free-running control law for the voltage step-up converter which can achieve steady-state operation in one on/off cycle of control. Digital computer simulation data are presented to illustrate and verify the theoretical discussion.
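
    A minimal simulation makes the state-plane picture concrete: integrate the ideal boost (voltage step-up) converter equations under a fixed-frequency, fixed-duty controller and record the (voltage, current) trajectory. The component values below are illustrative, not taken from the paper.

      # Ideal boost converter state-plane sketch (illustrative values only).
      import numpy as np

      Vin, L, C, R = 10.0, 200e-6, 100e-6, 25.0  # volts, henries, farads, ohms
      f_sw, duty = 20e3, 0.5                     # switching frequency, duty ratio
      dt = 1e-7
      steps = int(0.02 / dt)                     # simulate 20 ms

      i, v = 0.0, 0.0
      traj = np.empty((steps, 2))
      for n in range(steps):
          on = ((n * dt) * f_sw) % 1.0 < duty    # switch state at this instant
          if on:                                  # inductor charges; C feeds load
              di, dv = Vin / L, -v / (R * C)
          else:                                   # inductor discharges into C, load
              di, dv = (Vin - v) / L, (i - v / R) / C
          i += di * dt
          v += dv * dt
          traj[n] = (v, i)

      # traj traces the (v, i) state-plane orbit: it spirals into a closed
      # steady-state cycle around v = Vin / (1 - duty) = 20 V for an ideal converter.
      print(traj[-1])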

  13. Case studies in Bayesian microbial risk assessments.

    PubMed

    Kennedy, Marc C; Clough, Helen E; Turner, Joanne

    2009-12-21

    The quantification of uncertainty and variability is a key component of quantitative risk analysis. Recent advances in Bayesian statistics make it ideal for integrating multiple sources of information, of different types and quality, and providing a realistic estimate of the combined uncertainty in the final risk estimates. We present two case studies related to foodborne microbial risks. In the first, we combine models to describe the sequence of events resulting in illness from consumption of milk contaminated with VTEC O157. We used Monte Carlo simulation to propagate uncertainty in some of the inputs to computer models describing the farm and pasteurisation process. Resulting simulated contamination levels were then assigned to consumption events from a dietary survey. Finally, we accounted for uncertainty in the dose-response relationship and uncertainty due to limited incidence data to derive uncertainty about yearly incidences of illness in young children. Options for altering the risk were considered by running the model with different hypothetical policy-driven exposure scenarios. In the second case study we illustrate an efficient Bayesian sensitivity analysis for identifying the most important parameters of a complex computer code that simulated VTEC O157 prevalence within a managed dairy herd. This was carried out in two stages: first to screen out the unimportant inputs, then to perform a more detailed analysis on the remaining inputs. The method works by building a Bayesian statistical approximation to the computer code using a number of known code input/output pairs (training runs). We estimated that the expected total number of children aged 1.5-4.5 years who become ill due to VTEC O157 in milk is 8.6 per year, with a 95% uncertainty interval of (0, 11.5). The most extreme policy we considered was banning on-farm pasteurisation of milk, which reduced the estimate to 6.4 with a 95% interval of (0, 11). In the second case study, the effective number of inputs was reduced from 30 to 7 in the screening stage, and just 2 inputs were found to explain 82.8% of the output variance. A combined total of 500 runs of the computer code were used. These case studies illustrate the use of Bayesian statistics to perform detailed uncertainty and sensitivity analyses, integrating multiple information sources in a way that is both rigorous and efficient.
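
    The Monte Carlo propagation step in the first case study follows a generic pattern: draw inputs from their uncertainty distributions, push each draw through the exposure and dose-response models, and summarize the resulting risk distribution. The toy sketch below shows the pattern only; every distribution and parameter value is a hypothetical placeholder, not the study's model.

      # Toy Monte Carlo uncertainty propagation (all values hypothetical).
      import numpy as np

      rng = np.random.default_rng(42)
      n = 100_000

      # Uncertain inputs: contamination (log-normal), consumption volume (gamma).
      conc = rng.lognormal(mean=-2.0, sigma=1.0, size=n)   # organisms per ml
      volume = rng.gamma(shape=2.0, scale=100.0, size=n)   # ml consumed
      dose = conc * volume

      # Uncertain dose-response: exponential model with uncertain slope r.
      r = rng.lognormal(mean=np.log(1e-3), sigma=0.5, size=n)
      p_ill = 1.0 - np.exp(-r * dose)

      print("mean risk per serving:", p_ill.mean())
      print("95% interval:", np.quantile(p_ill, [0.025, 0.975]))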

  14. A Theoretical Framework for Calibration in Computer Models: Parametrization, Estimation and Convergence Properties

    DOE PAGES

    Tuo, Rui; Jeff Wu, C. F.

    2016-07-19

    Calibration parameters in deterministic computer experiments are those attributes that cannot be measured or are unavailable in physical experiments. Here, an approach is presented to estimate them using data from physical experiments and computer simulations. A theoretical framework is given which allows us to study the issues of parameter identifiability and estimation. We define L2-consistency for calibration as a justification for calibration methods. It is shown that a simplified version of the original KO method leads to asymptotically L2-inconsistent calibration. This L2-inconsistency can be remedied by modifying the original estimation procedure. A novel calibration method, called L2 calibration, is proposed and proven to be L2-consistent and to enjoy an optimal convergence rate. Furthermore, a numerical example and some mathematical analysis are used to illustrate the source of the L2-inconsistency problem.

  15. Interactions among human behavior, social networks, and societal infrastructures: A Case Study in Computational Epidemiology

    NASA Astrophysics Data System (ADS)

    Barrett, Christopher L.; Bisset, Keith; Chen, Jiangzhuo; Eubank, Stephen; Lewis, Bryan; Kumar, V. S. Anil; Marathe, Madhav V.; Mortveit, Henning S.

    Human behavior, social networks, and civil infrastructures are closely intertwined. Understanding their co-evolution is critical for designing public policies and decision support for disaster planning. For example, human behaviors and the day-to-day activities of individuals create dense social interactions that are characteristic of modern urban societies. These dense social networks provide a perfect fabric for fast, uncontrolled disease propagation. Conversely, people's behavior in response to public policies, and their perception of how the crisis is unfolding as a result of a disease outbreak, can dramatically alter the normally stable social interactions. Effective planning and response strategies must take these complicated interactions into account. In this chapter, we describe a computer simulation-based approach to study these issues, using public health and computational epidemiology as an illustrative example. We also formulate game-theoretic and stochastic optimization problems that capture many of the problems we study empirically.

  16. Two-dimensional finite-element analyses of simulated rotor-fragment impacts against rings and beams compared with experiments

    NASA Technical Reports Server (NTRS)

    Stagliano, T. R.; Witmer, E. A.; Rodal, J. J. A.

    1979-01-01

    Finite element modeling alternatives, as well as the utility and limitations of the two-dimensional structural response computer code CIVM-JET 4B for predicting the transient, large-deflection, elastic-plastic structural responses of two-dimensional beam and/or ring structures subjected to rigid-fragment impact, were investigated. The applicability of the CIVM-JET 4B analysis and code for predicting steel containment ring response to impact by complex deformable fragments from a tri-hub burst of a T58 turbine rotor was studied. Dimensional analysis considerations were used in a parametric examination of data from engine rotor burst containment experiments and from sphere-beam impact experiments. The use of the CIVM-JET 4B computer code for making parametric structural response studies on both fragment-containment and fragment-deflector structures was illustrated. Modifications to the analysis/computation procedure were developed to alleviate restrictions.

  17. Statistical Methodologies to Integrate Experimental and Computational Research

    NASA Technical Reports Server (NTRS)

    Parker, P. A.; Johnson, R. T.; Montgomery, D. C.

    2008-01-01

    Development of advanced algorithms for simulating engine flow paths requires the integration of fundamental experiments with the validation of enhanced mathematical models. In this paper, we provide an overview of statistical methods to strategically and efficiently conduct experiments and computational model refinement. Moreover, the integration of experimental and computational research efforts is emphasized. With a statistical engineering perspective, scientific and engineering expertise is combined with statistical sciences to gain deeper insights into experimental phenomena and code development performance, supporting the overall research objectives. The particular statistical methods discussed are design of experiments, response surface methodology, and uncertainty analysis and planning. Their application is illustrated with a coaxial free jet experiment and a turbulence model refinement investigation. Our goal is to provide an overview, focusing on concepts rather than practice, to demonstrate the benefits of using statistical methods in research and development, thereby encouraging their broader and more systematic application.

  18. Combining configurational energies and forces for molecular force field optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vlcek, Lukas; Sun, Weiwei; Kent, Paul R. C.

    While quantum chemical simulations have been increasingly used as an invaluable source of information for atomistic model development, the high computational expenses typically associated with these techniques often limit thorough sampling of the systems of interest. It is therefore of great practical importance to use all available information as efficiently as possible, and in a way that allows for consistent addition of constraints that may be provided by macroscopic experiments. We propose a simple approach that combines information from configurational energies and forces generated in a molecular dynamics simulation to increase the effective number of samples. Subsequently, this information is used to optimize a molecular force field by minimizing the statistical distance similarity metric. We also illustrate the methodology on an example of a trajectory of configurations generated in equilibrium molecular dynamics simulations of argon and water and compare the results with those based on the force matching method.
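
    The force-matching baseline mentioned at the end can be sketched compactly: choose force-field parameters that reproduce reference forces in the least-squares sense. The 1D pair interaction and all values below are illustrative stand-ins for a real system.

      # Minimal force-matching sketch (the baseline the authors compare against).
      import numpy as np
      from scipy.optimize import least_squares

      rng = np.random.default_rng(3)
      r = rng.uniform(0.9, 2.5, size=400)        # sampled pair distances

      def force(r, eps, sigma):
          """Magnitude of the 12-6 pair force, -du/dr."""
          sr6 = (sigma / r) ** 6
          return 24.0 * eps * (2.0 * sr6 ** 2 - sr6) / r

      f_ref = force(r, 1.0, 1.0) + 0.05 * rng.standard_normal(r.size)  # 'QM' data

      def residuals(params):
          eps, sigma = params
          return force(r, eps, sigma) - f_ref

      fit = least_squares(residuals, x0=[0.5, 1.2])
      print(fit.x)                                # should recover ~(1.0, 1.0)

    The paper's statistical-distance approach differs in that it weights energy and force information by how it shifts configurational probabilities rather than by a plain least-squares criterion.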

  19. Combining configurational energies and forces for molecular force field optimization

    DOE PAGES

    Vlcek, Lukas; Sun, Weiwei; Kent, Paul R. C.

    2017-07-21

    While quantum chemical simulations have been increasingly used as an invaluable source of information for atomistic model development, the high computational expenses typically associated with these techniques often limit thorough sampling of the systems of interest. It is therefore of great practical importance to use all available information as efficiently as possible, and in a way that allows for consistent addition of constraints that may be provided by macroscopic experiments. We propose a simple approach that combines information from configurational energies and forces generated in a molecular dynamics simulation to increase the effective number of samples. Subsequently, this information is used to optimize a molecular force field by minimizing the statistical distance similarity metric. We also illustrate the methodology on an example of a trajectory of configurations generated in equilibrium molecular dynamics simulations of argon and water and compare the results with those based on the force matching method.

  20. Variance decomposition in stochastic simulators.

    PubMed

    Le Maître, O P; Knio, O M; Moraes, A

    2015-06-28

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated with simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.
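
    The reformulation described in the abstract is the standard random time-change representation (due to Kurtz), in which each reaction channel j, with propensity a_j and stoichiometric vector ζ_j, is driven by its own unit-rate Poisson process Y_j:

      \[
        X(t) = X(0) + \sum_{j} \zeta_j \, Y_j\!\left( \int_0^t a_j\bigl(X(s)\bigr)\, ds \right).
      \]

    Because each channel has its own independent stream Y_j, the streams can be treated as separate inputs, which is what makes the Sobol-Hoeffding variance decomposition over channels possible.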

  1. Hybrid-Lambda: simulation of multiple merger and Kingman gene genealogies in species networks and species trees.

    PubMed

    Zhu, Sha; Degnan, James H; Goldstien, Sharyn J; Eldon, Bjarki

    2015-09-15

    There has been increasing interest in coalescent models that admit multiple mergers of ancestral lineages, and in modeling hybridization and coalescence simultaneously. Hybrid-Lambda is a software package that simulates gene genealogies under multiple merger and Kingman's coalescent processes within species networks or species trees. Hybrid-Lambda allows different coalescent processes to be specified for different populations, and allows time to be converted between generations and coalescent units by specifying a population size for each population. In addition, Hybrid-Lambda can generate simulated datasets, assuming the infinitely-many-sites mutation model, and compute the F_ST statistic. As an illustration, we apply Hybrid-Lambda to infer the time of subdivision of certain marine invertebrates under different coalescent processes. Hybrid-Lambda makes it possible to investigate biogeographic concordance among high-fecundity species exhibiting skewed offspring distributions.

  2. Estimation of the lower and upper bounds on the probability of failure using subset simulation and random set theory

    NASA Astrophysics Data System (ADS)

    Alvarez, Diego A.; Uribe, Felipe; Hurtado, Jorge E.

    2018-02-01

    Random set theory is a general framework which comprises uncertainty in the form of probability boxes, possibility distributions, cumulative distribution functions, Dempster-Shafer structures or intervals; in addition, the dependence between the input variables can be expressed using copulas. In this paper, the lower and upper bounds on the probability of failure are calculated by means of random set theory. In order to accelerate the calculation, a well-known and efficient probability-based reliability method known as subset simulation is employed. This method is especially useful for finding small failure probabilities in both low- and high-dimensional spaces, and for problems with disjoint failure domains and nonlinear limit state functions. The proposed methodology drastically reduces the computational labor required by plain Monte Carlo simulation for problems defined with a mixture of representations for the input variables, while delivering similar results. Numerical examples illustrate the efficiency of the proposed approach.
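
    Subset simulation itself is compact enough to sketch: intermediate failure thresholds are set at fixed quantiles of the current samples, and Markov chains grown from the surviving seeds estimate each conditional probability. The toy limit state below has a known exact answer for checking; the paper's actual contribution, the random-set probability bounds, is not reproduced here.

      # Compact subset simulation sketch in standard normal space.
      import numpy as np
      from scipy.stats import norm

      rng = np.random.default_rng(7)
      d, N, p0 = 10, 2000, 0.1
      beta = 3.5                                 # exact P_f = Phi(-beta) ~ 2.3e-4

      def g(x):                                  # failure event: g(x) <= 0
          return beta - x.sum(axis=-1) / np.sqrt(d)

      x = rng.standard_normal((N, d))            # level 0: plain Monte Carlo
      y = g(x)
      p_f = 1.0
      for level in range(20):
          thresh = np.quantile(y, p0)
          if thresh <= 0.0:                      # failure region reached
              p_f *= np.mean(y <= 0.0)
              break
          p_f *= p0                              # conditional probability per level
          x = x[y <= thresh]                     # seeds for the next level
          chains = [x.copy()]
          while sum(c.shape[0] for c in chains) < N:
              prop = x + 0.8 * rng.standard_normal(x.shape)
              # Metropolis step for the standard normal prior ...
              ratio = np.exp(0.5 * (np.sum(x**2, -1) - np.sum(prop**2, -1)))
              cand = np.where((rng.random(x.shape[0]) < ratio)[:, None], prop, x)
              # ... then reject moves that leave the current level set
              x = np.where((g(cand) <= thresh)[:, None], cand, x)
              chains.append(x.copy())
          x = np.vstack(chains)[:N]
          y = g(x)

      print("subset simulation:", p_f, " exact:", norm.cdf(-beta))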

  3. Power System Simulation Toolbox (psst)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krishnamurthy, Dheepak

    This paper is an overview of the Power System Simulation Toolbox (psst). psst is an open-source Python application for the simulation and analysis of power system models. psst simulates wholesale market operation by solving a DC Optimal Power Flow (DCOPF), a Security Constrained Unit Commitment (SCUC) and a Security Constrained Economic Dispatch (SCED). psst also includes models for the various entities in a power system, such as Generator Companies (GenCos), Load Serving Entities (LSEs) and an Independent System Operator (ISO). psst features an open, modular, object-oriented architecture that makes it useful for researchers to customize, expand, and experiment beyond solving traditional problems. psst also includes a web-based Graphical User Interface (GUI) that allows for user-friendly interaction and for deployment on remote High Performance Computing (HPC) clusters for parallelized operations. This paper also provides an illustrative application of psst and benchmarks with standard IEEE test cases to show the advanced features and performance of the toolbox.

  4. Lattice Boltzmann Method for Spacecraft Propellant Slosh Simulation

    NASA Technical Reports Server (NTRS)

    Orr, Jeb S.; Powers, Joseph F.; Yang, Hong Q.

    2015-01-01

    A scalable computational approach to the simulation of propellant tank sloshing dynamics in microgravity is presented. In this work, we use the lattice Boltzmann equation (LBE) to approximate the behavior of two-phase, single-component isothermal flows at very low Bond numbers. Through the use of a non-ideal gas equation of state and a modified multiple relaxation time (MRT) collision operator, the proposed method can simulate thermodynamically consistent phase transitions at temperatures and density ratios consistent with typical spacecraft cryogenic propellants, for example, liquid oxygen. Determination of the tank forces and moments is based upon a novel approach that relies on the global momentum conservation of the closed fluid domain, and a parametric wall wetting model allows tuning of the free surface contact angle. Development of the interface is implicit and no interface tracking approach is required. A numerical example illustrates the method's application to prediction of bulk fluid behavior during a spacecraft ullage settling maneuver.
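
    For orientation, the widely used single-relaxation-time (BGK) form of the lattice Boltzmann equation is shown below; the paper generalizes the collision term to a multiple relaxation time (MRT) operator combined with a non-ideal equation of state.

      \[
        f_i(\mathbf{x} + \mathbf{e}_i \Delta t,\; t + \Delta t) - f_i(\mathbf{x}, t)
          = -\frac{1}{\tau}\,\bigl[ f_i(\mathbf{x}, t) - f_i^{\mathrm{eq}}(\mathbf{x}, t) \bigr],
        \qquad
        \rho = \sum_i f_i, \quad \rho\,\mathbf{u} = \sum_i f_i\,\mathbf{e}_i .
      \]

    Here f_i are the particle distribution functions along the discrete lattice velocities e_i, and τ is the relaxation time controlling viscosity; MRT replaces the single τ with a matrix of relaxation rates.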

  5. Lattice Boltzmann Method for Spacecraft Propellant Slosh Simulation

    NASA Technical Reports Server (NTRS)

    Orr, Jeb S.; Powers, Joseph F.; Yang, Hong Q.

    2015-01-01

    A scalable computational approach to the simulation of propellant tank sloshing dynamics in microgravity is presented. In this work, we use the lattice Boltzmann equation (LBE) to approximate the behavior of two-phase, single-component isothermal flows at very low Bond numbers. Through the use of a non-ideal gas equation of state and a modified multiple relaxation time (MRT) collision operator, the proposed method can simulate thermodynamically consistent phase transitions at temperatures and density ratios consistent with typical spacecraft cryogenic propellants, for example, liquid oxygen. Determination of the tank forces and moments relies upon the global momentum conservation of the fluid domain, and a parametric wall wetting model allows tuning of the free surface contact angle. Development of the interface is implicit and no interface tracking approach is required. Numerical examples illustrate the method's application to predicting bulk fluid motion including lateral propellant slosh in low-g conditions.

  6. Variance decomposition in stochastic simulators

    NASA Astrophysics Data System (ADS)

    Le Maître, O. P.; Knio, O. M.; Moraes, A.

    2015-06-01

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated with simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.

  7. Multinomial mixture model with heterogeneous classification probabilities

    USGS Publications Warehouse

    Holland, M.D.; Gray, B.R.

    2011-01-01

    Royle and Link (Ecology 86(9):2505-2512, 2005) proposed an analytical method that allowed estimation of multinomial distribution parameters and classification probabilities from categorical data measured with error. While useful, we demonstrate algebraically and by simulations that this method yields biased multinomial parameter estimates when the probabilities of correct category classifications vary among sampling units. We address this shortcoming by treating these probabilities as logit-normal random variables within a Bayesian framework. We use Markov chain Monte Carlo to compute Bayes estimates from a simulated sample from the posterior distribution. Based on simulations, this elaborated Royle-Link model yields nearly unbiased estimates of the multinomial parameters and the correct-classification probabilities when classification probabilities are allowed to vary according to the normal distribution on the logit scale or according to the beta distribution. The method is illustrated using categorical submersed aquatic vegetation data. © 2010 Springer Science+Business Media, LLC.

  8. A procedure for automating CFD simulations of an inlet-bleed problem

    NASA Technical Reports Server (NTRS)

    Chyu, Wei J.; Rimlinger, Mark J.; Shih, Tom I.-P.

    1995-01-01

    A procedure was developed to improve the turn-around time for computational fluid dynamics (CFD) simulations of an inlet-bleed problem involving oblique shock-wave/boundary-layer interactions on a flat plate with bleed into a plenum through one or more circular holes. This procedure is embodied in a preprocessor called AUTOMAT. Once the geometry and flow conditions have been specified (either interactively or via a namelist), AUTOMAT automatically generates all input files needed to perform a three-dimensional Navier-Stokes simulation of the prescribed inlet-bleed problem using the PEGASUS and OVERFLOW codes. The input files automatically generated by AUTOMAT include those for the grid system and those for the initial and boundary conditions. The grid systems automatically generated by AUTOMAT are multi-block structured grids of the overlapping type. Results obtained by using AUTOMAT are presented to illustrate its capability.

  9. Statistical methods and computing for big data.

    PubMed

    Wang, Chun; Chen, Ming-Hui; Schifano, Elizabeth; Wu, Jing; Yan, Jun

    2016-01-01

    Big data are data on a massive scale in terms of volume, intensity, and complexity that exceed the capacity of standard analytic tools. They present opportunities as well as challenges to statisticians. The role of computational statisticians in scientific discovery from big data analyses has been under-recognized even by peer statisticians. This article summarizes recent methodological and software developments in statistics that address the big data challenges. Methodologies are grouped into three classes: subsampling-based, divide and conquer, and online updating for stream data. As a new contribution, the online updating approach is extended to variable selection with commonly used criteria, and their performances are assessed in a simulation study with stream data. Software packages are summarized with focuses on the open source R and R packages, covering recent tools that help break the barriers of computer memory and computing power. Some of the tools are illustrated in a case study with a logistic regression for the chance of airline delay.
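
    The divide-and-conquer and online-updating ideas coincide for linear least squares, where sufficient statistics can be accumulated block by block and the combined estimate is exact. The sketch below shows this simplest case only; the article's extension of online updating to variable selection with stream data is more involved.

      # Divide-and-conquer / online updating for least squares: accumulate
      # X'X and X'y per block, so raw data never needs to fit in memory at
      # once, and the combined estimate is exact.
      import numpy as np

      rng = np.random.default_rng(11)
      p = 5
      beta_true = rng.standard_normal(p)

      XtX = np.zeros((p, p))
      Xty = np.zeros(p)
      for block in range(100):                   # stream of 100 data blocks
          X = rng.standard_normal((10_000, p))
          y = X @ beta_true + rng.standard_normal(10_000)
          XtX += X.T @ X                         # update, then discard raw block
          Xty += X.T @ y

      beta_hat = np.linalg.solve(XtX, Xty)
      print(np.max(np.abs(beta_hat - beta_true)))  # close to 0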

  10. Computing biological functions using BioΨ, a formal description of biological processes based on elementary bricks of actions

    PubMed Central

    Pérès, Sabine; Felicori, Liza; Rialle, Stéphanie; Jobard, Elodie; Molina, Franck

    2010-01-01

    Motivation: In the available databases, biological processes are described from molecular and cellular points of view, but these descriptions are represented with text annotations that make them difficult to handle computationally. Consequently, there is an obvious need for formal descriptions of biological processes. Results: We present a formalism that uses the BioΨ concepts to model biological processes from molecular details to networks. This computational approach, based on elementary bricks of actions, allows us to perform computations on biological functions (e.g. process comparison, mapping structure–function relationships, etc.). We illustrate its application with two examples: the functional comparison of proteases and the functional description of the glycolysis network. This computational approach is compatible with detailed biological knowledge and can be applied to different kinds of simulation systems. Availability: www.sysdiag.cnrs.fr/publications/supplementary-materials/BioPsi_Manager/ Contact: sabine.peres@sysdiag.cnrs.fr; franck.molina@sysdiag.cnrs.fr Supplementary information: Supplementary data are available at Bioinformatics online. PMID:20448138

  11. Performance Evaluation of Counter-Based Dynamic Load Balancing Schemes for Massive Contingency Analysis with Different Computing Environments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Yousu; Huang, Zhenyu; Chavarría-Miranda, Daniel

    Contingency analysis is a key function in the Energy Management System (EMS) to assess the impact of various combinations of power system component failures based on state estimation. Contingency analysis is also extensively used in power market operation for feasibility tests of market solutions. High performance computing holds the promise of faster analysis of more contingency cases for the purpose of safe and reliable operation of today's power grids with less operating margin and more intermittent renewable energy sources. This paper evaluates the performance of counter-based dynamic load balancing schemes for massive contingency analysis under different computing environments. Insights from the performance evaluation can be used as guidance for users to select suitable schemes in the application of massive contingency analysis. Case studies, as well as MATLAB simulations, of massive contingency cases using the Western Electricity Coordinating Council power grid model are presented to illustrate the application of high performance computing with counter-based dynamic load balancing schemes.
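
    The counter-based scheme itself is simple to sketch: workers atomically increment a shared counter to claim the next contingency case, so faster workers naturally process more cases. The Python sketch below is a generic illustration with a sleep standing in for the power-flow solve; it is not the authors' implementation, and all names are hypothetical.

      # Counter-based dynamic load balancing sketch.
      import multiprocessing as mp
      import time, random

      def worker(wid, counter, n_cases, done):
          while True:
              with counter.get_lock():           # atomic fetch-and-increment
                  case = counter.value
                  counter.value += 1
              if case >= n_cases:
                  return
              time.sleep(random.uniform(0.001, 0.01))  # stand-in for one solve
              done[wid] += 1                     # each worker writes its own slot

      if __name__ == "__main__":
          n_cases, n_workers = 200, 4
          counter = mp.Value("i", 0)
          done = mp.Array("i", n_workers)        # cases completed per worker
          procs = [mp.Process(target=worker, args=(w, counter, n_cases, done))
                   for w in range(n_workers)]
          for p in procs: p.start()
          for p in procs: p.join()
          print(list(done))                      # faster workers claimed more cases

    Compared with a static split of cases, the shared counter keeps all workers busy even when individual contingency solves take very different times.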

  12. Statistical methods and computing for big data

    PubMed Central

    Wang, Chun; Chen, Ming-Hui; Schifano, Elizabeth; Wu, Jing

    2016-01-01

    Big data are data on a massive scale in terms of volume, intensity, and complexity that exceed the capacity of standard analytic tools. They present opportunities as well as challenges to statisticians. The role of computational statisticians in scientific discovery from big data analyses has been under-recognized even by peer statisticians. This article summarizes recent methodological and software developments in statistics that address the big data challenges. Methodologies are grouped into three classes: subsampling-based, divide and conquer, and online updating for stream data. As a new contribution, the online updating approach is extended to variable selection with commonly used criteria, and their performances are assessed in a simulation study with stream data. Software packages are summarized with focuses on the open source R and R packages, covering recent tools that help break the barriers of computer memory and computing power. Some of the tools are illustrated in a case study with a logistic regression for the chance of airline delay. PMID:27695593

  13. Efficient algorithms for the simulation of non-adiabatic electron transfer in complex molecular systems: application to DNA.

    PubMed

    Kubař, Tomáš; Elstner, Marcus

    2013-04-28

    In this work, a fragment-orbital density functional theory-based method is combined with two different non-adiabatic schemes for the propagation of the electronic degrees of freedom. This allows us to perform unbiased simulations of electron transfer processes in complex media, and the computational scheme is applied to the transfer of a hole in solvated DNA. It turns out that the mean-field approach, where the wave function of the hole is driven into a superposition of adiabatic states, leads to over-delocalization of the hole charge. This problem is avoided using a surface hopping scheme, resulting in a smaller rate of hole transfer. The method is highly efficient due to the on-the-fly computation of the coarse-grained DFT Hamiltonian for the nucleobases, which is coupled to the environment using a QM/MM approach. The computational efficiency and partial parallel character of the methodology make it possible to simulate electron transfer in systems of relevant biochemical size on a nanosecond time scale. Since standard non-polarizable force fields are applied in the molecular-mechanics part of the calculation, a simple scaling scheme was introduced into the electrostatic potential in order to simulate the effect of electronic polarization. It is shown that electronic polarization has an important effect on the features of charge transfer. The methodology is applied to two kinds of DNA sequences, illustrating the features of transfer along a flat energy landscape as well as over an energy barrier. The performance and relative merit of the mean-field scheme and the surface hopping for this application are discussed.

  14. Flocking and self-defense: experiments and simulations of avian mobbing

    NASA Astrophysics Data System (ADS)

    Kane, Suzanne Amador

    2011-03-01

    We have performed motion-capture studies of avian mobbing in the field, in which flocks of prey birds harass predatory birds. Our empirical studies cover both field observations of mobbing occurring in mid-air, where both predator and prey are in flight, and an experimental system using actual prey birds and simulated predator ``perch and wait'' strategies. To model our results and establish the effectiveness of mobbing flight paths at minimizing risk of capture while optimizing predator harassment, we have performed computer simulations using the actual measured trajectories of mobbing prey birds combined with model predator trajectories. To accurately simulate predator motion, we also measured raptor acceleration and flight dynamics, as well as prey-pursuit strategies. These experiments and theoretical studies were all performed with undergraduate research assistants in a liberal arts college setting. This work illustrates how biological physics provides undergraduate research projects well-suited to the abilities of physics majors with interdisciplinary science interests and diverse backgrounds.

  15. Benchmarking of dynamic simulation predictions in two software platforms using an upper limb musculoskeletal model

    PubMed Central

    Saul, Katherine R.; Hu, Xiao; Goehler, Craig M.; Vidt, Meghan E.; Daly, Melissa; Velisar, Anca; Murray, Wendy M.

    2014-01-01

    Several open-source or commercially available software platforms are widely used to develop dynamic simulations of movement. While computational approaches are conceptually similar across platforms, technical differences in implementation may influence output. We present a new upper limb dynamic model as a tool to evaluate potential differences in predictive behavior between platforms. We evaluated to what extent differences in technical implementations in popular simulation software environments result in differences in kinematic predictions for single and multijoint movements, using EMG- and optimization-based approaches for deriving control signals. We illustrate the benchmarking comparison using the SIMM-Dynamics Pipeline-SD/Fast and OpenSim platforms. The most substantial divergence results from differences in muscle model and actuator paths. This model is a valuable resource and is available for download by other researchers. The model, data, and simulation results presented here can be used by future researchers to benchmark other software platforms and software upgrades for these two platforms. PMID:24995410

  16. Benchmarking of dynamic simulation predictions in two software platforms using an upper limb musculoskeletal model.

    PubMed

    Saul, Katherine R; Hu, Xiao; Goehler, Craig M; Vidt, Meghan E; Daly, Melissa; Velisar, Anca; Murray, Wendy M

    2015-01-01

    Several open-source or commercially available software platforms are widely used to develop dynamic simulations of movement. While computational approaches are conceptually similar across platforms, technical differences in implementation may influence output. We present a new upper limb dynamic model as a tool to evaluate potential differences in predictive behavior between platforms. We evaluated to what extent differences in technical implementations in popular simulation software environments result in differences in kinematic predictions for single and multijoint movements, using EMG- and optimization-based approaches for deriving control signals. We illustrate the benchmarking comparison using the SIMM-Dynamics Pipeline-SD/Fast and OpenSim platforms. The most substantial divergence results from differences in muscle model and actuator paths. This model is a valuable resource and is available for download by other researchers. The model, data, and simulation results presented here can be used by future researchers to benchmark other software platforms and software upgrades for these two platforms.

  17. Functional Annotation of Ion Channel Structures by Molecular Simulation.

    PubMed

    Trick, Jemma L; Chelvaniththilan, Sivapalan; Klesse, Gianni; Aryal, Prafulla; Wallace, E Jayne; Tucker, Stephen J; Sansom, Mark S P

    2016-12-06

    Ion channels play key roles in cell membranes, and recent advances are yielding an increasing number of structures. However, their functional relevance is often unclear and better tools are required for their functional annotation. In sub-nanometer pores such as ion channels, hydrophobic gating has been shown to promote dewetting to produce a functionally closed (i.e., non-conductive) state. Using the serotonin receptor (5-HT3R) structure as an example, we demonstrate the use of molecular dynamics to aid the functional annotation of channel structures via simulation of the behavior of water within the pore. Three increasingly complex simulation analyses are described: water equilibrium densities, single-ion free-energy profiles, and computational electrophysiology. All three approaches correctly predict the 5-HT3R crystal structure to represent a functionally closed (i.e., non-conductive) state. We also illustrate the application of water equilibrium density simulations to annotate different conformational states of a glycine receptor. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  18. Central Limit Theorem: New SOCR Applet and Demonstration Activity

    PubMed Central

    Dinov, Ivo D.; Christou, Nicolas; Sanchez, Juana

    2011-01-01

    Modern approaches for information technology based blended education utilize a variety of novel instructional, computational and network resources. Such attempts employ technology to deliver integrated, dynamically linked, interactive content and multifaceted learning environments, which may facilitate student comprehension and information retention. In this manuscript, we describe one such innovative effort of using technological tools for improving student motivation and learning of the theory, practice and usability of the Central Limit Theorem (CLT) in probability and statistics courses. Our approach is based on harnessing the computational libraries developed by the Statistics Online Computational Resource (SOCR) to design a new interactive Java applet and a corresponding demonstration activity that illustrate the meaning and the power of the CLT. The CLT applet and activity have clear common goals: to provide graphical representation of the CLT, to improve student intuition, and to empirically validate and establish the limits of the CLT. The SOCR CLT activity consists of four experiments that demonstrate the assumptions, meaning and implications of the CLT and ties these to specific hands-on simulations. We include a number of examples illustrating the theory and applications of the CLT. Both the SOCR CLT applet and activity are freely available online to the community to test, validate and extend (Applet: http://www.socr.ucla.edu/htmls/SOCR_Experiments.html and Activity: http://wiki.stat.ucla.edu/socr/index.php/SOCR_EduMaterials_Activities_GeneralCentralLimitTheorem). PMID:21833159

  19. Central Limit Theorem: New SOCR Applet and Demonstration Activity.

    PubMed

    Dinov, Ivo D; Christou, Nicolas; Sanchez, Juana

    2008-07-01

    Modern approaches for information technology based blended education utilize a variety of novel instructional, computational and network resources. Such attempts employ technology to deliver integrated, dynamically linked, interactive content and multifaceted learning environments, which may facilitate student comprehension and information retention. In this manuscript, we describe one such innovative effort of using technological tools for improving student motivation and learning of the theory, practice and usability of the Central Limit Theorem (CLT) in probability and statistics courses. Our approach is based on harnessing the computational libraries developed by the Statistics Online Computational Resource (SOCR) to design a new interactive Java applet and a corresponding demonstration activity that illustrate the meaning and the power of the CLT. The CLT applet and activity have clear common goals: to provide graphical representation of the CLT, to improve student intuition, and to empirically validate and establish the limits of the CLT. The SOCR CLT activity consists of four experiments that demonstrate the assumptions, meaning and implications of the CLT and ties these to specific hands-on simulations. We include a number of examples illustrating the theory and applications of the CLT. Both the SOCR CLT applet and activity are freely available online to the community to test, validate and extend (Applet: http://www.socr.ucla.edu/htmls/SOCR_Experiments.html and Activity: http://wiki.stat.ucla.edu/socr/index.php/SOCR_EduMaterials_Activities_GeneralCentralLimitTheorem).
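
    The core of such a demonstration fits in a few lines: sample means of increasingly many draws from a skewed distribution concentrate around the true mean, with standard error shrinking as 1/sqrt(n). A minimal numerical version (not the SOCR applet itself) follows.

      # Quick numerical CLT illustration: means of n exponential(1) draws.
      import numpy as np

      rng = np.random.default_rng(0)
      for n in (1, 5, 30, 200):
          means = rng.exponential(scale=1.0, size=(20_000, n)).mean(axis=1)
          # CLT prediction for exponential(1): mean 1, standard error 1/sqrt(n).
          print(n, means.mean().round(3), means.std().round(3),
                (1 / np.sqrt(n)).round(3))

    Plotting histograms of these means for growing n shows the skewed shape flattening into the familiar bell curve, which is exactly what the applet visualizes interactively.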

  20. Bayesian methods for outliers detection in GNSS time series

    NASA Astrophysics Data System (ADS)

    Qianqian, Zhang; Qingming, Gui

    2013-07-01

    This article is concerned with the problem of detecting outliers in GNSS time series based on Bayesian statistical theory. Firstly, a new model is proposed to detect different types of outliers simultaneously, based on the idea of introducing a classification variable for each type of outlier; the problem of outlier detection is thereby converted into the computation of the corresponding posterior probabilities, and an algorithm for computing these posterior probabilities based on a standard Gibbs sampler is designed. Secondly, we analyze in detail the causes of masking and swamping in the detection of patches of additive outliers, and propose an unmasking Bayesian method for detecting additive outlier patches based on an adaptive Gibbs sampler. Thirdly, the correctness of the proposed theory and methods is illustrated with simulated data and then with real GNSS observations, such as cycle-slip detection in carrier phase data. The examples show that the Bayesian methods for outlier detection in GNSS time series proposed in this paper are capable of detecting not only isolated outliers but also additive outlier patches. Furthermore, they can be successfully used to process cycle slips in phase data, which addresses the problem of small cycle slips.
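
    A stripped-down version of the indicator-variable idea: each observation carries a latent Bernoulli flag marking it as an outlier with inflated variance, and a Gibbs sampler alternates between the flags and the mean, yielding posterior outlier probabilities. The sketch below fixes the noise scale, inflation factor and prior outlier rate for simplicity; it is far simpler than the paper's GNSS time-series model, and all values are illustrative.

      # Stripped-down Gibbs sampler for outlier indicators (illustrative only).
      import numpy as np

      rng = np.random.default_rng(5)
      n, sigma, kappa, p = 200, 1.0, 25.0, 0.05
      y = rng.normal(0.0, sigma, size=n)
      y[[30, 31, 120]] += 8.0                    # plant a small patch of outliers

      def normpdf(x, var):
          return np.exp(-0.5 * x * x / var) / np.sqrt(2 * np.pi * var)

      mu = 0.0
      post = np.zeros(n)
      n_iter, burn = 3000, 500
      for it in range(n_iter):
          # 1) flag each point: Bernoulli with posterior odds of "outlier"
          lik_out = p * normpdf(y - mu, kappa * sigma**2)
          lik_in = (1 - p) * normpdf(y - mu, sigma**2)
          prob = lik_out / (lik_out + lik_in)
          delta = rng.random(n) < prob
          # 2) mean given flags: conjugate normal draw (flat prior on mu)
          w = np.where(delta, 1.0 / (kappa * sigma**2), 1.0 / sigma**2)
          mu = rng.normal((w * y).sum() / w.sum(), 1.0 / np.sqrt(w.sum()))
          if it >= burn:
              post += prob
      post /= n_iter - burn
      print(np.where(post > 0.5)[0])             # indices flagged as likely outliers

    Because whole patches of points can share inflated variance, this formulation avoids the masking effect that single-point deletion tests suffer from, which is the issue the paper's adaptive sampler targets.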
