Sample records for simulation enabling computational

  1. Symplectic molecular dynamics simulations on specially designed parallel computers.

    PubMed

    Borstnik, Urban; Janezic, Dusanka

    2005-01-01

    We have developed a computer program for molecular dynamics (MD) simulation that implements the Split Integration Symplectic Method (SISM) and is designed to run on specialized parallel computers. The MD integration is performed by the SISM, which analytically treats high-frequency vibrational motion and thus enables the use of longer simulation time steps. The low-frequency motion is treated numerically on specially designed parallel computers, which decreases the computational time of each simulation time step. Together, these approaches reduce both the time per step and the number of steps required, enabling fast MD simulations. We study the computational performance of MD simulation of molecular systems on specialized computers and provide a comparison to standard personal computers. The combination of the SISM with two specialized parallel computers is an effective way to increase the speed of MD simulations, up to 16-fold over a single PC processor.
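
    As a rough illustration of the split-integration idea described above (not the SISM code itself), the sketch below propagates a stiff harmonic degree of freedom analytically and applies only a slowly varying force numerically, so the time step is not limited by the fast vibrational frequency. The constants and the soft force are hypothetical stand-ins.

```python
import numpy as np

# Minimal sketch of a split (symplectic-style) integrator: the stiff harmonic
# part is propagated analytically, the slow "soft" force numerically.
# Constants and the soft force are illustrative, not taken from the paper.

m, k = 1.0, 400.0                 # mass and stiff spring constant (hypothetical)
omega = np.sqrt(k / m)

def soft_force(x):
    """Slowly varying force standing in for low-frequency (e.g. non-bonded) terms."""
    return -0.1 * x**3

def harmonic_propagate(x, v, h):
    """Exact solution of the harmonic part over a time interval h."""
    c, s = np.cos(omega * h), np.sin(omega * h)
    return c * x + (s / omega) * v, -omega * s * x + c * v

def split_step(x, v, dt):
    x, v = harmonic_propagate(x, v, dt / 2)   # analytic half step (fast motion)
    v += dt * soft_force(x) / m               # numerical kick (slow motion)
    x, v = harmonic_propagate(x, v, dt / 2)   # analytic half step (fast motion)
    return x, v

x, v = 1.0, 0.0
for _ in range(2000):
    # dt is limited only by the slow force, not by 1/omega as in plain leapfrog
    x, v = split_step(x, v, dt=0.05)
print(x, v)
```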

  2. High performance hybrid functional Petri net simulations of biological pathway models on CUDA.

    PubMed

    Chalkidis, Georgios; Nagasaki, Masao; Miyano, Satoru

    2011-01-01

    Hybrid functional Petri nets are a widespread tool for representing and simulating biological models. Due to their potential for providing virtual drug testing environments, biological simulations have a growing impact on pharmaceutical research. Continuous research advancements in biology and medicine lead to exponentially increasing simulation times, thus raising the demand for performance accelerations by efficient and inexpensive parallel computation solutions. Recent developments in the field of general-purpose computation on graphics processing units (GPGPU) enabled the scientific community to port a variety of compute-intensive algorithms onto the graphics processing unit (GPU). This work presents the first scheme for mapping biological hybrid functional Petri net models, which can handle both discrete and continuous entities, onto compute unified device architecture (CUDA) enabled GPUs. GPU accelerated simulations are observed to run up to 18 times faster than sequential implementations. Simulating the cell boundary formation by Delta-Notch signaling on a CUDA enabled GPU results in a speedup of approximately 7x for a model containing 1,600 cells.
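
    The continuous part of a hybrid functional Petri net reduces to integrating place markings under transition firing speeds; the toy update below shows only that structure (the paper's engine additionally handles discrete entities and executes the update as data-parallel CUDA kernels). The three-place net and its rate functions are hypothetical.

```python
import numpy as np

# Toy continuous Petri net: places A -> B -> C with mass-action firing speeds.
# One forward-Euler "firing" step per iteration; the stoichiometry (incidence)
# matrix maps transition speeds to changes in the place markings.

stoich = np.array([[-1.0,  0.0],   # place A loses marking via transition 1
                   [ 1.0, -1.0],   # place B gains from t1, loses via t2
                   [ 0.0,  1.0]])  # place C gains via transition 2

def speeds(m):
    """Transition firing speeds as functions of the current marking."""
    return np.array([0.5 * m[0], 0.2 * m[1]])

m = np.array([10.0, 0.0, 0.0])     # initial marking
dt = 0.01
for _ in range(2000):
    m = np.maximum(m + dt * stoich @ speeds(m), 0.0)
print(m)                            # most of the marking ends up in place C
```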

  3. CUDA-based real time surgery simulation.

    PubMed

    Liu, Youquan; De, Suvranu

    2008-01-01

    In this paper we present a general software platform that enables real-time surgery simulation on the newly available compute unified device architecture (CUDA) from NVIDIA. CUDA-enabled GPUs harness the power of 128 processors which allow data-parallel computations. Compared to previous GPGPU approaches, it is significantly more flexible, with a C language interface. We report implementation of both collision detection and consequent deformation computation algorithms. Our test results indicate that CUDA enables a twenty-fold speedup for collision detection and about a fifteen-fold speedup for deformation computation on an Intel Core 2 Quad 2.66 GHz machine with a GeForce 8800 GTX.

  4. A Web-based Distributed Voluntary Computing Platform for Large Scale Hydrological Computations

    NASA Astrophysics Data System (ADS)

    Demir, I.; Agliamzanov, R.

    2014-12-01

    Distributed volunteer computing can enable researchers and scientists to form large parallel computing environments that harness the computing power of the millions of computers on the Internet and use them to run large-scale environmental simulations and models that serve the common good of local communities and the world. Recent developments in web technologies and standards allow client-side scripting languages to run at speeds close to those of native applications and to utilize the power of Graphics Processing Units (GPU). Using a client-side scripting language like JavaScript, we have developed an open distributed computing framework that makes it easy for researchers to write their own hydrologic models and run them on volunteer computers. Website owners can easily enable their sites so that visitors can volunteer their computing resources to help run advanced hydrological models and simulations. Using a web-based system allows users to start volunteering their computational resources within seconds, without installing any software. The framework distributes the model simulation to thousands of nodes in small spatial and computational chunks. A relational database system is used for managing data connections and queue management for the distributed computing nodes. In this paper, we present a web-based distributed volunteer computing platform that enables large-scale hydrological simulations and model runs in an open and integrated environment.
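
    The queue-backed distribution of small work units described above can be pictured with the minimal sketch below; the table layout, chunking by sub-basin, and function names are hypothetical, and the real platform pairs a relational database on the server with JavaScript workers running in volunteers' browsers.

```python
import sqlite3

# Toy server-side task queue: a basin-scale run is split into small chunks that
# volunteer nodes check out, compute, and return.

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE tasks (
    id INTEGER PRIMARY KEY, subbasin TEXT, t_start REAL, t_end REAL,
    status TEXT DEFAULT 'queued', result REAL)""")

for k in range(10):                      # enqueue ten hypothetical sub-basin chunks
    db.execute("INSERT INTO tasks (subbasin, t_start, t_end) VALUES (?, 0.0, 24.0)",
               ("sub-%03d" % k,))

def checkout_task():
    """Hand the next queued chunk to a volunteer node."""
    row = db.execute("SELECT id, subbasin FROM tasks WHERE status='queued' LIMIT 1").fetchone()
    if row:
        db.execute("UPDATE tasks SET status='running' WHERE id=?", (row[0],))
    return row

def submit_result(task_id, value):
    """Store the value returned by the volunteer and mark the chunk done."""
    db.execute("UPDATE tasks SET status='done', result=? WHERE id=?", (value, task_id))

task = checkout_task()
submit_result(task[0], 3.14)             # e.g. simulated runoff for that chunk
print(db.execute("SELECT status, COUNT(*) FROM tasks GROUP BY status").fetchall())
```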

  5. ESIM_DSN Web-Enabled Distributed Simulation Network

    NASA Technical Reports Server (NTRS)

    Bedrossian, Nazareth; Novotny, John

    2002-01-01

    In this paper, the eSim^DSN approach to achieving distributed simulation capability using the Internet is presented. With this approach a complete simulation can be assembled from component subsystems that run on different computers. The subsystems interact with each other via the Internet. The distributed simulation uses a hub-and-spoke type network topology. It provides the ability to dynamically link simulation subsystem models to different computers as well as the ability to assign a particular model to each computer. A proof-of-concept demonstrator is also presented. The eSim^DSN demonstrator can be accessed at http://www.jsc.draper.com/esim, which hosts various examples of Web-enabled simulations.

  6. Using spatial principles to optimize distributed computing for enabling the physical science discoveries

    PubMed Central

    Yang, Chaowei; Wu, Huayi; Huang, Qunying; Li, Zhenlong; Li, Jing

    2011-01-01

    Contemporary physical science studies rely on the effective analyses of geographically dispersed spatial data and simulations of physical phenomena. Single computers and generic high-end computing are not sufficient to process the data for complex physical science analysis and simulations, which can be successfully supported only through distributed computing, best optimized through the application of spatial principles. Spatial computing, the computing aspect of a spatial cyberinfrastructure, refers to a computing paradigm that utilizes spatial principles to optimize distributed computers to catalyze advancements in the physical sciences. Spatial principles govern the interactions between scientific parameters across space and time by providing the spatial connections and constraints to drive the progression of the phenomena. Therefore, spatial computing studies could better position us to leverage spatial principles in simulating physical phenomena and, by extension, advance the physical sciences. Using geospatial science as an example, this paper illustrates through three research examples how spatial computing could (i) enable data intensive science with efficient data/services search, access, and utilization, (ii) facilitate physical science studies with enabling high-performance computing capabilities, and (iii) empower scientists with multidimensional visualization tools to understand observations and simulations. The research examples demonstrate that spatial computing is of critical importance to design computing methods to catalyze physical science studies with better data access, phenomena simulation, and analytical visualization. We envision that spatial computing will become a core technology that drives fundamental physical science advancements in the 21st century. PMID:21444779

  7. Using spatial principles to optimize distributed computing for enabling the physical science discoveries.

    PubMed

    Yang, Chaowei; Wu, Huayi; Huang, Qunying; Li, Zhenlong; Li, Jing

    2011-04-05

    Contemporary physical science studies rely on the effective analyses of geographically dispersed spatial data and simulations of physical phenomena. Single computers and generic high-end computing are not sufficient to process the data for complex physical science analysis and simulations, which can be successfully supported only through distributed computing, best optimized through the application of spatial principles. Spatial computing, the computing aspect of a spatial cyberinfrastructure, refers to a computing paradigm that utilizes spatial principles to optimize distributed computers to catalyze advancements in the physical sciences. Spatial principles govern the interactions between scientific parameters across space and time by providing the spatial connections and constraints to drive the progression of the phenomena. Therefore, spatial computing studies could better position us to leverage spatial principles in simulating physical phenomena and, by extension, advance the physical sciences. Using geospatial science as an example, this paper illustrates through three research examples how spatial computing could (i) enable data intensive science with efficient data/services search, access, and utilization, (ii) facilitate physical science studies with enabling high-performance computing capabilities, and (iii) empower scientists with multidimensional visualization tools to understand observations and simulations. The research examples demonstrate that spatial computing is of critical importance to design computing methods to catalyze physical science studies with better data access, phenomena simulation, and analytical visualization. We envision that spatial computing will become a core technology that drives fundamental physical science advancements in the 21st century.

  8. BeeSim: Leveraging Wearable Computers in Participatory Simulations with Young Children

    ERIC Educational Resources Information Center

    Peppler, Kylie; Danish, Joshua; Zaitlen, Benjamin; Glosson, Diane; Jacobs, Alexander; Phelps, David

    2010-01-01

    New technologies have enabled students to become active participants in computational simulations of dynamic and complex systems (called Participatory Simulations), providing a "first-person" perspective on complex systems. However, most existing Participatory Simulations have targeted older children, teens, and adults assuming that such concepts…

  9. A PICKSC Science Gateway for enabling the common plasma physicist to run kinetic software

    NASA Astrophysics Data System (ADS)

    Hu, Q.; Winjum, B. J.; Zonca, A.; Youn, C.; Tsung, F. S.; Mori, W. B.

    2017-10-01

    Computer simulations offer tremendous opportunities for studying plasmas, ranging from simulations for students that illuminate fundamental educational concepts to research-level simulations that advance scientific knowledge. Nevertheless, there is a significant hurdle to using simulation tools. Users must navigate codes and software libraries, determine how to wrangle output into meaningful plots, and oftentimes confront a significant cyberinfrastructure with powerful computational resources. Science gateways offer a Web-based environment to run simulations without needing to learn or manage the underlying software and computing cyberinfrastructure. We discuss our progress on creating a Science Gateway for the Particle-in-Cell and Kinetic Simulation Software Center that enables users to easily run and analyze kinetic simulations with our software. We envision that this technology could benefit a wide range of plasma physicists, both in the use of our simulation tools as well as in its adaptation for running other plasma simulation software. Supported by NSF under Grant ACI-1339893 and by the UCLA Institute for Digital Research and Education.

  10. AdapChem

    NASA Technical Reports Server (NTRS)

    Oluwole, Oluwayemisi O.; Wong, Hsi-Wu; Green, William

    2012-01-01

    AdapChem software enables high efficiency, low computational cost, and enhanced accuracy in computational fluid dynamics (CFD) numerical simulations used for combustion studies. The software dynamically allocates smaller, reduced chemical models instead of the larger, full chemistry models to evolve the calculation, while ensuring that the same accuracy is obtained for steady-state CFD reacting-flow simulations. The software enables detailed chemical kinetic modeling in combustion CFD simulations. AdapChem adapts the reaction mechanism used in the CFD to the local reaction conditions. Instead of a single, comprehensive reaction mechanism throughout the computation, a dynamic distribution of smaller, reduced models is used to capture the chemical kinetics accurately at a fraction of the cost of the traditional single-mechanism approach.

  11. Gaming via Computer Simulation Techniques for Junior College Economics Education. Final Report.

    ERIC Educational Resources Information Center

    Thompson, Fred A.

    A study designed to answer the need for more attractive and effective economics education involved the teaching of one junior college economics class by the conventional (lecture) method and an experimental class by computer simulation techniques. Econometric models approximating the "real world" were computer programmed to enable the experimental…

  12. Computer-Aided Engineering Tools | Water Power | NREL

    Science.gov Websites

    …energy converters that will provide a full range of simulation capabilities for single devices and arrays. Simulation of water power technologies on high-performance computers enables the study of complex systems and experimentation. Such simulation is critical to accelerating progress in energy programs within the U.S. Department…

  13. Development of a Web Based Simulating System for Earthquake Modeling on the Grid

    NASA Astrophysics Data System (ADS)

    Seber, D.; Youn, C.; Kaiser, T.

    2007-12-01

    Existing cyberinfrastructure-based information, data and computational networks now allow development of state-of-the-art, user-friendly simulation environments that democratize access to high-end computational environments and provide new research opportunities for many research and educational communities. Within the Geosciences cyberinfrastructure network, GEON, we have developed the SYNSEIS (SYNthetic SEISmogram) toolkit to enable efficient computations of 2D and 3D seismic waveforms for a variety of research purposes, especially for helping to analyze the EarthScope USArray seismic data in a speedy and efficient environment. The underlying simulation software in SYNSEIS is a finite difference code, E3D, developed by LLNL (S. Larsen). The code is embedded within the SYNSEIS portlet environment and is used by our toolkit to simulate seismic waveforms of earthquakes at regional distances (<1000 km). Architecturally, SYNSEIS uses both Web Service and Grid computing resources in a portal-based work environment and has a built-in access mechanism to connect to national supercomputer centers as well as to a dedicated, small-scale compute cluster for its runs. Even though Grid computing is well established in many computing communities, its use among domain scientists is still not trivial because of the multiple levels of complexity encountered. We grid-enabled E3D using our own XML input dialect, which includes geological models that are accessible through standard Web services within the GEON network. The XML inputs for this application contain structural geometries, source parameters, seismic velocity, density, attenuation values, the number of time steps to compute, and the number of stations. By enabling portal-based access to such a computational environment, coupled with its dynamic user interface, we enable a large user community to take advantage of such high-end calculations in their research and educational activities. Our system can be used to promote an efficient and effective modeling environment to help scientists as well as educators in their daily activities and to speed up the scientific discovery process.

  14. Look-ahead Dynamic Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2015-10-20

    The look-ahead dynamic simulation software system incorporates high-performance parallel computing technologies, significantly reduces the solution time for each transient simulation case, and brings dynamic simulation analysis into on-line applications to enable more transparency for better reliability and asset utilization. It takes a snapshot of the current power grid status, performs the system dynamic simulation in parallel, and outputs the transient response of the power system in real time.

  15. Development of a Dynamically Configurable, Object-Oriented Framework for Distributed, Multi-modal Computational Aerospace Systems Simulation

    NASA Technical Reports Server (NTRS)

    Afjeh, Abdollah A.; Reed, John A.

    2003-01-01

    The following reports are presented on this project: A first year progress report on: Development of a Dynamically Configurable, Object-Oriented Framework for Distributed, Multi-modal Computational Aerospace Systems Simulation; A second year progress report on: Development of a Dynamically Configurable, Object-Oriented Framework for Distributed, Multi-modal Computational Aerospace Systems Simulation; An Extensible, Interchangeable and Sharable Database Model for Improving Multidisciplinary Aircraft Design; Interactive, Secure Web-enabled Aircraft Engine Simulation Using XML Databinding Integration; and Improving the Aircraft Design Process Using Web-based Modeling and Simulation.

  16. Turbulent Flow Simulation at the Exascale: Opportunities and Challenges Workshop: August 4-5, 2015, Washington, D.C.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sprague, Michael A.; Boldyrev, Stanislav; Fischer, Paul

    This report details the impact exascale will bring to turbulent-flow simulations in applied science and technology. The need for accurate simulation of turbulent flows is evident across the DOE applied-science and engineering portfolios, including combustion, plasma physics, nuclear-reactor physics, wind energy, and atmospheric science. The workshop brought together experts in turbulent-flow simulation, computational mathematics, and high-performance computing. Building upon previous ASCR workshops on exascale computing, participants defined a research agenda and path forward that will enable scientists and engineers to continually leverage, engage, and direct advances in computational systems on the path to exascale computing.

  17. Development of three-dimensional patient face model that enables real-time collision detection and cutting operation for a dental simulator.

    PubMed

    Yamaguchi, Satoshi; Yamada, Yuya; Yoshida, Yoshinori; Noborio, Hiroshi; Imazato, Satoshi

    2012-01-01

    The virtual reality (VR) simulator is a useful tool for developing dental hand skills. However, VR simulations that include patient reactions are limited by the computational time needed to reproduce a face model. Our aim was to develop a patient face model that enables real-time collision detection and cutting operations by using stereolithography (STL) and deterministic finite automaton (DFA) data files. We evaluated how the computational cost depends on the way the STL and DFA data files are combined, constructed the patient face model using the optimum combining condition, and assessed the computational costs of the do-nothing, collision, cutting, and combined collision-and-cutting operations. The face model was successfully constructed with low computational costs of 11.3, 18.3, 30.3, and 33.5 ms for do-nothing, collision, cutting, and collision and cutting, respectively. The patient face model could be useful for developing dental hand skills with VR.

  18. XNsim: Internet-Enabled Collaborative Distributed Simulation via an Extensible Network

    NASA Technical Reports Server (NTRS)

    Novotny, John; Karpov, Igor; Zhang, Chendi; Bedrossian, Nazareth S.

    2007-01-01

    In this paper, the XNsim approach to achieving Internet-enabled, dynamically scalable, collaborative distributed simulation capabilities is presented. With this approach, a complete simulation can be assembled from shared component subsystems written in different formats that run on different computing platforms, with different sampling rates, in different geographic locations, and over single or multiple networks. The subsystems interact securely with each other via the Internet. Furthermore, the simulation topology can be dynamically modified. The distributed simulation uses a combination of hub-and-spoke and peer-to-peer network topology. A proof-of-concept demonstrator is also presented. The XNsim demonstrator can be accessed at http://www.jsc.draper.com/xn, which hosts various examples of Internet-enabled simulations.

  19. FermiLib v0.1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    MCCLEAN, JARROD; HANER, THOMAS; STEIGER, DAMIAN

    FermiLib is an open source software package designed to facilitate the development and testing of algorithms for simulations of fermionic systems on quantum computers. Fermionic simulations represent an important application of early quantum devices, with many potential high-value targets such as quantum chemistry for the development of new catalysts. This software strives to provide a link between the required domain expertise in specific fermionic applications and quantum computing, to enable more users to directly interface with, and develop for, these applications. It is an extensible Python library designed to interface with the high-performance quantum simulator ProjectQ, as well as application-specific software such as PSI4 from the domain of quantum chemistry. Such software is key to enabling effective user facilities in quantum computation research.

  20. Subway Environmental Design Handbook, Volume II, Subway Environment Simulation Computer Program, Version 4, Part 1, User's Manual

    DOT National Transportation Integrated Search

    1975-10-01

    This document forms part of the Subway Environmental Design Handbook. It contains the background information and instructions to enable an engineer to perform an analysis of a subway system by using the Subway Environment Simulation (SES) computer pr...

  1. Draper Station Analysis Tool

    NASA Technical Reports Server (NTRS)

    Bedrossian, Nazareth; Jang, Jiann-Woei; McCants, Edward; Omohundro, Zachary; Ring, Tom; Templeton, Jeremy; Zoss, Jeremy; Wallace, Jonathan; Ziegler, Philip

    2011-01-01

    Draper Station Analysis Tool (DSAT) is a computer program, built on commercially available software, for simulating and analyzing complex dynamic systems. Heretofore used in designing and verifying guidance, navigation, and control systems of the International Space Station, DSAT has a modular architecture that lends itself to modification for application to spacecraft or terrestrial systems. DSAT consists of user-interface, data-structures, simulation-generation, analysis, plotting, documentation, and help components. DSAT automates the construction of simulations and the process of analysis. DSAT provides a graphical user interface (GUI), plus a Web-enabled interface, similar to the GUI, that enables a remotely located user to gain access to the full capabilities of DSAT via the Internet and Web-browser software. Data structures are used to define the GUI, the Web-enabled interface, simulations, and analyses. Three data structures define the type of analysis to be performed: closed-loop simulation, frequency response, and/or stability margins. DSAT can be executed on almost any workstation, desktop, or laptop computer. DSAT provides better than an order of magnitude improvement in cost, schedule, and risk assessment for simulation-based design and verification of complex dynamic systems.

  2. The Use of Computer-Based Simulation to Aid Comprehension and Incidental Vocabulary Learning

    ERIC Educational Resources Information Center

    Mohsen, Mohammed Ali

    2016-01-01

    One of the main issues in language learning is to find ways to enable learners to interact with the language input in an involved task. Given that computer-based simulation allows learners to interact with visual modes, this article examines how the interaction of students with an online video simulation affects their second language video…

  3. Advanced Simulation & Computing FY15 Implementation Plan Volume 2, Rev. 0.5

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCoy, Michel; Archer, Bill; Matzen, M. Keith

    2014-09-16

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the surety and reliability of the U.S. nuclear stockpile. The SSP uses nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of experimental facilities and programs, and the computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources that support annual stockpile assessment and certification, study advanced nuclear weapons design and manufacturing processes, analyze accident scenarios and weapons aging, and provide the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balance of resources, including technical staff, hardware, simulation software, and computer science solutions. As the program approaches the end of its second decade, ASC is intently focused on increasing predictive capabilities in a three-dimensional (3D) simulation environment while maintaining support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (sufficient resolution, dimensionality, and scientific details), quantify critical margins and uncertainties, and resolve increasingly difficult analyses needed for the SSP. Where possible, the program also enables the use of high-performance simulation and computing tools to address broader national security needs, such as foreign nuclear weapon assessments and counternuclear terrorism.

  4. Supercomputers ready for use as discovery machines for neuroscience.

    PubMed

    Helias, Moritz; Kunkel, Susanne; Masumoto, Gen; Igarashi, Jun; Eppler, Jochen Martin; Ishii, Shin; Fukai, Tomoki; Morrison, Abigail; Diesmann, Markus

    2012-01-01

    NEST is a widely used tool to simulate biological spiking neural networks. Here we explain the improvements, guided by a mathematical model of memory consumption, that enable us to exploit for the first time the computational power of the K supercomputer for neuroscience. Multi-threaded components for wiring and simulation combine 8 cores per MPI process to achieve excellent scaling. K is capable of simulating networks corresponding to a brain area with 10^8 neurons and 10^12 synapses in the worst case scenario of random connectivity; for larger networks of the brain its hierarchical organization can be exploited to constrain the number of communicating computer nodes. We discuss the limits of the software technology, comparing maximum filling scaling plots for K and the JUGENE BG/P system. The usability of these machines for network simulations has become comparable to running simulations on a single PC. Turn-around times in the range of minutes even for the largest systems enable a quasi interactive working style and render simulations on this scale a practical tool for computational neuroscience.

  5. Supercomputers Ready for Use as Discovery Machines for Neuroscience

    PubMed Central

    Helias, Moritz; Kunkel, Susanne; Masumoto, Gen; Igarashi, Jun; Eppler, Jochen Martin; Ishii, Shin; Fukai, Tomoki; Morrison, Abigail; Diesmann, Markus

    2012-01-01

    NEST is a widely used tool to simulate biological spiking neural networks. Here we explain the improvements, guided by a mathematical model of memory consumption, that enable us to exploit for the first time the computational power of the K supercomputer for neuroscience. Multi-threaded components for wiring and simulation combine 8 cores per MPI process to achieve excellent scaling. K is capable of simulating networks corresponding to a brain area with 10^8 neurons and 10^12 synapses in the worst case scenario of random connectivity; for larger networks of the brain its hierarchical organization can be exploited to constrain the number of communicating computer nodes. We discuss the limits of the software technology, comparing maximum filling scaling plots for K and the JUGENE BG/P system. The usability of these machines for network simulations has become comparable to running simulations on a single PC. Turn-around times in the range of minutes even for the largest systems enable a quasi interactive working style and render simulations on this scale a practical tool for computational neuroscience. PMID:23129998

  6. Simulating Human Cognition in the Domain of Air Traffic Control

    NASA Technical Reports Server (NTRS)

    Freed, Michael; Johnston, James C.; Null, Cynthia H. (Technical Monitor)

    1995-01-01

    Experiments intended to assess performance in human-machine interactions are often prohibitively expensive, unethical or otherwise impractical to run. Approximations of experimental results can be obtained, in principle, by simulating the behavior of subjects using computer models of human mental behavior. Computer simulation technology has been developed for this purpose. Our goal is to produce a cognitive model suitable to guide the simulation machinery and enable it to closely approximate a human subject's performance in experimental conditions. The described model is designed to simulate a variety of cognitive behaviors involved in routine air traffic control. As the model is elaborated, our ability to predict the effects of novel circumstances on controller error rates and other performance characteristics should increase. This will enable the system to project the impact of proposed changes to air traffic control procedures and equipment on controller performance.

  7. 2000 Numerical Propulsion System Simulation Review

    NASA Technical Reports Server (NTRS)

    Lytle, John; Follen, Greg; Naiman, Cynthia; Veres, Joseph; Owen, Karl; Lopez, Isaac

    2001-01-01

    The technologies necessary to enable detailed numerical simulations of complete propulsion systems are being developed at the NASA Glenn Research Center in cooperation with industry, academia, and other government agencies. Large scale, detailed simulations will be of great value to the nation because they eliminate some of the costly testing required to develop and certify advanced propulsion systems. In addition, time and cost savings will be achieved by enabling design details to be evaluated early in the development process before a commitment is made to a specific design. This concept is called the Numerical Propulsion System Simulation (NPSS). NPSS consists of three main elements: (1) engineering models that enable multidisciplinary analysis of large subsystems and systems at various levels of detail, (2) a simulation environment that maximizes designer productivity, and (3) a cost-effective, high-performance computing platform. A fundamental requirement of the concept is that the simulations must be capable of overnight execution on easily accessible computing platforms. This will greatly facilitate the use of large-scale simulations in a design environment. This paper describes the current status of the NPSS with specific emphasis on the progress made over the past year on air breathing propulsion applications. Major accomplishments include the first formal release of the NPSS object-oriented architecture (NPSS Version 1) and the demonstration of a one order of magnitude reduction in computing cost-to-performance ratio using a cluster of personal computers. The paper also describes the future NPSS milestones, which include the simulation of space transportation propulsion systems in response to increased emphasis on safe, low cost access to space within NASA's Aerospace Technology Enterprise. In addition, the paper contains a summary of the feedback received from industry partners on the fiscal year 1999 effort and the actions taken over the past year to respond to that feedback. NPSS was supported in fiscal year 2000 by the High Performance Computing and Communications Program.

  8. 2001 Numerical Propulsion System Simulation Review

    NASA Technical Reports Server (NTRS)

    Lytle, John; Follen, Gregory; Naiman, Cynthia; Veres, Joseph; Owen, Karl; Lopez, Isaac

    2002-01-01

    The technologies necessary to enable detailed numerical simulations of complete propulsion systems are being developed at the NASA Glenn Research Center in cooperation with industry, academia and other government agencies. Large scale, detailed simulations will be of great value to the nation because they eliminate some of the costly testing required to develop and certify advanced propulsion systems. In addition, time and cost savings will be achieved by enabling design details to be evaluated early in the development process before a commitment is made to a specific design. This concept is called the Numerical Propulsion System Simulation (NPSS). NPSS consists of three main elements: (1) engineering models that enable multidisciplinary analysis of large subsystems and systems at various levels of detail, (2) a simulation environment that maximizes designer productivity, and (3) a cost-effective, high-performance computing platform. A fundamental requirement of the concept is that the simulations must be capable of overnight execution on easily accessible computing platforms. This will greatly facilitate the use of large-scale simulations in a design environment. This paper describes the current status of the NPSS with specific emphasis on the progress made over the past year on air breathing propulsion applications. Major accomplishments include the first formal release of the NPSS object-oriented architecture (NPSS Version 1) and the demonstration of a one order of magnitude reduction in computing cost-to-performance ratio using a cluster of personal computers. The paper also describes the future NPSS milestones, which include the simulation of space transportation propulsion systems in response to increased emphasis on safe, low cost access to space within NASA's Aerospace Technology Enterprise. In addition, the paper contains a summary of the feedback received from industry partners on the fiscal year 2000 effort and the actions taken over the past year to respond to that feedback. NPSS was supported in fiscal year 2001 by the High Performance Computing and Communications Program.

  9. Tissue-scale, personalized modeling and simulation of prostate cancer growth

    NASA Astrophysics Data System (ADS)

    Lorenzo, Guillermo; Scott, Michael A.; Tew, Kevin; Hughes, Thomas J. R.; Zhang, Yongjie Jessica; Liu, Lei; Vilanova, Guillermo; Gomez, Hector

    2016-11-01

    Recently, mathematical modeling and simulation of diseases and their treatments have enabled the prediction of clinical outcomes and the design of optimal therapies on a personalized (i.e., patient-specific) basis. This new trend in medical research has been termed “predictive medicine.” Prostate cancer (PCa) is a major health problem and an ideal candidate to explore tissue-scale, personalized modeling of cancer growth for two main reasons: First, it is a small organ, and, second, tumor growth can be estimated by measuring serum prostate-specific antigen (PSA, a PCa biomarker in blood), which may enable in vivo validation. In this paper, we present a simple continuous model that reproduces the growth patterns of PCa. We use the phase-field method to account for the transformation of healthy cells to cancer cells and use diffusion-reaction equations to compute nutrient consumption and PSA production. To accurately and efficiently compute tumor growth, our simulations leverage isogeometric analysis (IGA). Our model is shown to reproduce a known shape instability from a spheroidal pattern to fingered growth. Results of our computations indicate that such shift is a tumor response to escape starvation, hypoxia, and, eventually, necrosis. Thus, branching enables the tumor to minimize the distance from inner cells to external nutrients, contributing to cancer survival and further development. We have also used our model to perform tissue-scale, personalized simulation of a PCa patient, based on prostatic anatomy extracted from computed tomography images. This simulation shows tumor progression similar to that seen in clinical practice.
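
    For orientation only, the model ingredients named in the abstract (a phase field for the healthy-to-tumor transition plus diffusion-reaction equations for a nutrient and for PSA) can be written generically as below; these are illustrative forms with assumed symbols, not the authors' exact equations.

```latex
\begin{align}
\frac{\partial \phi}{\partial t} &= \lambda \nabla^2 \phi - \frac{1}{\tau}\frac{dF(\phi)}{d\phi} + \chi\,\sigma\,\phi(1-\phi)
  &&\text{(phase field: $\phi=0$ healthy, $\phi=1$ tumor)}\\
\frac{\partial \sigma}{\partial t} &= D_\sigma \nabla^2 \sigma + s_\sigma - \delta\,\phi\,\sigma
  &&\text{(nutrient supply, diffusion, and consumption)}\\
\frac{\partial p}{\partial t} &= D_p \nabla^2 p + \alpha_h (1-\phi) + \alpha_c\,\phi - \gamma\,p
  &&\text{(tissue-scale PSA production and decay)}
\end{align}
```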

  10. Computational Nanotechnology of Molecular Materials, Electronics, and Actuators with Carbon Nanotubes and Fullerenes

    NASA Technical Reports Server (NTRS)

    Srivastava, Deepak; Menon, Madhu; Cho, Kyeongjae; Biegel, Bryan (Technical Monitor)

    2001-01-01

    The role of computational nanotechnology in developing the next generation of multifunctional materials, molecular-scale electronic and computing devices, sensors, actuators, and machines is described through a brief review of enabling computational techniques and a few recent examples derived from computer simulations of carbon nanotube based molecular nanotechnology.

  11. Parallel Optimization of 3D Cardiac Electrophysiological Model Using GPU

    PubMed Central

    Xia, Yong; Zhang, Henggui

    2015-01-01

    Large-scale 3D virtual heart model simulations are highly demanding in computational resources. This imposes a big challenge for traditional CPU-based computing resources, which either cannot meet the demands of whole-heart computation or are not easily available because of high costs. GPU as a parallel computing environment therefore provides an alternative for solving the large-scale computational problems of whole-heart modeling. In this study, using a 3D sheep atrial model as a test bed, we developed a GPU-based simulation algorithm to simulate the conduction of electrical excitation waves in the 3D atria. In the GPU algorithm, the multicellular tissue model was split into two components: the single-cell model (a system of ordinary differential equations) and the diffusion term of the monodomain model (a partial differential equation). This decoupling enabled realization of the GPU parallel algorithm. Furthermore, several optimization strategies were proposed based on the features of the virtual heart model, which enabled a 200-fold speedup as compared to a CPU implementation. In conclusion, an optimized GPU algorithm has been developed that provides an economic and powerful platform for 3D whole-heart simulations. PMID:26581957

  12. Parallel Optimization of 3D Cardiac Electrophysiological Model Using GPU.

    PubMed

    Xia, Yong; Wang, Kuanquan; Zhang, Henggui

    2015-01-01

    Large-scale 3D virtual heart model simulations are highly demanding in computational resources. This imposes a big challenge for traditional CPU-based computing resources, which either cannot meet the demands of whole-heart computation or are not easily available because of high costs. GPU as a parallel computing environment therefore provides an alternative for solving the large-scale computational problems of whole-heart modeling. In this study, using a 3D sheep atrial model as a test bed, we developed a GPU-based simulation algorithm to simulate the conduction of electrical excitation waves in the 3D atria. In the GPU algorithm, the multicellular tissue model was split into two components: the single-cell model (a system of ordinary differential equations) and the diffusion term of the monodomain model (a partial differential equation). This decoupling enabled realization of the GPU parallel algorithm. Furthermore, several optimization strategies were proposed based on the features of the virtual heart model, which enabled a 200-fold speedup as compared to a CPU implementation. In conclusion, an optimized GPU algorithm has been developed that provides an economic and powerful platform for 3D whole-heart simulations.
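
    The ODE/PDE decoupling described in both records above is the standard operator-splitting structure sketched below; the toy cell kinetics, grid, and constants are assumptions, and the published work uses a detailed sheep atrial cell model with CUDA kernels rather than NumPy.

```python
import numpy as np

# Operator splitting for a monodomain-style tissue model: a per-node cell ODE step
# (embarrassingly parallel, hence GPU-friendly) followed by a diffusion (PDE) step.

nx, ny = 128, 128
V = -80.0 * np.ones((nx, ny))      # membrane potential (mV)
w = np.zeros((nx, ny))             # a single recovery variable
V[60:68, 60:68] = 20.0             # stimulate a small patch
D, dt, dx = 0.1, 0.02, 0.25        # toy diffusion coefficient, time step, grid spacing

def cell_ode_step(V, w, dt):
    """Toy FitzHugh-Nagumo-like kinetics standing in for the real cell model."""
    dV = (V + 80.0) * (V + 20.0) * (40.0 - V) * 2e-4 - 5.0 * w
    dw = 0.02 * (0.05 * (V + 80.0) - w)
    return V + dt * dV, w + dt * dw

def diffusion_step(V, dt):
    """Explicit 5-point Laplacian (periodic boundaries for brevity)."""
    lap = (np.roll(V, 1, 0) + np.roll(V, -1, 0) +
           np.roll(V, 1, 1) + np.roll(V, -1, 1) - 4.0 * V) / dx**2
    return V + dt * D * lap

for _ in range(500):
    V, w = cell_ode_step(V, w, dt)  # ODE half of the split (independent per cell)
    V = diffusion_step(V, dt)       # PDE half of the split (inter-cell coupling)
print(float(V.max()), float(V.min()))
```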

  13. Numerical Propulsion System Simulation (NPSS) 1999 Industry Review

    NASA Technical Reports Server (NTRS)

    Lytle, John; Follen, Greg; Naiman, Cynthia; Evans, Austin

    2000-01-01

    The technologies necessary to enable detailed numerical simulations of complete propulsion systems are being developed at the NASA Glenn Research Center in cooperation with industry, academia, and other government agencies. Large scale, detailed simulations will be of great value to the nation because they eliminate some of the costly testing required to develop and certify advanced propulsion systems. In addition, time and cost savings will be achieved by enabling design details to be evaluated early in the development process before a commitment is made to a specific design. This concept is called the Numerical Propulsion System Simulation (NPSS). NPSS consists of three main elements: (1) engineering models that enable multidisciplinary analysis of large subsystems and systems at various levels of detail, (2) a simulation environment that maximizes designer productivity, and (3) a cost-effective, high-performance computing platform. A fundamental requirement of the concept is that the simulations must be capable of overnight execution on easily accessible computing platforms. This will greatly facilitate the use of large-scale simulations in a design environment. This paper describes the current status of the NPSS with specific emphasis on the progress made over the past year on air breathing propulsion applications. In addition, the paper contains a summary of the feedback received from industry partners in the development effort and the actions taken over the past year to respond to that feedback. The NPSS development was supported in FY99 by the High Performance Computing and Communications Program.

  14. The Nuclear Energy Advanced Modeling and Simulation Enabling Computational Technologies FY09 Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Diachin, L F; Garaizar, F X; Henson, V E

    2009-10-12

    In this document we report on the status of the Nuclear Energy Advanced Modeling and Simulation (NEAMS) Enabling Computational Technologies (ECT) effort. In particular, we provide the context for ECT in the broader NEAMS program and describe the three pillars of the ECT effort, namely, (1) tools and libraries, (2) software quality assurance, and (3) computational facility (computers, storage, etc.) needs. We report on our FY09 deliverables to determine the needs of the integrated performance and safety codes (IPSCs) in these three areas and lay out the general plan for software quality assurance to meet the requirements of DOE and the DOE Advanced Fuel Cycle Initiative (AFCI). We conclude with a brief description of our interactions with the Idaho National Laboratory computer center to determine what is needed to expand their role as a NEAMS user facility.

  15. Algorithms of GPU-enabled reactive force field (ReaxFF) molecular dynamics.

    PubMed

    Zheng, Mo; Li, Xiaoxia; Guo, Li

    2013-04-01

    Reactive force field (ReaxFF), a recent and novel bond-order potential, allows reactive molecular dynamics (ReaxFF MD) simulations for modeling larger and more complex molecular systems involving chemical reactions when compared with computation-intensive quantum mechanical methods. However, ReaxFF MD can be approximately 10-50 times slower than classical MD due to its explicit modeling of bond forming and breaking, the dynamic charge equilibration at each time step, and its time step being one order of magnitude smaller than that of classical MD, all of which pose significant computational challenges to reaching spatio-temporal scales of nanometers and nanoseconds. The very recent advances in graphics processing units (GPU) provide not only highly favorable performance for GPU-enabled MD programs compared with CPU implementations but also an opportunity to cope with the computing-power and memory demands that ReaxFF MD imposes on computer hardware. In this paper, we present the algorithms of GMD-Reax, the first GPU-enabled ReaxFF MD program, with significantly improved performance surpassing CPU implementations on desktop workstations. The performance of GMD-Reax has been benchmarked on a PC equipped with a NVIDIA C2050 GPU for coal pyrolysis simulation systems with atoms ranging from 1378 to 27,283. GMD-Reax achieved speedups as high as 12 times faster than Duin et al.'s FORTRAN codes in Lammps on 8 CPU cores and 6 times faster than the Lammps C codes based on PuReMD, in terms of the simulation time per time step averaged over 100 steps. GMD-Reax could be used as a new and efficient computational tool for exploring very complex molecular reactions via ReaxFF MD simulation on desktop workstations. Copyright © 2013 Elsevier Inc. All rights reserved.

  16. Wall Shear Stress Distribution in a Patient-Specific Cerebral Aneurysm Model using Reduced Order Modeling

    NASA Astrophysics Data System (ADS)

    Han, Suyue; Chang, Gary Han; Schirmer, Clemens; Modarres-Sadeghi, Yahya

    2016-11-01

    We construct a reduced-order model (ROM) to study the Wall Shear Stress (WSS) distributions in image-based, patient-specific aneurysm models. The magnitude of WSS has been shown to be a critical factor in the growth and rupture of human aneurysms. We start the process by running a training case using a Computational Fluid Dynamics (CFD) simulation with time-varying flow parameters, such that these parameters cover the range of parameters of interest. The method of snapshot Proper Orthogonal Decomposition (POD) is utilized to construct the reduced-order bases from the training CFD simulation. The resulting ROM enables us to study the flow patterns and the WSS distributions over a range of system parameters very efficiently with a relatively small number of modes. This enables comprehensive analysis of the model system across a range of physiological conditions without the need to re-compute the simulation for small changes in the system parameters.
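
    Snapshot POD itself is compact enough to sketch: collect snapshot fields from the training CFD run as columns, take an SVD, and keep the leading modes. The random matrix below is only a placeholder for actual WSS snapshots.

```python
import numpy as np

# Snapshot POD: reduced basis from training snapshots, then project/reconstruct.

n_points, n_snapshots = 5000, 200
snapshots = np.random.rand(n_points, n_snapshots)       # placeholder for WSS snapshots

mean_field = snapshots.mean(axis=1, keepdims=True)
U, s, _ = np.linalg.svd(snapshots - mean_field, full_matrices=False)

energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.99)) + 1               # modes capturing 99% of energy
basis = U[:, :r]

new_state = snapshots[:, [0]]                            # a state to represent in the ROM
coeffs = basis.T @ (new_state - mean_field)              # reduced coordinates
reconstruction = mean_field + basis @ coeffs
print(r, np.linalg.norm(new_state - reconstruction) / np.linalg.norm(new_state))
```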

  17. Delivering better power: the role of simulation in reducing the environmental impact of aircraft engines.

    PubMed

    Menzies, Kevin

    2014-08-13

    The growth in simulation capability over the past 20 years has led to remarkable changes in the design process for gas turbines. The availability of relatively cheap computational power coupled to improvements in numerical methods and physical modelling in simulation codes have enabled the development of aircraft propulsion systems that are more powerful and yet more efficient than ever before. However, the design challenges are correspondingly greater, especially to reduce environmental impact. The simulation requirements to achieve a reduced environmental impact are described along with the implications of continued growth in available computational power. It is concluded that achieving the environmental goals will demand large-scale multi-disciplinary simulations requiring significantly increased computational power, to enable optimization of the airframe and propulsion system over the entire operational envelope. However even with massive parallelization, the limits imposed by communications latency will constrain the time required to achieve a solution, and therefore the position of such large-scale calculations in the industrial design process. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  18. Software for simulation of a computed tomography imaging spectrometer using optical design software

    NASA Astrophysics Data System (ADS)

    Spuhler, Peter T.; Willer, Mark R.; Volin, Curtis E.; Descour, Michael R.; Dereniak, Eustace L.

    2000-11-01

    Our imaging spectrometer simulation software, known under the name Eikon, should improve and speed up the design of a Computed Tomography Imaging Spectrometer (CTIS). Eikon uses existing raytracing software to simulate a virtual instrument. Eikon enables designers to virtually run through the design, calibration, and data acquisition, saving significant cost and time when designing an instrument. We anticipate that Eikon simulations will improve future designs of CTIS by allowing engineers to explore more instrument options.

  19. X-ray Micro-Tomography of Ablative Heat Shield Materials

    NASA Technical Reports Server (NTRS)

    Panerai, Francesco; Ferguson, Joseph; Borner, Arnaud; Mansour, Nagi N.; Barnard, Harold S.; MacDowell, Alastair A.; Parkinson, Dilworth Y.

    2016-01-01

    X-ray micro-tomography is a non-destructive characterization technique that allows imaging of materials structures with voxel sizes in the micrometer range. This level of resolution makes the technique very attractive for imaging porous ablators used in hypersonic entry systems. Besides providing a high fidelity description of the material architecture, micro-tomography enables computations of bulk material properties and simulations of micro-scale phenomena. This presentation provides an overview of a collaborative effort between NASA Ames Research Center and Lawrence Berkeley National Laboratory, aimed at developing micro-tomography experiments and simulations for porous ablative materials. Measurements are carried out using x-rays from the Advanced Light Source at Berkeley Lab on different classes of ablative materials used in NASA entry systems. Challenges, strengths and limitations of the technique for imaging materials such as lightweight carbon-phenolic systems and woven textiles are discussed. Computational tools developed to perform numerical simulations based on micro-tomography are described. These enable computations of material properties such as permeability, thermal and radiative conductivity, tortuosity and other parameters that are used in ablator response models. Finally, we present the design of environmental cells that enable imaging materials under simulated operational conditions, such as high temperature, mechanical loads and oxidizing atmospheres. Keywords: Micro-tomography, Porous media, Ablation

  20. Multicore Education through Simulation

    ERIC Educational Resources Information Center

    Ozturk, O.

    2011-01-01

    A project-oriented course for advanced undergraduate and graduate students is described for simulating multiple processor cores. Simics, a free simulator for academia, was utilized to enable students to explore computer architecture, operating systems, and hardware/software cosimulation. Motivation for including this course in the curriculum is…

  1. Simulation of particle motion in a closed conduit validated against experimental data

    NASA Astrophysics Data System (ADS)

    Dolanský, Jindřich

    2015-05-01

    Motion of a number of spherical particles in a closed conduit is examined by means of both simulation and experiment. The bed of the conduit is covered by stationary spherical particles of the same size as the moving particles. The flow is driven by experimentally measured velocity profiles, which are inputs to the simulation. Altering the input velocity profiles generates various trajectory patterns. A simulation based on the lattice Boltzmann method (LBM) is developed to study the mutual interactions of the flow and the particles. The simulation enables modeling of both the particle motion and the fluid flow. The entropic LBM is employed to deal with the flow, which is characterized by a high Reynolds number. The entropic modification of the LBM, along with the enhanced refinement of the lattice grid, increases the demands on computational resources. Due to the inherently parallel nature of the LBM, this can be handled by employing the Parallel Computing Toolbox (MATLAB) and other transformations enabling use of CUDA GPU computing technology. The trajectories of the particles determined within the LBM simulation are validated against data gained from the experiments. The compatibility of the simulation results with the outputs of experimental measurements is evaluated, the accuracy of the applied approach is assessed, and the stability and efficiency of the simulation are also considered.
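
    A single collision-and-streaming step of a standard (BGK, non-entropic) D2Q9 lattice Boltzmann scheme is sketched below to show the kernel structure that makes the method so amenable to parallelization; the lattice size, relaxation time, and crude forcing are assumptions, not values from the study.

```python
import numpy as np

# Minimal D2Q9 BGK lattice Boltzmann step: collide toward equilibrium, then stream.

nx, ny, tau = 100, 40, 0.6
c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)

f = np.ones((9, nx, ny)) * w[:, None, None]            # start from a fluid at rest

def equilibrium(rho, ux, uy):
    cu = c[:, 0, None, None] * ux + c[:, 1, None, None] * uy
    return rho * w[:, None, None] * (1 + 3*cu + 4.5*cu**2 - 1.5*(ux**2 + uy**2))

for _ in range(200):
    rho = f.sum(axis=0)
    ux = (c[:, 0, None, None] * f).sum(axis=0) / rho + 1e-5   # crude body-force driving
    uy = (c[:, 1, None, None] * f).sum(axis=0) / rho
    f += -(f - equilibrium(rho, ux, uy)) / tau                # BGK collision
    for i in range(9):                                        # streaming (periodic box)
        f[i] = np.roll(np.roll(f[i], c[i, 0], axis=0), c[i, 1], axis=1)

print(float(rho.mean()), float(ux.mean()))
```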

  2. The fusion code XGC: Enabling kinetic study of multi-scale edge turbulent transport in ITER [Book Chapter]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    D'Azevedo, Eduardo; Abbott, Stephen; Koskela, Tuomas

    The XGC fusion gyrokinetic code combines state-of-the-art, portable computational and algorithmic technologies to enable complicated multiscale simulations of turbulence and transport dynamics in ITER edge plasma on the largest US open-science computer, the CRAY XK7 Titan, at its maximal heterogeneous capability; such simulations have not been possible before due to a factor of over 10 shortage in time-to-solution for completing one physics case within less than 5 days of wall-clock time. Frontier techniques such as nested OpenMP parallelism, adaptive parallel I/O, staging I/O and data reduction using dynamic and asynchronous application interactions, dynamic repartitioning for balancing computational work in pushing particles and in grid-related work, scalable and accurate discretization algorithms for non-linear Coulomb collisions, and communication-avoiding subcycling technology for pushing particles on both CPUs and GPUs are also utilized to dramatically improve the scalability and time-to-solution, hence enabling the difficult kinetic ITER edge simulation on a present-day leadership-class computer.

  3. A Parallel Sliding Region Algorithm to Make Agent-Based Modeling Possible for a Large-Scale Simulation: Modeling Hepatitis C Epidemics in Canada.

    PubMed

    Wong, William W L; Feng, Zeny Z; Thein, Hla-Hla

    2016-11-01

    Agent-based models (ABMs) are computer simulation models that define interactions among agents and simulate emergent behaviors that arise from the ensemble of local decisions. ABMs have been increasingly used to examine trends in infectious disease epidemiology. However, the main limitation of ABMs is the high computational cost for a large-scale simulation. To improve the computational efficiency of large-scale ABM simulations, we built a parallelizable sliding region algorithm (SRA) for ABM and compared it to a nonparallelizable ABM. We developed a complex agent network and performed two simulations to model hepatitis C epidemics based on real demographic data from Saskatchewan, Canada. The first simulation used the SRA, which processed each postal code subregion in turn. The second simulation processed the entire population simultaneously. It was concluded that the parallelizable SRA showed computational time savings with comparable results in a province-wide simulation. Using the same method, the SRA can be generalized to perform a country-wide simulation. Thus, this parallel algorithm enables the possibility of using ABM for large-scale simulation with limited computational resources.
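
    The gain reported above comes from making subregions independent work units; the sketch below shows only that partition-and-parallelize skeleton with a toy infection update (region names, agent counts, and rates are hypothetical, and the published SRA additionally slides the active region to capture cross-region interactions).

```python
import random
from multiprocessing import Pool

# Toy region-parallel ABM skeleton: each postal-code region is simulated independently.

def simulate_region(item):
    region, agents = item                       # agents: list of booleans (True = infected)
    infected = sum(agents)
    for _ in range(10):                         # ten time steps of a toy infection process
        susceptible = len(agents) - infected
        infected += sum(random.random() < 0.001 for _ in range(susceptible))
    return region, infected

if __name__ == "__main__":
    regions = {"region-%02d" % i: [random.random() < 0.01 for _ in range(5000)]
               for i in range(8)}
    with Pool(processes=4) as pool:
        results = pool.map(simulate_region, regions.items())
    print(results)
```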

  4. Immersive, Interactive, Web-Enabled Computer Simulation as a Trigger for Learning: The next Generation of Problem-Based Learning in Educational Leadership

    ERIC Educational Resources Information Center

    Mann, Dale; Reardon, R. M.; Becker, J. D.; Shakeshaft, C.; Bacon, Nicholas

    2011-01-01

    This paper describes the use of advanced computer technology in an innovative educational leadership program. This program integrates full-motion video scenarios that simulate the leadership challenges typically faced by principals over the course of a full school year. These scenarios require decisions that are then coupled to consequences and…

  5. Enabling Co-Design of Multi-Layer Exascale Storage Architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carothers, Christopher

    Growing demands for computing power in applications such as energy production, climate analysis, computational chemistry, and bioinformatics have propelled computing systems toward the exascale: systems with 10^18 floating-point operations per second. These systems, to be designed and constructed over the next decade, will create unprecedented challenges in component counts, power consumption, resource limitations, and system complexity. Data storage and access are an increasingly important and complex component in extreme-scale computing systems, and significant design work is needed to develop successful storage hardware and software architectures at exascale. Co-design of these systems will be necessary to find the best possible design points for exascale systems. The goal of this work has been to enable the exploration and co-design of exascale storage systems by providing a detailed, accurate, and highly parallel simulation of exascale storage and the surrounding environment. Specifically, this simulation has (1) portrayed realistic application checkpointing and analysis workloads, (2) captured the complexity, scale, and multilayer nature of exascale storage hardware and software, and (3) executed in a timeframe that enables "what if" exploration of design concepts. We developed models of the major hardware and software components in an exascale storage system, as well as the application I/O workloads that drive them. We used our simulation system to investigate critical questions in reliability and concurrency at exascale, helping guide the design of future exascale hardware and software architectures. Additionally, we provided this system to interested vendors and researchers so that others can explore the design space. We validated the capabilities of our simulation environment by configuring the simulation to represent the Argonne Leadership Computing Facility Blue Gene/Q system and comparing simulation results for application I/O patterns to the results of executions of these I/O kernels on the actual system.

  6. Virtual School, Real Experience: Simulations Replicate the World of Practice for Aspiring Principals

    ERIC Educational Resources Information Center

    Mann, Dale; Shakeshaft, Charol

    2013-01-01

    A web-enabled computer simulation program presents real-world opportunities, problems, and challenges for aspiring principals. The simulation challenges areas that are not always covered in lectures, textbooks, or workshops. For example, using the simulation requires dealing--on-screen and in real time--with demanding parents, observing…

  7. Moose: An Open-Source Framework to Enable Rapid Development of Collaborative, Multi-Scale, Multi-Physics Simulation Tools

    NASA Astrophysics Data System (ADS)

    Slaughter, A. E.; Permann, C.; Peterson, J. W.; Gaston, D.; Andrs, D.; Miller, J.

    2014-12-01

    The Idaho National Laboratory (INL)-developed Multiphysics Object Oriented Simulation Environment (MOOSE; www.mooseframework.org) is an open-source, parallel computational framework for enabling the solution of complex, fully implicit multiphysics systems. MOOSE provides a set of computational tools that scientists and engineers can use to create sophisticated multiphysics simulations. Applications built using MOOSE have computed solutions for chemical reaction and transport equations, computational fluid dynamics, solid mechanics, heat conduction, mesoscale materials modeling, geomechanics, and others. To facilitate the coupling of diverse and highly coupled physical systems, MOOSE employs the Jacobian-free Newton-Krylov (JFNK) method when solving the coupled nonlinear systems of equations arising in multiphysics applications. The MOOSE framework is written in C++, and leverages other high-quality, open-source scientific software packages such as LibMesh, Hypre, and PETSc. MOOSE uses a "hybrid parallel" model which combines both shared memory (thread-based) and distributed memory (MPI-based) parallelism to ensure efficient resource utilization on a wide range of computational hardware. MOOSE-based applications are inherently modular, which allows for simulation expansion (via coupling of additional physics modules) and the creation of multi-scale simulations. Any application developed with MOOSE supports running (in parallel) any other MOOSE-based application. Each application can be developed independently, yet easily communicate with other applications (e.g., conductivity in a slope-scale model could be a constant input, or a complete phase-field micro-structure simulation) without additional code being written. This method of development has proven effective at INL and expedites the development of sophisticated, sustainable, and collaborative simulation tools.
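
    For context, the Jacobian-free Newton-Krylov approach mentioned above solves a nonlinear system F(u) = 0 without forming the Jacobian explicitly; Jacobian-vector products are approximated by finite differences inside a Krylov linear solver. The following minimal sketch uses SciPy's generic newton_krylov solver on an assumed one-dimensional test problem; it is only an illustration of the method, not MOOSE code.

```python
# Minimal Jacobian-free Newton-Krylov sketch (illustrative; not MOOSE code).
import numpy as np
from scipy.optimize import newton_krylov

N = 50
h = 1.0 / (N + 1)

def residual(u):
    """F(u) = 0 for the assumed test problem -u'' + u**3 = 1 with zero Dirichlet boundaries."""
    u_pad = np.concatenate(([0.0], u, [0.0]))
    lap = (u_pad[2:] - 2.0 * u_pad[1:-1] + u_pad[:-2]) / h**2
    return -lap + u**3 - 1.0

u0 = np.zeros(N)                                   # initial guess
sol = newton_krylov(residual, u0, method="lgmres") # Krylov solver supplies the inner linear solves
print("max residual after solve:", np.abs(residual(sol)).max())
```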

  8. Progress in modeling and simulation.

    PubMed

    Kindler, E

    1998-01-01

    Computers are increasingly used for modeling systems, while the other "media" that carry models (including the human intellect) are being abandoned. For modeling knowledge, i.e., more or less general concepts (which may in turn be used to model systems composed of instances of such concepts), object-oriented programming is now widely used. For modeling processes that exist and evolve in time, computer simulation is used, with results often presented by means of animation (graphical images moving and changing in time). Unfortunately, object-oriented programming tools are generally not designed to be of great use for simulation, while simulation programming tools do not let their users exploit the advantages of object-oriented programming. Nevertheless, there are exceptions that make it possible to use general concepts represented in a computer for constructing simulation models and for modifying them easily. These are described in the present paper, together with precise definitions of modeling, simulation, and object-oriented programming (including cases that do not satisfy the definitions and risk introducing misunderstanding), and an outline of their applications and further development. Because computing systems are being introduced as control components in a wide spectrum of technological, social, and biological systems, attention is directed to models of systems that themselves contain modeling components.

  9. DEM GPU studies of industrial scale particle simulations for granular flow civil engineering applications

    NASA Astrophysics Data System (ADS)

    Pizette, Patrick; Govender, Nicolin; Wilke, Daniel N.; Abriak, Nor-Edine

    2017-06-01

    The use of the Discrete Element Method (DEM) for industrial civil engineering applications is currently limited due to the computational demands when large numbers of particles are considered. The graphics processing unit (GPU), with its highly parallelized hardware architecture, shows potential to enable solution of civil engineering problems using discrete granular approaches. We demonstrate in this study the practical utility of a validated GPU-enabled DEM modeling environment to simulate industrial-scale granular problems. As an illustration, the flow discharge of storage silos using 8 and 17 million particles is considered. DEM simulations have been performed to investigate the influence of particle size (equivalent size for the 20/40-mesh gravel) and induced shear stress for two hopper shapes. The preliminary results indicate that the shape of the hopper significantly influences the discharge rates for the same material. Specifically, this work shows that GPU-enabled DEM modeling environments can model industrial-scale problems on a single portable computer within a day for 30 seconds of process time.
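
    As a rough sketch of the discrete element idea underlying such simulations (not the authors' GPU code), the linear spring-dashpot normal contact model below computes the pairwise force between two spheres; the stiffness, damping, and particle properties are assumed values chosen only for illustration.

```python
# Illustrative linear spring-dashpot DEM normal contact force (assumed parameters; not GPU code).
import numpy as np

K_N = 1.0e5      # normal stiffness [N/m], assumed
C_N = 5.0        # normal damping [N*s/m], assumed

def normal_contact_force(x_i, x_j, v_i, v_j, r_i, r_j):
    """Force on particle i from contact with particle j (zero if the spheres do not touch)."""
    d = x_j - x_i
    dist = np.linalg.norm(d)
    overlap = (r_i + r_j) - dist
    if overlap <= 0.0:
        return np.zeros(3)
    n = d / dist                            # unit normal pointing from i toward j
    v_rel_n = np.dot(v_j - v_i, n)          # relative normal velocity
    f_mag = K_N * overlap - C_N * v_rel_n   # repulsive spring plus dashpot term
    return -f_mag * n                       # push particle i away from j

# Two slightly overlapping 1 cm spheres approaching each other:
f = normal_contact_force(np.zeros(3), np.array([0.019, 0.0, 0.0]),
                         np.zeros(3), np.array([-0.1, 0.0, 0.0]), 0.01, 0.01)
print("contact force on particle i:", f)
```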

  10. Outcomes from the DOE Workshop on Turbulent Flow Simulation at the Exascale

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sprague, Michael; Boldyrev, Stanislav; Chang, Choong-Seock

    This paper summarizes the outcomes from the Turbulent Flow Simulation at the Exascale: Opportunities and Challenges Workshop, which was held 4-5 August 2015, and was sponsored by the U.S. Department of Energy Office of Advanced Scientific Computing Research. The workshop objective was to define and describe the challenges and opportunities that computing at the exascale will bring to turbulent-flow simulations in applied science and technology. The need for accurate simulation of turbulent flows is evident across the U.S. Department of Energy applied-science and engineering portfolios, including combustion, plasma physics, nuclear-reactor physics, wind energy, and atmospheric science. The workshop brought together experts in turbulent-flow simulation, computational mathematics, and high-performance computing. Building upon previous ASCR workshops on exascale computing, participants defined a research agenda and path forward that will enable scientists and engineers to continually leverage, engage, and direct advances in computational systems on the path to exascale computing.

  11. Tutorial: Parallel Computing of Simulation Models for Risk Analysis.

    PubMed

    Reilly, Allison C; Staid, Andrea; Gao, Michael; Guikema, Seth D

    2016-10-01

    Simulation models are widely used in risk analysis to study the effects of uncertainties on outcomes of interest in complex problems. Often, these models are computationally complex and time consuming to run. This latter point may be at odds with time-sensitive evaluations or may limit the number of parameters that are considered. In this article, we give an introductory tutorial focused on parallelizing simulation code to better leverage modern computing hardware, enabling risk analysts to better utilize simulation-based methods for quantifying uncertainty in practice. This article is aimed primarily at risk analysts who use simulation methods but do not yet utilize parallelization to decrease the computational burden of these models. The discussion is focused on conceptual aspects of embarrassingly parallel computer code and software considerations. Two complementary examples are shown using the languages MATLAB and R. A brief discussion of hardware considerations is located in the Appendix. © 2016 Society for Risk Analysis.
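
    In the same spirit as the tutorial's MATLAB and R examples, the Python sketch below (an assumed illustration, not code from the article) parallelizes an embarrassingly parallel Monte Carlo risk estimate, where each replicate is independent and can be dispatched to a separate worker process.

```python
# Embarrassingly parallel Monte Carlo sketch (illustrative; not code from the cited tutorial).
import numpy as np
from multiprocessing import Pool

THRESHOLD = 17_000.0    # assumed loss threshold for the risk metric

def one_replicate(seed):
    """One independent replicate: total annual loss from 10,000 simulated claims."""
    rng = np.random.default_rng(seed)
    claims = rng.lognormal(mean=0.0, sigma=1.0, size=10_000)   # assumed claim-size model
    return claims.sum()

if __name__ == "__main__":
    with Pool() as pool:                       # one worker per CPU core by default
        totals = np.array(pool.map(one_replicate, range(1_000)))
    print("estimated P(total loss > threshold):", (totals > THRESHOLD).mean())
```

    Because each replicate depends only on its own random seed, the workers never need to communicate, which is what makes this pattern "embarrassingly" parallel and such an easy first step for risk analysts.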

  12. QwikMD — Integrative Molecular Dynamics Toolkit for Novices and Experts

    PubMed Central

    Ribeiro, João V.; Bernardi, Rafael C.; Rudack, Till; Stone, John E.; Phillips, James C.; Freddolino, Peter L.; Schulten, Klaus

    2016-01-01

    The proper functioning of biomolecules in living cells requires them to assume particular structures and to undergo conformational changes. Both biomolecular structure and motion can be studied using a wide variety of techniques, but none offers the level of detail as do molecular dynamics (MD) simulations. Integrating two widely used modeling programs, namely NAMD and VMD, we have created a robust, user-friendly software, QwikMD, which enables novices and experts alike to address biomedically relevant questions, where often only molecular dynamics simulations can provide answers. Performing both simple and advanced MD simulations interactively, QwikMD automates as many steps as necessary for preparing, carrying out, and analyzing simulations while checking for common errors and enabling reproducibility. QwikMD meets also the needs of experts in the field, increasing the efficiency and quality of their work by carrying out tedious or repetitive tasks while enabling easy control of every step. Whether carrying out simulations within the live view mode on a small laptop or performing complex and large simulations on supercomputers or Cloud computers, QwikMD uses the same steps and user interface. QwikMD is freely available by download on group and personal computers. It is also available on the cloud at Amazon Web Services. PMID:27216779

  13. QwikMD — Integrative Molecular Dynamics Toolkit for Novices and Experts

    NASA Astrophysics Data System (ADS)

    Ribeiro, João V.; Bernardi, Rafael C.; Rudack, Till; Stone, John E.; Phillips, James C.; Freddolino, Peter L.; Schulten, Klaus

    2016-05-01

    The proper functioning of biomolecules in living cells requires them to assume particular structures and to undergo conformational changes. Both biomolecular structure and motion can be studied using a wide variety of techniques, but none offers the level of detail as do molecular dynamics (MD) simulations. Integrating two widely used modeling programs, namely NAMD and VMD, we have created a robust, user-friendly software, QwikMD, which enables novices and experts alike to address biomedically relevant questions, where often only molecular dynamics simulations can provide answers. Performing both simple and advanced MD simulations interactively, QwikMD automates as many steps as necessary for preparing, carrying out, and analyzing simulations while checking for common errors and enabling reproducibility. QwikMD meets also the needs of experts in the field, increasing the efficiency and quality of their work by carrying out tedious or repetitive tasks while enabling easy control of every step. Whether carrying out simulations within the live view mode on a small laptop or performing complex and large simulations on supercomputers or Cloud computers, QwikMD uses the same steps and user interface. QwikMD is freely available by download on group and personal computers. It is also available on the cloud at Amazon Web Services.

  14. Simulation-Based Approach for Site-Specific Optimization of Hydrokinetic Turbine Arrays

    NASA Astrophysics Data System (ADS)

    Sotiropoulos, F.; Chawdhary, S.; Yang, X.; Khosronejad, A.; Angelidis, D.

    2014-12-01

    A simulation-based approach has been developed to enable site-specific optimization of tidal and current turbine arrays in real-life waterways. The computational code is based on the St. Anthony Falls Laboratory Virtual StreamLab (VSL3D), which is able to carry out high-fidelity simulations of turbulent flow and sediment transport processes in rivers and streams taking into account the arbitrary geometrical complexity characterizing natural waterways. The computational framework can be used either in turbine-resolving mode, to take into account all geometrical details of the turbine, or with the turbines parameterized as actuator disks or actuator lines. Locally refined grids are employed to dramatically increase the resolution of the simulation and enable efficient simulations of multi-turbine arrays. Turbine/sediment interactions are simulated using the coupled hydro-morphodynamic module of VSL3D. The predictive capabilities of the resulting computational framework will be demonstrated by applying it to simulate turbulent flow past a tri-frame configuration of hydrokinetic turbines in a rigid-bed turbulent open channel flow as well as turbines mounted on mobile bed open channels to investigate turbine/sediment interactions. The utility of the simulation-based approach for guiding the optimal development of turbine arrays in real-life waterways will also be discussed and demonstrated. This work was supported by NSF grant IIP-1318201. Simulations were carried out at the Minnesota Supercomputing Institute.

  15. Neural-Network Simulator

    NASA Technical Reports Server (NTRS)

    Mitchell, Paul H.

    1991-01-01

    The F77NNS (FORTRAN 77 Neural Network Simulator) computer program simulates the popular back-error-propagation neural network. It is designed to take advantage of vectorization when used on computers having this capability, but can also be used on any computer equipped with an ANSI-77 FORTRAN compiler. Problems involving pattern matching or mathematical modeling of systems fit the class of problems F77NNS is designed to solve. The program has a restart capability, so a neural network can be solved in stages suited to the user's resources and needs. It enables the user to customize the pattern of connections between layers of the network. The size of the neural network to which F77NNS can be applied is limited only by the amount of random-access memory available to the user.
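
    For readers unfamiliar with back-error-propagation, the short NumPy sketch below trains a tiny two-layer network on the XOR problem; it is a generic illustration of the algorithm, not the F77NNS code, and the architecture, learning rate, and iteration count are assumptions.

```python
# Minimal back-error-propagation sketch in NumPy (illustrative only; unrelated to F77NNS).
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)   # XOR inputs
y = np.array([[0], [1], [1], [0]], dtype=float)               # XOR targets
Xb = np.hstack([X, np.ones((4, 1))])                          # append a bias input

W1 = rng.normal(size=(3, 4))        # (2 inputs + bias) -> 4 hidden units
W2 = rng.normal(size=(5, 1))        # (4 hidden + bias) -> 1 output unit
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(50_000):
    # Forward pass.
    h = sigmoid(Xb @ W1)
    hb = np.hstack([h, np.ones((4, 1))])
    out = sigmoid(hb @ W2)
    # Backward pass: propagate the output error toward the input layer.
    d_out = (out - y) * out * (1.0 - out)
    d_h = (d_out @ W2[:-1].T) * h * (1.0 - h)
    # Gradient-descent weight updates.
    W2 -= 0.5 * hb.T @ d_out
    W1 -= 0.5 * Xb.T @ d_h

print(np.round(out, 2))   # should approach [[0], [1], [1], [0]]
```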

  16. Integrating Computational Science Tools into a Thermodynamics Course

    ERIC Educational Resources Information Center

    Vieira, Camilo; Magana, Alejandra J.; García, R. Edwin; Jana, Aniruddha; Krafcik, Matthew

    2018-01-01

    Computational tools and methods have permeated multiple science and engineering disciplines, because they enable scientists and engineers to process large amounts of data, represent abstract phenomena, and to model and simulate complex concepts. In order to prepare future engineers with the ability to use computational tools in the context of…

  17. Advanced computational simulations of water waves interacting with wave energy converters

    NASA Astrophysics Data System (ADS)

    Pathak, Ashish; Freniere, Cole; Raessi, Mehdi

    2017-03-01

    Wave energy converter (WEC) devices harness the renewable ocean wave energy and convert it into useful forms of energy, e.g. mechanical or electrical. This paper presents an advanced 3D computational framework to study the interaction between water waves and WEC devices. The computational tool solves the full Navier-Stokes equations and considers all important effects impacting the device performance. To enable large-scale simulations in fast turnaround times, the computational solver was developed in an MPI parallel framework. A fast multigrid preconditioned solver is introduced to solve the computationally expensive pressure Poisson equation. The computational solver was applied to two surface-piercing WEC geometries: bottom-hinged cylinder and flap. Their numerically simulated response was validated against experimental data. Additional simulations were conducted to investigate the applicability of Froude scaling in predicting full-scale WEC response from the model experiments.

  18. Computer simulation of magnetization-controlled shunt reactors for calculating electromagnetic transients in power systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karpov, A. S.

    2013-01-15

    A computer procedure for simulating magnetization-controlled dc shunt reactors is described, which enables the electromagnetic transients in electric power systems to be calculated. It is shown that, by taking technically simple measures in the control system, one can obtain high-speed reactors sufficient for many purposes, and dispense with the use of high-power devices for compensating higher harmonic components.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCaskey, Alexander J.

    There is a lack of state-of-the-art HPC simulation tools for simulating general quantum computing. Furthermore, there are no real software tools that integrate current quantum computers into existing classical HPC workflows. This product, the Quantum Virtual Machine (QVM), solves this problem by providing an extensible framework for pluggable virtual, or physical, quantum processing units (QPUs). It enables the execution of low level quantum assembly codes and returns the results of such executions.

  20. Universal, computer facilitated, steady state oscillator, closed loop analysis theory and some applications to precision oscillators

    NASA Technical Reports Server (NTRS)

    Parzen, Benjamin

    1992-01-01

    The theory of oscillator analysis in the immittance domain should be read in conjunction with the additional theory presented here. The combined theory enables the computer simulation of the steady state oscillator. The simulation makes the calculation of the oscillator total steady state performance practical, including noise at all oscillator locations. Some specific precision oscillators are analyzed.

  1. Understanding Islamist political violence through computational social simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Watkins, Jennifer H; Mackerrow, Edward P; Patelli, Paolo G

    Understanding the process that enables political violence is of great value in reducing the future demand for and support of violent opposition groups. Methods are needed that allow alternative scenarios and counterfactuals to be scientifically researched. Computational social simulation shows promise in developing 'computer experiments' that would be unfeasible or unethical in the real world. Additionally, the process of modeling and simulation reveals and challenges assumptions that may not be noted in theories, exposes areas where data is not available, and provides a rigorous, repeatable, and transparent framework for analyzing the complex dynamics of political violence. This paper demonstrates the computational modeling process using two simulation techniques: system dynamics and agent-based modeling. The benefits and drawbacks of both techniques are discussed. In developing these social simulations, we discovered that the social science concepts and theories needed to accurately simulate the associated psychological and social phenomena were lacking.

  2. Stochastic Simulation Service: Bridging the Gap between the Computational Expert and the Biologist

    PubMed Central

    Banerjee, Debjani; Bellesia, Giovanni; Daigle, Bernie J.; Douglas, Geoffrey; Gu, Mengyuan; Gupta, Anand; Hellander, Stefan; Horuk, Chris; Nath, Dibyendu; Takkar, Aviral; Lötstedt, Per; Petzold, Linda R.

    2016-01-01

    We present StochSS: Stochastic Simulation as a Service, an integrated development environment for modeling and simulation of both deterministic and discrete stochastic biochemical systems in up to three dimensions. An easy to use graphical user interface enables researchers to quickly develop and simulate a biological model on a desktop or laptop, which can then be expanded to incorporate increasing levels of complexity. StochSS features state-of-the-art simulation engines. As the demand for computational power increases, StochSS can seamlessly scale computing resources in the cloud. In addition, StochSS can be deployed as a multi-user software environment where collaborators share computational resources and exchange models via a public model repository. We demonstrate the capabilities and ease of use of StochSS with an example of model development and simulation at increasing levels of complexity. PMID:27930676

  3. Stochastic Simulation Service: Bridging the Gap between the Computational Expert and the Biologist

    DOE PAGES

    Drawert, Brian; Hellander, Andreas; Bales, Ben; ...

    2016-12-08

    We present StochSS: Stochastic Simulation as a Service, an integrated development environment for modeling and simulation of both deterministic and discrete stochastic biochemical systems in up to three dimensions. An easy to use graphical user interface enables researchers to quickly develop and simulate a biological model on a desktop or laptop, which can then be expanded to incorporate increasing levels of complexity. StochSS features state-of-the-art simulation engines. As the demand for computational power increases, StochSS can seamlessly scale computing resources in the cloud. In addition, StochSS can be deployed as a multi-user software environment where collaborators share computational resources and exchange models via a public model repository. We also demonstrate the capabilities and ease of use of StochSS with an example of model development and simulation at increasing levels of complexity.
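
    As background on the kind of discrete stochastic simulation such engines perform, the Python sketch below implements Gillespie's direct method for a simple birth-death process; it is an illustrative example with assumed rate constants, not StochSS code.

```python
# Gillespie direct-method sketch for a birth-death process (illustrative; not StochSS code).
import numpy as np

def gillespie_birth_death(k_birth=10.0, k_death=0.1, x0=0, t_end=100.0, seed=1):
    """Simulate X(t): birth at rate k_birth, death at rate k_death * X."""
    rng = np.random.default_rng(seed)
    t, x = 0.0, x0
    times, states = [t], [x]
    while t < t_end:
        rates = np.array([k_birth, k_death * x])
        total = rates.sum()
        if total == 0.0:
            break
        t += rng.exponential(1.0 / total)        # waiting time to the next reaction
        if rng.random() < rates[0] / total:      # choose which reaction fires
            x += 1
        else:
            x -= 1
        times.append(t)
        states.append(x)
    return np.array(times), np.array(states)

times, states = gillespie_birth_death()
print("final copy number:", states[-1], "(steady-state mean is k_birth/k_death = 100)")
```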

  4. Enhanced Master Controller Unit Tester

    NASA Technical Reports Server (NTRS)

    Benson, Patricia; Johnson, Yvette; Johnson, Brian; Williams, Philip; Burton, Geoffrey; McCoy, Anthony

    2007-01-01

    The Enhanced Master Controller Unit Tester (EMUT) software is a tool for development and testing of software for a master controller (MC) flight computer. The primary function of the EMUT software is to simulate interfaces between the MC computer and external analog and digital circuitry (including other computers) in a rack of equipment to be used in scientific experiments. The simulations span the range of nominal, off-nominal, and erroneous operational conditions, enabling the testing of MC software before all the equipment becomes available.

  5. Cosmological N-body Simulation

    NASA Astrophysics Data System (ADS)

    Lake, George

    1994-05-01

    The "N" in N-body calculations has doubled every year for the last two decades. To continue this trend, the UW N-body group is working on algorithms for the fast evaluation of gravitational forces on parallel computers and establishing rigorous standards for the computations. In these algorithms, the computational cost per time step is roughly 10^3 pairwise forces per particle. A new adaptive time integrator enables us to perform high-quality integrations that are fully temporally and spatially adaptive. SPH (smoothed particle hydrodynamics) will be added to simulate the effects of dissipating gas and magnetic fields. The importance of these calculations is two-fold. First, they determine the nonlinear consequences of theories for the structure of the Universe. Second, they are essential for the interpretation of observations. Every galaxy has six coordinates of velocity and position. Observations determine two sky coordinates and a line-of-sight velocity that bundles universal expansion (distance) together with a random velocity created by the mass distribution. Simulations are needed to determine the underlying structure and masses. The importance of simulations has moved from ex post facto explanation to an integral part of planning large observational programs. I will show why high-quality simulations with "large N" are essential to accomplish our scientific goals. This year, our simulations have N of roughly 10^7. This is sufficient to tackle some niche problems, but well short of our 5-year goal of simulating the Sloan Digital Sky Survey using a few billion particles (a Teraflop-year simulation). Extrapolating past trends, we would have to "wait" 7 years for this hundred-fold improvement. Like past gains, significant changes in the computational methods are required for these advances. I will describe new algorithms, algorithmic hacks, and a dedicated computer to perform billion-particle simulations. Finally, I will describe research that can be enabled by Petaflop computers. This research is supported by the NASA HPCC/ESS program.
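
    For orientation, the direct-sum gravitational force evaluation below is the O(N^2) baseline that tree codes and other fast algorithms accelerate; the softening length, units, and particle data are assumed for illustration and this is not the UW group's code.

```python
# Direct-sum gravitational accelerations: the O(N^2) baseline for N-body codes (illustrative).
import numpy as np

def accelerations(pos, mass, G=1.0, soft=1e-3):
    """Return the acceleration on each particle from all others (softened Newtonian gravity)."""
    acc = np.zeros_like(pos)
    for i in range(len(pos)):
        d = pos - pos[i]                                  # vectors from particle i to all others
        r2 = (d**2).sum(axis=1) + soft**2                 # softened squared distances
        r2[i] = np.inf                                    # exclude the self-interaction
        acc[i] = G * (mass[:, None] * d / r2[:, None]**1.5).sum(axis=0)
    return acc

rng = np.random.default_rng(42)
pos = rng.standard_normal((1000, 3))        # toy particle positions
mass = np.full(1000, 1.0 / 1000)            # equal masses summing to unity
print("acceleration on particle 0:", accelerations(pos, mass)[0])
```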

  6. Extremely Scalable Spiking Neuronal Network Simulation Code: From Laptops to Exascale Computers.

    PubMed

    Jordan, Jakob; Ippen, Tammo; Helias, Moritz; Kitayama, Itaru; Sato, Mitsuhisa; Igarashi, Jun; Diesmann, Markus; Kunkel, Susanne

    2018-01-01

    State-of-the-art software tools for neuronal network simulations scale to the largest computing systems available today and enable investigations of large-scale networks of up to 10 % of the human cortex at a resolution of individual neurons and synapses. Due to an upper limit on the number of incoming connections of a single neuron, network connectivity becomes extremely sparse at this scale. To manage computational costs, simulation software ultimately targeting the brain scale needs to fully exploit this sparsity. Here we present a two-tier connection infrastructure and a framework for directed communication among compute nodes accounting for the sparsity of brain-scale networks. We demonstrate the feasibility of this approach by implementing the technology in the NEST simulation code and we investigate its performance in different scaling scenarios of typical network simulations. Our results show that the new data structures and communication scheme prepare the simulation kernel for post-petascale high-performance computing facilities without sacrificing performance in smaller systems.

  7. Extremely Scalable Spiking Neuronal Network Simulation Code: From Laptops to Exascale Computers

    PubMed Central

    Jordan, Jakob; Ippen, Tammo; Helias, Moritz; Kitayama, Itaru; Sato, Mitsuhisa; Igarashi, Jun; Diesmann, Markus; Kunkel, Susanne

    2018-01-01

    State-of-the-art software tools for neuronal network simulations scale to the largest computing systems available today and enable investigations of large-scale networks of up to 10 % of the human cortex at a resolution of individual neurons and synapses. Due to an upper limit on the number of incoming connections of a single neuron, network connectivity becomes extremely sparse at this scale. To manage computational costs, simulation software ultimately targeting the brain scale needs to fully exploit this sparsity. Here we present a two-tier connection infrastructure and a framework for directed communication among compute nodes accounting for the sparsity of brain-scale networks. We demonstrate the feasibility of this approach by implementing the technology in the NEST simulation code and we investigate its performance in different scaling scenarios of typical network simulations. Our results show that the new data structures and communication scheme prepare the simulation kernel for post-petascale high-performance computing facilities without sacrificing performance in smaller systems. PMID:29503613

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCoy, Michel; Archer, Bill; Hendrickson, Bruce

    The Stockpile Stewardship Program (SSP) is an integrated technical program for maintaining the safety, surety, and reliability of the U.S. nuclear stockpile. The SSP uses nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of experimental facilities and programs, and the computational capabilities to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources that support annual stockpile assessment and certification, study advanced nuclear weapons design and manufacturing processes, analyze accident scenarios and weapons aging, and provide the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balance of resources, including technical staff, hardware, simulation software, and computer science solutions. ASC is now focused on increasing predictive capabilities in a three-dimensional (3D) simulation environment while maintaining support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (sufficient resolution, dimensionality, and scientific details), and quantifying critical margins and uncertainties. Resolving each issue requires increasingly difficult analyses because the aging process has progressively moved the stockpile further away from the original test base. Where possible, the program also enables the use of high performance computing (HPC) and simulation tools to address broader national security needs, such as foreign nuclear weapon assessments and counter nuclear terrorism.

  9. Fast Learning for Immersive Engagement in Energy Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bush, Brian W; Bugbee, Bruce; Gruchalla, Kenny M

    Fast computation is critical for immersive engagement with, and learning from, energy simulations; it would be furthered by developing a general method for creating rapidly computed, simplified versions of NREL's computation-intensive energy simulations. Created using machine learning techniques, these 'reduced form' simulations can provide statistically sound estimates of the results of the full simulations at a fraction of the computational cost, with response times (typically less than one minute of wall-clock time) suitable for real-time human-in-the-loop design and analysis. Additionally, uncertainty quantification techniques can document the accuracy of the approximate models and their domain of validity. Approximation methods are applicable to a wide range of computational models, including supply-chain models, electric power grid simulations, and building models. These reduced-form representations cannot replace or re-implement existing simulations, but instead supplement them by enabling rapid scenario design and quality assurance for large sets of simulations. We present an overview of the framework and methods we have implemented for developing these reduced-form representations.

  10. Elucidating Reaction Mechanisms on Quantum Computers

    NASA Astrophysics Data System (ADS)

    Wiebe, Nathan; Reiher, Markus; Svore, Krysta; Wecker, Dave; Troyer, Matthias

    We show how a quantum computer can be employed to elucidate reaction mechanisms in complex chemical systems, using the open problem of biological nitrogen fixation in nitrogenase as an example. We discuss how quantum computers can augment classical-computer simulations for such problems, to significantly increase their accuracy and enable hitherto intractable simulations. Detailed resource estimates show that, even when taking into account the substantial overhead of quantum error correction, and the need to compile into discrete gate sets, the necessary computations can be performed in reasonable time on small quantum computers. This demonstrates that quantum computers will realistically be able to tackle important problems in chemistry that are both scientifically and economically significant.

  11. Design Analysis Kit for Optimization and Terascale Applications 6.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2015-10-19

    Sandia's Dakota software (available at http://dakota.sandia.gov) supports science and engineering transformation through advanced exploration of simulations. Specifically it manages and analyzes ensembles of simulations to provide broader and deeper perspective for analysts and decision makers. This enables them to: (1) enhance understanding of risk, (2) improve products, and (3) assess simulation credibility. In its simplest mode, Dakota can automate typical parameter variation studies through a generic interface to a computational model. However, Dakota also delivers advanced parametric analysis techniques enabling design exploration, optimization, model calibration, risk analysis, and quantification of margins and uncertainty with such models. It directly supports verification and validation activities. The algorithms implemented in Dakota aim to address challenges in performing these analyses with complex science and engineering models from desktop to high performance computers.

  12. Capabilities and applications of the Program to Optimize Simulated Trajectories (POST). Program summary document

    NASA Technical Reports Server (NTRS)

    Brauer, G. L.; Cornick, D. E.; Stevenson, R.

    1977-01-01

    The capabilities and applications of the three-degree-of-freedom (3DOF) version and the six-degree-of-freedom (6DOF) version of the Program to Optimize Simulated Trajectories (POST) are summarized. The document supplements the detailed program manuals by providing additional information that motivates and clarifies basic capabilities, input procedures, applications, and computer requirements of these programs. The information will enable prospective users to evaluate the programs and to determine if they are applicable to their problems. Enough information is given to enable managerial personnel to evaluate the capabilities of the programs, and the report describes the POST structure, formulation, input and output procedures, sample cases, and computer requirements. It also provides answers to basic questions concerning planet and vehicle modeling, simulation accuracy, optimization capabilities, and general input rules. Several sample cases are presented.

  13. Hierarchical parallel computer architecture defined by computational multidisciplinary mechanics

    NASA Technical Reports Server (NTRS)

    Padovan, Joe; Gute, Doug; Johnson, Keith

    1989-01-01

    The goal is to develop an architecture for parallel processors enabling optimal handling of multi-disciplinary computation of fluid-solid simulations employing finite element and difference schemes. The goals, philosophical and modeling directions, static and dynamic poly trees, example problems, interpolative reduction, and the impact on solvers are shown in viewgraph form.

  14. Fast Dynamic Simulation-Based Small Signal Stability Assessment and Control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Acharya, Naresh; Baone, Chaitanya; Veda, Santosh

    2014-12-31

    Power grid planning and operation decisions are made based on simulation of the dynamic behavior of the system. Enabling substantial energy savings while increasing the reliability of the aging North American power grid through improved utilization of existing transmission assets hinges on the adoption of wide-area measurement systems (WAMS) for power system stabilization. However, adoption of WAMS alone will not suffice if the power system is to reach its full entitlement in stability and reliability. It is necessary to enhance predictability with "faster than real-time" dynamic simulations that will enable assessment of dynamic stability margins, proactive real-time control, and improved grid resiliency to fast time-scale phenomena such as cascading network failures. Present-day dynamic simulations are performed only during offline planning studies, considering only worst-case conditions such as summer peak, winter peak days, etc. With widespread deployment of renewable generation, controllable loads, energy storage devices and plug-in hybrid electric vehicles expected in the near future and greater integration of cyber infrastructure (communications, computation and control), monitoring and controlling the dynamic performance of the grid in real time would become increasingly important. The state-of-the-art dynamic simulation tools have limited computational speed and are not suitable for real-time applications, given the large set of contingency conditions to be evaluated. These tools are optimized for best performance on single-processor computers, but the simulation is still several times slower than real time due to its computational complexity. With recent significant advances in numerical methods and computational hardware, expectations have been rising for more efficient and faster techniques to be implemented in power system simulators. This is a natural expectation, given that the core solution algorithms of most commercial simulators were developed decades ago, when High Performance Computing (HPC) resources were not commonly available.

  15. How to Build an AppleSeed: A Parallel Macintosh Cluster for Numerically Intensive Computing

    NASA Astrophysics Data System (ADS)

    Decyk, V. K.; Dauger, D. E.

    We have constructed a parallel cluster consisting of a mixture of Apple Macintosh G3 and G4 computers running the Mac OS, and have achieved very good performance on numerically intensive, parallel plasma particle-in-cell simulations. A subset of the MPI message-passing library was implemented in Fortran77 and C. This library enabled us to port code, without modification, from other parallel processors to the Macintosh cluster. Unlike Unix-based clusters, no special expertise in operating systems is required to build and run the cluster. This enables us to move parallel computing from the realm of experts to the mainstream of computing.
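
    To give a flavour of the message-passing style such a cluster supports, the sketch below uses mpi4py (a modern Python binding, not the Fortran77/C subset described above) to distribute a toy particle workload across ranks and reduce a diagnostic quantity; the particle counts and physics are assumptions.

```python
# Simple MPI message-passing sketch with mpi4py (illustrative; not the AppleSeed library).
# Run with, for example:  mpiexec -n 4 python this_script.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_local = 250_000                        # particles owned by this rank (assumed)
rng = np.random.default_rng(rank)
v = rng.standard_normal(n_local)         # local particle velocities (toy data)

local_ke = 0.5 * np.sum(v**2)                        # kinetic energy on this rank
total_ke = comm.allreduce(local_ke, op=MPI.SUM)      # global reduction across all ranks

if rank == 0:
    print(f"{size} ranks, total kinetic energy = {total_ke:.1f}")
```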

  16. Data flow modeling techniques

    NASA Technical Reports Server (NTRS)

    Kavi, K. M.

    1984-01-01

    There have been a number of simulation packages developed for the purpose of designing, testing, and validating computer systems, digital systems, and software systems. Complex analytical tools based on Markov and semi-Markov processes have been designed to estimate the reliability and performance of simulated systems. Petri nets have received wide acceptance for modeling complex and highly parallel computers. In this research, data flow models for computer systems are investigated. Data flow models can be used to simulate both software and hardware in a uniform manner. Data flow simulation techniques provide the computer systems designer with a CAD environment which enables highly parallel complex systems to be defined, evaluated at all levels, and finally implemented in either hardware or software. Inherent in the data flow concept is the hierarchical handling of complex systems. In this paper we describe how data flow can be used to model computer systems.

  17. Faster than Real-Time Dynamic Simulation for Large-Size Power System with Detailed Dynamic Models using High-Performance Computing Platform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Renke; Jin, Shuangshuang; Chen, Yousu

    This paper presents a faster-than-real-time dynamic simulation software package that is designed for large-size power system dynamic simulation. It was developed on the GridPACKTM high-performance computing (HPC) framework. The key features of the developed software package include (1) faster-than-real-time dynamic simulation for a WECC system (17,000 buses) with different types of detailed generator, controller, and relay dynamic models, (2) a decoupled parallel dynamic simulation algorithm with optimized computation architecture to better leverage HPC resources and technologies, (3) options for HPC-based linear and iterative solvers, (4) hidden HPC details, such as data communication and distribution, to enable development centered on mathematical models and algorithms rather than on computational details for power system researchers, and (5) easy integration of new dynamic models and related algorithms into the software package.

  18. USE OF COMPUTER-AIDED PROCESS ENGINEERING TOOL IN POLLUTION PREVENTION

    EPA Science Inventory

    Computer-Aided Process Engineering has become established in industry as a design tool, with the establishment of the CAPE-OPEN software specifications for process simulation environments. CAPE-OPEN provides a set of "middleware" standards that enable software developers to acces...

  19. Reproducible computational biology experiments with SED-ML--the Simulation Experiment Description Markup Language.

    PubMed

    Waltemath, Dagmar; Adams, Richard; Bergmann, Frank T; Hucka, Michael; Kolpakov, Fedor; Miller, Andrew K; Moraru, Ion I; Nickerson, David; Sahle, Sven; Snoep, Jacky L; Le Novère, Nicolas

    2011-12-15

    The increasing use of computational simulation experiments to inform modern biological research creates new challenges to annotate, archive, share and reproduce such experiments. The recently published Minimum Information About a Simulation Experiment (MIASE) proposes a minimal set of information that should be provided to allow the reproduction of simulation experiments among users and software tools. In this article, we present the Simulation Experiment Description Markup Language (SED-ML). SED-ML encodes in a computer-readable exchange format the information required by MIASE to enable reproduction of simulation experiments. It has been developed as a community project and it is defined in a detailed technical specification and additionally provides an XML schema. The version of SED-ML described in this publication is Level 1 Version 1. It covers the description of the most frequent type of simulation experiments in the area, namely time course simulations. SED-ML documents specify which models to use in an experiment, modifications to apply on the models before using them, which simulation procedures to run on each model, what analysis results to output, and how the results should be presented. These descriptions are independent of the underlying model implementation. SED-ML is a software-independent format for encoding the description of simulation experiments; it is not specific to particular simulation tools. Here, we demonstrate that with the growing software support for SED-ML we can effectively exchange executable simulation descriptions. With SED-ML, software can exchange simulation experiment descriptions, enabling the validation and reuse of simulation experiments in different tools. Authors of papers reporting simulation experiments can make their simulation protocols available for other scientists to reproduce the results. Because SED-ML is agnostic about exact modeling language(s) used, experiments covering models from different fields of research can be accurately described and combined.

  20. A graph-based computational framework for simulation and optimisation of coupled infrastructure networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jalving, Jordan; Abhyankar, Shrirang; Kim, Kibaek

    Here, we present a computational framework that facilitates the construction, instantiation, and analysis of large-scale optimization and simulation applications of coupled energy networks. The framework integrates the optimization modeling package PLASMO and the simulation package DMNetwork (built around PETSc). These tools use a common graph-based abstraction that enables us to achieve compatibility between data structures and to build applications that use network models of different physical fidelity. We also describe how to embed these tools within complex computational workflows using SWIFT, which is a tool that facilitates parallel execution of multiple simulation runs and management of input and output data. We discuss how to use these capabilities to target coupled natural gas and electricity systems.

  1. A graph-based computational framework for simulation and optimisation of coupled infrastructure networks

    DOE PAGES

    Jalving, Jordan; Abhyankar, Shrirang; Kim, Kibaek; ...

    2017-04-24

    Here, we present a computational framework that facilitates the construction, instantiation, and analysis of large-scale optimization and simulation applications of coupled energy networks. The framework integrates the optimization modeling package PLASMO and the simulation package DMNetwork (built around PETSc). These tools use a common graph-based abstraction that enables us to achieve compatibility between data structures and to build applications that use network models of different physical fidelity. We also describe how to embed these tools within complex computational workflows using SWIFT, which is a tool that facilitates parallel execution of multiple simulation runs and management of input and output data. We discuss how to use these capabilities to target coupled natural gas and electricity systems.
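
    To make the graph abstraction concrete, the following minimal sketch (an assumed toy structure, unrelated to PLASMO or DMNetwork) represents a gas network, an electric network, and their coupling point as a single annotated graph using networkx.

```python
# Minimal graph-based coupled-network sketch with networkx (illustrative; not PLASMO/DMNetwork).
import networkx as nx

g = nx.Graph()

# Gas network nodes and a pipe (assumed toy data).
g.add_node("gas_junction_1", kind="gas", pressure_bar=60.0)
g.add_node("gas_junction_2", kind="gas", pressure_bar=55.0)
g.add_edge("gas_junction_1", "gas_junction_2", kind="pipe", length_km=40.0)

# Electric network buses and a line.
g.add_node("bus_1", kind="electric", voltage_kv=230.0)
g.add_node("bus_2", kind="electric", voltage_kv=230.0)
g.add_edge("bus_1", "bus_2", kind="line", reactance_pu=0.05)

# Coupling component: a gas-fired generator links the two infrastructures.
g.add_node("gas_turbine_1", kind="coupling", capacity_mw=150.0)
g.add_edge("gas_junction_2", "gas_turbine_1", kind="fuel_supply")
g.add_edge("gas_turbine_1", "bus_1", kind="generation")

coupling = [n for n, d in g.nodes(data=True) if d["kind"] == "coupling"]
print("coupling components:", coupling)
```

    A single graph like this lets one model assign physical equations to nodes and edges by their "kind" attribute while another model, or an optimizer, traverses the same structure, which is the compatibility the abstract describes.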

  2. TOPICAL REVIEW: Advances and challenges in computational plasma science

    NASA Astrophysics Data System (ADS)

    Tang, W. M.; Chan, V. S.

    2005-02-01

    Scientific simulation, which provides a natural bridge between theory and experiment, is an essential tool for understanding complex plasma behaviour. Recent advances in simulations of magnetically confined plasmas are reviewed in this paper, with illustrative examples, chosen from associated research areas such as microturbulence, magnetohydrodynamics and other topics. Progress has been stimulated, in particular, by the exponential growth of computer speed along with significant improvements in computer technology. The advances in both particle and fluid simulations of fine-scale turbulence and large-scale dynamics have produced increasingly good agreement between experimental observations and computational modelling. This was enabled by two key factors: (a) innovative advances in analytic and computational methods for developing reduced descriptions of physics phenomena spanning widely disparate temporal and spatial scales and (b) access to powerful new computational resources. Excellent progress has been made in developing codes for which computer run-time and problem-size scale well with the number of processors on massively parallel processors (MPPs). Examples include the effective usage of the full power of multi-teraflop (multi-trillion floating point computations per second) MPPs to produce three-dimensional, general geometry, nonlinear particle simulations that have accelerated advances in understanding the nature of turbulence self-regulation by zonal flows. These calculations, which typically utilized billions of particles for thousands of time-steps, would not have been possible without access to powerful present generation MPP computers and the associated diagnostic and visualization capabilities. In looking towards the future, the current results from advanced simulations provide great encouragement for being able to include increasingly realistic dynamics to enable deeper physics insights into plasmas in both natural and laboratory environments. This should produce the scientific excitement which will help to (a) stimulate enhanced cross-cutting collaborations with other fields and (b) attract the bright young talent needed for the future health of the field of plasma science.

  3. Advances and challenges in computational plasma science

    NASA Astrophysics Data System (ADS)

    Tang, W. M.

    2005-02-01

    Scientific simulation, which provides a natural bridge between theory and experiment, is an essential tool for understanding complex plasma behaviour. Recent advances in simulations of magnetically confined plasmas are reviewed in this paper, with illustrative examples, chosen from associated research areas such as microturbulence, magnetohydrodynamics and other topics. Progress has been stimulated, in particular, by the exponential growth of computer speed along with significant improvements in computer technology. The advances in both particle and fluid simulations of fine-scale turbulence and large-scale dynamics have produced increasingly good agreement between experimental observations and computational modelling. This was enabled by two key factors: (a) innovative advances in analytic and computational methods for developing reduced descriptions of physics phenomena spanning widely disparate temporal and spatial scales and (b) access to powerful new computational resources. Excellent progress has been made in developing codes for which computer run-time and problem-size scale well with the number of processors on massively parallel processors (MPPs). Examples include the effective usage of the full power of multi-teraflop (multi-trillion floating point computations per second) MPPs to produce three-dimensional, general geometry, nonlinear particle simulations that have accelerated advances in understanding the nature of turbulence self-regulation by zonal flows. These calculations, which typically utilized billions of particles for thousands of time-steps, would not have been possible without access to powerful present generation MPP computers and the associated diagnostic and visualization capabilities. In looking towards the future, the current results from advanced simulations provide great encouragement for being able to include increasingly realistic dynamics to enable deeper physics insights into plasmas in both natural and laboratory environments. This should produce the scientific excitement which will help to (a) stimulate enhanced cross-cutting collaborations with other fields and (b) attract the bright young talent needed for the future health of the field of plasma science.

  4. STEMsalabim: A high-performance computing cluster friendly code for scanning transmission electron microscopy image simulations of thin specimens.

    PubMed

    Oelerich, Jan Oliver; Duschek, Lennart; Belz, Jürgen; Beyer, Andreas; Baranovskii, Sergei D; Volz, Kerstin

    2017-06-01

    We present a new multislice code for the computer simulation of scanning transmission electron microscope (STEM) images based on the frozen lattice approximation. Unlike existing software packages, the code is optimized to perform well on highly parallelized computing clusters, combining distributed and shared memory architectures. This enables efficient calculation of large lateral scanning areas of the specimen within the frozen lattice approximation and fine-grained sweeps of parameter space. Copyright © 2017 Elsevier B.V. All rights reserved.
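
    For readers unfamiliar with the multislice approach such codes implement, each slice of the specimen modifies the electron wave by a transmission function and the wave is then propagated to the next slice by Fresnel convolution, which is conveniently done with FFTs. The Python sketch below is schematic and heavily simplified, with a made-up transmission function and assumed parameters; it is not the STEMsalabim algorithm.

```python
# Schematic multislice propagation (toy illustration; not the STEMsalabim code or its parameters).
import numpy as np

n, dx = 256, 0.1          # grid points and real-space sampling [Angstrom], assumed
wavelength = 0.0251       # electron wavelength [Angstrom] near 200 keV, approximate
dz = 2.0                  # slice thickness [Angstrom], assumed

k = np.fft.fftfreq(n, d=dx)
kx, ky = np.meshgrid(k, k)
propagator = np.exp(-1j * np.pi * wavelength * dz * (kx**2 + ky**2))   # Fresnel propagator

# A made-up weak phase grating standing in for each slice's projected potential.
x = (np.arange(n) - n // 2) * dx
X, Y = np.meshgrid(x, x)
transmission = np.exp(1j * 0.05 * np.exp(-(X**2 + Y**2) / 4.0))

wave = np.ones((n, n), dtype=complex)        # incident plane wave
for _ in range(20):                          # 20 identical slices for simplicity
    wave *= transmission                                   # phase shift within the slice
    wave = np.fft.ifft2(np.fft.fft2(wave) * propagator)    # propagate to the next slice

print("total intensity (conserved for pure phase objects):", (np.abs(wave)**2).sum())
```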

  5. Computer Simulation For Design Of TWT's

    NASA Technical Reports Server (NTRS)

    Bartos, Karen F.; Fite, E. Brian; Shalkhauser, Kurt A.; Sharp, G. Richard

    1992-01-01

    A three-dimensional finite-element analytical technique facilitates design and fabrication of traveling-wave-tube (TWT) slow-wave structures. Used to perform thermal and mechanical analyses of TWT designed with variety of configurations, geometries, and materials. Using three-dimensional computer analysis, designer able to simulate building and testing of TWT, with consequent substantial saving of time and money. Technique enables detailed look into operation of traveling-wave tubes to help improve performance for future communications systems.

  6. Development of a Dynamically Configurable,Object-Oriented Framework for Distributed, Multi-modal Computational Aerospace Systems Simulation

    NASA Technical Reports Server (NTRS)

    Afjeh, Abdollah A.; Reed, John A.

    2003-01-01

    This research is aimed at developing a new and advanced simulation framework that will significantly improve the overall efficiency of aerospace systems design and development. This objective will be accomplished through an innovative integration of object-oriented and Web-based technologies with both new and proven simulation methodologies. The basic approach involves three major areas of research: (1) aerospace system and component representation using a hierarchical object-oriented component model, which enables the use of multimodels and enforces component interoperability; (2) a collaborative software environment that streamlines the process of developing, sharing, and integrating aerospace design and analysis models; and (3) development of a distributed infrastructure which enables Web-based exchange of models to simplify the collaborative design process and to support computationally intensive aerospace design and analysis processes. Research for the first year dealt with the design of the basic architecture and supporting infrastructure, an initial implementation of that design, and a demonstration of its application to an example aircraft engine system simulation.

  7. Software Framework for Advanced Power Plant Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    John Widmann; Sorin Munteanu; Aseem Jain

    2010-08-01

    This report summarizes the work accomplished during the Phase II development effort of the Advanced Process Engineering Co-Simulator (APECS). The objective of the project is to develop the tools to efficiently combine high-fidelity computational fluid dynamics (CFD) models with process modeling software. During the course of the project, a robust integration controller was developed that can be used in any CAPE-OPEN compliant process modeling environment. The controller mediates the exchange of information between the process modeling software and the CFD software. Several approaches to reducing the time disparity between CFD simulations and process modeling have been investigated and implemented. These include enabling the CFD models to be run on a remote cluster and enabling multiple CFD models to be run simultaneously. Furthermore, computationally fast reduced-order models (ROMs) have been developed that can be 'trained' using the results from CFD simulations and then used directly within flowsheets. Unit operation models (both CFD and ROMs) can be uploaded to a model database and shared between multiple users.

  8. SciDAC GSEP: Gyrokinetic Simulation of Energetic Particle Turbulence and Transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Zhihong

    Energetic particle (EP) confinement is a key physics issue for burning plasma experiment ITER, the crucial next step in the quest for clean and abundant energy, since ignition relies on self-heating by energetic fusion products (α-particles). Due to the strong coupling of EP with burning thermal plasmas, plasma confinement property in the ignition regime is one of the most uncertain factors when extrapolating from existing fusion devices to the ITER tokamak. EP population in current tokamaks are mostly produced by auxiliary heating such as neutral beam injection (NBI) and radio frequency (RF) heating. Remarkable progress in developing comprehensive EP simulation codes and understanding basic EP physics has been made by two concurrent SciDAC EP projects GSEP funded by the Department of Energy (DOE) Office of Fusion Energy Science (OFES), which have successfully established gyrokinetic turbulence simulation as a necessary paradigm shift for studying the EP confinement in burning plasmas. Verification and validation have rapidly advanced through close collaborations between simulation, theory, and experiment. Furthermore, productive collaborations with computational scientists have enabled EP simulation codes to effectively utilize current petascale computers and emerging exascale computers. We review here key physics progress in the GSEP projects regarding verification and validation of gyrokinetic simulations, nonlinear EP physics, EP coupling with thermal plasmas, and reduced EP transport models. Advances in high performance computing through collaborations with computational scientists that enable these large scale electromagnetic simulations are also highlighted. These results have been widely disseminated in numerous peer-reviewed publications including many Phys. Rev. Lett. papers and many invited presentations at prominent fusion conferences such as the biennial International Atomic Energy Agency (IAEA) Fusion Energy Conference and the annual meeting of the American Physics Society, Division of Plasma Physics (APS-DPP).

  9. Solving Equations of Multibody Dynamics

    NASA Technical Reports Server (NTRS)

    Jain, Abhinandan; Lim, Christopher

    2007-01-01

    Darts++ is a computer program for solving the equations of motion of a multibody system or of a multibody model of a dynamic system. It is intended especially for use in dynamical simulations performed in designing and analyzing, and developing software for the control of, complex mechanical systems. Darts++ is based on the Spatial-Operator-Algebra formulation for multibody dynamics. This software reads a description of a multibody system from a model data file, then constructs and implements an efficient algorithm that solves the dynamical equations of the system. The efficiency and, hence, the computational speed is sufficient to make Darts++ suitable for use in real-time closed-loop simulations. Darts++ features an object-oriented software architecture that enables reconfiguration of system topology at run time; in contrast, in related prior software, system topology is fixed during initialization. Darts++ provides an interface to scripting languages, including Tcl and Python, that enable the user to configure and interact with simulation objects at run time.

  10. Enabling Grid Computing resources within the KM3NeT computing model

    NASA Astrophysics Data System (ADS)

    Filippidis, Christos

    2016-04-01

    KM3NeT is a future European deep-sea research infrastructure hosting a new generation of neutrino detectors that - located at the bottom of the Mediterranean Sea - will open a new window on the universe and answer fundamental questions in both particle physics and astrophysics. International collaborative scientific experiments, like KM3NeT, are generating datasets which are increasing exponentially in both complexity and volume, making their analysis, archival, and sharing one of the grand challenges of the 21st century. Most of these experiments adopt computing models consisting of different Tiers, with several computing centres providing a specific set of services for the different steps of data processing such as detector calibration, simulation and data filtering, reconstruction and analysis. The computing requirements are extremely demanding and usually span from serial to multi-parallel or GPU-optimized jobs. The collaborative nature of these experiments demands very frequent WAN data transfers and data sharing among individuals and groups. In order to support the aforementioned demanding computing requirements, we enabled Grid Computing resources, operated by EGI, within the KM3NeT computing model. In this study we describe our first advances in this field and the method for KM3NeT users to utilize the EGI computing resources in a simulation-driven use-case.

  11. High performance computing in biology: multimillion atom simulations of nanoscale systems

    PubMed Central

    Sanbonmatsu, K. Y.; Tung, C.-S.

    2007-01-01

    Computational methods have been used in biology for sequence analysis (bioinformatics), all-atom simulation (molecular dynamics and quantum calculations), and more recently for modeling biological networks (systems biology). Of these three techniques, all-atom simulation is currently the most computationally demanding, in terms of compute load, communication speed, and memory load. Breakthroughs in electrostatic force calculation and dynamic load balancing have enabled molecular dynamics simulations of large biomolecular complexes. Here, we report simulation results for the ribosome, using approximately 2.64 million atoms, the largest all-atom biomolecular simulation published to date. Several other nanoscale systems with different numbers of atoms were studied to measure the performance of the NAMD molecular dynamics simulation program on the Los Alamos National Laboratory Q Machine. We demonstrate that multimillion atom systems represent a 'sweet spot' for the NAMD code on large supercomputers. NAMD displays an unprecedented 85% parallel scaling efficiency for the ribosome system on 1024 CPUs. We also review recent targeted molecular dynamics simulations of the ribosome that prove useful for studying conformational changes of this large biomolecular complex in atomic detail. PMID:17187988

  12. Development of Multimedia Computer Applications for Clinical Pharmacy Training.

    ERIC Educational Resources Information Center

    Schlict, John R.; Livengood, Bruce; Shepherd, John

    1997-01-01

    Computer simulations in clinical pharmacy education help expose students to clinical patient management earlier and enable training of large numbers of students outside conventional clinical practice sites. Multimedia instruction and its application to pharmacy training are described, the general process for developing multimedia presentations is…

  13. Combining computational models, semantic annotations and simulation experiments in a graph database

    PubMed Central

    Henkel, Ron; Wolkenhauer, Olaf; Waltemath, Dagmar

    2015-01-01

    Model repositories such as the BioModels Database, the CellML Model Repository or JWS Online are frequently accessed to retrieve computational models of biological systems. However, their storage concepts support only restricted types of queries and not all data inside the repositories can be retrieved. In this article we present a storage concept that meets this challenge. It is grounded in a graph database, reflects the models’ structure, incorporates semantic annotations and simulation descriptions, and ultimately connects different types of model-related data. The connections between heterogeneous model-related data and bio-ontologies enable efficient search via biological facts and grant access to new model features. The introduced concept notably improves access to computational models and associated simulations in a model repository. This has positive effects on tasks such as model search, retrieval, ranking, matching and filtering. Furthermore, our work for the first time enables CellML- and Systems Biology Markup Language-encoded models to be effectively maintained in one database. We show how these models can be linked via annotations and queried. Database URL: https://sems.uni-rostock.de/projects/masymos/ PMID:25754863

  14. Cooperative fault-tolerant distributed computing U.S. Department of Energy Grant DE-FG02-02ER25537 Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sunderam, Vaidy S.

    2007-01-09

    The Harness project has developed novel software frameworks for the execution of high-end simulations in a fault-tolerant manner on distributed resources. The H2O subsystem comprises the kernel of the Harness framework, and controls the key functions of resource management across multiple administrative domains, especially issues of access and allocation. It is based on a “pluggable” architecture that enables the aggregated use of distributed heterogeneous resources for high performance computing. The major contributions of the Harness II project significantly enhance the overall computational productivity of high-end scientific applications by enabling robust, failure-resilient computations on cooperatively pooled resource collections.

  15. LibKiSAO: a Java library for Querying KiSAO.

    PubMed

    Zhukova, Anna; Adams, Richard; Laibe, Camille; Le Novère, Nicolas

    2012-09-24

    The Kinetic Simulation Algorithm Ontology (KiSAO) supplies information about existing algorithms available for the simulation of Systems Biology models, their characteristics, parameters and inter-relationships. KiSAO enables the unambiguous identification of algorithms from simulation descriptions. Information about analogous methods having similar characteristics and about algorithm parameters incorporated into KiSAO is desirable for simulation tools. To retrieve this information programmatically an application programming interface (API) for KiSAO is needed. We developed libKiSAO, a Java library to enable querying of the KiSA Ontology. It implements methods to retrieve information about simulation algorithms stored in KiSAO, their characteristics and parameters, and methods to query the algorithm hierarchy and search for similar algorithms providing comparable results for the same simulation set-up. Using libKiSAO, simulation tools can make logical inferences based on this knowledge and choose the most appropriate algorithm to perform a simulation. LibKiSAO also enables simulation tools to handle a wider range of simulation descriptions by determining which of the available methods are similar and can be used instead of the one indicated in the simulation description if that one is not implemented. LibKiSAO enables Java applications to easily access information about simulation algorithms, their characteristics and parameters stored in the OWL-encoded Kinetic Simulation Algorithm Ontology. LibKiSAO can be used by simulation description editors and simulation tools to improve reproducibility of computational simulation tasks and facilitate model re-use.

  16. Parallel stochastic simulation of macroscopic calcium currents.

    PubMed

    González-Vélez, Virginia; González-Vélez, Horacio

    2007-06-01

    This work introduces MACACO, a macroscopic calcium currents simulator. It provides a parameter-sweep framework which computes macroscopic Ca(2+) currents from the individual aggregation of unitary currents, using a stochastic model for L-type Ca(2+) channels. MACACO uses a simplified 3-state Markov model to simulate the response of each Ca(2+) channel to different voltage inputs to the cell. In order to provide an accurate systematic view for the stochastic nature of the calcium channels, MACACO is composed of an experiment generator, a central simulation engine and a post-processing script component. Due to the computational complexity of the problem and the dimensions of the parameter space, the MACACO simulation engine employs a grid-enabled task farm. Having been designed as a computational biology tool, MACACO heavily borrows from the way cell physiologists conduct and report their experimental work.
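
    To make the aggregation idea concrete, here is a minimal Python sketch in the spirit of MACACO: a population of channels, each following a simplified 3-state Markov model, is stepped forward in time and the number of open channels is converted into a macroscopic current. The transition probabilities, unitary current, and channel count are illustrative assumptions, not MACACO's actual parameters.

        import numpy as np

        rng = np.random.default_rng(0)
        N_CHANNELS, N_STEPS = 1000, 500
        I_UNITARY = -0.3                        # pA carried by one open channel (made up)

        # Per-step transition probabilities P[i, j] = Prob(state i -> state j);
        # states are 0 = closed, 1 = closed, 2 = open.
        P = np.array([[0.95, 0.05, 0.00],
                      [0.03, 0.92, 0.05],
                      [0.00, 0.10, 0.90]])

        states = np.zeros(N_CHANNELS, dtype=int)          # all channels start closed
        macroscopic_current = np.empty(N_STEPS)

        for t in range(N_STEPS):
            u = rng.random(N_CHANNELS)
            cdf = np.cumsum(P[states], axis=1)            # row-wise cumulative probabilities
            states = (u[:, None] > cdf).sum(axis=1)       # sample the next state per channel
            macroscopic_current[t] = (states == 2).sum() * I_UNITARY

        print(macroscopic_current[-5:])                   # aggregated current at the last steps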

  17. Virtual reality aided visualization of fluid flow simulations with application in medical education and diagnostics.

    PubMed

    Djukic, Tijana; Mandic, Vesna; Filipovic, Nenad

    2013-12-01

    Medical education, training and preoperative diagnostics can be drastically improved with advanced technologies, such as virtual reality. The method proposed in this paper enables medical doctors and students to visualize and manipulate three-dimensional models created from CT or MRI scans, and also to analyze the results of fluid flow simulations. Simulation of fluid flow using the finite element method is performed, in order to compute the shear stress on the artery walls. The simulation of motion through the artery is also enabled. The virtual reality system proposed here could shorten the length of training programs and make the education process more effective. © 2013 Published by Elsevier Ltd.

  18. Quantum ring-polymer contraction method: Including nuclear quantum effects at no additional computational cost in comparison to ab initio molecular dynamics

    NASA Astrophysics Data System (ADS)

    John, Christopher; Spura, Thomas; Habershon, Scott; Kühne, Thomas D.

    2016-04-01

    We present a simple and accurate computational method which facilitates ab initio path-integral molecular dynamics simulations, where the quantum-mechanical nature of the nuclei is explicitly taken into account, at essentially no additional computational cost in comparison to the corresponding calculation using classical nuclei. The predictive power of the proposed quantum ring-polymer contraction method is demonstrated by computing various static and dynamic properties of liquid water at ambient conditions using density functional theory. This development will enable routine inclusion of nuclear quantum effects in ab initio molecular dynamics simulations of condensed-phase systems.

  19. SaaS enabled admission control for MCMC simulation in cloud computing infrastructures

    NASA Astrophysics Data System (ADS)

    Vázquez-Poletti, J. L.; Moreno-Vozmediano, R.; Han, R.; Wang, W.; Llorente, I. M.

    2017-02-01

    Markov Chain Monte Carlo (MCMC) methods are widely used in the field of simulation and modelling of materials, producing applications that require a great amount of computational resources. Cloud computing represents a seamless source for these resources in the form of HPC. However, resource over-consumption can be an important drawback, especially if the cloud provision process is not appropriately optimized. In the present contribution we propose a two-level solution that, on the one hand, takes advantage of approximate computing to reduce the resource demand and, on the other, uses admission control policies to guarantee an optimal provision to running applications.
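
    For readers unfamiliar with the workload, the core of an MCMC run is a loop like the following Metropolis-Hastings sketch; the toy target density and proposal are assumptions for illustration and have nothing to do with the materials application or the cloud provisioning scheme itself.

        import math
        import random

        def log_target(x):
            """Unnormalised log-density of a toy target (standard normal)."""
            return -0.5 * x * x

        def metropolis_hastings(n_samples, step=1.0, seed=42):
            rng = random.Random(seed)
            x, chain = 0.0, []
            for _ in range(n_samples):
                proposal = x + rng.gauss(0.0, step)                 # symmetric random-walk proposal
                log_alpha = log_target(proposal) - log_target(x)
                if rng.random() < math.exp(min(0.0, log_alpha)):    # accept with prob min(1, alpha)
                    x = proposal
                chain.append(x)
            return chain

        chain = metropolis_hastings(10000)
        print(sum(chain) / len(chain))   # close to 0 for the toy target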

  20. Comparative Analysis of Palm and Wearable Computers for Participatory Simulations

    ERIC Educational Resources Information Center

    Klopfer, Eric; Yoon, Susan; Rivas, Luz

    2004-01-01

    Recent educational computer-based technologies have offered promising lines of research that promote social constructivist learning goals, develop skills required to operate in a knowledge-based economy (Roschelle et al. 2000), and enable more authentic science-like problem-solving. In our research programme, we have been interested in combining…

  1. Process for selecting NEAMS applications for access to Idaho National Laboratory high performance computing resources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Michael Pernice

    2010-09-01

    INL has agreed to provide participants in the Nuclear Energy Advanced Modeling and Simulation (NEAMS) program with access to its high performance computing (HPC) resources under sponsorship of the Enabling Computational Technologies (ECT) program element. This report documents the process used to select applications and the software stack in place at INL.

  2. Grid Computing: Topology-Aware, Peer-to-Peer, Power-Aware, and Embedded Web Services

    DTIC Science & Technology

    2003-09-22

    Dist Simulation:
    • Time Management enables temporal causality to be enforced in Distributed Simulations
    • Typically enforced via a Lower Bound Time ... algorithm
    • Distinguished Root Node Algorithm developed as a topology-aware time management service; relies on a tree from end-hosts to a ...

  3. Identifying Secondary-School Students' Difficulties When Reading Visual Representations Displayed in Physics Simulations

    ERIC Educational Resources Information Center

    López, Víctor; Pintó, Roser

    2017-01-01

    Computer simulations are often considered effective educational tools, since their visual and communicative power enable students to better understand physical systems and phenomena. However, previous studies have found that when students read visual representations some reading difficulties can arise, especially when these are complex or dynamic…

  4. Integrating Blended Teaching and Learning to Enhance Graduate Attributes

    ERIC Educational Resources Information Center

    Hermens, Antoine; Clarke, Elizabeth

    2009-01-01

    Purpose: The purpose of this paper is to explore the role of computer based business simulations in higher education as innovative tools of teaching and learning to enhance students' practical understanding of real business problems. Whether the integration of business simulation technologies will enable significant innovation in teaching and…

  5. Integrating Computational Science Tools into a Thermodynamics Course

    NASA Astrophysics Data System (ADS)

    Vieira, Camilo; Magana, Alejandra J.; García, R. Edwin; Jana, Aniruddha; Krafcik, Matthew

    2018-01-01

    Computational tools and methods have permeated multiple science and engineering disciplines, because they enable scientists and engineers to process large amounts of data, represent abstract phenomena, and model and simulate complex concepts. In order to prepare future engineers with the ability to use computational tools in the context of their disciplines, some universities have started to integrate these tools within core courses. This paper evaluates the effect of introducing three computational modules within a thermodynamics course on student disciplinary learning and self-beliefs about computation. The results suggest that using worked examples paired with computer simulations to implement these modules has a positive effect on (1) student disciplinary learning, (2) student perceived ability to do scientific computing, and (3) student perceived ability to do computer programming. These effects were identified regardless of the students' prior experiences with computer programming.

  6. Ensembler: Enabling High-Throughput Molecular Simulations at the Superfamily Scale.

    PubMed

    Parton, Daniel L; Grinaway, Patrick B; Hanson, Sonya M; Beauchamp, Kyle A; Chodera, John D

    2016-06-01

    The rapidly expanding body of available genomic and protein structural data provides a rich resource for understanding protein dynamics with biomolecular simulation. While computational infrastructure has grown rapidly, simulations on an omics scale are not yet widespread, primarily because software infrastructure to enable simulations at this scale has not kept pace. It should now be possible to study protein dynamics across entire (super)families, exploiting both available structural biology data and conformational similarities across homologous proteins. Here, we present a new tool for enabling high-throughput simulation in the genomics era. Ensembler takes any set of sequences-from a single sequence to an entire superfamily-and shepherds them through various stages of modeling and refinement to produce simulation-ready structures. This includes comparative modeling to all relevant PDB structures (which may span multiple conformational states of interest), reconstruction of missing loops, addition of missing atoms, culling of nearly identical structures, assignment of appropriate protonation states, solvation in explicit solvent, and refinement and filtering with molecular simulation to ensure stable simulation. The output of this pipeline is an ensemble of structures ready for subsequent molecular simulations using computer clusters, supercomputers, or distributed computing projects like Folding@home. Ensembler thus automates much of the time-consuming process of preparing protein models suitable for simulation, while allowing scalability up to entire superfamilies. A particular advantage of this approach can be found in the construction of kinetic models of conformational dynamics-such as Markov state models (MSMs)-which benefit from a diverse array of initial configurations that span the accessible conformational states to aid sampling. We demonstrate the power of this approach by constructing models for all catalytic domains in the human tyrosine kinase family, using all available kinase catalytic domain structures from any organism as structural templates. Ensembler is free and open source software licensed under the GNU General Public License (GPL) v2. It is compatible with Linux and OS X. The latest release can be installed via the conda package manager, and the latest source can be downloaded from https://github.com/choderalab/ensembler.

  7. The very local Hubble flow: Computer simulations of dynamical history

    NASA Astrophysics Data System (ADS)

    Chernin, A. D.; Karachentsev, I. D.; Valtonen, M. J.; Dolgachev, V. P.; Domozhilova, L. M.; Makarov, D. I.

    2004-02-01

    The phenomenon of the very local (≤3 Mpc) Hubble flow is studied on the basis of the data of recent precision observations. A set of computer simulations is performed to trace the trajectories of the flow galaxies back in time to the epoch of the formation of the Local Group. It is found that the ``initial conditions'' of the flow are drastically different from the linear velocity-distance relation. The simulations enable one also to recognize the major trends of the flow evolution and identify the dynamical role of universal antigravity produced by the cosmic vacuum.

  8. Crowd Sensing-Enabling Security Service Recommendation for Social Fog Computing Systems

    PubMed Central

    Wu, Jun; Su, Zhou; Li, Jianhua

    2017-01-01

    Fog computing, shifting intelligence and resources from the remote cloud to edge networks, has the potential of providing low latency for the communication from sensing data sources to users. For the objects from the Internet of Things (IoT) to the cloud, it is a new trend that the objects establish social-like relationships with each other, which efficiently brings the benefits of developed sociality to a complex environment. As fog services become more sophisticated, it will become more convenient for fog users to share their own services, resources, and data via social networks. Meanwhile, efficient social organization can enable more flexible, secure, and collaborative networking. The aforementioned advantages make the social network a potential architecture for fog computing systems. In this paper, we design an architecture for social fog computing, in which the services of fog are provisioned based on “friend” relationships. To the best of our knowledge, this is the first attempt at an organized fog computing system based on a social model. Meanwhile, social networking enhances the complexity and security risks of fog computing services, creating difficulties for security service recommendation in social fog computing. To address this, we propose a novel crowd sensing-enabling security service provisioning method to recommend security services accurately in social fog computing systems. Simulation results show the feasibility and efficiency of the crowd sensing-enabling security service recommendation method for social fog computing systems. PMID:28758943

  9. Crowd Sensing-Enabling Security Service Recommendation for Social Fog Computing Systems.

    PubMed

    Wu, Jun; Su, Zhou; Wang, Shen; Li, Jianhua

    2017-07-30

    Fog computing, shifting intelligence and resources from the remote cloud to edge networks, has the potential of providing low latency for the communication from sensing data sources to users. For the objects from the Internet of Things (IoT) to the cloud, it is a new trend that the objects establish social-like relationships with each other, which efficiently brings the benefits of developed sociality to a complex environment. As fog services become more sophisticated, it will become more convenient for fog users to share their own services, resources, and data via social networks. Meanwhile, efficient social organization can enable more flexible, secure, and collaborative networking. The aforementioned advantages make the social network a potential architecture for fog computing systems. In this paper, we design an architecture for social fog computing, in which the services of fog are provisioned based on "friend" relationships. To the best of our knowledge, this is the first attempt at an organized fog computing system based on a social model. Meanwhile, social networking enhances the complexity and security risks of fog computing services, creating difficulties for security service recommendation in social fog computing. To address this, we propose a novel crowd sensing-enabling security service provisioning method to recommend security services accurately in social fog computing systems. Simulation results show the feasibility and efficiency of the crowd sensing-enabling security service recommendation method for social fog computing systems.

  10. Enabling the environmentally clean air transportation of the future: a vision of computational fluid dynamics in 2030

    PubMed Central

    Slotnick, Jeffrey P.; Khodadoust, Abdollah; Alonso, Juan J.; Darmofal, David L.; Gropp, William D.; Lurie, Elizabeth A.; Mavriplis, Dimitri J.; Venkatakrishnan, Venkat

    2014-01-01

    As global air travel expands rapidly to meet demand generated by economic growth, it is essential to continue to improve the efficiency of air transportation to reduce its carbon emissions and address concerns about climate change. Future transports must be ‘cleaner’ and designed to include technologies that will continue to lower engine emissions and reduce community noise. The use of computational fluid dynamics (CFD) will be critical to enable the design of these new concepts. In general, the ability to simulate aerodynamic and reactive flows using CFD has progressed rapidly during the past several decades and has fundamentally changed the aerospace design process. Advanced simulation capabilities not only enable reductions in ground-based and flight-testing requirements, but also provide added physical insight, and enable superior designs at reduced cost and risk. In spite of considerable success, reliable use of CFD has remained confined to a small region of the operating envelope due, in part, to the inability of current methods to reliably predict turbulent, separated flows. Fortunately, the advent of much more powerful computing platforms provides an opportunity to overcome a number of these challenges. This paper summarizes the findings and recommendations from a recent NASA-funded study that provides a vision for CFD in the year 2030, including an assessment of critical technology gaps and needed development, and identifies the key CFD technology advancements that will enable the design and development of much cleaner aircraft in the future. PMID:25024413

  11. Computational steering of GEM based detector simulations

    NASA Astrophysics Data System (ADS)

    Sheharyar, Ali; Bouhali, Othmane

    2017-10-01

    Gas-based detector R&D relies heavily on full simulation of detectors and their optimization before final prototypes can be built and tested. These simulations, in particular those with complex scenarios such as high detector voltages or gases with larger gains, are computationally intensive and may take several days or weeks to complete. These long-running simulations usually run on high-performance computers in batch mode. If the results lead to unexpected behavior, the simulation might be rerun with different parameters. However, the simulations (or jobs) may have to wait in a queue until they get a chance to run again, because the supercomputer is a shared resource that maintains a queue of other user programs as well and executes them as time and priorities permit. This may result in inefficient resource utilization and an increase in the turnaround time of the scientific experiment. To overcome this issue, monitoring the behavior of a simulation while it is running (live) is essential. In this work, we employ the computational steering technique by coupling the detector simulations with a visualization package named VisIt to enable the exploration of the live data as it is produced by the simulation.
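
    The steering idea can be illustrated with a minimal sketch: a long-running simulation periodically exposes a lightweight snapshot that a separate monitoring process can poll while the job is still executing. The file-based mechanism, names, and the trivial 'solver' below are illustrative assumptions; the actual system couples the detector simulation to VisIt rather than writing JSON snapshots.

        import json

        def run_simulation(n_steps, snapshot_every=100, snapshot_path="live_state.json"):
            gain = 1.0
            for step in range(n_steps):
                gain *= 1.001                       # stand-in for one expensive solver step
                if step % snapshot_every == 0:
                    # A monitoring/visualization process can poll this file while the
                    # job is still running, instead of waiting for it to leave the queue.
                    with open(snapshot_path, "w") as fh:
                        json.dump({"step": step, "gain": gain}, fh)

        run_simulation(1000)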

  12. Predicting Flows of Rarefied Gases

    NASA Technical Reports Server (NTRS)

    LeBeau, Gerald J.; Wilmoth, Richard G.

    2005-01-01

    DSMC Analysis Code (DAC) is a flexible, highly automated, easy-to-use computer program for predicting flows of rarefied gases -- especially flows of upper-atmospheric, propulsion, and vented gases impinging on spacecraft surfaces. DAC implements the direct simulation Monte Carlo (DSMC) method, which is widely recognized as standard for simulating flows at densities so low that the continuum-based equations of computational fluid dynamics are invalid. DAC enables users to model complex surface shapes and boundary conditions quickly and easily. The discretization of a flow field into computational grids is automated, thereby relieving the user of a traditionally time-consuming task while ensuring (1) appropriate refinement of grids throughout the computational domain, (2) determination of optimal settings for temporal discretization and other simulation parameters, and (3) satisfaction of the fundamental constraints of the method. In so doing, DAC ensures an accurate and efficient simulation. In addition, DAC can utilize parallel processing to reduce computation time. The domain decomposition needed for parallel processing is completely automated, and the software employs a dynamic load-balancing mechanism to ensure optimal parallel efficiency throughout the simulation.

  13. Evaluating the Influence of the Client Behavior in Cloud Computing.

    PubMed

    Souza Pardo, Mário Henrique; Centurion, Adriana Molina; Franco Eustáquio, Paulo Sérgio; Carlucci Santana, Regina Helena; Bruschi, Sarita Mazzini; Santana, Marcos José

    2016-01-01

    This paper proposes a novel approach for the implementation of simulation scenarios, providing a client entity for cloud computing systems. The client entity allows the creation of scenarios in which the client behavior has an influence on the simulation, making the results more realistic. The proposed client entity is based on several characteristics that affect the performance of a cloud computing system, including different modes of submission and their behavior when the waiting time between requests (think time) is considered. The proposed characterization of the client enables the sending of either individual requests or groups of Web services to scenarios where the workload takes the form of bursts. The client entity is included in CloudSim, a framework for modelling and simulation of cloud computing. Experimental results show the influence of the client behavior on the performance of the services executed in a cloud computing system.
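
    A minimal sketch of the kind of client behaviour described above, with requests submitted either individually after an exponentially distributed think time or in bursts, might look as follows; the distributions and parameters are assumptions for demonstration, not the CloudSim client entity itself.

        import random

        def generate_workload(n_requests, burst_mode=False, burst_size=5,
                              mean_think_time=2.0, seed=1):
            """Return a list of (submit_time, request_id) pairs."""
            rng = random.Random(seed)
            schedule, t, rid = [], 0.0, 0
            while rid < n_requests:
                group = burst_size if burst_mode else 1
                for _ in range(min(group, n_requests - rid)):
                    schedule.append((round(t, 3), rid))      # burst members share a submit time
                    rid += 1
                t += rng.expovariate(1.0 / mean_think_time)  # think time before the next submission
            return schedule

        print(generate_workload(10, burst_mode=True)[:6])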

  14. Evaluating the Influence of the Client Behavior in Cloud Computing

    PubMed Central

    Centurion, Adriana Molina; Franco Eustáquio, Paulo Sérgio; Carlucci Santana, Regina Helena; Bruschi, Sarita Mazzini; Santana, Marcos José

    2016-01-01

    This paper proposes a novel approach for the implementation of simulation scenarios, providing a client entity for cloud computing systems. The client entity allows the creation of scenarios in which the client behavior has an influence on the simulation, making the results more realistic. The proposed client entity is based on several characteristics that affect the performance of a cloud computing system, including different modes of submission and their behavior when the waiting time between requests (think time) is considered. The proposed characterization of the client enables the sending of either individual requests or groups of Web services to scenarios where the workload takes the form of bursts. The client entity is included in CloudSim, a framework for modelling and simulation of cloud computing. Experimental results show the influence of the client behavior on the performance of the services executed in a cloud computing system. PMID:27441559

  15. Structural Technology Evaluation and Analysis Program (STEAP). Delivery Order 0037: Prognosis-Based Control Reconfiguration for an Aircraft with Faulty Actuator to Enable Performance in a Degraded State

    DTIC Science & Technology

    2010-12-01

    computers in 1953. HIL motion simulators were also built for the dynamic testing of vehicle components (e.g. suspensions, bodies) with hydraulic or ... complex, comprehensive mechanical systems can be simulated in real-time by parallel computers; examples include multibody systems, brake systems ... hard constraints in a multivariable control framework. And the third aspect is the ability to perform online optimization. These aspects result in ...

  16. FLAME: A platform for high performance computing of complex systems, applied for three case studies

    DOE PAGES

    Kiran, Mariam; Bicak, Mesude; Maleki-Dizaji, Saeedeh; ...

    2011-01-01

    FLAME allows complex models to be automatically parallelised on High Performance Computing (HPC) grids, enabling large numbers of agents to be simulated over short periods of time. Modellers are otherwise hindered by the complexity of porting models to parallel platforms and by the time taken to run large simulations on a single machine, both of which FLAME overcomes. Three case studies from different disciplines were modelled using FLAME and are presented along with their performance results on a grid.

  17. Predictive wind turbine simulation with an adaptive lattice Boltzmann method for moving boundaries

    NASA Astrophysics Data System (ADS)

    Deiterding, Ralf; Wood, Stephen L.

    2016-09-01

    Operating horizontal axis wind turbines create large-scale turbulent wake structures that affect the power output of downwind turbines considerably. The computational prediction of this phenomenon is challenging as efficient low dissipation schemes are necessary that represent the vorticity production by the moving structures accurately and that are able to transport wakes without significant artificial decay over distances of several rotor diameters. We have developed a parallel adaptive lattice Boltzmann method for large eddy simulation of turbulent weakly compressible flows with embedded moving structures that considers these requirements rather naturally and enables first principle simulations of wake-turbine interaction phenomena at reasonable computational costs. The paper describes the employed computational techniques and presents validation simulations for the Mexnext benchmark experiments as well as simulations of the wake propagation in the Scaled Wind Farm Technology (SWIFT) array consisting of three Vestas V27 turbines in triangular arrangement.

  18. Implementing Parquet equations using HPX

    NASA Astrophysics Data System (ADS)

    Kellar, Samuel; Wagle, Bibek; Yang, Shuxiang; Tam, Ka-Ming; Kaiser, Hartmut; Moreno, Juana; Jarrell, Mark

    A new C++ runtime system (HPX) enables simulations of complex systems to run more efficiently on parallel and heterogeneous systems. This increased efficiency allows for solutions to larger simulations of the parquet approximation for a system with impurities. The relevance of the parquet equations depends upon the ability to solve systems which require long runs and large amounts of memory. These limitations, in addition to numerical complications arising from the stability of the solutions, necessitate running on large distributed systems. As computational resources trend towards the exascale and the limitations arising from computational resources vanish, the efficiency of large-scale simulations becomes a focus. HPX facilitates efficient simulations through intelligent overlapping of computation and communication. Simulations such as the parquet equations, which require the transfer of large amounts of data, should benefit from HPX implementations. Supported by the NSF EPSCoR Cooperative Agreement No. EPS-1003897 with additional support from the Louisiana Board of Regents.
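
    The overlap of computation and communication that HPX provides can be illustrated, in a much-simplified form, with ordinary futures: while one data block is being processed, the transfer of the next block is already in flight. The sketch below is a conceptual Python analogy, not HPX code or the parquet solver.

        import time
        from concurrent.futures import ThreadPoolExecutor

        def transfer_block(block_id):
            time.sleep(0.1)                          # stand-in for moving a data block
            return "block-%d" % block_id

        def compute_on(block):
            return sum(ord(c) for c in block)        # stand-in for local numerical work

        with ThreadPoolExecutor(max_workers=2) as pool:
            pending = pool.submit(transfer_block, 0)              # start the first transfer
            results = []
            for next_id in range(1, 4):
                ready = pending.result()                          # wait for the block in flight
                pending = pool.submit(transfer_block, next_id)    # overlap: next transfer starts...
                results.append(compute_on(ready))                 # ...while this block is processed
            results.append(compute_on(pending.result()))
        print(results)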

  19. The Computing And Interdisciplinary Systems Office: Annual Review and Planning Meeting

    NASA Technical Reports Server (NTRS)

    Lytle, John K.

    2003-01-01

    The goal of this research is to develop an advanced engineering analysis system that enables high-fidelity, multi-disciplinary, full propulsion system simulations to be performed early in the design process (a virtual test cell that integrates propulsion and information technologies). This will enable rapid, high-confidence, cost-effective design of revolutionary systems.

  20. Computer Aided Battery Engineering Consortium

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pesaran, Ahmad

    A multi-national lab collaborative team was assembled that includes experts from academia and industry to enhance recently developed Computer-Aided Battery Engineering for Electric Drive Vehicles (CAEBAT)-II battery crush modeling tools and to develop microstructure models for electrode design - both computationally efficient. Task 1. The new Multi-Scale Multi-Domain model framework (GH-MSMD) provides 100x to 1,000x computation speed-up in battery electrochemical/thermal simulation while retaining modularity of particles and electrode-, cell-, and pack-level domains. The increased speed enables direct use of the full model in parameter identification. Task 2. Mechanical-electrochemical-thermal (MECT) models for mechanical abuse simulation were simultaneously coupled, enabling simultaneous modeling of electrochemical reactions during the short circuit, when necessary. The interactions between mechanical failure and battery cell performance were studied, and the flexibility of the model for various battery structures and loading conditions was improved. Model validation is ongoing to compare with test data from Sandia National Laboratories. The ABDT tool was established in ANSYS. Task 3. Microstructural modeling was conducted to enhance next-generation electrode designs. This 3-year project will validate models for a variety of electrodes, complementing Advanced Battery Research programs. Prototype tools have been developed for electrochemical simulation and geometric reconstruction.

  1. Elucidating reaction mechanisms on quantum computers.

    PubMed

    Reiher, Markus; Wiebe, Nathan; Svore, Krysta M; Wecker, Dave; Troyer, Matthias

    2017-07-18

    With rapid recent advances in quantum technology, we are close to the threshold of quantum devices whose computational powers can exceed those of classical supercomputers. Here, we show that a quantum computer can be used to elucidate reaction mechanisms in complex chemical systems, using the open problem of biological nitrogen fixation in nitrogenase as an example. We discuss how quantum computers can augment classical computer simulations used to probe these reaction mechanisms, to significantly increase their accuracy and enable hitherto intractable simulations. Our resource estimates show that, even when taking into account the substantial overhead of quantum error correction, and the need to compile into discrete gate sets, the necessary computations can be performed in reasonable time on small quantum computers. Our results demonstrate that quantum computers will be able to tackle important problems in chemistry without requiring exorbitant resources.

  2. Elucidating reaction mechanisms on quantum computers

    PubMed Central

    Reiher, Markus; Wiebe, Nathan; Svore, Krysta M.; Wecker, Dave; Troyer, Matthias

    2017-01-01

    With rapid recent advances in quantum technology, we are close to the threshold of quantum devices whose computational powers can exceed those of classical supercomputers. Here, we show that a quantum computer can be used to elucidate reaction mechanisms in complex chemical systems, using the open problem of biological nitrogen fixation in nitrogenase as an example. We discuss how quantum computers can augment classical computer simulations used to probe these reaction mechanisms, to significantly increase their accuracy and enable hitherto intractable simulations. Our resource estimates show that, even when taking into account the substantial overhead of quantum error correction, and the need to compile into discrete gate sets, the necessary computations can be performed in reasonable time on small quantum computers. Our results demonstrate that quantum computers will be able to tackle important problems in chemistry without requiring exorbitant resources. PMID:28674011

  3. Elucidating reaction mechanisms on quantum computers

    NASA Astrophysics Data System (ADS)

    Reiher, Markus; Wiebe, Nathan; Svore, Krysta M.; Wecker, Dave; Troyer, Matthias

    2017-07-01

    With rapid recent advances in quantum technology, we are close to the threshold of quantum devices whose computational powers can exceed those of classical supercomputers. Here, we show that a quantum computer can be used to elucidate reaction mechanisms in complex chemical systems, using the open problem of biological nitrogen fixation in nitrogenase as an example. We discuss how quantum computers can augment classical computer simulations used to probe these reaction mechanisms, to significantly increase their accuracy and enable hitherto intractable simulations. Our resource estimates show that, even when taking into account the substantial overhead of quantum error correction, and the need to compile into discrete gate sets, the necessary computations can be performed in reasonable time on small quantum computers. Our results demonstrate that quantum computers will be able to tackle important problems in chemistry without requiring exorbitant resources.

  4. Neuron splitting in compute-bound parallel network simulations enables runtime scaling with twice as many processors.

    PubMed

    Hines, Michael L; Eichner, Hubert; Schürmann, Felix

    2008-08-01

    Neuron tree topology equations can be split into two subtrees and solved on different processors with no change in accuracy, stability, or computational effort; communication costs involve only sending and receiving two double precision values by each subtree at each time step. Splitting cells is useful in attaining load balance in neural network simulations, especially when there is a wide range of cell sizes and the number of cells is about the same as the number of processors. For compute-bound simulations load balance results in almost ideal runtime scaling. Application of the cell splitting method to two published network models exhibits good runtime scaling on twice as many processors as could be effectively used with whole-cell balancing.

  5. Use of computational fluid dynamics in respiratory medicine.

    PubMed

    Fernández Tena, Ana; Casan Clarà, Pere

    2015-06-01

    Computational Fluid Dynamics (CFD) is a computer-based tool for simulating fluid movement. The main advantages of CFD over other fluid mechanics studies include: substantial savings in time and cost, the analysis of systems or conditions that are very difficult to simulate experimentally (as is the case of the airways), and a practically unlimited level of detail. We used the Ansys-Fluent CFD program to develop a conducting airway model to simulate different inspiratory flow rates and the deposition of inhaled particles of varying diameters, obtaining results consistent with those reported in the literature using other procedures. We hope this approach will enable clinicians to further individualize the treatment of different respiratory diseases. Copyright © 2014 SEPAR. Published by Elsevier Espana. All rights reserved.

  6. PLYMAP : a computer simulation model of the rotary peeled softwood plywood manufacturing process

    Treesearch

    Henry Spelter

    1990-01-01

    This report documents a simulation model of the plywood manufacturing process. Its purpose is to enable a user to make quick estimates of the economic impact of a particular process change within a mill. The program was designed to simulate the processing of plywood within a relatively simplified mill design. Within that limitation, however, it allows a wide range of...

  7. Wind Energy Modeling and Simulation | Wind | NREL

    Science.gov Websites

    This NREL web page describes wind turbine modeling and simulation tools. They enable the analysis of a range of wind turbine configurations, including two- or ... SOWFA employs computational fluid dynamics to allow users to investigate wind turbine and wind power ...

  8. Reproducible computational biology experiments with SED-ML - The Simulation Experiment Description Markup Language

    PubMed Central

    2011-01-01

    Background The increasing use of computational simulation experiments to inform modern biological research creates new challenges to annotate, archive, share and reproduce such experiments. The recently published Minimum Information About a Simulation Experiment (MIASE) proposes a minimal set of information that should be provided to allow the reproduction of simulation experiments among users and software tools. Results In this article, we present the Simulation Experiment Description Markup Language (SED-ML). SED-ML encodes in a computer-readable exchange format the information required by MIASE to enable reproduction of simulation experiments. It has been developed as a community project and it is defined in a detailed technical specification and additionally provides an XML schema. The version of SED-ML described in this publication is Level 1 Version 1. It covers the description of the most frequent type of simulation experiments in the area, namely time course simulations. SED-ML documents specify which models to use in an experiment, modifications to apply on the models before using them, which simulation procedures to run on each model, what analysis results to output, and how the results should be presented. These descriptions are independent of the underlying model implementation. SED-ML is a software-independent format for encoding the description of simulation experiments; it is not specific to particular simulation tools. Here, we demonstrate that with the growing software support for SED-ML we can effectively exchange executable simulation descriptions. Conclusions With SED-ML, software can exchange simulation experiment descriptions, enabling the validation and reuse of simulation experiments in different tools. Authors of papers reporting simulation experiments can make their simulation protocols available for other scientists to reproduce the results. Because SED-ML is agnostic about exact modeling language(s) used, experiments covering models from different fields of research can be accurately described and combined. PMID:22172142

  9. cosmoabc: Likelihood-free inference for cosmology

    NASA Astrophysics Data System (ADS)

    Ishida, Emille E. O.; Vitenti, Sandro D. P.; Penna-Lima, Mariana; Trindade, Arlindo M.; Cisewski, Jessi; de Souza, Rafael; Cameron, Ewan; Busti, Vinicius C.

    2015-05-01

    Approximate Bayesian Computation (ABC) enables parameter inference for complex physical systems in cases where the true likelihood function is unknown, unavailable, or computationally too expensive. It relies on the forward simulation of mock data and comparison between observed and synthetic catalogs. cosmoabc is a Python Approximate Bayesian Computation (ABC) sampler featuring a Population Monte Carlo variation of the original ABC algorithm, which uses an adaptive importance sampling scheme. The code can be coupled to an external simulator to allow incorporation of arbitrary distance and prior functions. When coupled with the numcosmo library, it has been used to estimate posterior probability distributions over cosmological parameters based on measurements of galaxy clusters number counts without computing the likelihood function.
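
    The basic ABC idea can be shown with a minimal rejection sampler: draw parameters from the prior, forward-simulate mock data, and keep the draws whose summary statistic lies within a tolerance of the observation. The toy model, distance, and tolerance below are illustrative; cosmoabc itself uses an adaptive Population Monte Carlo variant of this scheme.

        import random

        random.seed(0)
        observed_mean = 3.2                        # "observed" summary statistic (toy data)

        def simulate(theta, n=100):
            """Forward model: n Gaussian draws with unknown mean theta."""
            return [random.gauss(theta, 1.0) for _ in range(n)]

        def distance(data):
            return abs(sum(data) / len(data) - observed_mean)

        posterior = []
        while len(posterior) < 200:
            theta = random.uniform(0.0, 10.0)      # draw a candidate from a flat prior
            if distance(simulate(theta)) < 0.2:    # keep it if mock data match the observation
                posterior.append(theta)

        print(sum(posterior) / len(posterior))     # posterior mean, close to observed_mean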

  10. Enabling Predictive Simulation and UQ of Complex Multiphysics PDE Systems by the Development of Goal-Oriented Variational Sensitivity Analysis and a-Posteriori Error Estimation Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Estep, Donald

    2015-11-30

    This project addressed the challenge of predictive computational analysis of strongly coupled, highly nonlinear multiphysics systems characterized by multiple physical phenomena that span a large range of length- and time-scales. Specifically, the project was focused on computational estimation of numerical error and sensitivity analysis of computational solutions with respect to variations in parameters and data. In addition, the project investigated the use of accurate computational estimates to guide efficient adaptive discretization. The project developed, analyzed and evaluated new variational adjoint-based techniques for integration, model, and data error estimation/control and sensitivity analysis, in evolutionary multiphysics multiscale simulations.
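
    The adjoint-weighted residual identity that such goal-oriented estimates build on can be written, in its standard textbook form for a linear problem (notation assumed here, not the project's specific estimators), as:

        % primal problem:      A u = f, with numerical approximation \tilde{u}
        % quantity of interest: Q(u) = (u, \psi)
        % adjoint problem:     A^{*} \phi = \psi
        Q(u) - Q(\tilde{u}) = (u - \tilde{u}, \psi)
                            = (A(u - \tilde{u}), \phi)
                            = (f - A\tilde{u}, \phi)

    That is, the error in the quantity of interest equals the residual weighted by the adjoint solution, which is the quantity that can drive the adaptive discretization mentioned above.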

  11. Nonlinearity in Social Service Evaluation: A Primer on Agent-Based Modeling

    ERIC Educational Resources Information Center

    Israel, Nathaniel; Wolf-Branigin, Michael

    2011-01-01

    Measurement of nonlinearity in social service research and evaluation relies primarily on spatial analysis and, to a lesser extent, social network analysis. Recent advances in geographic methods and computing power, however, allow for the greater use of simulation methods. These advances now enable evaluators and researchers to simulate complex…

  12. Vectors in Use in a 3D Juggling Game Simulation

    ERIC Educational Resources Information Center

    Kynigos, Chronis; Latsi, Maria

    2006-01-01

    The new representations enabled by the educational computer game the "Juggler" can place vectors in a central role both for controlling and measuring the behaviours of objects in a virtual environment simulating motion in three-dimensional spaces. The mathematical meanings constructed by 13 year-old students in relation to vectors as…

  13. WarpIV: In situ visualization and analysis of ion accelerator simulations

    DOE PAGES

    Rubel, Oliver; Loring, Burlen; Vay, Jean-Luc; ...

    2016-05-09

    The generation of short pulses of ion beams through the interaction of an intense laser with a plasma sheath offers the possibility of compact and cheaper ion sources for many applications--from fast ignition and radiography of dense targets to hadron therapy and injection into conventional accelerators. To enable the efficient analysis of large-scale, high-fidelity particle accelerator simulations using the Warp simulation suite, the authors introduce the Warp In situ Visualization Toolkit (WarpIV). WarpIV integrates state-of-the-art in situ visualization and analysis using VisIt with Warp, supports management and control of complex in situ visualization and analysis workflows, and implements integrated analytics to facilitate query- and feature-based data analytics and efficient large-scale data analysis. WarpIV enables for the first time distributed parallel, in situ visualization of the full simulation data using high-performance compute resources as the data is being generated by Warp. The authors describe the application of WarpIV to study and compare large 2D and 3D ion accelerator simulations, demonstrating significant differences in the acceleration process in 2D and 3D simulations. WarpIV is available to the public via https://bitbucket.org/berkeleylab/warpiv. The Warp In situ Visualization Toolkit (WarpIV) supports large-scale, parallel, in situ visualization and analysis and facilitates query- and feature-based analytics, enabling for the first time high-performance analysis of large-scale, high-fidelity particle accelerator simulations while the data is being generated by the Warp simulation suite. Furthermore, this supplemental material https://extras.computer.org/extra/mcg2016030022s1.pdf provides more details regarding the memory profiling and optimization and the Yee grid recentering optimization results discussed in the main article.

  14. Mathematical modelling and simulation of a tennis racket.

    PubMed

    Brannigan, M; Adali, S

    1981-01-01

    By constructing a mathematical model, we consider the dynamics of a tennis racket hit by a ball. Using this model, known experimental results can be simulated on the computer, and it becomes possible to make a parametric study of a racket. Such a simulation is essential in the study of two important problems related to tennis: computation of the resulting forces and moments transferred to the hand should assist understanding of the medical problem 'tennis elbow'; secondly, simulation will enable a study to be made of the relationships between the impact time, tension in the strings, forces transmitted to the rim and return velocity of the ball, all of which can lead to the optimal design of rackets.

  15. Thermodynamics of Hydrophobic Amino Acids in Solution: A Combined Experimental–Computational Study

    DOE PAGES

    Song, Lingshuang; Yang, Lin; Meng, Jie; ...

    2016-12-29

    Here, we present a joint experimental-computational study to quantitatively describe the thermodynamics of hydrophobic leucine amino acids in aqueous solution. X-ray scattering data were acquired at a series of solute and salt concentrations to effectively measure inter-leucine interactions, indicating that a major scattering peak is observed consistently at q = 0.83 Å⁻¹. Atomistic molecular dynamics simulations were then performed and compared with the scattering data, achieving high consistency at both small and wider scattering angles (q = 0-1.5 Å⁻¹). This experimental-computational consistency enables a first glimpse of the leucine-leucine interaction landscape, where two leucine molecules are aligned mostly in a parallel fashion, as opposed to anti-parallel, and also allows us to derive effective leucine-leucine interactions in solution. Collectively, this combined approach of employing experimental scattering and molecular simulation enables a quantitative characterization of effective inter-molecular interactions of hydrophobic amino acids, critical for protein function and dynamics such as protein folding.

  16. Thermodynamics of Hydrophobic Amino Acids in Solution: A Combined Experimental–Computational Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Song, Lingshuang; Yang, Lin; Meng, Jie

    Here, we present a joint experimental-computational study to quantitatively describe the thermodynamics of hydrophobic leucine amino acids in aqueous solution. X-ray scattering data were acquired at a series of solute and salt concentrations to effectively measure inter-leucine interactions, indicating that a major scattering peak is observed consistently at q = 0.83 Å⁻¹. Atomistic molecular dynamics simulations were then performed and compared with the scattering data, achieving high consistency at both small and wider scattering angles (q = 0-1.5 Å⁻¹). This experimental-computational consistency enables a first glimpse of the leucine-leucine interaction landscape, where two leucine molecules are aligned mostly in a parallel fashion, as opposed to anti-parallel, and also allows us to derive effective leucine-leucine interactions in solution. Collectively, this combined approach of employing experimental scattering and molecular simulation enables a quantitative characterization of effective inter-molecular interactions of hydrophobic amino acids, critical for protein function and dynamics such as protein folding.

  17. A Progressive Damage Model for unidirectional Fibre Reinforced Composites with Application to Impact and Penetration Simulation

    NASA Astrophysics Data System (ADS)

    Kerschbaum, M.; Hopmann, C.

    2016-06-01

    The computationally efficient simulation of the progressive damage behaviour of continuous fibre reinforced plastics is still a challenging task with currently available computer aided engineering methods. This paper presents an original approach for an energy based continuum damage model which accounts for stress/strain nonlinearities, transverse and shear stress interaction phenomena, quasi-plastic shear strain components, strain rate effects, regularised damage evolution and consideration of load reversal effects. The physically based modelling approach enables experimental determination of all parameters on ply level to avoid expensive inverse analysis procedures. The modelling strategy, implementation and verification of this model using commercially available explicit finite element software are detailed. The model is then applied to simulate the impact and penetration of carbon fibre reinforced cross-ply specimens with variation of the impact speed. The simulation results show that the presented approach enables a good representation of the force/displacement curves and especially good agreement with the experimentally observed fracture patterns. In addition, the mesh dependency of the results was assessed for one impact case, showing only very little change in the simulation results, which emphasises the general applicability of the presented method.

  18. A Simple Memristor Model for Circuit Simulations

    NASA Astrophysics Data System (ADS)

    Fullerton, Farrah-Amoy; Joe, Aaleyah; Gergel-Hackett, Nadine; Department of Chemistry; Physics Team

    This work describes the development of a model for the memristor, a novel nanoelectronic technology. The model was designed to replicate the real-world electrical characteristics of previously fabricated memristor devices, but was constructed with basic circuit elements using a free, widely available circuit simulator, LT Spice. The modeled memristors were then used to construct a circuit that performs material implication. Material implication is a digital logic that can be used to perform all of the same basic functions as traditional CMOS gates, but with fewer nanoelectronic devices. This memristor-based digital logic could enable memristors' use in new paradigms of computer architecture with advantages in size, speed, and power over traditional computing circuits. Additionally, the ability to model the real-world electrical characteristics of memristors in a free circuit simulator using its standard library of elements could enable not only the development of memristor material implication, but also the development of a virtually unlimited array of other memristor-based circuits.
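
    As a point of reference, a commonly used simple memristor description is the linear ion-drift model of Strukov et al., which can be integrated in a few lines to reproduce the characteristic pinched hysteresis loop; the parameter values below are illustrative assumptions, and the sketch is not the LT Spice model described above.

        import math

        R_ON, R_OFF = 100.0, 16e3      # device resistance bounds (ohms)
        D = 10e-9                      # device thickness (m)
        MU_V = 1e-14                   # dopant mobility (m^2 s^-1 V^-1)
        DT, N = 2e-5, 100000           # time step (s) and number of steps (2 s total)

        w = 0.1 * D                    # initial width of the doped (low-resistance) region
        trace = []
        for n in range(N):
            v = math.sin(2.0 * math.pi * 1.0 * n * DT)       # 1 V amplitude, 1 Hz drive
            resistance = R_ON * (w / D) + R_OFF * (1.0 - w / D)
            i = v / resistance
            w += MU_V * (R_ON / D) * i * DT                  # linear ion drift of the boundary
            w = min(max(w, 0.0), D)                          # keep the state inside the device
            trace.append((v, i))

        print(trace[-1])               # last (voltage, current) point of the pinched I-V loop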

  19. The Cell Collective: Toward an open and collaborative approach to systems biology

    PubMed Central

    2012-01-01

    Background Despite decades of new discoveries in biomedical research, the overwhelming complexity of cells has been a significant barrier to a fundamental understanding of how cells work as a whole. As such, the holistic study of biochemical pathways requires computer modeling. Due to the complexity of cells, it is not feasible for one person or group to model the cell in its entirety. Results The Cell Collective is a platform that allows the world-wide scientific community to create these models collectively. Its interface enables users to build and use models without specifying any mathematical equations or computer code - addressing one of the major hurdles with computational research. In addition, this platform allows scientists to simulate and analyze the models in real-time on the web, including the ability to simulate loss/gain of function and test what-if scenarios in real time. Conclusions The Cell Collective is a web-based platform that enables laboratory scientists from across the globe to collaboratively build large-scale models of various biological processes, and simulate/analyze them in real time. In this manuscript, we show examples of its application to a large-scale model of signal transduction. PMID:22871178
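
    The models built on such a platform are typically logical (Boolean) networks. The following minimal Python sketch shows the flavour of such a model with a synchronous update rule; the small signalling network and its rules are invented for illustration and are not taken from the Cell Collective.

        def step(state):
            """Synchronously update every node from the current state."""
            return {
                "ligand":      state["ligand"],                 # external input, held fixed
                "receptor":    state["ligand"],                 # active iff ligand is present
                "kinase":      state["receptor"] and not state["phosphatase"],
                "phosphatase": state["kinase"],                 # delayed negative feedback
                "response":    state["kinase"],
            }

        state = {"ligand": True, "receptor": False, "kinase": False,
                 "phosphatase": False, "response": False}

        for t in range(6):                                      # a few synchronous steps
            print(t, state)
            state = step(state)

    A loss-of-function experiment of the kind mentioned above amounts to clamping one of these nodes to False throughout the simulation.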

  20. Encapsulating model complexity and landscape-scale analyses of state-and-transition simulation models: an application of ecoinformatics and juniper encroachment in sagebrush steppe ecosystems

    USGS Publications Warehouse

    O'Donnell, Michael

    2015-01-01

    State-and-transition simulation modeling relies on knowledge of vegetation composition and structure (states) that describe community conditions, mechanistic feedbacks such as fire that can affect vegetation establishment, and ecological processes that drive community conditions as well as the transitions between these states. However, as the need for modeling larger and more complex landscapes increases, a more advanced awareness of computing resources becomes essential. The objectives of this study include identifying challenges of executing state-and-transition simulation models, identifying common bottlenecks of computing resources, developing a workflow and software that enable parallel processing of Monte Carlo simulations, and identifying the advantages and disadvantages of different computing resources. To address these objectives, this study used the ApexRMS® SyncroSim software and embarrassingly parallel tasks of Monte Carlo simulations on a single multicore computer and on distributed computing systems. The results demonstrated that state-and-transition simulation models scale best in distributed computing environments, such as high-throughput and high-performance computing, because these environments disseminate the workloads across many compute nodes, thereby supporting analysis of larger landscapes, higher spatial resolution vegetation products, and more complex models. Using a case study and five different computing environments, the best result (high-throughput computing versus serial computation) showed an approximately 96.6% decrease in computing time. With a single multicore compute node (the slowest of the parallel configurations tested), computing time decreased by 81.8% relative to serial computation. These results provide insight into the tradeoffs of using different computing resources when research necessitates advanced integration of ecoinformatics incorporating large and complicated data inputs and models.
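
    The "embarrassingly parallel" Monte Carlo structure described above can be sketched in a few lines: each replicate of a (here, toy) state-and-transition model is independent, so replicates map directly onto the cores of a single machine or the slots of a high-throughput scheduler. The states, transition probabilities, and replicate count below are illustrative only and are unrelated to the ApexRMS SyncroSim case study.

```python
import multiprocessing as mp
import random

# Toy state-and-transition model: each Monte Carlo replicate is independent,
# so replicates can be farmed out to all cores (or, with a scheduler such as
# HTCondor, to a high-throughput cluster). All numbers are illustrative.

STATES = ["sagebrush", "juniper_phase1", "juniper_phase2"]
TRANSITIONS = {  # annual transition probabilities (illustrative only)
    "sagebrush":      {"juniper_phase1": 0.02},
    "juniper_phase1": {"juniper_phase2": 0.05, "sagebrush": 0.01},  # fire resets
    "juniper_phase2": {"sagebrush": 0.005},
}

def run_replicate(seed, years=100):
    rng = random.Random(seed)
    state = "sagebrush"
    for _ in range(years):
        for target, p in TRANSITIONS[state].items():
            if rng.random() < p:
                state = target
                break
    return state

if __name__ == "__main__":
    with mp.Pool() as pool:                      # one worker per core
        finals = pool.map(run_replicate, range(1000))
    frac = sum(s != "sagebrush" for s in finals) / len(finals)
    print(f"replicates ending encroached: {frac:.2%}")
```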

  1. Computer simulations in the high school: students' cognitive stages, science process skills and academic achievement in microbiology

    NASA Astrophysics Data System (ADS)

    Huppert, J.; Michal Lomask, S.; Lazarowitz, R.

    2002-08-01

    Computer-assisted learning, including simulated experiments, has great potential to address the problem solving process which is a complex activity. It requires a highly structured approach in order to understand the use of simulations as an instructional device. This study is based on a computer simulation program, 'The Growth Curve of Microorganisms', which required tenth grade biology students to use problem solving skills whilst simultaneously manipulating three independent variables in one simulated experiment. The aims were to investigate the computer simulation's impact on students' academic achievement and on their mastery of science process skills in relation to their cognitive stages. The results indicate that the concrete and transition operational students in the experimental group achieved significantly higher academic achievement than their counterparts in the control group. The higher the cognitive operational stage, the higher students' achievement was, except in the control group where students in the concrete and transition operational stages did not differ. Girls achieved equally with the boys in the experimental group. Students' academic achievement may indicate the potential impact a computer simulation program can have, enabling students with low reasoning abilities to cope successfully with learning concepts and principles in science which require high cognitive skills.

  2. FastBit: Interactively Searching Massive Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Kesheng; Ahern, Sean; Bethel, E. Wes

    2009-06-23

    As scientific instruments and computer simulations produce more and more data, the task of locating the essential information to gain insight becomes increasingly difficult. FastBit is an efficient software tool to address this challenge. In this article, we present a summary of the key underlying technologies, namely bitmap compression, encoding, and binning. Together these techniques enable FastBit to answer structured (SQL) queries orders of magnitude faster than popular database systems. To illustrate how FastBit is used in applications, we present three examples involving a high-energy physics experiment, a combustion simulation, and an accelerator simulation. In each case, FastBit significantly reduces the response time and enables interactive exploration on terabytes of data.
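
    A toy sketch of two of the ideas named above, binning and bitmap-based query evaluation, is given below; it is not FastBit's implementation (which also compresses the bitmaps), but it shows why a range query can be answered mostly with bitwise operations plus a candidate check on the two edge bins. The data and bin layout are invented for illustration.

```python
import numpy as np

# Illustrative sketch: bin a continuous variable, build one boolean "bitmap"
# per bin, and answer a range query with bitwise ORs plus a candidate check.

rng = np.random.default_rng(0)
temperature = rng.uniform(300.0, 3000.0, size=1_000_000)   # fake simulation output

edges = np.linspace(300.0, 3000.0, 28)          # 27 equi-width bins
bin_of = np.digitize(temperature, edges) - 1
bitmaps = [bin_of == b for b in range(len(edges) - 1)]

def range_query(lo, hi):
    """Rows with lo <= temperature < hi, using bitmaps for the covered bins
    and refining the two edge bins against the raw values (candidate check)."""
    lo_bin, hi_bin = np.digitize([lo, hi], edges) - 1
    hits = np.zeros(temperature.size, dtype=bool)
    for b in range(lo_bin, min(hi_bin, len(bitmaps) - 1) + 1):
        hits |= bitmaps[b]                       # cheap bitwise OR
    hits &= (temperature >= lo) & (temperature < hi)
    return np.flatnonzero(hits)

print(range_query(1200.0, 1500.0).size)
```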

  3. Constructing high-quality bounding volume hierarchies for N-body computation using the acceptance volume heuristic

    NASA Astrophysics Data System (ADS)

    Olsson, O.

    2018-01-01

    We present a novel heuristic derived from a probabilistic cost model for approximate N-body simulations. We show that this new heuristic can be used to guide tree construction towards higher quality trees with improved performance over current N-body codes. This represents an important step beyond the current practice of using spatial partitioning for N-body simulations, and enables adoption of a range of state-of-the-art algorithms developed for computer graphics applications to yield further improvements in N-body simulation performance. We outline directions for further developments and review the most promising such algorithms.

  4. 3D Parallel Multigrid Methods for Real-Time Fluid Simulation

    NASA Astrophysics Data System (ADS)

    Wan, Feifei; Yin, Yong; Zhang, Suiyu

    2018-03-01

    The multigrid method is widely used in fluid simulation because of its strong convergence. Besides accuracy, computational efficiency is an important factor to consider in order to enable real-time fluid simulation in computer graphics. To address this problem, we compared the performance of Algebraic Multigrid and Geometric Multigrid under both the V-Cycle and Full-Cycle schemes, and analyzed the convergence and speed of the different methods. All calculations in this paper are performed with GPU parallel computing. Finally, we run experiments on 3D grids at several scales and report the experimental results.
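
    For readers unfamiliar with the V-Cycle referred to above, the following sketch shows a minimal geometric multigrid V-cycle for a 1D Poisson problem (weighted-Jacobi smoothing, full-weighting restriction, linear prolongation). It is a serial CPU illustration of the scheme only, not the GPU implementation compared in the paper.

```python
import numpy as np

# Minimal geometric multigrid V-cycle for -u'' = f on [0, 1], u(0) = u(1) = 0.

def smooth(u, f, h, sweeps=3, w=2.0 / 3.0):
    """Weighted-Jacobi smoother."""
    for _ in range(sweeps):
        v = u.copy()
        v[1:-1] = (1 - w) * u[1:-1] + w * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
        u = v
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    return r

def restrict(r):
    """Full-weighting restriction to a grid with half the intervals."""
    rc = np.zeros((r.size - 1) // 2 + 1)
    rc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
    return rc

def prolong(ec, n_fine):
    """Linear-interpolation prolongation back to the fine grid."""
    e = np.zeros(n_fine)
    e[::2] = ec
    e[1::2] = 0.5 * (ec[:-1] + ec[1:])
    return e

def v_cycle(u, f, h):
    if u.size <= 3:                              # coarsest grid: solve directly
        u[1] = 0.5 * h * h * f[1]
        return u
    u = smooth(u, f, h)                          # pre-smoothing
    rc = restrict(residual(u, f, h))             # restrict the residual
    ec = v_cycle(np.zeros_like(rc), rc, 2 * h)   # coarse-grid correction
    u = u + prolong(ec, u.size)                  # prolong and correct
    return smooth(u, f, h)                       # post-smoothing

# Example: N = 128 intervals, f = sin(pi x), exact solution sin(pi x) / pi^2.
N = 128
x = np.linspace(0.0, 1.0, N + 1)
f = np.sin(np.pi * x)
u = np.zeros(N + 1)
for _ in range(10):
    u = v_cycle(u, f, 1.0 / N)
print(np.max(np.abs(u - np.sin(np.pi * x) / np.pi ** 2)))   # ~ discretization error
```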

  5. A heterogeneous system based on GPU and multi-core CPU for real-time fluid and rigid body simulation

    NASA Astrophysics Data System (ADS)

    da Silva Junior, José Ricardo; Gonzalez Clua, Esteban W.; Montenegro, Anselmo; Lage, Marcos; Dreux, Marcelo de Andrade; Joselli, Mark; Pagliosa, Paulo A.; Kuryla, Christine Lucille

    2012-03-01

    Computational fluid dynamics in simulation has become an important field not only for physics and engineering areas but also for simulation, computer graphics, virtual reality and even video game development. Many efficient models have been developed over the years, but when many contact interactions must be processed, most models present difficulties or cannot achieve real-time results when executed. The advent of parallel computing has enabled the development of many strategies for accelerating the simulations. Our work proposes a new system which uses some successful algorithms already proposed, as well as a data structure organisation based on a heterogeneous architecture using CPUs and GPUs, in order to process the simulation of the interaction of fluids and rigid bodies. This successfully results in a two-way interaction between them and their surrounding objects. As far as we know, this is the first work that presents a computational collaborative environment which makes use of two different paradigms of hardware architecture for this specific kind of problem. Since our method achieves real-time results, it is suitable for virtual reality, simulation and video game fluid simulation problems.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strout, Michelle

    Programming parallel machines is fraught with difficulties: the obfuscation of algorithms due to implementation details such as communication and synchronization, the need for transparency between language constructs and performance, the difficulty of performing program analysis to enable automatic parallelization techniques, and the existence of important "dusty deck" codes. The SAIMI project developed abstractions that enable the orthogonal specification of algorithms and implementation details within the context of existing DOE applications. The main idea is to enable the injection of small programming models such as expressions involving transcendental functions, polyhedral iteration spaces with sparse constraints, and task graphs into full programs through the use of pragmas. These smaller, more restricted programming models enable orthogonal specification of many implementation details such as how to map the computation onto parallel processors, how to schedule the computation, and how to allocate storage for the computation. At the same time, these small programming models enable the expression of the most computationally intense and communication-heavy portions of many scientific simulations. The ability to orthogonally manipulate the implementation for such computations will significantly ease performance programming efforts and expose transformation possibilities and parameters to automated approaches such as autotuning. At Colorado State University, the SAIMI project was supported through DOE grant DE-SC3956 from April 2010 through August 2015. The SAIMI project has contributed a number of important results to programming abstractions that enable the orthogonal specification of implementation details in scientific codes. This final report summarizes the research that was funded by the SAIMI project.

  7. Simulation and analysis of a geopotential research mission

    NASA Technical Reports Server (NTRS)

    Schutz, B. E.

    1987-01-01

    Computer simulations were performed for a Geopotential Research Mission (GRM) to enable the study of the gravitational sensitivity of the range rate measurements between the two satellites and to provide a set of simulated measurements to assist in the evaluation of techniques developed for the determination of the gravity field. The simulations were conducted with two satellites in near circular, frozen orbits at 160 km altitude, separated by 300 km. High precision numerical integration of the polar orbits was used with a gravitational field complete to degree and order 360. The set of simulated data for a mission duration of about 32 days was generated on a Cray X-MP computer. The results presented cover the most recent simulation, S8703, and include a summary of the numerical integration of the simulated trajectories, a summary of the requirements to compute nominal reference trajectories to meet the initial orbit determination requirements for the recovery of the geopotential, an analysis of the nature of the one-way integrated Doppler measurements associated with the simulation, and a discussion of the data set to be made available.

  8. Multi-scale simulations of space problems with iPIC3D

    NASA Astrophysics Data System (ADS)

    Lapenta, Giovanni; Bettarini, Lapo; Markidis, Stefano

    The implicit Particle-in-Cell method for the computer simulation of space plasma, and its implementation in a three-dimensional parallel code, called iPIC3D, are presented. The implicit integration in time of the Vlasov-Maxwell system removes the numerical stability constraints and enables kinetic plasma simulations at magnetohydrodynamics scales. Simulations of magnetic reconnection in plasma are presented to show the effectiveness of the algorithm. In particular we will show a number of simulations done for large scale 3D systems using the physical mass ratio for Hydrogen. Most notably one simulation treats kinetically a box of tens of Earth radii in each direction and was conducted using about 16000 processors of the Pleiades NASA computer. The work is conducted in collaboration with the MMS-IDS theory team from University of Colorado (M. Goldman, D. Newman and L. Andersson). Reference: Stefano Markidis, Giovanni Lapenta, Rizwan-uddin, Multi-scale simulations of plasma with iPIC3D, Mathematics and Computers in Simulation, Available online 17 October 2009, http://dx.doi.org/10.1016/j.matcom.2009.08.038

  9. ASC FY17 Implementation Plan, Rev. 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hamilton, P. G.

    The Stockpile Stewardship Program (SSP) is an integrated technical program for maintaining the safety, surety, and reliability of the U.S. nuclear stockpile. The SSP uses nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of experimental facilities and programs, and the computational capabilities to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources that support annual stockpile assessment and certification, study advanced nuclear weapons design and manufacturing processes, analyze accident scenarios and weapons aging, and provide the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balance of resources, including technical staff, hardware, simulation software, and computer science solutions.

  10. Enabling the environmentally clean air transportation of the future: a vision of computational fluid dynamics in 2030.

    PubMed

    Slotnick, Jeffrey P; Khodadoust, Abdollah; Alonso, Juan J; Darmofal, David L; Gropp, William D; Lurie, Elizabeth A; Mavriplis, Dimitri J; Venkatakrishnan, Venkat

    2014-08-13

    As global air travel expands rapidly to meet demand generated by economic growth, it is essential to continue to improve the efficiency of air transportation to reduce its carbon emissions and address concerns about climate change. Future transports must be 'cleaner' and designed to include technologies that will continue to lower engine emissions and reduce community noise. The use of computational fluid dynamics (CFD) will be critical to enable the design of these new concepts. In general, the ability to simulate aerodynamic and reactive flows using CFD has progressed rapidly during the past several decades and has fundamentally changed the aerospace design process. Advanced simulation capabilities not only enable reductions in ground-based and flight-testing requirements, but also provide added physical insight, and enable superior designs at reduced cost and risk. In spite of considerable success, reliable use of CFD has remained confined to a small region of the operating envelope due, in part, to the inability of current methods to reliably predict turbulent, separated flows. Fortunately, the advent of much more powerful computing platforms provides an opportunity to overcome a number of these challenges. This paper summarizes the findings and recommendations from a recent NASA-funded study that provides a vision for CFD in the year 2030, including an assessment of critical technology gaps and needed development, and identifies the key CFD technology advancements that will enable the design and development of much cleaner aircraft in the future. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  11. BIT BY BIT: A Game Simulating Natural Language Processing in Computers

    ERIC Educational Resources Information Center

    Kato, Taichi; Arakawa, Chuichi

    2008-01-01

    BIT BY BIT is an encryption game that is designed to improve students' understanding of natural language processing in computers. Participants encode clear words into binary code using an encryption key and exchange them in the game. BIT BY BIT enables participants who do not understand the concept of binary numbers to perform the process of…
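
    A hypothetical sketch of the kind of exercise the game has participants carry out by hand, encoding a word into a bit string with a simple one-byte key and decoding it again, is shown below; the key and the XOR encoding rule are invented for illustration and are not taken from BIT BY BIT itself.

```python
# Invented illustration of the encode/exchange/decode loop of an encryption
# game: each character is XORed with a one-byte key and written out in binary.

KEY = 0b01010101          # illustrative one-byte key, not from the game

def encode(word, key=KEY):
    return " ".join(format(ord(ch) ^ key, "08b") for ch in word)

def decode(bits, key=KEY):
    return "".join(chr(int(b, 2) ^ key) for b in bits.split())

msg = encode("BIT")
print(msg)                 # the bit string a participant would exchange
print(decode(msg))         # -> "BIT"
```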

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rubel, Oliver; Loring, Burlen; Vay, Jean -Luc

    The generation of short pulses of ion beams through the interaction of an intense laser with a plasma sheath offers the possibility of compact and cheaper ion sources for many applications--from fast ignition and radiography of dense targets to hadron therapy and injection into conventional accelerators. To enable the efficient analysis of large-scale, high-fidelity particle accelerator simulations using the Warp simulation suite, the authors introduce the Warp In situ Visualization Toolkit (WarpIV). WarpIV integrates state-of-the-art in situ visualization and analysis using VisIt with Warp, supports management and control of complex in situ visualization and analysis workflows, and implements integrated analytics to facilitate query- and feature-based data analytics and efficient large-scale data analysis. WarpIV enables for the first time distributed parallel, in situ visualization of the full simulation data using high-performance compute resources as the data is being generated by Warp. The authors describe the application of WarpIV to study and compare large 2D and 3D ion accelerator simulations, demonstrating significant differences in the acceleration process in 2D and 3D simulations. WarpIV is available to the public via https://bitbucket.org/berkeleylab/warpiv. The Warp In situ Visualization Toolkit (WarpIV) supports large-scale, parallel, in situ visualization and analysis and facilitates query- and feature-based analytics, enabling for the first time high-performance analysis of large-scale, high-fidelity particle accelerator simulations while the data is being generated by the Warp simulation suite. Furthermore, this supplemental material https://extras.computer.org/extra/mcg2016030022s1.pdf provides more details regarding the memory profiling and optimization and the Yee grid recentering optimization results discussed in the main article.

  13. Using high-performance networks to enable computational aerosciences applications

    NASA Technical Reports Server (NTRS)

    Johnson, Marjory J.

    1992-01-01

    One component of the U.S. Federal High Performance Computing and Communications Program (HPCCP) is the establishment of a gigabit network to provide a communications infrastructure for researchers across the nation. This gigabit network will provide new services and capabilities, in addition to increased bandwidth, to enable future applications. An understanding of these applications is necessary to guide the development of the gigabit network and other high-performance networks of the future. In this paper we focus on computational aerosciences applications run remotely using the Numerical Aerodynamic Simulation (NAS) facility located at NASA Ames Research Center. We characterize these applications in terms of network-related parameters and relate user experiences that reveal limitations imposed by the current wide-area networking infrastructure. Then we investigate how the development of a nationwide gigabit network would enable users of the NAS facility to work in new, more productive ways.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sprague, Michael A.

    Enabled by petascale supercomputing, the next generation of computer models for wind energy will simulate a vast range of scales and physics, spanning from turbine structural dynamics and blade-scale turbulence to mesoscale atmospheric flow. A single model covering all scales and physics is not feasible. Thus, these simulations will require the coupling of different models/codes, each for different physics, interacting at their domain boundaries.

  15. Application of Simulation to Individualized Self-Paced Training. Final Report. TAEG Report No. 11-2.

    ERIC Educational Resources Information Center

    Lindahl, William H.; Gardner, James H.

    Computer simulation is recognized as a valuable systems analysis research tool which enables the detailed examination, evaluation, and manipulation, under stated conditions, of a system without direct action on the system. This technique provides management with quantitative data on system performance and capabilities which can be used to compare…

  16. Computational Fluid Dynamics of Whole-Body Aircraft

    NASA Astrophysics Data System (ADS)

    Agarwal, Ramesh

    1999-01-01

    The current state of the art in computational aerodynamics for whole-body aircraft flowfield simulations is described. Recent advances in geometry modeling, surface and volume grid generation, and flow simulation algorithms have led to accurate flowfield predictions for increasingly complex and realistic configurations. As a result, computational aerodynamics has emerged as a crucial enabling technology for the design and development of flight vehicles. Examples illustrating the current capability for the prediction of transport and fighter aircraft flowfields are presented. Unfortunately, accurate modeling of turbulence remains a major difficulty in the analysis of viscosity-dominated flows. In the future, inverse design methods, multidisciplinary design optimization methods, artificial intelligence technology, and massively parallel computer technology will be incorporated into computational aerodynamics, opening up greater opportunities for improved product design at substantially reduced costs.

  17. Interactive physically-based sound simulation

    NASA Astrophysics Data System (ADS)

    Raghuvanshi, Nikunj

    The realization of interactive, immersive virtual worlds requires the ability to present a realistic audio experience that convincingly complements their visual rendering. Physical simulation is a natural way to achieve such realism, enabling deeply immersive virtual worlds. However, physically-based sound simulation is very computationally expensive owing to the high-frequency, transient oscillations underlying audible sounds. The increasing computational power of desktop computers has served to reduce the gap between required and available computation, and it has become possible to bridge this gap further by using a combination of algorithmic improvements that exploit the physical as well as perceptual properties of audible sounds. My thesis is a step in this direction. My dissertation concentrates on developing real-time techniques for both sub-problems of sound simulation: synthesis and propagation. Sound synthesis is concerned with generating the sounds produced by objects due to elastic surface vibrations upon interaction with the environment, such as collisions. I present novel techniques that exploit human auditory perception to simulate scenes with hundreds of sounding objects undergoing impact and rolling in real time. Sound propagation is the complementary problem of modeling the high-order scattering and diffraction of sound in an environment as it travels from source to listener. I discuss my work on a novel numerical acoustic simulator (ARD) that is a hundred times faster and consumes ten times less memory than a high-accuracy finite-difference technique, allowing acoustic simulations on previously-intractable spaces, such as a cathedral, on a desktop computer. Lastly, I present my work on interactive sound propagation that leverages my ARD simulator to render the acoustics of arbitrary static scenes for multiple moving sources and a moving listener in real time, while accounting for scene-dependent effects such as low-pass filtering and smooth attenuation behind obstructions, reverberation, scattering from complex geometry and sound focusing. This is enabled by a novel compact representation that takes a thousand times less memory than a direct scheme, thus reducing memory footprints to fit within available main memory. To the best of my knowledge, this is the only technique and system in existence to demonstrate auralization of physical wave-based effects in real-time on large, complex 3D scenes.

  18. Simulation training tools for nonlethal weapons using gaming environments

    NASA Astrophysics Data System (ADS)

    Donne, Alexsana; Eagan, Justin; Tse, Gabriel; Vanderslice, Tom; Woods, Jerry

    2006-05-01

    Modern simulation techniques have a growing role for evaluating new technologies and for developing cost-effective training programs. A mission simulator facilitates the productive exchange of ideas by demonstration of concepts through compellingly realistic computer simulation. Revolutionary advances in 3D simulation technology have made it possible for desktop computers to process strikingly realistic and complex interactions with results depicted in real-time. Computer games now allow for multiple real human players and "artificially intelligent" (AI) simulated robots to play together. Advances in computer processing power have compensated for the inherent intensive calculations required for complex simulation scenarios. The main components of the leading game engines have been released for user modifications, enabling game enthusiasts and amateur programmers to advance the state-of-the-art in AI and computer simulation technologies. It is now possible to simulate sophisticated and realistic conflict situations in order to evaluate the impact of non-lethal devices as well as conflict resolution procedures using such devices. Simulations can reduce training costs as end users: learn what a device does and doesn't do prior to use, understand responses to the device prior to deployment, determine if the device is appropriate for their situational responses, and train with new devices and techniques before purchasing hardware. This paper will present the status of SARA's mission simulation development activities, based on the Half-Life game engine, for the purpose of evaluating the latest non-lethal weapon devices, and for developing training tools for such devices.

  19. Modeling Pilot Behavior for Assessing Integrated Alert and Notification Systems on Flight Decks

    NASA Technical Reports Server (NTRS)

    Cover, Mathew; Schnell, Thomas

    2010-01-01

    Numerous new flight deck configurations for caution, warning, and alerts can be conceived; yet testing them with human-in-the-loop experiments to evaluate each one would not be practical. New sensors, instruments, and displays are being put into cockpits every day and this is particularly true as we enter the dawn of the Next Generation Air Transportation System (NextGen). By modeling pilot behavior in a computer simulation, an unlimited number of unique caution, warning, and alert configurations can be evaluated 24/7 by a computer. These computer simulations can then identify the most promising candidate formats to further evaluate in higher fidelity, but more costly, human-in-the-loop (HITL) simulations. Evaluations using batch simulations with human performance models save time and money, and enable a broader consideration of possible caution, warning, and alerting configurations for future flight decks.

  20. pWeb: A High-Performance, Parallel-Computing Framework for Web-Browser-Based Medical Simulation.

    PubMed

    Halic, Tansel; Ahn, Woojin; De, Suvranu

    2014-01-01

    This work presents pWeb, a new language and compiler for parallelization of client-side compute intensive web applications such as surgical simulations. The recently introduced HTML5 standard has enabled creating unprecedented applications on the web. The low performance of the web browser compared to native applications, however, remains the bottleneck for computationally intensive applications, including visualization of complex scenes, real-time physical simulations, and image processing. The new proposed language is built upon web workers for multithreaded programming in HTML5. The language provides fundamental functionalities of parallel programming languages as well as the fork/join parallel model, which is not supported by web workers. The language compiler automatically generates an equivalent parallel script that complies with the HTML5 standard. A case study on realistic rendering for surgical simulations demonstrates enhanced performance with a compact set of instructions.

  1. Method for Identification of Results of Dynamic Overloads in Assessment of Safety Use of the Mine Auxiliary Transportation System

    NASA Astrophysics Data System (ADS)

    Tokarczyk, Jarosław

    2016-12-01

    A method for identifying the effects of dynamic overloads acting on people, which may occur in an emergency state of a suspended monorail, is presented in the paper. The braking curve was determined using an MBS (Multi-Body System) simulation. For this purpose a computational MBS model of the suspended monorail was developed and two different variants of numerical calculations were carried out. An algorithm for conducting numerical simulations to assess the effects of dynamic overloads acting on suspended monorail users is also presented in the paper. An example of a FEM (Finite Element Method) computational model composed of the technical means and the anthropometric ATB (Articulated Total Body) model is shown. The simulation results are presented: a graph of the HIC (Head Injury Criterion) parameter and successive phases of displacement of the ATB model. A generator of computational models for the safety criterion, which enables preparation of input data and remote starting of the simulation, is proposed.
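
    The HIC parameter mentioned above has a standard definition: the maximum, over time windows (t1, t2), of (t2 - t1) multiplied by the mean head acceleration (in g) over that window raised to the power 2.5. The sketch below evaluates it for a synthetic acceleration pulse; in the study, the input would instead be the head acceleration history of the ATB model.

```python
import numpy as np

def hic(t, a_g, max_window=0.036):
    """Head Injury Criterion: max over windows (t1, t2), no wider than
    max_window, of (t2 - t1) * (mean acceleration over the window) ** 2.5."""
    # cumulative trapezoid integral of a(t), for fast window averages
    ca = np.concatenate(([0.0], np.cumsum(0.5 * (a_g[1:] + a_g[:-1]) * np.diff(t))))
    best = 0.0
    n = len(t)
    for i in range(n - 1):
        for j in range(i + 1, n):
            dt = t[j] - t[i]
            if dt > max_window:
                break
            avg = max((ca[j] - ca[i]) / dt, 0.0)   # clamp negative means to zero
            best = max(best, dt * avg ** 2.5)
    return best

# synthetic half-sine head deceleration pulse: 60 g peak, 20 ms duration
t = np.linspace(0.0, 0.05, 501)
a = np.where(t < 0.02, 60.0 * np.sin(np.pi * t / 0.02), 0.0)
print(f"HIC36 ~ {hic(t, a):.0f}")
```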

  2. Towards programmable plant genetic circuits.

    PubMed

    Medford, June I; Prasad, Ashok

    2016-07-01

    Synthetic biology enables the construction of genetic circuits with predictable gene functions in plants. Detailed quantitative descriptions of the transfer function or input-output function for genetic parts (promoters, 5' and 3' untranslated regions, etc.) are collected. These data are then used in computational simulations to determine their robustness and desired properties, thereby enabling the best components to be selected for experimental testing in plants. In addition, the process forms an iterative workflow which allows vast improvement to validated elements with sub-optimal function. These processes enable computational functions such as digital logic in living plants and follow the pathway of technological advances which took us from vacuum tubes to cell phones. © 2016 The Authors The Plant Journal © 2016 John Wiley & Sons Ltd.
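
    A minimal sketch of the workflow described above, characterizing a genetic part by a Hill-type transfer function and composing parts to check for digital behaviour, is given below. The parameter values and the NOR construction are illustrative assumptions, not measured plant data or the authors' model.

```python
# Hypothetical sketch: a repressible promoter characterized by a Hill-type
# input-output function, composed into NOT/NOR gates and checked for digital
# behaviour. All parameter values are illustrative only.

def hill_repressor(x, beta=10.0, k=1.0, n=2.0):
    """Steady-state output of a repressible promoter driven by input x."""
    return beta / (1.0 + (x / k) ** n)

def not_gate(x):
    return hill_repressor(x)

def nor_gate(x1, x2):
    # Two repressors acting on one promoter approximate a NOR gate.
    return hill_repressor(x1 + x2)

# Digital-behaviour check with 'low' and 'high' input levels.
lo, hi = 0.01, 10.0
for a in (lo, hi):
    for b in (lo, hi):
        print(a > 1, b > 1, "->", nor_gate(a, b) > 1.0)   # matches NOR truth table
```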

  3. Computational modeling of cardiac hemodynamics: Current status and future outlook

    NASA Astrophysics Data System (ADS)

    Mittal, Rajat; Seo, Jung Hee; Vedula, Vijay; Choi, Young J.; Liu, Hang; Huang, H. Howie; Jain, Saurabh; Younes, Laurent; Abraham, Theodore; George, Richard T.

    2016-01-01

    The proliferation of four-dimensional imaging technologies, increasing computational speeds, improved simulation algorithms, and the widespread availability of powerful computing platforms is enabling simulations of cardiac hemodynamics with unprecedented speed and fidelity. Since cardiovascular disease is intimately linked to cardiovascular hemodynamics, accurate assessment of the patient's hemodynamic state is critical for the diagnosis and treatment of heart disease. Unfortunately, while a variety of invasive and non-invasive approaches for measuring cardiac hemodynamics are in widespread use, they still only provide an incomplete picture of the hemodynamic state of a patient. In this context, computational modeling of cardiac hemodynamics presents a powerful non-invasive modality that can fill this information gap and significantly impact the diagnosis as well as the treatment of cardiac disease. This article reviews the current status of this field as well as the emerging trends and challenges in cardiovascular health, computing, modeling, and simulation that are expected to play a key role in its future development. Some recent advances in modeling and simulations of cardiac flow are described by using examples from our own work as well as the research of other groups.

  4. Data Movement Dominates: Advanced Memory Technology to Address the Real Exascale Power Problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bergman, Keren

    Energy is the fundamental barrier to Exascale supercomputing and is dominated by the cost of moving data from one point to another, not computation. Similarly, performance is dominated by data movement, not computation. The solution to this problem requires three critical technologies: 3D integration, optical chip-to-chip communication, and a new communication model. The central goal of the Sandia-led "Data Movement Dominates" project was to develop memory systems and new architectures based on these technologies that have the potential to lower the cost of local memory accesses by orders of magnitude and provide substantially more bandwidth. Only through these transformational advances can future systems reach the goals of Exascale computing within a manageable power budget. The Sandia-led team included co-PIs from Columbia University, Lawrence Berkeley Lab, and the University of Maryland. The Columbia effort of Data Movement Dominates focused on developing a physically accurate simulation environment and experimental verification for optically-connected memory (OCM) systems that can enable continued performance scaling through high-bandwidth capacity, energy-efficient bit-rate transparency, and time-of-flight latency. With OCM, memory device parallelism and total capacity can scale to match future high-performance computing requirements without sacrificing data-movement efficiency. When we consider systems with integrated photonics, links to memory can be seamlessly integrated with the interconnection network; in a sense, memory becomes a primary aspect of the interconnection network. At the core of the Columbia effort, toward expanding our understanding of OCM-enabled computing, we created an integrated modeling and simulation environment that uniquely integrates the physical behavior of the optical layer. The PhoenxSim suite of design and software tools developed under this effort has enabled the co-design and performance evaluation of photonics-enabled OCM architectures on Exascale computing systems.

  5. Closed-Loop HIRF Experiments Performed on a Fault Tolerant Flight Control Computer

    NASA Technical Reports Server (NTRS)

    Belcastro, Celeste M.

    1997-01-01

    Closed-loop HIRF experiments were performed on a fault tolerant flight control computer (FCC) at the NASA Langley Research Center. The FCC used in the experiments was a quad-redundant flight control computer executing B737 Autoland control laws. The FCC was placed in one of the mode-stirred reverberation chambers in the HIRF Laboratory and interfaced to a computer simulation of the B737 flight dynamics, engines, sensors, actuators, and atmosphere in the Closed-Loop Systems Laboratory. Disturbances to the aircraft associated with wind gusts and turbulence were simulated during tests. Electrical isolation between the FCC under test and the simulation computer was achieved via a fiber optic interface for the analog and discrete signals. Closed-loop operation of the FCC enabled flight dynamics and atmospheric disturbances affecting the aircraft to be represented during tests. Upset was induced in the FCC as a result of exposure to HIRF, and the effect of upset on the simulated flight of the aircraft was observed and recorded. This paper presents a description of these closed-loop HIRF experiments, upset data obtained from the FCC during these experiments, and closed-loop effects on the simulated flight of the aircraft.

  6. Simulation and analysis of a geopotential research mission

    NASA Technical Reports Server (NTRS)

    Schutz, B. E.

    1986-01-01

    A computer simulation was performed for a Geopotential Research Mission (GRM) to enable study of the gravitational sensitivity of the range/rate measurement between two satellites and to provide a set of simulated measurements to assist in the evaluation of techniques developed for the determination of the gravity field. The simulation, identified as SGRM 8511, was conducted with two satellites in near circular, frozen orbits at 160 km altitude and separated by 300 km. High precision numerical integration of the polar orbits was used with a gravitational field complete to degree and order 180 coefficients and to degree 300 in orders 0 to 10. The set of simulated data for a mission duration of about 32 days was generated on a Cray X-MP computer. The characteristics of the simulation and the nature of the results are described.

  7. A Solution Framework for Environmental Characterization Problems

    EPA Science Inventory

    This paper describes experiences developing a grid-enabled framework for solving environmental inverse problems. The solution approach taken here couples environmental simulation models with global search methods and requires readily available computational resources of the grid ...

  8. Modality-Driven Classification and Visualization of Ensemble Variance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bensema, Kevin; Gosink, Luke; Obermaier, Harald

    Paper for the IEEE Visualization Conference. Advances in computational power now enable domain scientists to address conceptual and parametric uncertainty by running simulations multiple times in order to sufficiently sample the uncertain input space.

  9. A Fourier-based total-field/scattered-field technique for three-dimensional broadband simulations of elastic targets near a water-sand interface.

    PubMed

    Shao, Yu; Wang, Shumin

    2016-12-01

    The numerical simulation of acoustic scattering from elastic objects near a water-sand interface is critical to underwater target identification. Frequency-domain methods are computationally expensive, especially for large-scale broadband problems. A numerical technique is proposed to enable the efficient use of finite-difference time-domain method for broadband simulations. By incorporating a total-field/scattered-field boundary, the simulation domain is restricted inside a tightly bounded region. The incident field is further synthesized by the Fourier transform for both subcritical and supercritical incidences. Finally, the scattered far field is computed using a half-space Green's function. Numerical examples are further provided to demonstrate the accuracy and efficiency of the proposed technique.

  10. Improving Fidelity of Launch Vehicle Liftoff Acoustic Simulations

    NASA Technical Reports Server (NTRS)

    Liever, Peter; West, Jeff

    2016-01-01

    Launch vehicles experience high acoustic loads during ignition and liftoff affected by the interaction of rocket plume generated acoustic waves with launch pad structures. Application of highly parallelized Computational Fluid Dynamics (CFD) analysis tools optimized for the NAS computer systems, such as the Loci/CHEM program, now enables simulation of time-accurate, turbulent, multi-species plume formation and interaction with launch pad geometry, and captures the generation of acoustic noise at the source regions in the plume shear layers and impingement regions. These CFD solvers are robust in capturing the acoustic fluctuations, but they are too dissipative to accurately resolve the propagation of the acoustic waves throughout the launch environment domain along the vehicle. A hybrid Computational Fluid Dynamics and Computational Aero-Acoustics (CFD/CAA) modeling framework has been developed to improve such liftoff acoustic environment predictions. The framework combines the existing highly-scalable NASA production CFD code, Loci/CHEM, with a high-order accurate discontinuous Galerkin (DG) solver, Loci/THRUST, developed in the same computational framework. Loci/THRUST employs a low dissipation, high-order, unstructured DG method to accurately propagate acoustic waves away from the source regions across large distances. The DG solver is currently capable of solving up to 4th order solutions for non-linear, conservative acoustic field propagation. Higher order boundary conditions are implemented to accurately model the reflection and refraction of acoustic waves on launch pad components. The DG solver accepts generalized unstructured meshes, enabling efficient application of common mesh generation tools for CHEM and THRUST simulations. The DG solution is coupled with the CFD solution at interface boundaries placed near the CFD acoustic source regions. Both simulations are executed simultaneously with coordinated boundary condition data exchange.

  11. Collaborative Simulation Grid: Multiscale Quantum-Mechanical/Classical Atomistic Simulations on Distributed PC Clusters in the US and Japan

    NASA Technical Reports Server (NTRS)

    Kikuchi, Hideaki; Kalia, Rajiv; Nakano, Aiichiro; Vashishta, Priya; Iyetomi, Hiroshi; Ogata, Shuji; Kouno, Takahisa; Shimojo, Fuyuki; Tsuruta, Kanji; Saini, Subhash

    2002-01-01

    A multidisciplinary, collaborative simulation has been performed on a Grid of geographically distributed PC clusters. The multiscale simulation approach seamlessly combines i) atomistic simulation based on the molecular dynamics (MD) method and ii) quantum mechanical (QM) calculation based on the density functional theory (DFT), so that accurate but less scalable computations are performed only where they are needed. The multiscale MD/QM simulation code has been Grid-enabled using i) a modular, additive hybridization scheme, ii) multiple QM clustering, and iii) computation/communication overlapping. The Gridified MD/QM simulation code has been used to study environmental effects of water molecules on fracture in silicon. A preliminary run of the code has achieved a parallel efficiency of 94% on 25 PCs distributed over 3 PC clusters in the US and Japan, and a larger test involving 154 processors on 5 distributed PC clusters is in progress.

  12. Rapid Automated Aircraft Simulation Model Updating from Flight Data

    NASA Technical Reports Server (NTRS)

    Brian, Geoff; Morelli, Eugene A.

    2011-01-01

    Techniques to identify aircraft aerodynamic characteristics from flight measurements and compute corrections to an existing simulation model of a research aircraft were investigated. The purpose of the research was to develop a process enabling rapid automated updating of aircraft simulation models using flight data and apply this capability to all flight regimes, including flight envelope extremes. The process presented has the potential to improve the efficiency of envelope expansion flight testing, revision of control system properties, and the development of high-fidelity simulators for pilot training.

  13. Py-SPHViewer: Cosmological simulations using Smoothed Particle Hydrodynamics

    NASA Astrophysics Data System (ADS)

    Benítez-Llambay, Alejandro

    2017-12-01

    Py-SPHViewer visualizes and explores N-body + Hydrodynamics simulations. The code interpolates the underlying density field (or any other property) traced by a set of particles, using the Smoothed Particle Hydrodynamics (SPH) interpolation scheme, thus producing not only beautiful but also useful scientific images. Py-SPHViewer enables the user to explore simulated volumes using different projections. Py-SPHViewer also provides a natural way to visualize (in a self-consistent fashion) gas dynamical simulations, which use the same technique to compute the interactions between particles.
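
    The SPH interpolation idea the tool is built on can be sketched as follows: each particle's mass (or any other property) is spread onto an image grid with a smoothing kernel. The sketch below uses a Gaussian kernel and a fixed smoothing length for brevity, whereas Py-SPHViewer itself uses a compact SPH kernel with per-particle smoothing lengths and optimized projections.

```python
import numpy as np

# Toy SPH projection: spread each particle's mass onto a 2D grid with a
# normalised Gaussian kernel to reconstruct a surface-density map.

def sph_image(pos, mass, h, npix=256, extent=1.0):
    """Project particles (N x 2 positions) onto an npix x npix surface-density map."""
    xg = np.linspace(-extent, extent, npix)
    X, Y = np.meshgrid(xg, xg)
    img = np.zeros((npix, npix))
    for (px, py), m, hh in zip(pos, mass, h):
        w = np.exp(-((X - px) ** 2 + (Y - py) ** 2) / (2.0 * hh ** 2))
        img += m * w / (2.0 * np.pi * hh ** 2)   # normalised 2D Gaussian kernel
    return img

rng = np.random.default_rng(1)
pos = rng.normal(scale=0.3, size=(500, 2))       # fake particle positions
mass = np.ones(500)
h = np.full(500, 0.05)                           # fixed smoothing length
image = sph_image(pos, mass, h)
pix = 2.0 / (256 - 1)                            # pixel spacing for extent = 1.0
print(image.max(), image.sum() * pix ** 2)       # peak density and ~ total mass
```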

  14. Statistical Surrogate Modeling of Atmospheric Dispersion Events Using Bayesian Adaptive Splines

    NASA Astrophysics Data System (ADS)

    Francom, D.; Sansó, B.; Bulaevskaya, V.; Lucas, D. D.

    2016-12-01

    Uncertainty in the inputs of complex computer models, including atmospheric dispersion and transport codes, is often assessed via statistical surrogate models. Surrogate models are computationally efficient statistical approximations of expensive computer models that enable uncertainty analysis. We introduce Bayesian adaptive spline methods for producing surrogate models that capture the major spatiotemporal patterns of the parent model, while satisfying all the necessities of flexibility, accuracy and computational feasibility. We present novel methodological and computational approaches motivated by a controlled atmospheric tracer release experiment conducted at the Diablo Canyon nuclear power plant in California. Traditional methods for building statistical surrogate models often do not scale well to experiments with large amounts of data. Our approach is well suited to experiments involving large numbers of model inputs, large numbers of simulations, and functional output for each simulation. Our approach allows us to perform global sensitivity analysis with ease. We also present an approach to calibration of simulators using field data.
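
    The surrogate-modeling workflow described above, run the expensive simulator at a design of input points, fit a cheap statistical approximation, then propagate input uncertainty through that approximation, can be sketched as below. A plain quadratic response surface stands in for the Bayesian adaptive splines of the paper, and the "simulator" is a toy function invented for illustration.

```python
import numpy as np

# Toy surrogate-modeling workflow: design -> expensive runs -> cheap surrogate
# -> Monte Carlo uncertainty propagation. Not the BASS method of the paper.

def expensive_simulator(x):
    # stand-in for a dispersion model run: x = (release rate, wind speed)
    return np.exp(-x[..., 0]) * np.sin(3.0 * x[..., 1]) + 0.1 * x[..., 0] * x[..., 1]

# 1. Run the simulator at a modest design of input points.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(200, 2))
y = expensive_simulator(X)

# 2. Fit a cheap surrogate (quadratic response surface) by least squares.
def features(X):
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1 ** 2, x2 ** 2])

coef, *_ = np.linalg.lstsq(features(X), y, rcond=None)
surrogate = lambda X: features(X) @ coef

# 3. Propagate input uncertainty through the surrogate with cheap Monte Carlo.
Xmc = rng.uniform(0.0, 1.0, size=(100_000, 2))
print("surrogate mean/std:", surrogate(Xmc).mean(), surrogate(Xmc).std())
```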

  15. Towards the simulation of molecular collisions with a superconducting quantum computer

    NASA Astrophysics Data System (ADS)

    Geller, Michael

    2013-05-01

    I will discuss the prospects for the use of large-scale, error-corrected quantum computers to simulate complex quantum dynamics such as molecular collisions. This will likely require millions of qubits. I will also discuss an alternative approach [M. R. Geller et al., arXiv:1210.5260] that is ideally suited for today's superconducting circuits, which uses the single-excitation subspace (SES) of a system of n tunably coupled qubits. The SES method allows many operations in the unitary group SU(n) to be implemented in a single step, bypassing the need for elementary gates, thereby making large computations possible without error correction. The method enables universal quantum simulation, including simulation of the time-dependent Schrodinger equation, and we argue that a 1000-qubit SES processor should be capable of achieving quantum speedup relative to a petaflop supercomputer. We speculate on the utility and practicality of such a simulator for atomic and molecular collision physics. Work supported by the US National Science Foundation CDI program.

  16. Robust Real-Time Musculoskeletal Modeling Driven by Electromyograms.

    PubMed

    Durandau, Guillaume; Farina, Dario; Sartori, Massimo

    2018-03-01

    Current clinical biomechanics involves lengthy data acquisition and time-consuming offline analyses with biomechanical models not operating in real-time for man-machine interfacing. We developed a method that enables online analysis of neuromusculoskeletal function in vivo in the intact human. We used electromyography (EMG)-driven musculoskeletal modeling to simulate all transformations from muscle excitation onset (EMGs) to mechanical moment production around multiple lower-limb degrees of freedom (DOFs). We developed a calibration algorithm that enables adjusting musculoskeletal model parameters specifically to an individual's anthropometry and force-generating capacity. We incorporated the modeling paradigm into a computationally efficient, generic framework that can be interfaced in real-time with any movement data collection system. The framework demonstrated the ability of computing forces in 13 lower-limb muscle-tendon units and resulting moments about three joint DOFs simultaneously in real-time. Remarkably, it was capable of extrapolating beyond calibration conditions, i.e., predicting accurate joint moments during six unseen tasks and one unseen DOF. The proposed framework can dramatically reduce evaluation latency in current clinical biomechanics and open up new avenues for establishing prompt and personalized treatments, as well as for establishing natural interfaces between patients and rehabilitation systems. The integration of EMG with numerical modeling will enable simulating realistic neuromuscular strategies in conditions including muscular/orthopedic deficit, which could not be robustly simulated via pure modeling formulations. This will enable translation to clinical settings and development of healthcare technologies including real-time bio-feedback of internal mechanical forces and direct patient-machine interfacing.
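
    A heavily simplified, single-muscle sketch of the EMG-to-moment pipeline described above is given below: a rectified EMG envelope is passed through first-order activation dynamics, scaled by an assumed maximum isometric force, and multiplied by an assumed moment arm to give a joint moment. The paper's framework additionally models tendon dynamics, force-length and force-velocity relations, 13 muscle-tendon units, 3 degrees of freedom, and subject-specific calibration; none of the parameter values below are from the study.

```python
import numpy as np

# Simplified EMG -> activation -> force -> joint moment pipeline (illustrative).

dt = 0.001                        # 1 kHz sampling
t = np.arange(0.0, 2.0, dt)
emg = np.clip(np.sin(2 * np.pi * 1.0 * t), 0.0, None)   # fake rectified EMG envelope

tau_act, tau_deact = 0.015, 0.050 # activation / deactivation time constants (s), assumed
F_max = 1500.0                    # maximum isometric force (N), assumed
moment_arm = 0.04                 # extensor moment arm (m), assumed

a = np.zeros_like(emg)            # neural activation
for i in range(1, len(t)):
    tau = tau_act if emg[i] > a[i - 1] else tau_deact
    a[i] = a[i - 1] + dt * (emg[i] - a[i - 1]) / tau   # first-order activation dynamics

force = a * F_max                 # isometric Hill scaling, neglecting f(l) and f(v)
moment = force * moment_arm       # joint moment about the degree of freedom (N*m)
print(f"peak moment: {moment.max():.1f} N*m")
```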

  17. Advanced EUV mask and imaging modeling

    NASA Astrophysics Data System (ADS)

    Evanschitzky, Peter; Erdmann, Andreas

    2017-10-01

    The exploration and optimization of image formation in partially coherent EUV projection systems with complex source shapes requires flexible, accurate, and efficient simulation models. This paper reviews advanced mask diffraction and imaging models for the highly accurate and fast simulation of EUV lithography systems, addressing important aspects of the current technical developments. The simulation of light diffraction from the mask employs an extended rigorous coupled wave analysis (RCWA) approach, which is optimized for EUV applications. In order to be able to deal with current EUV simulation requirements, several additional models are included in the extended RCWA approach: a field decomposition and a field stitching technique enable the simulation of larger complex structured mask areas. An EUV multilayer defect model including a database approach makes the fast and fully rigorous defect simulation and defect repair simulation possible. A hybrid mask simulation approach combining real and ideal mask parts allows the detailed investigation of the origin of different mask 3-D effects. The image computation is done with a fully vectorial Abbe-based approach. Arbitrary illumination and polarization schemes and adapted rigorous mask simulations guarantee a high accuracy. A fully vectorial sampling-free description of the pupil with Zernikes and Jones pupils and an optimized representation of the diffraction spectrum enable the computation of high-resolution images with high accuracy and short simulation times. A new pellicle model supports the simulation of arbitrary membrane stacks, pellicle distortions, and particles/defects on top of the pellicle. Finally, an extension for highly accurate anamorphic imaging simulations is included. The application of the models is demonstrated by typical use cases.

  18. Three-Dimensional Liver Surgery Simulation: Computer-Assisted Surgical Planning with Three-Dimensional Simulation Software and Three-Dimensional Printing.

    PubMed

    Oshiro, Yukio; Ohkohchi, Nobuhiro

    2017-06-01

    To perform accurate hepatectomy without injury, it is necessary to understand the anatomical relationship among the branches of Glisson's sheath, hepatic veins, and tumor. In Japan, three-dimensional (3D) preoperative simulation for liver surgery is becoming increasingly common, and liver 3D modeling and 3D hepatectomy simulation by 3D analysis software for liver surgery have been covered by universal healthcare insurance since 2012. Herein, we review the history of virtual hepatectomy using computer-assisted surgery (CAS) and our research to date, and we discuss the future prospects of CAS. We have used the SYNAPSE VINCENT medical imaging system (Fujifilm Medical, Tokyo, Japan) for 3D visualization and virtual resection of the liver since 2010. We developed a novel fusion imaging technique combining 3D computed tomography (CT) with magnetic resonance imaging (MRI). The fusion image enables us to easily visualize anatomic relationships among the hepatic arteries, portal veins, bile duct, and tumor in the hepatic hilum. In 2013, we developed an original software, called Liversim, which enables real-time deformation of the liver using physical simulation, and a randomized control trial has recently been conducted to evaluate the use of Liversim and SYNAPSE VINCENT for preoperative simulation and planning. Furthermore, we developed a novel hollow 3D-printed liver model whose surface is covered with frames. This model is useful for safe liver resection, has better visibility, and the production cost is reduced to one-third of a previous model. Preoperative simulation and navigation with CAS in liver resection are expected to help planning and conducting a surgery and surgical education. Thus, a novel CAS system will contribute to not only the performance of reliable hepatectomy but also to surgical education.

  19. Performance analysis of a parallel Monte Carlo code for simulating solar radiative transfer in cloudy atmospheres using CUDA-enabled NVIDIA GPU

    NASA Astrophysics Data System (ADS)

    Russkova, Tatiana V.

    2017-11-01

    One tool to improve the performance of Monte Carlo methods for numerical simulation of light transport in the Earth's atmosphere is parallel technology. A new algorithm oriented to parallel execution on the CUDA-enabled NVIDIA graphics processor is discussed. The efficiency of parallelization is analyzed on the basis of calculating the upward and downward fluxes of solar radiation in both vertically homogeneous and inhomogeneous models of the atmosphere. The results of testing the new code under various atmospheric conditions, including continuous single-layered and multilayered clouds and selective molecular absorption, are presented. The results of testing the code using video cards with different compute capability are analyzed. It is shown that the changeover of computing from conventional PCs to the architecture of graphics processors gives more than a hundredfold increase in performance and fully reveals the capabilities of the technology used.

  20. Simulation Enabled Safeguards Assessment Methodology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robert Bean; Trond Bjornard; Thomas Larson

    2007-09-01

    It is expected that nuclear energy will be a significant component of future energy supplies. New facilities, operating under a strengthened international nonproliferation regime, will be needed. There is good reason to believe virtual engineering applied to the facility design, as well as to the safeguards system design, will reduce total project cost and improve efficiency in the design cycle. The Simulation Enabled Safeguards Assessment MEthodology (SESAME) has been developed as a software package to provide this capability for nuclear reprocessing facilities. The software architecture is specifically designed for distributed computing, collaborative design efforts, and modular construction to allow step improvements in functionality. Drag and drop wireframe construction allows the user to select the desired components from a component warehouse, render the system for 3D visualization, and, linked to a set of physics libraries and/or computational codes, conduct process evaluations of the system they have designed.

  1. Spectral Prior Image Constrained Compressed Sensing (Spectral PICCS) for Photon-Counting Computed Tomography

    PubMed Central

    Yu, Zhicong; Leng, Shuai; Li, Zhoubo; McCollough, Cynthia H.

    2016-01-01

    Photon-counting computed tomography (PCCT) is an emerging imaging technique that enables multi-energy imaging with only a single scan acquisition. To enable multi-energy imaging, the detected photons corresponding to the full x-ray spectrum are divided into several subgroups of bin data that correspond to narrower energy windows. Consequently, noise in each energy bin increases compared to the full-spectrum data. This work proposes an iterative reconstruction algorithm for noise suppression in the narrower energy bins used in PCCT imaging. The algorithm is based on the framework of prior image constrained compressed sensing (PICCS) and is called spectral PICCS; it uses the full-spectrum image reconstructed using conventional filtered back-projection as the prior image. The spectral PICCS algorithm is implemented using a constrained optimization scheme with adaptive iterative step sizes such that only two tuning parameters are required in most cases. The algorithm was first evaluated using computer simulations, and then validated by both physical phantoms and in-vivo swine studies using a research PCCT system. Results from both computer-simulation and experimental studies showed substantial image noise reduction in narrow energy bins (43~73%) without sacrificing CT number accuracy or spatial resolution. PMID:27551878

  2. Spectral prior image constrained compressed sensing (spectral PICCS) for photon-counting computed tomography

    NASA Astrophysics Data System (ADS)

    Yu, Zhicong; Leng, Shuai; Li, Zhoubo; McCollough, Cynthia H.

    2016-09-01

    Photon-counting computed tomography (PCCT) is an emerging imaging technique that enables multi-energy imaging with only a single scan acquisition. To enable multi-energy imaging, the detected photons corresponding to the full x-ray spectrum are divided into several subgroups of bin data that correspond to narrower energy windows. Consequently, noise in each energy bin increases compared to the full-spectrum data. This work proposes an iterative reconstruction algorithm for noise suppression in the narrower energy bins used in PCCT imaging. The algorithm is based on the framework of prior image constrained compressed sensing (PICCS) and is called spectral PICCS; it uses the full-spectrum image reconstructed using conventional filtered back-projection as the prior image. The spectral PICCS algorithm is implemented using a constrained optimization scheme with adaptive iterative step sizes such that only two tuning parameters are required in most cases. The algorithm was first evaluated using computer simulations, and then validated by both physical phantoms and in vivo swine studies using a research PCCT system. Results from both computer-simulation and experimental studies showed substantial image noise reduction in narrow energy bins (43-73%) without sacrificing CT number accuracy or spatial resolution.

  3. Advances in Cross-Cutting Ideas for Computational Climate Science

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ng, Esmond; Evans, Katherine J.; Caldwell, Peter

    This report presents results from the DOE-sponsored workshop titled "Advancing X-Cutting Ideas for Computational Climate Science Workshop," known as AXICCS, held on September 12-13, 2016 in Rockville, MD. The workshop brought together experts in climate science, computational climate science, computer science, and mathematics to discuss interesting but unsolved science questions regarding climate modeling and simulation, promoted collaboration among the diverse scientists in attendance, and brainstormed about possible tools and capabilities that could be developed to help address them. Several research opportunities emerged from discussions at the workshop that the group felt could advance climate science significantly. These include (1) process-resolving models to provide insight into important processes and features of interest and to inform the development of advanced physical parameterizations, (2) a community effort to develop and provide integrated model credibility, (3) including, organizing, and managing increasingly connected model components that increase model fidelity as well as complexity, and (4) treating Earth system models as one interconnected organism without numerical or data-based boundaries that limit interactions. The group also identified several cross-cutting advances in mathematics, computer science, and computational science that would be needed to enable one or more of these big ideas. It is critical to address the need for organized, verified, and optimized software, which enables the models to grow and continue to provide solutions in which the community can have confidence. Effectively utilizing the newest computer hardware enables simulation efficiency and the ability to handle output from increasingly complex and detailed models. This will be accomplished through hierarchical multiscale algorithms in tandem with new strategies for data handling, analysis, and storage. These big ideas and cross-cutting technologies for enabling breakthrough climate simulation advancements also need the "glue" of outreach and learning across the scientific domains to be successful. The workshop identified several strategies to allow productive, continuous engagement across those who have a broad knowledge of the various angles of the problem. Specific ideas to foster education and tools to make material progress were discussed. Examples include follow-on cross-cutting meetings that enable unstructured discussions of the type this workshop fostered. A concerted effort is needed to recruit undergraduate and graduate students from all relevant domains and provide them with experience, training, and networking beyond their immediate expertise. This will broaden and expand their exposure to future needs and solutions, and provide a pipeline of scientists with a diversity of knowledge and know-how. Providing real-world experience with subject matter experts from multiple angles may also motivate the students to attack these problems and even come up with the missing solutions.

  4. Advances in Cross-Cutting Ideas for Computational Climate Science

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ng, E.; Evans, K.; Caldwell, P.

    This report presents results from the DOE-sponsored workshop titled "Advancing X-Cutting Ideas for Computational Climate Science Workshop," known as AXICCS, held on September 12-13, 2016 in Rockville, MD. The workshop brought together experts in climate science, computational climate science, computer science, and mathematics to discuss interesting but unsolved science questions regarding climate modeling and simulation, promoted collaboration among the diverse scientists in attendance, and brainstormed about possible tools and capabilities that could be developed to help address them. Several research opportunities emerged from discussions at the workshop that the group felt could advance climate science significantly. These include (1) process-resolving models to provide insight into important processes and features of interest and to inform the development of advanced physical parameterizations, (2) a community effort to develop and provide integrated model credibility, (3) including, organizing, and managing increasingly connected model components that increase model fidelity as well as complexity, and (4) treating Earth system models as one interconnected organism without numerical or data-based boundaries that limit interactions. The group also identified several cross-cutting advances in mathematics, computer science, and computational science that would be needed to enable one or more of these big ideas. It is critical to address the need for organized, verified, and optimized software, which enables the models to grow and continue to provide solutions in which the community can have confidence. Effectively utilizing the newest computer hardware enables simulation efficiency and the ability to handle output from increasingly complex and detailed models. This will be accomplished through hierarchical multiscale algorithms in tandem with new strategies for data handling, analysis, and storage. These big ideas and cross-cutting technologies for enabling breakthrough climate simulation advancements also need the "glue" of outreach and learning across the scientific domains to be successful. The workshop identified several strategies to allow productive, continuous engagement across those who have a broad knowledge of the various angles of the problem. Specific ideas to foster education and tools to make material progress were discussed. Examples include follow-on cross-cutting meetings that enable unstructured discussions of the type this workshop fostered. A concerted effort is needed to recruit undergraduate and graduate students from all relevant domains and provide them with experience, training, and networking beyond their immediate expertise. This will broaden and expand their exposure to future needs and solutions, and provide a pipeline of scientists with a diversity of knowledge and know-how. Providing real-world experience with subject matter experts from multiple angles may also motivate the students to attack these problems and even come up with the missing solutions.

  5. VASA: Interactive Computational Steering of Large Asynchronous Simulation Pipelines for Societal Infrastructure.

    PubMed

    Ko, Sungahn; Zhao, Jieqiong; Xia, Jing; Afzal, Shehzad; Wang, Xiaoyu; Abram, Greg; Elmqvist, Niklas; Kne, Len; Van Riper, David; Gaither, Kelly; Kennedy, Shaun; Tolone, William; Ribarsky, William; Ebert, David S

    2014-12-01

    We present VASA, a visual analytics platform consisting of a desktop application, a component model, and a suite of distributed simulation components for modeling the impact of societal threats such as weather, food contamination, and traffic on critical infrastructure such as supply chains, road networks, and power grids. Each component encapsulates a high-fidelity simulation model that together form an asynchronous simulation pipeline: a system of systems of individual simulations with a common data and parameter exchange format. At the heart of VASA is the Workbench, a visual analytics application providing three distinct features: (1) low-fidelity approximations of the distributed simulation components using local simulation proxies to enable analysts to interactively configure a simulation run; (2) computational steering mechanisms to manage the execution of individual simulation components; and (3) spatiotemporal and interactive methods to explore the combined results of a simulation run. We showcase the utility of the platform using examples involving supply chains during a hurricane as well as food contamination in a fast food restaurant chain.

  6. Acceleration of discrete stochastic biochemical simulation using GPGPU.

    PubMed

    Sumiyoshi, Kei; Hirata, Kazuki; Hiroi, Noriko; Funahashi, Akira

    2015-01-01

    For systems made up of a small number of molecules, such as a biochemical network in a single cell, a simulation requires a stochastic approach, instead of a deterministic approach. The stochastic simulation algorithm (SSA) simulates the stochastic behavior of a spatially homogeneous system. Since stochastic approaches produce different results each time they are used, multiple runs are required in order to obtain statistical results; this results in a large computational cost. We have implemented a parallel method for using SSA to simulate a stochastic model; the method uses a graphics processing unit (GPU), which enables multiple realizations at the same time, and thus reduces the computational time and cost. During the simulation, for the purpose of analysis, each time course is recorded at each time step. A straightforward implementation of this method on a GPU is about 16 times faster than a sequential simulation on a CPU with hybrid parallelization; each of the multiple simulations is run simultaneously, and the computational tasks within each simulation are parallelized. We also implemented an improvement to the memory access and reduced the memory footprint, in order to optimize the computations on the GPU. We also implemented an asynchronous data transfer scheme to accelerate the time course recording function. To analyze the acceleration of our implementation on various sizes of model, we performed SSA simulations on different model sizes and compared these computation times to those for sequential simulations with a CPU. When used with the improved time course recording function, our method was shown to accelerate the SSA simulation by a factor of up to 130.
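
    The direct-method SSA that such GPU implementations parallelize is short; the sketch below runs it for a hypothetical birth-death model and records the state on a fixed time grid, mirroring the time-course recording step described above. On the GPU each independent realization maps to one thread; here the realizations are simply looped over.

```python
import numpy as np

def ssa_birth_death(k_prod=10.0, k_deg=0.1, x0=0, t_end=50.0, n_record=51, seed=0):
    """Direct-method SSA for a birth-death process (0 -> X at rate k_prod,
    X -> 0 at rate k_deg * x), sampled on a fixed time grid (time-course recording)."""
    rng = np.random.default_rng(seed)
    t_grid = np.linspace(0.0, t_end, n_record)
    trace = np.empty(n_record, dtype=int)
    t, x, i = 0.0, x0, 0
    while i < n_record:
        a1, a2 = k_prod, k_deg * x              # reaction propensities
        a0 = a1 + a2                            # always > 0 because k_prod > 0
        t_next = t + rng.exponential(1.0 / a0)  # waiting time to the next event
        while i < n_record and t_grid[i] < t_next:
            trace[i] = x                        # state is constant between events
            i += 1
        t = t_next
        x += 1 if rng.random() < a1 / a0 else -1
    return t_grid, trace

# Independent realizations are embarrassingly parallel; the GPU implementation
# assigns one realization per thread, here we just loop and average.
runs = np.array([ssa_birth_death(seed=s)[1] for s in range(256)])
mean_trace = runs.mean(axis=0)
```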

  7. Acceleration of discrete stochastic biochemical simulation using GPGPU

    PubMed Central

    Sumiyoshi, Kei; Hirata, Kazuki; Hiroi, Noriko; Funahashi, Akira

    2015-01-01

    For systems made up of a small number of molecules, such as a biochemical network in a single cell, a simulation requires a stochastic approach, instead of a deterministic approach. The stochastic simulation algorithm (SSA) simulates the stochastic behavior of a spatially homogeneous system. Since stochastic approaches produce different results each time they are used, multiple runs are required in order to obtain statistical results; this results in a large computational cost. We have implemented a parallel method for using SSA to simulate a stochastic model; the method uses a graphics processing unit (GPU), which enables multiple realizations at the same time, and thus reduces the computational time and cost. During the simulation, for the purpose of analysis, each time course is recorded at each time step. A straightforward implementation of this method on a GPU is about 16 times faster than a sequential simulation on a CPU with hybrid parallelization; each of the multiple simulations is run simultaneously, and the computational tasks within each simulation are parallelized. We also implemented an improvement to the memory access and reduced the memory footprint, in order to optimize the computations on the GPU. We also implemented an asynchronous data transfer scheme to accelerate the time course recording function. To analyze the acceleration of our implementation on various sizes of model, we performed SSA simulations on different model sizes and compared these computation times to those for sequential simulations with a CPU. When used with the improved time course recording function, our method was shown to accelerate the SSA simulation by a factor of up to 130. PMID:25762936

  8. Faster and exact implementation of the continuous cellular automaton for anisotropic etching simulations

    NASA Astrophysics Data System (ADS)

    Ferrando, N.; Gosálvez, M. A.; Cerdá, J.; Gadea, R.; Sato, K.

    2011-02-01

    The current success of the continuous cellular automata for the simulation of anisotropic wet chemical etching of silicon in microengineering applications is based on a relatively fast, approximate, constant time stepping implementation (CTS), whose accuracy against the exact algorithm, a computationally slow, variable time stepping implementation (VTS), has not been previously analyzed in detail. In this study we show that the CTS implementation can generate moderately wrong etch rates and overall etching fronts, thus justifying the presentation of a novel, exact reformulation of the VTS implementation based on a new state variable, referred to as the predicted removal time (PRT), and the use of a self-balanced binary search tree that enables storage and efficient access to the PRT values in each time step in order to quickly remove the corresponding surface atom/s. The proposed PRT method reduces the simulation cost of the exact implementation from O(N^{5/3}) to O(N^{3/2} log N) without introducing any model simplifications. This enables more precise simulations (only limited by numerical precision errors) with affordable computational times that are similar to the less precise CTS implementation and even faster for low reactivity systems.
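
    A minimal sketch of the predicted-removal-time bookkeeping, with hypothetical atom identifiers: the paper uses a self-balanced binary search tree, while the version below uses Python's binary heap with lazy invalidation, which supports the two operations the algorithm needs, rescheduling an atom whose etch rate changed and popping the atom with the earliest removal time.

```python
import heapq
import itertools

class PRTQueue:
    """Keep surface atoms ordered by predicted removal time (PRT).
    A binary heap plus lazy invalidation stands in for the paper's
    self-balancing BST: both give O(log N) update and pop-minimum."""
    def __init__(self):
        self._heap = []
        self._entry = {}                     # atom id -> latest (prt, count, id) entry
        self._counter = itertools.count()    # tie-breaker for equal PRTs

    def update(self, atom_id, prt):
        """Insert an atom, or reschedule it after its etch rate changed."""
        entry = (prt, next(self._counter), atom_id)
        self._entry[atom_id] = entry
        heapq.heappush(self._heap, entry)

    def pop_next(self):
        """Return (prt, atom_id) of the next surface atom to be removed."""
        while self._heap:
            prt, count, atom_id = heapq.heappop(self._heap)
            if self._entry.get(atom_id) == (prt, count, atom_id):
                del self._entry[atom_id]     # skip stale (invalidated) entries
                return prt, atom_id
        raise IndexError("no surface atoms left")

q = PRTQueue()
q.update("atom_1", 0.8)
q.update("atom_2", 0.3)
q.update("atom_1", 0.2)      # re-estimated after a neighbouring atom was removed
print(q.pop_next())          # -> (0.2, 'atom_1')
```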

  9. HPC Annual Report 2017

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dennig, Yasmin

    Sandia National Laboratories has a long history of significant contributions to the high performance community and industry. Our innovative computer architectures allowed the United States to become the first to break the teraFLOP barrier—propelling us to the international spotlight. Our advanced simulation and modeling capabilities have been integral in high consequence US operations such as Operation Burnt Frost. Strong partnerships with industry leaders, such as Cray, Inc. and Goodyear, have enabled them to leverage our high performance computing (HPC) capabilities to gain a tremendous competitive edge in the marketplace. As part of our continuing commitment to providing modern computing infrastructure and systems in support of Sandia missions, we made a major investment in expanding Building 725 to serve as the new home of HPC systems at Sandia. Work is expected to be completed in 2018 and will result in a modern facility of approximately 15,000 square feet of computer center space. The facility will be ready to house the newest National Nuclear Security Administration/Advanced Simulation and Computing (NNSA/ASC) Prototype platform being acquired by Sandia, with delivery in late 2019 or early 2020. This new system will enable continuing advances by Sandia science and engineering staff in the areas of operating system R&D, operation cost effectiveness (power and innovative cooling technologies), user environment and application code performance.

  10. Haptic feedback improves surgeons' user experience and fracture reduction in facial trauma simulation.

    PubMed

    Girod, Sabine; Schvartzman, Sara C; Gaudilliere, Dyani; Salisbury, Kenneth; Silva, Rebeka

    2016-01-01

    Computer-assisted surgical (CAS) planning tools are available for craniofacial surgery, but are usually based on computer-aided design (CAD) tools that lack the ability to detect the collision of virtual objects (i.e., fractured bone segments). We developed a CAS system featuring a sense of touch (haptic) that enables surgeons to physically interact with individual, patient-specific anatomy and immerse in a three-dimensional virtual environment. In this study, we evaluated initial user experience with our novel system compared to an existing CAD system. Ten surgery resident trainees received a brief verbal introduction to both the haptic and CAD systems. Users simulated mandibular fracture reduction in three clinical cases within a 15 min time limit for each system and completed a questionnaire to assess their subjective experience. We compared standard landmarks and linear and angular measurements between the simulated results and the actual surgical outcome and found that haptic simulation results were not significantly different from actual postoperative outcomes. In contrast, CAD results significantly differed from both the haptic simulation and actual postoperative results. In addition to enabling a more accurate fracture repair, the haptic system provided a better user experience than the CAD system in terms of intuitiveness and self-reported quality of repair.

  11. Parallel spatial direct numerical simulations on the Intel iPSC/860 hypercube

    NASA Technical Reports Server (NTRS)

    Joslin, Ronald D.; Zubair, Mohammad

    1993-01-01

    The implementation and performance of a parallel spatial direct numerical simulation (PSDNS) approach on the Intel iPSC/860 hypercube are documented. The direct numerical simulation approach is used to compute spatially evolving disturbances associated with laminar-to-turbulent transition in boundary-layer flows. The feasibility of using the PSDNS on the hypercube to perform transition studies is examined. The results indicate that the direct numerical simulation approach can be parallelized effectively on a distributed-memory parallel machine. As the number of processors increases, nearly ideal linear speedups are achieved with nonoptimized routines; slower-than-linear speedups are achieved with optimized (machine-dependent library) routines, because the Fast Fourier Transform (FFT) routine dominates the computational cost and itself scales less than ideally. With the machine-dependent routines, however, the total computational cost decreases by a factor of 4 to 5 compared with standard FORTRAN routines. The computational cost increases linearly with spanwise, wall-normal, and streamwise grid refinements. The hypercube with 32 processors was estimated to require approximately twice the single-processor time of a Cray supercomputer to complete a comparable simulation; however, a parallel spatial large-eddy simulation (PSLES), which uses a subgrid-scale model to reduce the required number of grid points, is estimated to cut the computational cost and memory requirements by a factor of 10 relative to the PSDNS. This PSLES implementation would enable transition simulations on the hypercube at a reasonable computational cost.

  12. Computational Fluid Dynamics and Additive Manufacturing to Diagnose and Treat Cardiovascular Disease.

    PubMed

    Randles, Amanda; Frakes, David H; Leopold, Jane A

    2017-11-01

    Noninvasive engineering models are now being used for diagnosing and planning the treatment of cardiovascular disease. Techniques in computational modeling and additive manufacturing have matured concurrently, and results from simulations can inform and enable the design and optimization of therapeutic devices and treatment strategies. The emerging synergy between large-scale simulations and 3D printing is having a two-fold benefit: first, 3D printing can be used to validate the complex simulations, and second, the flow models can be used to improve treatment planning for cardiovascular disease. In this review, we summarize and discuss recent methods and findings for leveraging advances in both additive manufacturing and patient-specific computational modeling, with an emphasis on new directions in these fields and remaining open questions.

  13. GPU-accelerated depth map generation for X-ray simulations of complex CAD geometries

    NASA Astrophysics Data System (ADS)

    Grandin, Robert J.; Young, Gavin; Holland, Stephen D.; Krishnamurthy, Adarsh

    2018-04-01

    Interactive x-ray simulations of complex computer-aided design (CAD) models can provide valuable insights for better interpretation of defect signatures such as porosity in x-ray CT images. Generating the depth map along a particular direction for a given CAD geometry is the most compute-intensive step in x-ray simulations. We have developed a GPU-accelerated method for real-time generation of depth maps of complex CAD geometries. We preprocess complex components designed using commercial CAD systems with a custom CAD module and convert them into a fine user-defined surface tessellation. Our CAD module can be used by different simulators as well as handle complex geometries, including those that arise from complex castings and composite structures. We then make use of a parallel algorithm that runs on a graphics processing unit (GPU) to convert the finely tessellated CAD model to a voxelized representation. The voxelized representation can enable heterogeneous modeling of the volume enclosed by the CAD model by assigning heterogeneous material properties in specific regions. The depth maps are generated from this voxelized representation with the help of a GPU-accelerated ray-casting algorithm. The GPU-accelerated ray-casting method enables interactive (>60 frames per second) generation of the depth maps of complex CAD geometries. This enables arbitrary rotation and slicing of the CAD model, leading to better interpretation of the x-ray images by the user. In addition, the depth maps can be used to aid directly in CT reconstruction algorithms.
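
    For an axis-aligned viewing direction, the depth-map step reduces to summing occupied voxels along each ray; the sketch below does this with NumPy on a toy voxelized sphere (all sizes hypothetical). The GPU method described above instead casts one ray per thread and supports arbitrary viewing directions.

```python
import numpy as np

def depth_map(voxels, voxel_size=1.0):
    """Accumulate material thickness along the +z axis for each (x, y) pixel.
    voxels: boolean array (nx, ny, nz), True where the CAD model's material is.
    A GPU version assigns one ray (pixel) per thread; NumPy just sums axis 2."""
    return voxels.sum(axis=2) * voxel_size

# toy example: a solid sphere voxelized on a 64^3 grid, 0.5 mm voxels
n = 64
x, y, z = np.mgrid[:n, :n, :n]
sphere = (x - n / 2) ** 2 + (y - n / 2) ** 2 + (z - n / 2) ** 2 <= (n / 4) ** 2
dmap = depth_map(sphere, voxel_size=0.5)
print(dmap.max())   # roughly the sphere diameter in mm (~16)
```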

  14. MPI implementation of PHOENICS: A general purpose computational fluid dynamics code

    NASA Astrophysics Data System (ADS)

    Simunovic, S.; Zacharia, T.; Baltas, N.; Spalding, D. B.

    1995-03-01

    PHOENICS is a suite of computational analysis programs that are used for simulation of fluid flow, heat transfer, and dynamical reaction processes. The parallel version of the solver EARTH for the Computational Fluid Dynamics (CFD) program PHOENICS has been implemented using the Message Passing Interface (MPI) standard. Implementation of the MPI version of PHOENICS makes this computational tool portable to a wide range of parallel machines and enables the use of high performance computing for large scale computational simulations. MPI libraries are available on several parallel architectures, making the program usable across different architectures as well as on heterogeneous computer networks. The Intel Paragon NX and MPI versions of the program have been developed and tested on the massively parallel supercomputers Intel Paragon XP/S 5, XP/S 35, and Kendall Square Research, and on the multiprocessor SGI Onyx computer at Oak Ridge National Laboratory. The preliminary testing results of the developed program have shown scalable performance for reasonably sized computational domains.
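
    Not PHOENICS code, but a minimal mpi4py sketch of the message-passing pattern a domain-decomposed solver relies on: each rank owns part of the domain, computes a local quantity, and the ranks combine it with a collective reduction. The array sizes and the quantity being reduced are hypothetical.

```python
# Minimal MPI pattern of the kind a domain-decomposed CFD solver depends on.
# Run with something like: mpiexec -n 4 python this_script.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_global = 1_000_000
n_local = n_global // size                       # this rank's share of the cells (assumes even split)
local_field = np.random.default_rng(rank).random(n_local)

# each rank computes its local contribution, then all ranks sum it
local_residual = np.array([np.abs(local_field).sum()])
global_residual = np.zeros(1)
comm.Allreduce(local_residual, global_residual, op=MPI.SUM)

if rank == 0:
    print(f"{size} ranks, global residual = {global_residual[0]:.3e}")
```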

  15. MPI implementation of PHOENICS: A general purpose computational fluid dynamics code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simunovic, S.; Zacharia, T.; Baltas, N.

    1995-04-01

    PHOENICS is a suite of computational analysis programs that are used for simulation of fluid flow, heat transfer, and dynamical reaction processes. The parallel version of the solver EARTH for the Computational Fluid Dynamics (CFD) program PHOENICS has been implemented using the Message Passing Interface (MPI) standard. Implementation of the MPI version of PHOENICS makes this computational tool portable to a wide range of parallel machines and enables the use of high performance computing for large scale computational simulations. MPI libraries are available on several parallel architectures, making the program usable across different architectures as well as on heterogeneous computer networks. The Intel Paragon NX and MPI versions of the program have been developed and tested on the massively parallel supercomputers Intel Paragon XP/S 5, XP/S 35, and Kendall Square Research, and on the multiprocessor SGI Onyx computer at Oak Ridge National Laboratory. The preliminary testing results of the developed program have shown scalable performance for reasonably sized computational domains.

  16. Self-consistent field theory simulations of polymers on arbitrary domains

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ouaknin, Gaddiel, E-mail: gaddielouaknin@umail.ucsb.edu; Laachi, Nabil; Delaney, Kris

    2016-12-15

    We introduce a framework for simulating the mesoscale self-assembly of block copolymers in arbitrary confined geometries subject to Neumann boundary conditions. We employ a hybrid finite difference/volume approach to discretize the mean-field equations on an irregular domain represented implicitly by a level-set function. The numerical treatment of the Neumann boundary conditions is sharp, i.e. it avoids an artificial smearing in the irregular domain boundary. This strategy enables the study of self-assembly in confined domains and enables the computation of physically meaningful quantities at the domain interface. In addition, we employ adaptive grids encoded with Quad-/Oc-trees in parallel to automatically refine the grid where the statistical fields vary rapidly as well as at the boundary of the confined domain. This approach results in a significant reduction in the number of degrees of freedom and makes the simulations in arbitrary domains using effective boundary conditions computationally efficient in terms of both speed and memory requirement. Finally, in the case of regular periodic domains, where pseudo-spectral approaches are superior to finite differences in terms of CPU time and accuracy, we use the adaptive strategy to store chain propagators, reducing the memory footprint without loss of accuracy in computed physical observables.

  17. eLoom and Flatland: specification, simulation and visualization engines for the study of arbitrary hierarchical neural architectures.

    PubMed

    Caudell, Thomas P; Xiao, Yunhai; Healy, Michael J

    2003-01-01

    eLoom is an open source graph simulation software tool, developed at the University of New Mexico (UNM), that enables users to specify and simulate neural network models. Its specification language and libraries enable users to construct and simulate arbitrary, potentially hierarchical network structures on serial and parallel processing systems. In addition, eLoom is integrated with UNM's Flatland, an open source virtual environments development tool, to provide real-time visualizations of the network structure and activity. Visualization is a useful method for understanding both learning and computation in artificial neural networks. Through 3D animated pictorial representations of the state and flow of information in the network, a better understanding of network functionality is achieved. ART-1, LAPART-II, MLP, and SOM neural networks are presented to illustrate eLoom and Flatland's capabilities.

  18. Discerning Trends in Performance Across Multiple Events

    NASA Technical Reports Server (NTRS)

    Slater, Simon; Hiltz, Mike; Rice, Craig

    2006-01-01

    Mass Data is a computer program that enables rapid, easy discernment of trends in performance data across multiple flights and ground tests. The program can perform Fourier analysis and other functions for the purposes of frequency analysis and trending of all variables. These functions facilitate identification of past use of diagnosed systems and of anomalies in such systems, and enable rapid assessment of related current problems. Many variables, for computation of which it is usually necessary to perform extensive manual manipulation of raw downlist data, are automatically computed and made available to all users, regularly eliminating the need for what would otherwise be an extensive amount of engineering analysis. Data from flight, ground test, and simulation are preprocessed and stored in one central location for instantaneous access and comparison for diagnostic and trending purposes. Rules are created so that an event log is created for every flight, making it easy to locate information on similar maneuvers across many flights. The same rules can be created for test sets and simulations, and are searchable, so that information on like events is easily accessible.

  19. Simulating Technology Processes to Foster Learning.

    ERIC Educational Resources Information Center

    Krumholtz, Nira

    1998-01-01

    Based on a spiral model of technology evolution, elementary students used LOGO computer software to become both developers and users of technology. The computerized environment enabled 87% to reach intuitive understanding of physical concepts; 24% expressed more formal scientific understanding. (SK)

  20. GPU-based Green's function simulations of shear waves generated by an applied acoustic radiation force in elastic and viscoelastic models.

    PubMed

    Yang, Yiqun; Urban, Matthew W; McGough, Robert J

    2018-05-15

    Shear wave calculations induced by an acoustic radiation force are very time-consuming on desktop computers, and high-performance graphics processing units (GPUs) achieve dramatic reductions in the computation time for these simulations. The acoustic radiation force is calculated using the fast near field method and the angular spectrum approach, and then the shear waves are calculated in parallel with Green's functions on a GPU. This combination enables rapid evaluation of shear waves for push beams with different spatial samplings and for apertures with different f/#. Relative to shear wave simulations that evaluate the same algorithm on an Intel i7 desktop computer, a high performance nVidia GPU reduces the time required for these calculations by factors of 45 and 700 when applied to elastic and viscoelastic shear wave simulation models, respectively. These GPU-accelerated simulations were also compared to measurements in different viscoelastic phantoms, and the results are similar. For parametric evaluations and for comparisons with measured shear wave data, shear wave simulations with the Green's function approach are ideally suited for high-performance GPUs.
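
    A highly simplified sketch of the superposition idea: the displacement at an observation point is accumulated as delayed, geometrically attenuated contributions from every point of the push beam. The pulse shape, shear speed, and attenuation law below are stand-ins; the elastic and viscoelastic Green's functions used in the paper are considerably more involved.

```python
import numpy as np

def shear_wave_response(src_pts, src_amps, obs_pt, t, c_s=2.0):
    """Toy Green's-function superposition: each radiation-force source point
    contributes a delayed, 1/r-attenuated pulse at the observation point.
    src_pts: (N, 3) source positions [mm]; src_amps: (N,) force amplitudes;
    t: (M,) time axis [ms]; c_s: shear wave speed [mm/ms] (assumed value)."""
    r = np.linalg.norm(src_pts - obs_pt, axis=1)            # source-observer distances
    delays = r / c_s                                         # travel times
    pulse = lambda tau: np.exp(-0.5 * (tau / 0.1) ** 2)      # stand-in wavelet, 0.1 ms wide
    return sum(a / max(ri, 1e-6) * pulse(t - d)              # superpose all sources
               for a, ri, d in zip(src_amps, r, delays))

# toy push beam: five source points along the beam axis, observer 5 mm lateral
src = np.array([[0.0, 0.0, z] for z in np.linspace(-2.0, 2.0, 5)])
amps = np.ones(len(src))
t = np.linspace(0.0, 6.0, 600)
u = shear_wave_response(src, amps, np.array([5.0, 0.0, 0.0]), t)
print(f"peak displacement (arbitrary units) at t = {t[u.argmax()]:.2f} ms")
```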

  1. Materials by numbers: Computations as tools of discovery

    PubMed Central

    Landman, Uzi

    2005-01-01

    Current issues pertaining to theoretical simulations of materials, with a focus on systems of nanometer-scale dimensions, are discussed. The use of atomistic simulations as high-resolution numerical experiments, enabling and guiding formulation and testing of analytic theoretical descriptions, is demonstrated through studies of the generation and breakup of nanojets, which have led to the derivation of a stochastic hydrodynamic description. Subsequently, I illustrate the use of computations and simulations as tools of discovery, with examples that include the self-organized formation of nanowires, the surprising nanocatalytic activity of small aggregates of gold that, in the bulk form, is notorious for being chemically inert, and the emergence of rotating electron molecules in two-dimensional quantum dots. I conclude with a brief discussion of some key challenges in nanomaterials simulations. PMID:15870210

  2. Parallel computing in enterprise modeling.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goldsby, Michael E.; Armstrong, Robert C.; Shneider, Max S.

    2008-08-01

    This report presents the results of our efforts to apply high-performance computing to entity-based simulations with a multi-use plugin for parallel computing. We use the term 'entity-based simulation' to describe a class of simulation which includes both discrete event simulation and agent based simulation. What simulations of this class share, and what differs from more traditional models, is that the result sought is emergent from a large number of contributing entities. Logistic, economic and social simulations are members of this class where things or people are organized or self-organize to produce a solution. Entity-based problems never have an a priori ergodic principle that will greatly simplify calculations. Because the results of entity-based simulations can only be realized at scale, scalable computing is de rigueur for large problems. Having said that, the absence of a spatial organizing principle makes the decomposition of the problem onto processors problematic. In addition, practitioners in this domain commonly use the Java programming language, which presents its own problems in a high-performance setting. The plugin we have developed, called the Parallel Particle Data Model, overcomes both of these obstacles and is now being used by two Sandia frameworks: the Decision Analysis Center, and the Seldon social simulation facility. While the ability to engage U.S.-sized problems is now available to the Decision Analysis Center, this plugin is central to the success of Seldon. Because Seldon relies on computationally intensive cognitive sub-models, this work is necessary to achieve the scale necessary for realistic results. With the recent upheavals in the financial markets, and the inscrutability of terrorist activity, this simulation domain will likely need a capability with ever greater fidelity. High-performance computing will play an important part in enabling that greater fidelity.

  3. Rapid communication: Computational simulation and analysis of a candidate for the design of a novel silk-based biopolymer.

    PubMed

    Golas, Ewa I; Czaplewski, Cezary

    2014-09-01

    This work theoretically investigates the mechanical properties of a novel silk-derived biopolymer as polymerized in silico from sericin and elastin-like monomers. Molecular Dynamics simulations and Steered Molecular Dynamics were the principal computational methods used, the latter of which applies an external force onto the system and thereby enables an observation of its response to stress. The models explored herein are single-molecule approximations, and primarily serve as tools in a rational design process for the preliminary assessment of properties in a new material candidate. © 2014 Wiley Periodicals, Inc.

  4. A new ChainMail approach for real-time soft tissue simulation.

    PubMed

    Zhang, Jinao; Zhong, Yongmin; Smith, Julian; Gu, Chengfan

    2016-07-03

    This paper presents a new ChainMail method for real-time soft tissue simulation. This method enables the use of different material properties for chain elements to accommodate various materials. Based on the ChainMail bounding region, a new time-saving scheme is developed to improve computational efficiency for isotropic materials. The proposed method also conserves volume and strain energy. Experimental results demonstrate that the proposed ChainMail method can not only accommodate isotropic, anisotropic and heterogeneous materials but also model incompressibility and relaxation behaviors of soft tissues. Further, the proposed method can achieve real-time computational performance.
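
    A minimal one-dimensional ChainMail sketch with a single pair of stretch/compression limits (the paper's contribution is precisely to let these limits differ per element and to add volume and strain-energy conservation): dragging one node propagates displacements outward only as far as the link constraints require.

```python
from collections import deque

def chainmail_1d(x, moved_idx, new_pos, d_min=0.8, d_max=1.2):
    """Minimal 1-D ChainMail: drag one node and propagate the displacement so
    every link length stays within [d_min, d_max]. Making d_min/d_max per-element
    would give the heterogeneous-material behaviour described in the paper."""
    x = list(x)
    x[moved_idx] = new_pos
    queue = deque([moved_idx])
    while queue:
        i = queue.popleft()
        for j in (i - 1, i + 1):                  # check both chain neighbours
            if 0 <= j < len(x):
                d = x[j] - x[i]
                # admissible signed separation depends on which side j is on
                lo, hi = (d_min, d_max) if j > i else (-d_max, -d_min)
                if d < lo:
                    x[j] = x[i] + lo              # pull/push neighbour minimally
                    queue.append(j)
                elif d > hi:
                    x[j] = x[i] + hi
                    queue.append(j)
    return x

print(chainmail_1d([0.0, 1.0, 2.0, 3.0], moved_idx=0, new_pos=1.5))
# -> [1.5, 2.3, 3.1, 3.9]: the drag compresses each link to its minimum length
```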

  5. Probabilistic Inference in General Graphical Models through Sampling in Stochastic Networks of Spiking Neurons

    PubMed Central

    Pecevski, Dejan; Buesing, Lars; Maass, Wolfgang

    2011-01-01

    An important open problem of computational neuroscience is the generic organization of computations in networks of neurons in the brain. We show here through rigorous theoretical analysis that inherent stochastic features of spiking neurons, in combination with simple nonlinear computational operations in specific network motifs and dendritic arbors, enable networks of spiking neurons to carry out probabilistic inference through sampling in general graphical models. In particular, it enables them to carry out probabilistic inference in Bayesian networks with converging arrows (“explaining away”) and with undirected loops, that occur in many real-world tasks. Ubiquitous stochastic features of networks of spiking neurons, such as trial-to-trial variability and spontaneous activity, are necessary ingredients of the underlying computational organization. We demonstrate through computer simulations that this approach can be scaled up to neural emulations of probabilistic inference in fairly large graphical models, yielding some of the most complex computations that have been carried out so far in networks of spiking neurons. PMID:22219717

  6. GROMACS 4:  Algorithms for Highly Efficient, Load-Balanced, and Scalable Molecular Simulation.

    PubMed

    Hess, Berk; Kutzner, Carsten; van der Spoel, David; Lindahl, Erik

    2008-03-01

    Molecular simulation is an extremely useful, but computationally very expensive tool for studies of chemical and biomolecular systems. Here, we present a new implementation of our molecular simulation toolkit GROMACS which now both achieves extremely high performance on single processors from algorithmic optimizations and hand-coded routines and simultaneously scales very well on parallel machines. The code encompasses a minimal-communication domain decomposition algorithm, full dynamic load balancing, a state-of-the-art parallel constraint solver, and efficient virtual site algorithms that allow removal of hydrogen atom degrees of freedom to enable integration time steps up to 5 fs for atomistic simulations also in parallel. To improve the scaling properties of the common particle mesh Ewald electrostatics algorithms, we have in addition used a Multiple-Program, Multiple-Data approach, with separate node domains responsible for direct and reciprocal space interactions. Not only does this combination of algorithms enable extremely long simulations of large systems but also it provides that simulation performance on quite modest numbers of standard cluster nodes.

  7. Why not make a PC cluster of your own? 5. AppleSeed: A Parallel Macintosh Cluster for Scientific Computing

    NASA Astrophysics Data System (ADS)

    Decyk, Viktor K.; Dauger, Dean E.

    We have constructed a parallel cluster consisting of Apple Macintosh G4 computers running both Classic Mac OS as well as the Unix-based Mac OS X, and have achieved very good performance on numerically intensive, parallel plasma particle-in-cell simulations. Unlike other Unix-based clusters, no special expertise in operating systems is required to build and run the cluster. This enables us to move parallel computing from the realm of experts to the mainstream of computing.

  8. GillesPy: A Python Package for Stochastic Model Building and Simulation.

    PubMed

    Abel, John H; Drawert, Brian; Hellander, Andreas; Petzold, Linda R

    2016-09-01

    GillesPy is an open-source Python package for model construction and simulation of stochastic biochemical systems. GillesPy consists of a Python framework for model building and an interface to the StochKit2 suite of efficient simulation algorithms based on the Gillespie stochastic simulation algorithms (SSA). To enable intuitive model construction and seamless integration into the scientific Python stack, we present an easy to understand, action-oriented programming interface. Here, we describe the components of this package and provide a detailed example relevant to the computational biology community.

  9. A Standard Kinematic Model for Flight Simulation at NASA Ames

    NASA Technical Reports Server (NTRS)

    Mcfarland, R. E.

    1975-01-01

    A standard kinematic model for aircraft simulation exists at NASA-Ames on a variety of computer systems, one of which is used to control the flight simulator for advanced aircraft (FSAA). The derivation of the kinematic model is given and various mathematical relationships are presented as a guide. These include descriptions of standardized simulation subsystems such as the atmospheric turbulence model and the generalized six-degrees-of-freedom trim routine, as well as an introduction to the emulative batch-processing system which enables this facility to optimize its real-time environment.

  10. GillesPy: A Python Package for Stochastic Model Building and Simulation

    PubMed Central

    Abel, John H.; Drawert, Brian; Hellander, Andreas; Petzold, Linda R.

    2017-01-01

    GillesPy is an open-source Python package for model construction and simulation of stochastic biochemical systems. GillesPy consists of a Python framework for model building and an interface to the StochKit2 suite of efficient simulation algorithms based on the Gillespie stochastic simulation algorithms (SSA). To enable intuitive model construction and seamless integration into the scientific Python stack, we present an easy to understand, action-oriented programming interface. Here, we describe the components of this package and provide a detailed example relevant to the computational biology community. PMID:28630888

  11. Improved haptic interface for colonoscopy simulation.

    PubMed

    Woo, Hyun Soo; Kim, Woo Seok; Ahn, Woojin; Lee, Doo Yong; Yi, Sun Young

    2007-01-01

    This paper presents an improved haptic interface of the KAIST-Ewha colonoscopy simulator II. The haptic interface enables the distal portion of the colonoscope to be freely bent while guaranteeing enough workspace and reflective forces for colonoscopy simulation. Its force-torque sensor measures profiles of the user. Manipulation of the colonoscope tip is monitored by four deflection sensors, and triggers computation to render accurate graphic images corresponding to the angle knob rotation. Tack switches are attached on the valve-actuation buttons of the colonoscope to simulate air-injection or suction, and the corresponding deformation of the colon.

  12. Existing Fortran interfaces to Trilinos in preparation for exascale ForTrilinos development

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Evans, Katherine J.; Young, Mitchell T.; Collins, Benjamin S.

    This report summarizes the current state of Fortran interfaces to the Trilinos library within several key applications of the Exascale Computing Program (ECP), with the aim of informing developers about strategies to develop ForTrilinos, an exascale-ready, Fortran interface software package within Trilinos. The two software projects assessed within are the DOE Office of Science's Accelerated Climate Model for Energy (ACME) atmosphere component, CAM, and the DOE Office of Nuclear Energy's core-simulator portion of VERA, a nuclear reactor simulation code. Trilinos is an object-oriented, C++ based software project, and spans a collection of algorithms and other enabling technologies such as uncertainty quantification and mesh generation. To date, Trilinos has enabled these codes to achieve large-scale simulation results; however, the simulation needs of CAM and VERA-CS will approach exascale over the next five years. A Fortran interface to Trilinos that enables efficient use of programming models and more advanced algorithms is necessary. Where appropriate, the needs of the CAM and VERA-CS software to achieve their simulation goals are called out specifically. With this report, a design document and execution plan for ForTrilinos development can proceed.

  13. The change in critical technologies for computational physics

    NASA Technical Reports Server (NTRS)

    Watson, Val

    1990-01-01

    It is noted that the types of technology required for computational physics are changing as the field matures. Emphasis has shifted from computer technology to algorithm technology and, finally, to visual analysis technology as areas of critical research for this field. High-performance graphical workstations tied to a supercomputer with high-speed communications, along with the development of specially tailored visualization software, have enabled analysis of highly complex fluid-dynamics simulations. Particular reference is made here to the development of visual analysis tools at NASA's Numerical Aerodynamic Simulation Facility. The next technology this field requires is one that would eliminate visual clutter by extracting key features of simulations of physics and technology in order to create displays that clearly portray these key features. Research in the tuning of visual displays to human cognitive abilities is proposed. The immediate transfer of technology to all levels of computers, specifically the inclusion of visualization primitives in basic software developments for all workstations and PCs, is recommended.

  14. The Numerical Propulsion System Simulation: An Overview

    NASA Technical Reports Server (NTRS)

    Lytle, John K.

    2000-01-01

    Advances in computational technology and in physics-based modeling are making large-scale, detailed simulations of complex systems possible within the design environment. For example, the integration of computing, communications, and aerodynamics has reduced the time required to analyze major propulsion system components from days and weeks to minutes and hours. This breakthrough has enabled the detailed simulation of major propulsion system components to become a routine part of designing systems, providing the designer with critical information about the components early in the design process. This paper describes the development of the numerical propulsion system simulation (NPSS), a modular and extensible framework for the integration of multicomponent and multidisciplinary analysis tools using geographically distributed resources such as computing platforms, data bases, and people. The analysis is currently focused on large-scale modeling of complete aircraft engines. This will provide the product developer with a "virtual wind tunnel" that will reduce the number of hardware builds and tests required during the development of advanced aerospace propulsion systems.

  15. Design of a dynamic optical tissue phantom to model extravasation pharmacokinetics

    NASA Astrophysics Data System (ADS)

    Zhang, Jane Y.; Ergin, Aysegul; Andken, Kerry Lee; Sheng, Chao; Bigio, Irving J.

    2010-02-01

    We describe an optical tissue phantom that enables the simulation of drug extravasation from microvessels and validates computational compartmental models of drug delivery. The phantom consists of a microdialysis tubing bundle to simulate the permeable blood vessels, immersed in either an aqueous suspension of titanium dioxide (TiO2) or an agarose scattering medium mixed with TiO2. Drug administration is represented by a dye circulated through this porous microdialysis tubing bundle. Optical pharmacokinetic (OP) methods are used to measure changes in the absorption coefficient of the scattering medium due to the arrival and diffusion of the dye. We have established particle size-dependent concentration profiles over time of phantom drug delivery by intravenous (IV) and intra-arterial (IA) routes. Additionally, pharmacokinetic compartmental models are implemented in computer simulations for the conditions studied within the phantom. The simulated concentration-time profiles agree well with measurements from the phantom. The results are encouraging for future optical pharmacokinetic method development, both physical and computational, to understand drug extravasation under various physiological conditions.
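
    A hedged sketch of the kind of compartmental model the phantom is meant to validate: two compartments (vessel lumen and surrounding tissue) exchanging dye by extravasation and back-flux, with elimination from the vessel. All rate constants are hypothetical.

```python
import numpy as np
from scipy.integrate import odeint

def two_compartment(y, t, k_in, k_out, k_elim):
    """dC/dt for vessel (c_v) and tissue (c_t) dye concentrations:
    extravasation vessel -> tissue, back-flux tissue -> vessel,
    and systemic elimination from the vessel."""
    c_v, c_t = y
    dc_v = -k_in * c_v + k_out * c_t - k_elim * c_v
    dc_t = k_in * c_v - k_out * c_t
    return [dc_v, dc_t]

t = np.linspace(0.0, 60.0, 601)              # minutes
c0 = [1.0, 0.0]                              # bolus arrives in the vessel compartment
k_in, k_out, k_elim = 0.08, 0.02, 0.05       # hypothetical rate constants (1/min)
c_v, c_t = odeint(two_compartment, c0, t, args=(k_in, k_out, k_elim)).T
print(f"peak tissue concentration {c_t.max():.3f} at t = {t[c_t.argmax()]:.1f} min")
```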

  16. 'Towers in the Tempest' Computer Animation Submission

    NASA Technical Reports Server (NTRS)

    Shirah, Greg

    2008-01-01

    The following describes a computer animation that has been submitted to the ACM/SIGGRAPH 2008 computer graphics conference: 'Towers in the Tempest' clearly communicates recent scientific research into how hurricanes intensify. This intensification can be caused by a phenomenon called a 'hot tower.' For the first time, research meteorologists have run complex atmospheric simulations at a very fine temporal resolution of 3 minutes. Combining this simulation data with satellite observations enables detailed study of 'hot towers.' The science of 'hot towers' is described using satellite observation data, conceptual illustrations, and volumetric atmospheric simulation data. The movie starts by showing a 'hot tower' observed in three-dimensional precipitation radar data of Hurricane Bonnie from NASA's Tropical Rainfall Measuring Mission (TRMM) spacecraft. Next, the dynamics of a hurricane and the formation of 'hot towers' are briefly explained using conceptual illustrations. Finally, volumetric cloud, wind, and vorticity data from a supercomputer simulation of Hurricane Bonnie are shown using volume techniques such as ray marching.

  17. Opacplot2: Enabling tabulated EoS and opacity compatibility for HEDLP simulations with the FLASH code

    NASA Astrophysics Data System (ADS)

    Laune, Jordan; Tzeferacos, Petros; Feister, Scott; Fatenejad, Milad; Yurchak, Roman; Flocke, Norbert; Weide, Klaus; Lamb, Donald

    2017-10-01

    Thermodynamic and opacity properties of materials are necessary to accurately simulate laser-driven laboratory experiments. Such data are compiled in tabular format since the thermodynamic range that needs to be covered cannot be described with one single theoretical model. Moreover, tabulated data can be made available prior to runtime, reducing both compute cost and code complexity. This approach is employed by the FLASH code. Equation of state (EoS) and opacity data come in various formats, matrix layouts, and file structures. We discuss recent developments on opacplot2, an open-source Python module that manipulates tabulated EoS and opacity data. We present software that builds upon opacplot2 and enables easy-to-use conversion of different table formats into the IONMIX format, the native tabular input used by FLASH. Our work enables FLASH users to take advantage of a wider range of accurate EoS and opacity tables in simulating HEDLP experiments at the National Laser User Facilities.

  18. Simulating Operation of a Large Turbofan Engine

    NASA Technical Reports Server (NTRS)

    Litt, Jonathan S.; Frederick, Dean K.; DeCastro, Jonathan

    2008-01-01

    The Commercial Modular Aero-Propulsion System Simulation (C-MAPSS) is a computer program for simulating transient operation of a commercial turbofan engine that can generate as much as 90,000 lb (about 0.4 MN) of thrust. It includes a power-management system that enables simulation of open- or closed-loop engine operation over a wide range of thrust levels throughout the full range of flight conditions. C-MAPSS provides the user with a set of tools for performing open- and closed-loop transient simulations and comparison of linear and non-linear models throughout its operating envelope, in an easy-to-use graphical environment.

  19. Simulation Experiment Description Markup Language (SED-ML) Level 1 Version 2.

    PubMed

    Bergmann, Frank T; Cooper, Jonathan; Le Novère, Nicolas; Nickerson, David; Waltemath, Dagmar

    2015-09-04

    The number, size and complexity of computational models of biological systems are growing at an ever increasing pace. It is imperative to build on existing studies by reusing and adapting existing models and parts thereof. The description of the structure of models is not sufficient to enable the reproduction of simulation results. One also needs to describe the procedures the models are subjected to, as recommended by the Minimum Information About a Simulation Experiment (MIASE) guidelines. This document presents Level 1 Version 2 of the Simulation Experiment Description Markup Language (SED-ML), a computer-readable format for encoding simulation and analysis experiments to apply to computational models. SED-ML files are encoded in the Extensible Markup Language (XML) and can be used in conjunction with any XML-based model encoding format, such as CellML or SBML. A SED-ML file includes details of which models to use, how to modify them prior to executing a simulation, which simulation and analysis procedures to apply, which results to extract and how to present them. Level 1 Version 2 extends the format by allowing the encoding of repeated and chained procedures.
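
    For orientation, a SED-ML file is plain XML and can be assembled with standard tools; the sketch below builds a minimal model/simulation/task skeleton. The element and attribute names follow my reading of the SED-ML Level 1 schema and should be checked against the specification before use; the namespace URI, file names, and identifiers are assumptions.

```python
# Illustrative only: a minimal SED-ML-like skeleton built with the standard library.
import xml.etree.ElementTree as ET

NS = "http://sed-ml.org/sed-ml/level1/version2"          # namespace URI assumed; verify against the spec
sed = ET.Element("sedML", xmlns=NS, level="1", version="2")

models = ET.SubElement(sed, "listOfModels")
ET.SubElement(models, "model", id="model1",
              language="urn:sedml:language:sbml", source="oscillator.xml")

sims = ET.SubElement(sed, "listOfSimulations")
ET.SubElement(sims, "uniformTimeCourse", id="sim1", initialTime="0",
              outputStartTime="0", outputEndTime="100", numberOfPoints="1000")

tasks = ET.SubElement(sed, "listOfTasks")
ET.SubElement(tasks, "task", id="task1",
              modelReference="model1", simulationReference="sim1")

# write the experiment description to disk (hypothetical file name)
ET.ElementTree(sed).write("experiment.sedml", xml_declaration=True, encoding="utf-8")
```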

  20. Simulation Experiment Description Markup Language (SED-ML) Level 1 Version 2.

    PubMed

    Bergmann, Frank T; Cooper, Jonathan; Le Novère, Nicolas; Nickerson, David; Waltemath, Dagmar

    2015-06-01

    The number, size and complexity of computational models of biological systems are growing at an ever increasing pace. It is imperative to build on existing studies by reusing and adapting existing models and parts thereof. The description of the structure of models is not sufficient to enable the reproduction of simulation results. One also needs to describe the procedures the models are subjected to, as recommended by the Minimum Information About a Simulation Experiment (MIASE) guidelines. This document presents Level 1 Version 2 of the Simulation Experiment Description Markup Language (SED-ML), a computer-readable format for encoding simulation and analysis experiments to apply to computational models. SED-ML files are encoded in the Extensible Markup Language (XML) and can be used in conjunction with any XML-based model encoding format, such as CellML or SBML. A SED-ML file includes details of which models to use, how to modify them prior to executing a simulation, which simulation and analysis procedures to apply, which results to extract and how to present them. Level 1 Version 2 extends the format by allowing the encoding of repeated and chained procedures.

  1. Validation of the AVM Blast Computational Modeling and Simulation Tool Set

    DTIC Science & Technology

    2015-08-04

    by-construction" methodology is powerful and would not be possible without high -level design languages to support validation and verification. [1,4...to enable the making of informed design decisions.  Enable rapid exploration of the design trade-space for high -fidelity requirements tradeoffs...live-fire tests, the jump height of the target structure is recorded by using either high speed cameras or a string pot. A simple projectile motion

  2. Bridging the Gap: Linking Simulation and Testing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krajewski, Paul E.; Carsley, John; Stoudt, Mark R.

    2012-09-01

    The Materials Genome Initiative (MGI), a key enabler for the Advanced Manufacturing Partnership announced in 2011 by U.S. President Barack Obama, was established to accelerate the development and deployment of advanced materials. The MGI is driven by the need to "bridge the gap" between (I) experimental results and computational analysis, to enable the rapid development and validation of new materials, and (II) the processes required to convert these materials into useable goods.

  3. Cellular automata-based modelling and simulation of biofilm structure on multi-core computers.

    PubMed

    Skoneczny, Szymon

    2015-01-01

    The article presents a mathematical model of biofilm growth for aerobic biodegradation of a toxic carbonaceous substrate. Modelling of biofilm growth has fundamental significance in numerous processes of biotechnology and mathematical modelling of bioreactors. The process following double-substrate kinetics with substrate inhibition proceeding in a biofilm has not been modelled so far by means of cellular automata. Each process in the model proposed, i.e. diffusion of substrates, uptake of substrates, growth and decay of microorganisms and biofilm detachment, is simulated in a discrete manner. It was shown that for flat biofilm of constant thickness, the results of the presented model agree with those of a continuous model. The primary outcome of the study was to propose a mathematical model of biofilm growth; however a considerable amount of focus was also placed on the development of efficient algorithms for its solution. Two parallel algorithms were created, differing in the way computations are distributed. Computer programs were created using OpenMP Application Programming Interface for C++ programming language. Simulations of biofilm growth were performed on three high-performance computers. Speed-up coefficients of computer programs were compared. Both algorithms enabled a significant reduction of computation time. It is important, inter alia, in modelling and simulation of bioreactor dynamics.
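
    A minimal two-dimensional sketch of the discrete update such a model alternates (all parameters hypothetical): an explicit substrate-diffusion step followed by biomass growth with Haldane substrate-inhibition kinetics and the corresponding substrate uptake. The paper's model additionally handles a second substrate, decay, and detachment, and is parallelized with OpenMP in C++.

```python
import numpy as np

def ca_step(substrate, biomass, D=0.2, mu_max=0.05, K_s=0.5, K_i=5.0, Y=0.5):
    """One discrete update: (1) explicit diffusion of substrate on the grid,
    (2) biomass growth with Haldane (substrate-inhibition) kinetics and uptake."""
    s = substrate
    lap = (np.roll(s, 1, 0) + np.roll(s, -1, 0) +
           np.roll(s, 1, 1) + np.roll(s, -1, 1) - 4 * s)   # 5-point Laplacian, periodic BC
    s = s + D * lap                                         # diffusion step
    mu = mu_max * s / (K_s + s + s ** 2 / K_i)              # Haldane growth rate
    growth = mu * biomass
    biomass = biomass + growth
    s = np.clip(s - growth / Y, 0.0, None)                  # substrate consumed by growth
    return s, biomass

s = np.ones((64, 64))                 # initial substrate field
b = np.zeros((64, 64))
b[32, 32] = 0.01                      # seed a small colony in the centre
for _ in range(200):
    s, b = ca_step(s, b)
print(f"total biomass after 200 steps: {b.sum():.2f}")
```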

  4. Biobeam—Multiplexed wave-optical simulations of light-sheet microscopy

    PubMed Central

    Weigert, Martin; Bundschuh, Sebastian T.

    2018-01-01

    Sample-induced image-degradation remains an intricate wave-optical problem in light-sheet microscopy. Here we present biobeam, an open-source software package that enables simulation of operational light-sheet microscopes by combining data from 10^5-10^6 multiplexed and GPU-accelerated point-spread-function calculations. The wave-optical nature of these simulations leads to the faithful reproduction of spatially varying aberrations, diffraction artifacts, geometric image distortions, adaptive optics, and emergent wave-optical phenomena, and renders image-formation in light-sheet microscopy computationally tractable. PMID:29652879

  5. System reliability of randomly vibrating structures: Computational modeling and laboratory testing

    NASA Astrophysics Data System (ADS)

    Sundar, V. S.; Ammanagi, S.; Manohar, C. S.

    2015-09-01

    The problem of determining the system reliability of randomly vibrating structures arises in many application areas of engineering. We discuss in this paper approaches based on Monte Carlo simulations and laboratory testing to tackle problems of time-variant system reliability estimation. The strategy we adopt is based on the application of Girsanov's transformation to the governing stochastic differential equations, which enables estimation of the probability of failure with a significantly smaller number of samples than is needed in a direct simulation study. Notably, we show that the ideas from Girsanov's transformation based Monte Carlo simulations can be extended to laboratory testing, so that the system reliability of engineering structures can be assessed with a reduced number of samples and hence reduced testing times. Illustrative examples include computational studies on a 10-degree-of-freedom nonlinear system model and laboratory/computational investigations on the road load response of an automotive system tested on a four-post test rig.
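
    The sketch below illustrates only the underlying variance-reduction idea, importance sampling with a shifted sampling density, in its simplest static form; it is not the Girsanov-transformed stochastic differential equation formulation used in the article.

```python
# Minimal importance-sampling sketch of the variance-reduction idea behind
# Girsanov-based methods (static analogue only, not the SDE formulation):
# estimate a small failure probability P[X > b] for X ~ N(0, 1) by sampling
# from a density shifted toward the failure region and re-weighting.
import numpy as np

rng = np.random.default_rng(1)
b = 4.0                  # failure threshold; true P[X > b] is about 3.2e-5
n = 10_000

# Direct Monte Carlo: almost no samples fall in the failure region.
x = rng.standard_normal(n)
p_direct = np.mean(x > b)

# Importance sampling: draw from N(b, 1) and weight by the density ratio
# w(x) = phi(x) / phi(x - b) = exp(-b*x + b**2 / 2).
y = rng.normal(loc=b, size=n)
w = np.exp(-b * y + 0.5 * b**2)
p_is = np.mean((y > b) * w)

print(p_direct, p_is)
```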

  6. NASA Advanced Supercomputing Facility Expansion

    NASA Technical Reports Server (NTRS)

    Thigpen, William W.

    2017-01-01

    The NASA Advanced Supercomputing (NAS) Division enables advances in high-end computing technologies and in modeling and simulation methods to tackle some of the toughest science and engineering challenges facing NASA today. The name "NAS" has long been associated with leadership and innovation throughout the high-end computing (HEC) community. We play a significant role in shaping HEC standards and paradigms, and provide leadership in the areas of large-scale InfiniBand fabrics, Lustre open-source filesystems, and hyperwall technologies. We provide an integrated high-end computing environment to accelerate NASA missions and make revolutionary advances in science. Pleiades, a petaflop-scale supercomputer, is used by scientists throughout the U.S. to support NASA missions, and is ranked among the most powerful systems in the world. One of our key focus areas is in modeling and simulation to support NASA's real-world engineering applications and make fundamental advances in modeling and simulation methods.

  7. Simulation of Hazards and Poses for a Rocker-Bogie Rover

    NASA Technical Reports Server (NTRS)

    Backes, Paul; Norris, Jeffrey; Powell, Mark; Tharp, Gregory

    2004-01-01

    Provisions for specification of hazards faced by a robotic vehicle (rover) equipped with a rocker-bogie suspension, for prediction of collisions between the vehicle and the hazards, and for simulation of poses of the vehicle at selected positions on the terrain have been incorporated into software that simulates the movements of the vehicle on planned paths across the terrain. The software in question is that of the Web Interface for Telescience (WITS), selected aspects of which have been described in a number of prior NASA Tech Briefs articles. To recapitulate: The WITS is a system of computer software that enables scientists, located at geographically dispersed computer terminals connected to the World Wide Web, to command instrumented robotic vehicles (rovers) during exploration of Mars and perhaps eventually of other planets. The WITS also has potential for adaptation to terrestrial use in telerobotics and other applications that involve computer-based remote monitoring, supervision, control, and planning.

  8. Achieving a high mode count in the exact electromagnetic simulation of diffractive optical elements.

    PubMed

    Junker, André; Brenner, Karl-Heinz

    2018-03-01

    The application of rigorous optical simulation algorithms, both in the modal and in the time domain, is known to be limited to the nano-optical scale due to severe computing time and memory constraints. This is true even for today's high-performance computers. To address this problem, we develop the fast rigorous iterative method (FRIM), an algorithm based on an iterative approach which, under certain conditions, also allows solving large-size problems approximation-free. We achieve this in the case of a modal representation by avoiding the computationally complex eigenmode decomposition. Thereby, the numerical cost is reduced from O(N^3) to O(N log N), enabling the simulation of structures like certain diffractive optical elements with a significantly higher mode count than presently possible. Apart from speed, another major advantage of the iterative FRIM over standard modal methods is the possibility to trade runtime against accuracy.
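
    The following generic sketch (not the FRIM algorithm itself) illustrates why iterative approaches can avoid the O(N^3) cost: a Krylov solver only needs the action of the operator, and for convolution-type operators that action costs O(N log N) per application via the FFT, so no dense eigenmode decomposition is ever formed. The operator chosen here is an arbitrary positive-definite example.

```python
# Generic illustration of matrix-free iterative solution (not FRIM): the
# operator (I - d^2/dx^2) is applied spectrally in O(N log N) per call and
# fed to a conjugate-gradient solver, avoiding any O(N^3) decomposition.
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

N = 4096
k = np.fft.fftfreq(N)                         # frequency grid (unit spacing)
symbol = 1.0 + (2.0 * np.pi * k) ** 2         # positive-definite Fourier symbol

def apply_operator(u):
    """Apply (I - d^2/dx^2) spectrally: O(N log N) per call."""
    return np.real(np.fft.ifft(symbol * np.fft.fft(u)))

A = LinearOperator((N, N), matvec=apply_operator, dtype=float)
rhs = np.random.default_rng(0).standard_normal(N)

u, info = cg(A, rhs)                          # conjugate gradients, matrix-free
print(info, np.linalg.norm(apply_operator(u) - rhs))
```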

  9. Program optimizations: The interplay between power, performance, and energy

    DOE PAGES

    Leon, Edgar A.; Karlin, Ian; Grant, Ryan E.; ...

    2016-05-16

    Practical considerations for future supercomputer designs will impose limits on both instantaneous power consumption and total energy consumption. Working within these constraints while providing the maximum possible performance, application developers will need to optimize their code for speed alongside power and energy concerns. This paper analyzes the effectiveness of several code optimizations including loop fusion, data structure transformations, and global allocations. A per-component measurement and analysis of different architectures is performed, enabling the examination of code optimizations on different compute subsystems. Using an explicit hydrodynamics proxy application from the U.S. Department of Energy, LULESH, we show how code optimizations impact different computational phases of the simulation. This provides insight for simulation developers into the best optimizations to use during particular simulation compute phases when optimizing code for future supercomputing platforms. Here, we examine and contrast both x86 and Blue Gene architectures with respect to these optimizations.

  10. The Numerical Propulsion System Simulation: A Multidisciplinary Design System for Aerospace Vehicles

    NASA Technical Reports Server (NTRS)

    Lytle, John K.

    1999-01-01

    Advances in computational technology and in physics-based modeling are making large scale, detailed simulations of complex systems possible within the design environment. For example, the integration of computing, communications, and aerodynamics has reduced the time required to analyze major propulsion system components from days and weeks to minutes and hours. This breakthrough has enabled the detailed simulation of major propulsion system components to become a routine part of the design process and to provide the designer with critical information about the components early in the design process. This paper describes the development of the Numerical Propulsion System Simulation (NPSS), a multidisciplinary system of analysis tools that is focused on extending the simulation capability from components to the full system. This will provide the product developer with a "virtual wind tunnel" that will reduce the number of hardware builds and tests required during the development of advanced aerospace propulsion systems.

  11. Computational Fluid Dynamics (CFD) simulations of a Heisenberg Vortex Tube

    NASA Astrophysics Data System (ADS)

    Bunge, Carl; Sitaraman, Hariswaran; Leachman, Jake

    2017-11-01

    A 3D Computational Fluid Dynamics (CFD) simulation of a Heisenberg Vortex Tube (HVT) is performed to estimate cooling potential with cryogenic hydrogen. The main mechanism driving operation of the vortex tube is the use of fluid power for enthalpy streaming in a highly turbulent swirl in a dual-outlet tube. This enthalpy streaming creates a temperature separation between the outer and inner regions of the flow. Use of a catalyst on the peripheral wall of the centrifuge enables the endothermic para-to-ortho hydrogen conversion to aid primary cooling. A κ-ε turbulence model is used with a cryogenic, non-ideal equation of state, and para- and orthohydrogen species evolution. The simulations are validated with experiments, and strategies for parametric optimization of this device are presented.

  12. Large-scale neural circuit mapping data analysis accelerated with the graphical processing unit (GPU).

    PubMed

    Shi, Yulin; Veidenbaum, Alexander V; Nicolau, Alex; Xu, Xiangmin

    2015-01-15

    Modern neuroscience research demands computing power. Neural circuit mapping studies such as those using laser scanning photostimulation (LSPS) produce large amounts of data and require intensive computation for post hoc processing and analysis. Here we report on the design and implementation of a cost-effective desktop computer system for accelerated experimental data processing with recent GPU computing technology. A new version of Matlab software with GPU enabled functions is used to develop programs that run on Nvidia GPUs to harness their parallel computing power. We evaluated both the central processing unit (CPU) and GPU-enabled computational performance of our system in benchmark testing and practical applications. The experimental results show that the GPU-CPU co-processing of simulated data and actual LSPS experimental data clearly outperformed the multi-core CPU with up to a 22× speedup, depending on computational tasks. Further, we present a comparison of numerical accuracy between GPU and CPU computation to verify the precision of GPU computation. In addition, we show how GPUs can be effectively adapted to improve the performance of commercial image processing software such as Adobe Photoshop. To our best knowledge, this is the first demonstration of GPU application in neural circuit mapping and electrophysiology-based data processing. Together, GPU enabled computation enhances our ability to process large-scale data sets derived from neural circuit mapping studies, allowing for increased processing speeds while retaining data precision. Copyright © 2014 Elsevier B.V. All rights reserved.

  13. Large scale neural circuit mapping data analysis accelerated with the graphical processing unit (GPU)

    PubMed Central

    Shi, Yulin; Veidenbaum, Alexander V.; Nicolau, Alex; Xu, Xiangmin

    2014-01-01

    Background Modern neuroscience research demands computing power. Neural circuit mapping studies such as those using laser scanning photostimulation (LSPS) produce large amounts of data and require intensive computation for post-hoc processing and analysis. New Method Here we report on the design and implementation of a cost-effective desktop computer system for accelerated experimental data processing with recent GPU computing technology. A new version of Matlab software with GPU enabled functions is used to develop programs that run on Nvidia GPUs to harness their parallel computing power. Results We evaluated both the central processing unit (CPU) and GPU-enabled computational performance of our system in benchmark testing and practical applications. The experimental results show that the GPU-CPU co-processing of simulated data and actual LSPS experimental data clearly outperformed the multi-core CPU with up to a 22x speedup, depending on computational tasks. Further, we present a comparison of numerical accuracy between GPU and CPU computation to verify the precision of GPU computation. In addition, we show how GPUs can be effectively adapted to improve the performance of commercial image processing software such as Adobe Photoshop. Comparison with Existing Method(s) To our best knowledge, this is the first demonstration of GPU application in neural circuit mapping and electrophysiology-based data processing. Conclusions Together, GPU enabled computation enhances our ability to process large-scale data sets derived from neural circuit mapping studies, allowing for increased processing speeds while retaining data precision. PMID:25277633

  14. Simulation of an Asynchronous Machine by using a Pseudo Bond Graph

    NASA Astrophysics Data System (ADS)

    Romero, Gregorio; Felez, Jesus; Maroto, Joaquin; Martinez, M. Luisa

    2008-11-01

    For engineers, computer simulation is a basic tool, since it enables them to understand how systems work without actually needing to see them. They can learn how systems behave in different circumstances and optimize their design with considerably less cost in terms of time and money than if they had to carry out tests on a physical system. However, if computer simulation is to be reliable, it is essential for the simulation model to be validated. There is a wide range of commercial products on the market for electrical-domain simulation (SPICE, LabVIEW, PSCAD, Dymola, Simulink, Simplorer, ...). These are powerful tools, but they require the engineer to have a thorough knowledge of the electrical field. This paper presents an alternative methodology for simulating an asynchronous machine using the multidomain Bond Graph technique, which can be applied in any program that permits the simulation of models based on this technique; no extraordinary knowledge of Bond Graphs or of the electrical field is required to understand the process.

  15. NASA's Software Bank (Cassegrain Feed System)

    NASA Technical Reports Server (NTRS)

    1991-01-01

    When Scientific-Atlanta had to design a new Cassegrain antenna, they found that the COSMIC program, "Machine Design of Cassegrain Feed System," allowed for computer simulation of the antenna's performance, enabling pre-construction changes to be made. Significant cost savings were achieved by the program.

  16. Accelerating Science with Generative Adversarial Networks: An Application to 3D Particle Showers in Multilayer Calorimeters

    NASA Astrophysics Data System (ADS)

    Paganini, Michela; de Oliveira, Luke; Nachman, Benjamin

    2018-01-01

    Physicists at the Large Hadron Collider (LHC) rely on detailed simulations of particle collisions to build expectations of what experimental data may look like under different theoretical modeling assumptions. Petabytes of simulated data are needed to develop analysis techniques, though they are expensive to generate using existing algorithms and computing resources. The modeling of detectors and the precise description of particle cascades as they interact with the material in the calorimeter are the most computationally demanding steps in the simulation pipeline. We therefore introduce a deep neural network-based generative model to enable high-fidelity, fast, electromagnetic calorimeter simulation. There are still challenges for achieving precision across the entire phase space, but our current solution can reproduce a variety of particle shower properties while achieving speedup factors of up to 100,000×. This opens the door to a new era of fast simulation that could save significant computing time and disk space, while extending the reach of physics searches and precision measurements at the LHC and beyond.

  17. An Interactive Simulation Program for Exploring Computational Models of Auto-Associative Memory.

    PubMed

    Fink, Christian G

    2017-01-01

    While neuroscience students typically learn about activity-dependent plasticity early in their education, they often struggle to conceptually connect modification at the synaptic scale with network-level neuronal dynamics, not to mention with their own everyday experience of recalling a memory. We have developed an interactive simulation program (based on the Hopfield model of auto-associative memory) that enables the user to visualize the connections generated by any pattern of neural activity, as well as to simulate the network dynamics resulting from such connectivity. An accompanying set of student exercises introduces the concepts of pattern completion, pattern separation, and sparse versus distributed neural representations. Results from a conceptual assessment administered before and after students worked through these exercises indicate that the simulation program is a useful pedagogical tool for illustrating fundamental concepts of computational models of memory.
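
    A minimal sketch of the kind of auto-associative dynamics the program visualizes is given below (Hebbian storage and sign-function recall in a Hopfield network); it is illustrative only and is not the simulation program described in the article.

```python
# Minimal Hopfield auto-associative memory sketch: patterns are stored with a
# Hebbian outer-product rule and recalled by iterating the sign dynamics from
# a corrupted cue (pattern completion).
import numpy as np

rng = np.random.default_rng(0)
n_units, n_patterns = 100, 5
patterns = rng.choice([-1, 1], size=(n_patterns, n_units))

# Hebbian learning: W = (1/N) * sum_p x_p x_p^T, with no self-connections.
W = patterns.T @ patterns / n_units
np.fill_diagonal(W, 0.0)

def recall(cue, steps=20):
    s = cue.copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1          # break ties consistently
    return s

# Corrupt 10% of one stored pattern and check that recall completes it.
cue = patterns[0].copy()
flip = rng.choice(n_units, size=10, replace=False)
cue[flip] *= -1
recovered = recall(cue)
print(np.mean(recovered == patterns[0]))   # typically 1.0 at this small load
```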

  18. Taxiing, Take-Off, and Landing Simulation of the High Speed Civil Transport Aircraft

    NASA Technical Reports Server (NTRS)

    Reaves, Mercedes C.; Horta, Lucas G.

    1999-01-01

    The aircraft industry jointly with NASA is studying enabling technologies for higher speed, longer range aircraft configurations. Higher speeds, higher temperatures, and aerodynamics are driving these newer aircraft configurations towards long, slender, flexible fuselages. Aircraft response during ground operations, although often overlooked, is a concern due to the increased fuselage flexibility. This paper discusses modeling and simulation of the High Speed Civil Transport aircraft during taxiing, take-off, and landing. Finite element models of the airframe for various configurations are used and combined with nonlinear landing gear models to provide a simulation tool to study responses to different ground input conditions. A commercial computer simulation program is used to numerically integrate the equations of motion and to compute estimates of the responses using an existing runway profile. Results show aircraft responses exceeding safe acceptable human response levels.

  19. Shuttle operations simulation model programmers'/users' manual

    NASA Technical Reports Server (NTRS)

    Porter, D. G.

    1972-01-01

    The prospective user of the shuttle operations simulation (SOS) model is given sufficient information to enable him to perform simulation studies of the space shuttle launch-to-launch operations cycle. The procedures used for modifying the SOS model to meet user requirements are described. The various control card sequences required to execute the SOS model are given. The report is written for users with varying computer simulation experience. A description of the components of the SOS model is included that presents both an explanation of the logic involved in the simulation of the shuttle operations cycle and a description of the routines used to support the actual simulation.

  20. Performance Analysis, Modeling and Scaling of HPC Applications and Tools

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhatele, Abhinav

    2016-01-13

    Efficient use of supercomputers at DOE centers is vital for maximizing system throughput, minimizing energy costs and enabling science breakthroughs faster. This requires complementary efforts along several directions to optimize the performance of scientific simulation codes and the underlying runtimes and software stacks. This in turn requires providing scalable performance analysis tools and modeling techniques that can provide feedback to physicists and computer scientists developing the simulation codes and runtimes respectively. The PAMS project is using time allocations on supercomputers at ALCF, NERSC and OLCF to further the goals described above by performing research along the following fronts: 1. Scaling Study of HPC applications; 2. Evaluation of Programming Models; 3. Hardening of Performance Tools; 4. Performance Modeling of Irregular Codes; and 5. Statistical Analysis of Historical Performance Data. We are a team of computer and computational scientists funded by both DOE/NNSA and DOE/ASCR programs such as ECRP, XStack (Traleika Glacier, PIPER), ExaOSR (ARGO), SDMAV II (MONA) and PSAAP II (XPACC). This allocation will enable us to study big data issues when analyzing performance on leadership computing class systems and to assist the HPC community in making the most effective use of these resources.

  1. Efficient generation of connectivity in neuronal networks from simulator-independent descriptions

    PubMed Central

    Djurfeldt, Mikael; Davison, Andrew P.; Eppler, Jochen M.

    2014-01-01

    Simulator-independent descriptions of connectivity in neuronal networks promise greater ease of model sharing, improved reproducibility of simulation results, and reduced programming effort for computational neuroscientists. However, until now, enabling the use of such descriptions in a given simulator in a computationally efficient way has entailed considerable work for simulator developers, which must be repeated for each new connectivity-generating library that is developed. We have developed a generic connection generator interface that provides a standard way to connect a connectivity-generating library to a simulator, such that one library can easily be replaced by another, according to the modeler's needs. We have used the connection generator interface to connect C++ and Python implementations of the previously described connection-set algebra to the NEST simulator. We also demonstrate how the simulator-independent modeling framework PyNN can transparently take advantage of this, passing a connection description through to the simulator layer for rapid processing in C++ where a simulator supports the connection generator interface and falling back to slower iteration in Python otherwise. A set of benchmarks demonstrates the good performance of the interface. PMID:24795620
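
    A hypothetical sketch of the idea behind such an interface is shown below: any object that yields (source, target, weight) tuples can serve as a connection generator, so one connectivity-generating library can be swapped for another without touching the simulator-side code. The names and the simulator-side call are invented for illustration and are not the actual interface described in the article.

```python
# Hypothetical sketch of the connection-generator idea (names are invented;
# this is not the interface described above): any object yielding
# (source, target, weight) tuples can feed a simulator, making connectivity
# libraries interchangeable.
import random
from typing import Iterator, Tuple

Connection = Tuple[int, int, float]

def all_to_all(pre: range, post: range, weight: float = 1.0) -> Iterator[Connection]:
    """One possible connectivity-generating 'library'."""
    for i in pre:
        for j in post:
            if i != j:
                yield (i, j, weight)

def random_divergent(pre: range, post: range, fan_out: int, seed: int = 0) -> Iterator[Connection]:
    """Another 'library' producing the same kind of connection stream."""
    rng = random.Random(seed)
    for i in pre:
        for j in rng.sample(list(post), fan_out):
            yield (i, j, rng.uniform(0.0, 1.0))

def build_network(connect, generator: Iterator[Connection]) -> int:
    """Simulator side: consume the stream and create synapses one by one."""
    count = 0
    for src, tgt, w in generator:
        connect(src, tgt, weight=w)
        count += 1
    return count

# Swapping one generator for another requires no change on the simulator side.
n = build_network(lambda s, t, weight: None, random_divergent(range(10), range(10), 3))
print(n)   # 30 connections
```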

  2. Advanced Simulation and Computing Fiscal Year 14 Implementation Plan, Rev. 0.5

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meisner, Robert; McCoy, Michel; Archer, Bill

    2013-09-11

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the surety and reliability of the U.S. nuclear stockpile. The SSP uses nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of experimental facilities and programs, and the computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources that support annual stockpile assessment and certification, study advanced nuclear weapons design and manufacturing processes, analyze accident scenarios and weapons aging, and provide the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balanced resource, including technical staff, hardware, simulation software, and computer science solutions. In its first decade, the ASC strategy focused on demonstrating simulation capabilities of unprecedented scale in three spatial dimensions. In its second decade, ASC is now focused on increasing predictive capabilities in a three-dimensional (3D) simulation environment while maintaining support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (sufficient resolution, dimensionality, and scientific details), quantify critical margins and uncertainties, and resolve increasingly difficult analyses needed for the SSP. Moreover, ASC's business model is integrated and focused on requirements-driven products that address long-standing technical questions related to enhanced predictive capability in the simulation tools.

  3. Development of the ARISTOTLE webware for cloud-based rarefied gas flow modeling

    NASA Astrophysics Data System (ADS)

    Deschenes, Timothy R.; Grot, Jonathan; Cline, Jason A.

    2016-11-01

    Rarefied gas dynamics are important for a wide variety of applications. Improving the ability of general users to predict these gas flows will enable optimization of current processes and discovery of new ones. Despite this potential, most rarefied simulation software is designed by and for experts in the community. This has resulted in low adoption of the methods outside of the immediate RGD community. This paper outlines an ongoing effort to create a rarefied gas dynamics simulation tool that can be used by a general audience. The tool leverages a direct simulation Monte Carlo (DSMC) library that is available to the entire community and a web-based simulation process that will enable all users to take advantage of high performance computing capabilities. First, the DSMC library and simulation architecture are described. Then the DSMC library is used to predict a number of representative transient gas flows that are applicable to the rarefied gas dynamics community. The paper closes with a summary and future directions.

  4. 2002 Computing and Interdisciplinary Systems Office Review and Planning Meeting

    NASA Technical Reports Server (NTRS)

    Lytle, John; Follen, Gregory; Lopez, Isaac; Veres, Joseph; Lavelle, Thomas; Sehra, Arun; Freeh, Josh; Hah, Chunill

    2003-01-01

    The technologies necessary to enable detailed numerical simulations of complete propulsion systems are being developed at the NASA Glenn Research Center in cooperation with NASA Glenn's Propulsion program, NASA Ames, industry, academia and other government agencies. Large scale, detailed simulations will be of great value to the nation because they eliminate some of the costly testing required to develop and certify advanced propulsion systems. In addition, time and cost savings will be achieved by enabling design details to be evaluated early in the development process before a commitment is made to a specific design. This year's review meeting describes the current status of the NPSS and the Object Oriented Development Kit with specific emphasis on the progress made over the past year on air breathing propulsion applications for aeronautics and space transportation applications. Major accomplishments include the first 3-D simulation of the primary flow path of a large turbofan engine in less than 15 hours, and the formal release of the NPSS Version 1.5 that includes elements of rocket engine systems and a visual based syntax layer. NPSS and the Development Kit are managed by the Computing and Interdisciplinary Systems Office (CISO) at the NASA Glenn Research Center and financially supported in fiscal year 2002 by the Computing, Networking and Information Systems (CNIS) project managed at NASA Ames, the Glenn Aerospace Propulsion and Power Program and the Advanced Space Transportation Program.

  5. FINAL REPORT FOR VERIFICATION OF THE METAL FINISHING FACILITY POLLUTION PREVENTION TOOL (MFFPPT)

    EPA Science Inventory

    The United States Environmental Protection Agency (USEPA) has prepared a computer process simulation package for the metal finishing industry that enables users to predict process outputs based upon process inputs and other operating conditions. This report documents the developm...

  6. Advanced Computational Methods for Thermal Radiative Heat Transfer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tencer, John; Carlberg, Kevin Thomas; Larsen, Marvin E.

    2016-10-01

    Participating media radiation (PMR) in weapon safety calculations for abnormal thermal environments is too costly to compute routinely. This cost may be substantially reduced by applying reduced order modeling (ROM) techniques. The application of ROM to PMR is a new and unique approach for this class of problems. This approach was investigated by the authors and shown to provide significant reductions in the computational expense associated with typical PMR simulations. Once this technology is migrated into production heat transfer analysis codes, this capability will enable the routine use of PMR heat transfer in higher-fidelity simulations of weapon response in fire environments.

  7. SimBox: a simulation-based scalable architecture for distributed command and control of spaceport and service constellations

    NASA Astrophysics Data System (ADS)

    Prasad, Guru; Jayaram, Sanjay; Ward, Jami; Gupta, Pankaj

    2004-09-01

    In this paper, Aximetric proposes a decentralized Command and Control (C2) architecture for a distributed control of a cluster of on-board health monitoring and software enabled control systems called SimBOX that will use some of the real-time infrastructure (RTI) functionality from the current military real-time simulation architecture. The uniqueness of the approach is to provide a "plug and play environment" for various system components that run at various data rates (Hz) and the ability to replicate or transfer C2 operations to various subsystems in a scalable manner. This is possible by providing a communication bus called "Distributed Shared Data Bus" and a distributed computing environment used to scale the control needs by providing a self-contained computing, data logging and control function module that can be rapidly reconfigured to perform different functions. This kind of software-enabled control is very much needed to meet the needs of future aerospace command and control functions.

  8. Capstone: A Geometry-Centric Platform to Enable Physics-Based Simulation and Design of Systems

    DTIC Science & Technology

    2015-10-05

    foundation for the air-vehicle early design tool DaVinci being developed by the CREATE™-AV project to enable development of associative models of air...CREATE™-AV solvers Kestrel [11] and Helios [16,17]. Furthermore, it is the foundation for the CREATE™-AV's DaVinci [9] tool that provides a... Tools and Environments (CREATE™) program [6] aimed at developing a suite of high-performance physics-based computational tools addressing the needs

  9. GPU-based Green’s function simulations of shear waves generated by an applied acoustic radiation force in elastic and viscoelastic models

    NASA Astrophysics Data System (ADS)

    Yang, Yiqun; Urban, Matthew W.; McGough, Robert J.

    2018-05-01

    Shear wave calculations induced by an acoustic radiation force are very time-consuming on desktop computers, and high-performance graphics processing units (GPUs) achieve dramatic reductions in the computation time for these simulations. The acoustic radiation force is calculated using the fast near field method and the angular spectrum approach, and then the shear waves are calculated in parallel with Green's functions on a GPU. This combination enables rapid evaluation of shear waves for push beams with different spatial samplings and for apertures with different f/#. Relative to shear wave simulations that evaluate the same algorithm on an Intel i7 desktop computer, a high performance nVidia GPU reduces the time required for these calculations by a factor of 45 and 700 when applied to elastic and viscoelastic shear wave simulation models, respectively. These GPU-accelerated simulations were also compared to measurements in different viscoelastic phantoms, and the results are similar.

  10. Molecular simulations of self-assembling bio-inspired supramolecular systems and their connection to experiments

    PubMed Central

    Patmanidis, Ilias

    2018-01-01

    In bionanotechnology, the field of creating functional materials consisting of bio-inspired molecules, the function and shape of a nanostructure only appear through the assembly of many small molecules together. The large number of building blocks required to define a nanostructure combined with the many degrees of freedom in packing small molecules has long precluded molecular simulations, but recent advances in computational hardware as well as software have made classical simulations available to this strongly expanding field. Here, we review the state of the art in simulations of self-assembling bio-inspired supramolecular systems. We will first discuss progress in force fields, simulation protocols and enhanced sampling techniques using recent examples. Secondly, we will focus on efforts to enable the comparison of experimentally accessible observables and computational results. Experimental quantities that can be measured by microscopy, spectroscopy and scattering can be linked to simulation output either directly or indirectly, via quantum mechanical or semi-empirical techniques. Overall, we aim to provide an overview of the various computational approaches to understand not only the molecular architecture of nanostructures, but also the mechanism of their formation. PMID:29688238

  11. Communication interval selection in distributed heterogeneous simulation of large-scale dynamical systems

    NASA Astrophysics Data System (ADS)

    Lucas, Charles E.; Walters, Eric A.; Jatskevich, Juri; Wasynczuk, Oleg; Lamm, Peter T.

    2003-09-01

    In this paper, a new technique useful for the numerical simulation of large-scale systems is presented. This approach enables the overall system simulation to be formed by the dynamic interconnection of the various interdependent simulations, each representing a specific component or subsystem such as control, electrical, mechanical, hydraulic, or thermal. Each simulation may be developed separately using possibly different commercial-off-the-shelf simulation programs thereby allowing the most suitable language or tool to be used based on the design/analysis needs. These subsystems communicate the required interface variables at specific time intervals. A discussion concerning the selection of appropriate communication intervals is presented herein. For the purpose of demonstration, this technique is applied to a detailed simulation of a representative aircraft power system, such as that found on the Joint Strike Fighter (JSF). This system is comprised of ten component models each developed using MATLAB/Simulink, EASY5, or ACSL. When the ten component simulations were distributed across just four personal computers (PCs), a greater than 15-fold improvement in simulation speed (compared to the single-computer implementation) was achieved.
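
    The toy sketch below illustrates the basic co-simulation pattern discussed here: two placeholder subsystems are integrated independently with their own internal time steps and exchange interface variables only at a fixed communication interval, whose length controls the coupling error. It is not the aircraft power system simulation itself, and the subsystem models are arbitrary first-order stand-ins.

```python
# Toy co-simulation sketch (not the JSF power system model): two subsystems
# are integrated independently and exchange interface variables only at a
# fixed communication interval.
def step_electrical(v, load_torque, dt):
    """Placeholder electrical subsystem: first-order lag toward a setpoint."""
    return v + dt * (1.0 - 0.1 * load_torque - v)

def step_mechanical(w, drive_voltage, dt):
    """Placeholder mechanical subsystem: speed driven by the other side."""
    return w + dt * (0.5 * drive_voltage - 0.05 * w)

t_end, comm_interval, dt = 10.0, 0.1, 0.001
steps_per_interval = int(round(comm_interval / dt))
v, w, t = 0.0, 0.0, 0.0
while t < t_end:
    v_held, w_held = v, w            # interface variables frozen for one interval
    for _ in range(steps_per_interval):
        v = step_electrical(v, load_torque=w_held, dt=dt)
        w = step_mechanical(w, drive_voltage=v_held, dt=dt)
    t += comm_interval
print(round(v, 4), round(w, 4))      # approaches the coupled steady state (0.5, 5.0)
```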

  12. Multi-phase SPH modelling of violent hydrodynamics on GPUs

    NASA Astrophysics Data System (ADS)

    Mokos, Athanasios; Rogers, Benedict D.; Stansby, Peter K.; Domínguez, José M.

    2015-11-01

    This paper presents the acceleration of multi-phase smoothed particle hydrodynamics (SPH) using a graphics processing unit (GPU) enabling large numbers of particles (10-20 million) to be simulated on just a single GPU card. With novel hardware architectures such as a GPU, the optimum approach to implement a multi-phase scheme presents some new challenges. Many more particles must be included in the calculation and there are very different speeds of sound in each phase with the largest speed of sound determining the time step. This requires efficient computation. To take full advantage of the hardware acceleration provided by a single GPU for a multi-phase simulation, four different algorithms are investigated: conditional statements, binary operators, separate particle lists and an intermediate global function. Runtime results show that the optimum approach needs to employ separate cell and neighbour lists for each phase. The profiler shows that this approach leads to a reduction in both memory transactions and arithmetic operations giving significant runtime gains. The four different algorithms are compared to the efficiency of the optimised single-phase GPU code, DualSPHysics, for 2-D and 3-D simulations which indicate that the multi-phase functionality has a significant computational overhead. A comparison with an optimised CPU code shows a speed up of an order of magnitude over an OpenMP simulation with 8 threads and two orders of magnitude over a single thread simulation. A demonstration of the multi-phase SPH GPU code is provided by a 3-D dam break case impacting an obstacle. This shows better agreement with experimental results than an equivalent single-phase code. The multi-phase GPU code enables a convergence study to be undertaken on a single GPU with a large number of particles that otherwise would have required large high performance computing resources.
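
    As a much-simplified, CPU-only illustration of the "separate particle lists per phase" idea (and not the DualSPHysics GPU implementation), the sketch below keeps each phase in its own arrays and accumulates an SPH density sum over the phases; the kernel, particle counts and masses are arbitrary.

```python
# Simplified CPU sketch of per-phase particle lists (not the GPU code above):
# each phase keeps its own position and mass arrays, and the SPH density at a
# query point gathers contributions from every phase in turn.
import numpy as np

def gaussian_kernel(r, h):
    """Simple Gaussian SPH-style kernel in 2-D (tail not truncated)."""
    return np.exp(-(r / h) ** 2) / (np.pi * h ** 2)

h = 0.05
phases = {
    "water": {"pos": np.random.default_rng(0).random((2000, 2)), "mass": 1.0e-3},
    "air":   {"pos": np.random.default_rng(1).random((500, 2)),  "mass": 1.2e-6},
}

def density_at(points):
    """Density at query points: loop over phases, sum neighbour contributions."""
    rho = np.zeros(len(points))
    for phase in phases.values():
        d = np.linalg.norm(points[:, None, :] - phase["pos"][None, :, :], axis=-1)
        rho += phase["mass"] * gaussian_kernel(d, h).sum(axis=1)
    return rho

print(density_at(np.array([[0.5, 0.5]])))
```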

  13. Fluid-structure interaction modeling of wind turbines: simulating the full machine

    NASA Astrophysics Data System (ADS)

    Hsu, Ming-Chen; Bazilevs, Yuri

    2012-12-01

    In this paper we present our aerodynamics and fluid-structure interaction (FSI) computational techniques that enable dynamic, fully coupled, 3D FSI simulation of wind turbines at full scale, and in the presence of the nacelle and tower (i.e., simulation of the "full machine"). For the interaction of wind and flexible blades we employ a nonmatching interface discretization approach, where the aerodynamics is computed using a low-order finite-element-based ALE-VMS technique, while the rotor blades are modeled as thin composite shells discretized using NURBS-based isogeometric analysis (IGA). We find that coupling FEM and IGA in this manner gives a good combination of efficiency, accuracy, and flexibility of the computational procedures for wind turbine FSI. The interaction between the rotor and tower is handled using a non-overlapping sliding-interface approach, where both moving- and stationary-domain formulations of aerodynamics are employed. At the fluid-structure and sliding interfaces, the kinematic and traction continuity is enforced weakly, which is a key ingredient of the proposed numerical methodology. We present several simulations of a three-blade 5 MW wind turbine, with and without the tower. We find that, in the case of no tower, the presence of the sliding interface has no effect on the prediction of aerodynamic loads on the rotor. From this we conclude that weak enforcement of the kinematics gives just as accurate results as the strong enforcement, and thus enables the simulation of rotor-tower interaction (as well as other applications involving mechanical components in relative motion). We also find that the blade passing the tower produces a 10-12% drop (per blade) in the aerodynamic torque. We feel this finding may be important when it comes to the fatigue-life analysis and prediction for wind turbine blades.

  14. Biophysical Discovery through the Lens of a Computational Microscope

    NASA Astrophysics Data System (ADS)

    Amaro, Rommie

    With exascale computing power on the horizon, improvements in the underlying algorithms and available structural experimental data are enabling new paradigms for chemical discovery. My work has provided key insights for the systematic incorporation of structural information resulting from state-of-the-art biophysical simulations into protocols for inhibitor and drug discovery. We have shown that many disease targets have druggable pockets that are otherwise ``hidden'' in high resolution x-ray structures, and that this is a common theme across a wide range of targets in different disease areas. We continue to push the limits of computational biophysical modeling by expanding the time and length scales accessible to molecular simulation. My sights are set on, ultimately, the development of detailed physical models of cells, as the fundamental unit of life, and two recent achievements highlight our efforts in this arena. First is the development of a molecular and Brownian dynamics multi-scale modeling framework, which allows us to investigate drug binding kinetics in addition to thermodynamics. In parallel, we have made significant progress developing new tools to extend molecular structure to cellular environments. Collectively, these achievements are enabling the investigation of the chemical and biophysical nature of cells at unprecedented scales.

  15. Distributed Observer Network

    NASA Technical Reports Server (NTRS)

    2008-01-01

    NASA's advanced visual simulations are essential for analyses associated with life cycle planning, design, training, testing, operations, and evaluation. Kennedy Space Center, in particular, uses simulations for ground services and space exploration planning in an effort to reduce risk and costs while improving safety and performance. However, it has been difficult to circulate and share the results of simulation tools among the field centers, and distance and travel expenses have made timely collaboration even harder. In response, NASA joined with Valador Inc. to develop the Distributed Observer Network (DON), a collaborative environment that leverages game technology to bring 3-D simulations to conventional desktop and laptop computers. DON enables teams of engineers working on design and operations to view and collaborate on 3-D representations of data generated by authoritative tools. DON takes models and telemetry from these sources and, using commercial game engine technology, displays the simulation results in a 3-D visual environment. Multiple widely dispersed users, working individually or in groups, can view and analyze simulation results on desktop and laptop computers in real time.

  16. 3D simulation of friction stir welding based on movable cellular automaton method

    NASA Astrophysics Data System (ADS)

    Eremina, Galina M.

    2017-12-01

    The paper is devoted to a 3D computer simulation of the peculiarities of material flow taking place in friction stir welding (FSW). The simulation was performed by the movable cellular automaton (MCA) method, which is a representative of particle methods in mechanics. Commonly, the flow of material in FSW is simulated based on computational fluid mechanics, treating the material as a continuum and ignoring its structure. The MCA method considers a material as an ensemble of bonded particles. The rupture of interparticle bonds and the formation of new bonds enable simulations of crack nucleation and healing as well as mass mixing and microwelding. The simulation results showed that using pins of simple shape (cylinder, cone, and pyramid) without a shoulder results in small displacements of plasticized material in the workpiece thickness direction. Nevertheless, the optimal ratio of longitudinal velocity to rotational speed makes it possible to transport the welded material around the pin several times and to produce a joint of good quality.

  17. PyFly: A fast, portable aerodynamics simulator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garcia, Daniel; Ghommem, M.; Collier, Nathaniel O.

    Here, we present a fast, user-friendly implementation of a potential flow solver based on the unsteady vortex lattice method (UVLM), namely PyFly. UVLM computes the aerodynamic loads applied on lifting surfaces while capturing unsteady effects such as the added mass forces, the growth of bound circulation, and the wake, while assuming that the flow separation location is known a priori. This method is based on discretizing the body surface into a lattice of vortex rings and relies on the Biot–Savart law to construct the velocity field at every point in the simulated domain. We introduce the pointwise approximation approach to simulate the interactions of the far-field vortices to overcome the computational burden associated with the classical implementation of UVLM. The computational framework uses the Python programming language to provide an easy-to-handle user interface, while the computational kernels are written in Fortran. The mixed-language approach enables high performance in terms of solution time and great flexibility in terms of ease of code adaptation to different system configurations and applications. The computational tool predicts the unsteady aerodynamic behavior of multiple moving bodies (e.g., flapping wings, rotating blades, suspension bridges) subject to incoming air. The aerodynamic simulator can also deal with enclosure effects, multi-body interactions, and B-spline representation of body shapes. Finally, we simulate different aerodynamic problems to illustrate the usefulness and effectiveness of PyFly.

  18. PyFly: A fast, portable aerodynamics simulator

    DOE PAGES

    Garcia, Daniel; Ghommem, M.; Collier, Nathaniel O.; ...

    2018-03-14

    Here, we present a fast, user-friendly implementation of a potential flow solver based on the unsteady vortex lattice method (UVLM), namely PyFly. UVLM computes the aerodynamic loads applied on lifting surfaces while capturing unsteady effects such as the added mass forces, the growth of bound circulation, and the wake, while assuming that the flow separation location is known a priori. This method is based on discretizing the body surface into a lattice of vortex rings and relies on the Biot–Savart law to construct the velocity field at every point in the simulated domain. We introduce the pointwise approximation approach to simulate the interactions of the far-field vortices to overcome the computational burden associated with the classical implementation of UVLM. The computational framework uses the Python programming language to provide an easy-to-handle user interface, while the computational kernels are written in Fortran. The mixed-language approach enables high performance in terms of solution time and great flexibility in terms of ease of code adaptation to different system configurations and applications. The computational tool predicts the unsteady aerodynamic behavior of multiple moving bodies (e.g., flapping wings, rotating blades, suspension bridges) subject to incoming air. The aerodynamic simulator can also deal with enclosure effects, multi-body interactions, and B-spline representation of body shapes. Finally, we simulate different aerodynamic problems to illustrate the usefulness and effectiveness of PyFly.
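
    The Biot–Savart building block that vortex-lattice methods rest on can be sketched as follows, using the standard straight-segment formula (e.g. Katz and Plotkin); this is an illustration of the principle, not PyFly's actual kernel or its pointwise far-field approximation.

```python
# Induced velocity at point p due to a straight vortex segment p1 -> p2 of
# circulation gamma (standard Biot-Savart segment formula; illustrative only).
import numpy as np

def segment_induced_velocity(p, p1, p2, gamma, core=1e-10):
    r1, r2 = p - p1, p - p2
    cross = np.cross(r1, r2)
    cross_sq = np.dot(cross, cross)
    if cross_sq < core:          # point lies (nearly) on the segment axis
        return np.zeros(3)
    r0 = p2 - p1
    coeff = gamma / (4.0 * np.pi * cross_sq) * (
        np.dot(r0, r1) / np.linalg.norm(r1) - np.dot(r0, r2) / np.linalg.norm(r2)
    )
    return coeff * cross

# A vortex ring is assembled from four such segments, and the velocity at any
# point is the sum over all ring segments in the lattice and the wake.
p = np.array([0.0, 0.0, 1.0])
print(segment_induced_velocity(p, np.array([-0.5, 0.0, 0.0]),
                               np.array([0.5, 0.0, 0.0]), gamma=1.0))
```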

  19. Volterra representation enables modeling of complex synaptic nonlinear dynamics in large-scale simulations.

    PubMed

    Hu, Eric Y; Bouteiller, Jean-Marie C; Song, Dong; Baudry, Michel; Berger, Theodore W

    2015-01-01

    Chemical synapses comprise a wide collection of intricate signaling pathways involving complex dynamics. These mechanisms are often reduced to simple spikes or exponential representations in order to enable computer simulations at higher spatial levels of complexity. However, these representations cannot capture important nonlinear dynamics found in synaptic transmission. Here, we propose an input-output (IO) synapse model capable of generating complex nonlinear dynamics while maintaining low computational complexity. This IO synapse model is an extension of a detailed mechanistic glutamatergic synapse model, capturing the input-output relationships of the mechanistic model using the Volterra functional power series. We demonstrate that the IO synapse model is able to successfully track the nonlinear dynamics of the synapse up to the third order with high accuracy. We also evaluate the accuracy of the IO synapse model at different input frequencies and compare its performance with that of kinetic models in compartmental neuron models. Our results demonstrate that the IO synapse model is capable of efficiently replicating complex nonlinear dynamics that were represented in the original mechanistic model, and provide a method to replicate complex and diverse synaptic transmission within neuron network simulations.

  20. Volterra representation enables modeling of complex synaptic nonlinear dynamics in large-scale simulations

    PubMed Central

    Hu, Eric Y.; Bouteiller, Jean-Marie C.; Song, Dong; Baudry, Michel; Berger, Theodore W.

    2015-01-01

    Chemical synapses comprise a wide collection of intricate signaling pathways involving complex dynamics. These mechanisms are often reduced to simple spikes or exponential representations in order to enable computer simulations at higher spatial levels of complexity. However, these representations cannot capture important nonlinear dynamics found in synaptic transmission. Here, we propose an input-output (IO) synapse model capable of generating complex nonlinear dynamics while maintaining low computational complexity. This IO synapse model is an extension of a detailed mechanistic glutamatergic synapse model, capturing the input-output relationships of the mechanistic model using the Volterra functional power series. We demonstrate that the IO synapse model is able to successfully track the nonlinear dynamics of the synapse up to the third order with high accuracy. We also evaluate the accuracy of the IO synapse model at different input frequencies and compare its performance with that of kinetic models in compartmental neuron models. Our results demonstrate that the IO synapse model is capable of efficiently replicating complex nonlinear dynamics that were represented in the original mechanistic model, and provide a method to replicate complex and diverse synaptic transmission within neuron network simulations. PMID:26441622
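
    The sketch below shows, in a generic form, what a discrete second-order Volterra input-output representation looks like; the kernels are arbitrary toy choices rather than the kernels identified from the mechanistic synapse model described above.

```python
# Illustrative discrete Volterra-series input-output sketch (second order,
# finite memory). The kernels are toy choices, not the identified synapse kernels.
import numpy as np

rng = np.random.default_rng(0)
M = 16                                   # memory length (samples)
k1 = np.exp(-np.arange(M) / 4.0)         # toy first-order kernel
k2 = np.outer(k1, k1) * 0.05             # toy second-order kernel

def volterra_response(x, k1, k2):
    """y(n) = sum_i k1(i) x(n-i) + sum_{i,j} k2(i,j) x(n-i) x(n-j)."""
    M = len(k1)
    xp = np.concatenate([np.zeros(M - 1), x])      # zero-pad the input history
    y = np.empty(len(x))
    for n in range(len(x)):
        window = xp[n:n + M][::-1]                 # x(n), x(n-1), ..., x(n-M+1)
        y[n] = k1 @ window + window @ k2 @ window
    return y

spikes = (rng.random(200) < 0.1).astype(float)     # toy presynaptic spike train
print(volterra_response(spikes, k1, k2)[:5])
```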

  1. Interoperability of Neuroscience Modeling Software

    PubMed Central

    Cannon, Robert C.; Gewaltig, Marc-Oliver; Gleeson, Padraig; Bhalla, Upinder S.; Cornelis, Hugo; Hines, Michael L.; Howell, Fredrick W.; Muller, Eilif; Stiles, Joel R.; Wils, Stefan; De Schutter, Erik

    2009-01-01

    Neuroscience increasingly uses computational models to assist in the exploration and interpretation of complex phenomena. As a result, considerable effort is invested in the development of software tools and technologies for numerical simulations and for the creation and publication of models. The diversity of related tools leads to the duplication of effort and hinders model reuse. Development practices and technologies that support interoperability between software systems therefore play an important role in making the modeling process more efficient and in ensuring that published models can be reliably and easily reused. Various forms of interoperability are possible including the development of portable model description standards, the adoption of common simulation languages or the use of standardized middleware. Each of these approaches finds applications within the broad range of current modeling activity. However more effort is required in many areas to enable new scientific questions to be addressed. Here we present the conclusions of the “Neuro-IT Interoperability of Simulators” workshop, held at the 11th computational neuroscience meeting in Edinburgh (July 19-20 2006; http://www.cnsorg.org). We assess the current state of interoperability of neural simulation software and explore the future directions that will enable the field to advance. PMID:17873374

  2. Geospace ionosphere research with a MF/HF radio instrument on a cubesat

    NASA Astrophysics Data System (ADS)

    Kallio, E. J.; Aikio, A. T.; Alho, M.; Fontell, M.; van Gijlswijk, R.; Kauristie, K.; Kestilä, A.; Koskimaa, P.; Makela, J. S.; Mäkelä, M.; Turunen, E.; Vanhamäki, H.

    2016-12-01

    Modern technology provides new possibilities for studying geospace and its ionosphere, using spacecraft and computer simulations. CubeSats, a type of nanosatellite, provide a cost-effective way to make in-situ measurements in the ionosphere. Moreover, combining CubeSat observations with ground-based observations gives a new view of auroras and associated electromagnetic phenomena. In particular, joint and active CubeSat - ground-based observation campaigns make it possible to study the 3D structure of the ionosphere, and using several CubeSats to form satellite constellations enables much higher temporal resolution. At the same time, increasing computational capacity has made it possible to perform simulations in which properties of the ionosphere, such as the propagation of electromagnetic waves in the medium frequency, MF (0.3-3 MHz), and high frequency, HF (3-30 MHz), ranges, are based on a 3D ionospheric model and on first-principles modelling. Electromagnetic waves at those frequencies are strongly affected by ionospheric electrons and, consequently, can be used for studying the plasma. On the other hand, even though the ionosphere enables long-range telecommunication at MF and HF frequencies, frequent spatiotemporal variations in the ionosphere disturb communication channels, especially at high latitudes. Therefore, the study of MF and HF waves in the ionosphere is of both strong scientific and technological interest. We present computational simulation results and measurement principles and techniques for investigating the arctic ionosphere with a polar-orbiting CubeSat whose novel AM radio instrument measures HF and MF waves. The CubeSat, which also contains a white-light aurora camera, is planned to be launched in 2017 (http://www.suomi100satelliitti.fi/eng). We have modelled the propagation of the radio waves, both ground-generated man-made waves and space weather related waves formed in space, through the 3D arctic ionosphere with (1) a new 3D ray tracing model and (2) a new 3D fully kinetic electromagnetic simulation. These simulations are used to analyse the origin of the radio waves observed by the MF/HF radio instrument and, consequently, to derive information about the 3D ionosphere and its spatial and temporal variations.

  3. Time Accurate Unsteady Pressure Loads Simulated for the Space Launch System at a Wind Tunnel Condition

    NASA Technical Reports Server (NTRS)

    Alter, Stephen J.; Brauckmann, Gregory J.; Kleb, Bil; Streett, Craig L; Glass, Christopher E.; Schuster, David M.

    2015-01-01

    Using the Fully Unstructured Three-Dimensional (FUN3D) computational fluid dynamics code, an unsteady, time-accurate flow field about a Space Launch System configuration was simulated at a transonic wind tunnel condition (Mach = 0.9). Delayed detached eddy simulation combined with Reynolds-averaged Navier-Stokes and a Spalart-Allmaras turbulence model was employed for the simulation. A second-order accurate time evolution scheme was used to simulate the flow field, with simulated times ranging from a minimum of 0.2 seconds to as much as 1.4 seconds. Data were collected at 480 pressure tap locations, 139 of which matched those on a 3% wind tunnel model tested in the Transonic Dynamics Tunnel (TDT) facility at NASA Langley Research Center. Comparisons between computation and experiment showed agreement within 5% in terms of the location of peak RMS levels, and within 20% for the frequency and magnitude of power spectral densities. Grid resolution and time step sensitivity studies were performed to identify methods for improved accuracy in comparisons to wind tunnel data. With limited computational resources, accurate trends for reduced vibratory loads on the vehicle were observed. Exploratory methods, such as determining minimized computed errors based on CFL number and sub-iterations, as well as evaluating the frequency content of the unsteady pressures and oscillatory shock structures, were used in this study to enhance computational efficiency and solution accuracy. These techniques enabled the development of a set of best practices for the evaluation of future flight vehicle designs in terms of vibratory loads.

  4. Extending the Capabilities of Closed-loop Distributed Engine Control Simulations Using LAN Communication

    NASA Technical Reports Server (NTRS)

    Aretskin-Hariton, Eliot D.; Zinnecker, Alicia Mae; Culley, Dennis E.

    2014-01-01

    Distributed Engine Control (DEC) is an enabling technology that has the potential to advance the state-of-the-art in gas turbine engine control. To analyze the capabilities that DEC offers, a Hardware-In-the-Loop (HIL) test bed is being developed at NASA Glenn Research Center. This test bed will support a systems-level analysis of control capabilities in closed-loop engine simulations. The structure of the HIL emulates a virtual test cell by implementing the operator functions, control system, and engine on three separate computers. This implementation increases the flexibility and extensibility of the HIL. Here, a method is discussed for implementing these interfaces by connecting the three platforms over a dedicated Local Area Network (LAN). This approach is verified using the Commercial Modular Aero-Propulsion System Simulation 40k (C-MAPSS40k), which is typically implemented on one computer. There are marginal differences between the results from simulation of the typical and the three-computer implementation. Additional analysis of the LAN network, including characterization of network load, packet drop, and latency, is presented. The three-computer setup supports the incorporation of complex control models and proprietary engine models into the HIL framework.

  5. SiMon: Simulation Monitor for Computational Astrophysics

    NASA Astrophysics Data System (ADS)

    Xuran Qian, Penny; Cai, Maxwell Xu; Portegies Zwart, Simon; Zhu, Ming

    2017-09-01

    Scientific discovery via numerical simulations is important in modern astrophysics. This relatively new branch of astrophysics has become possible due to the development of reliable numerical algorithms and the high performance of modern computing technologies. These enable the analysis of large collections of observational data and the acquisition of new data via simulations at unprecedented accuracy and resolution. Ideally, simulations run until they reach some pre-determined termination condition, but often other factors cause extensive numerical approaches to break down at an earlier stage: processes tend to be interrupted by unexpected events in the software or the hardware. In those cases, the scientist handles the interrupt manually, which is time-consuming and prone to errors. We present the Simulation Monitor (SiMon) to automate the farming of large and extensive simulation processes. Our method is lightweight, fully automates the entire workflow management, operates concurrently across multiple platforms, and can be installed in user space. Inspired by the process of crop farming, we perceive each simulation as a crop in the field, and running a simulation becomes analogous to growing crops. With the development of SiMon we relax the technical aspects of simulation management. The initial package was developed for extensive parameter searches in numerical simulations, but it turns out to work equally well for automating the computational processing and reduction of observational data.

  6. An Object Oriented Extensible Architecture for Affordable Aerospace Propulsion Systems

    NASA Technical Reports Server (NTRS)

    Follen, Gregory J.

    2003-01-01

    Driven by a need to explore and develop propulsion systems whose simulation exceeded current computing capabilities, NASA Glenn embarked on a novel strategy leading to the development of an architecture that enables propulsion simulations never thought possible before. Full-engine, three-dimensional computational fluid dynamics propulsion system simulations were deemed impossible due to the impracticality of the hardware and software computing systems required. However, with a software paradigm shift and an embracing of parallel and distributed processing, an architecture was designed to meet the needs of future propulsion system modeling. The author suggests that the architecture designed at the NASA Glenn Research Center for propulsion system modeling has potential for impacting the direction of development of affordable weapons systems currently under consideration by the Applied Vehicle Technology Panel (AVT).

  7. New Features in the Computational Infrastructure for Nuclear Astrophysics

    NASA Astrophysics Data System (ADS)

    Smith, M. S.; Lingerfelt, E. J.; Scott, J. P.; Hix, W. R.; Nesaraja, C. D.; Koura, H.; Roberts, L. F.

    2006-04-01

    The Computational Infrastructure for Nuclear Astrophysics is a suite of computer codes online at nucastrodata.org that streamlines the incorporation of recent nuclear physics results into astrophysical simulations. The freely available, cross-platform suite enables users to upload cross sections and S-factors, convert them into reaction rates, parameterize the rates, store the rates in customizable libraries, set up and run custom post-processing element synthesis calculations, and visualize the results. New features include the ability for users to comment on rates or libraries using an email-type interface, a nuclear mass model evaluator, enhanced techniques for rate parameterization, better treatment of rate inverses, and creation and exporting of custom animations of simulation results. We also have online animations of r-process, rp-process, and neutrino-p process element synthesis occurring in stellar explosions.
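
    A common way to parameterize thermonuclear reaction rates as a function of temperature is the seven-parameter REACLIB-style fit; the sketch below assumes that form and uses made-up coefficients, so it illustrates the idea rather than the suite's exact implementation:

```python
import numpy as np

def reaclib_rate(t9, a):
    """Thermonuclear reaction rate from the common seven-parameter
    REACLIB-style fit; t9 is temperature in units of 10^9 K, a has length 7."""
    a0, a1, a2, a3, a4, a5, a6 = a
    return np.exp(a0 + a1 / t9 + a2 * t9 ** (-1.0 / 3.0)
                  + a3 * t9 ** (1.0 / 3.0) + a4 * t9
                  + a5 * t9 ** (5.0 / 3.0) + a6 * np.log(t9))

# Illustrative (made-up) coefficients -- not a real evaluated rate.
a_demo = [10.0, 0.0, -30.0, 0.0, 0.0, 0.0, -2.0 / 3.0]
for t9 in (0.1, 1.0, 3.0):
    print(f"T9 = {t9:3.1f}  rate ~ {reaclib_rate(t9, a_demo):.3e}")
```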

  8. Gibbs sampling on large lattice with GMRF

    NASA Astrophysics Data System (ADS)

    Marcotte, Denis; Allard, Denis

    2018-02-01

    Gibbs sampling is routinely used to sample truncated Gaussian distributions. These distributions naturally occur when associating latent Gaussian fields to category fields obtained by discrete simulation methods like multipoint, sequential indicator simulation and object-based simulation. The latent Gaussians are often used in data assimilation and history matching algorithms. When Gibbs sampling is applied on a large lattice, the computing cost can become prohibitive. The usual practice of using local neighborhoods is unsatisfactory, as it can diverge and it does not reproduce exactly the desired covariance. A better approach is to use Gaussian Markov Random Fields (GMRF), which make it possible to compute the conditional distributions at any point without having to compute and invert the full covariance matrix. As the GMRF is locally defined, it allows simultaneous updating of all points that do not share neighbors (coding sets). We propose a new simultaneous Gibbs updating strategy on coding sets that can be efficiently computed by convolution and applied with an acceptance/rejection method in the truncated case. We study empirically the speed of convergence and the effects of the choice of boundary conditions, the correlation range and the GMRF smoothness. We show that the convergence is slower in the Gaussian case on the torus than for the finite case studied in the literature. However, in the truncated Gaussian case, we show that short-scale correlation is quickly restored and the conditioning categories at each lattice point imprint the long-scale correlation. Hence our approach makes it practical to apply Gibbs sampling on large 2D or 3D lattices with the desired GMRF covariance.
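
    The simultaneous update on coding sets can be sketched for the simplest case: a first-order conditional autoregression on a periodic lattice with a red/black (checkerboard) colouring. The conditional mean and variance used here are assumptions for illustration, and the truncated case described in the paper would add an acceptance/rejection step on top of this Gaussian update:

```python
import numpy as np

def checkerboard_gibbs_sweep(x, beta=0.95, sigma=1.0, rng=None):
    """One Gibbs sweep over a 2D lattice GMRF using red/black coding sets.
    Sites of the same colour share no neighbours (4-neighbour topology),
    so each colour can be updated simultaneously. Periodic boundaries."""
    rng = np.random.default_rng() if rng is None else rng
    ny, nx = x.shape
    colour = np.add.outer(np.arange(ny), np.arange(nx)) % 2
    for c in (0, 1):
        mask = colour == c
        # neighbour average computed by shifts (a convolution with a cross stencil)
        nbr_mean = 0.25 * (np.roll(x, 1, 0) + np.roll(x, -1, 0) +
                           np.roll(x, 1, 1) + np.roll(x, -1, 1))
        cond_mean = beta * nbr_mean
        x[mask] = cond_mean[mask] + sigma * rng.standard_normal(mask.sum())
    return x

x = np.zeros((256, 256))
for _ in range(100):
    x = checkerboard_gibbs_sweep(x)
print("lattice sample std:", x.std())
```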

  9. Minimum-complexity helicopter simulation math model

    NASA Technical Reports Server (NTRS)

    Heffley, Robert K.; Mnich, Marc A.

    1988-01-01

    An example of a minimum-complexity helicopter simulation math model is presented. Motivating factors are the computational delays, cost, and inflexibility of the very sophisticated math models now in common use. A helicopter model form is given which addresses each of these factors and provides better engineering understanding of the specific handling-qualities features which are apparent to the simulator pilot. The technical approach begins with specification of the features which are to be modeled, followed by a build-up of individual vehicle components and definition of equations. Model matching and estimation procedures are given which enable the modeling of specific helicopters from basic data sources such as flight manuals. Checkout procedures are given which provide for total model validation. A number of possible model extensions and refinements are discussed. Math model computer programs are defined and listed.

  10. Advanced Methodology for Simulation of Complex Flows Using Structured Grid Systems

    NASA Technical Reports Server (NTRS)

    Steinthorsson, Erlendur; Modiano, David

    1995-01-01

    Detailed simulations of viscous flows in complicated geometries pose a significant challenge to current capabilities of Computational Fluid Dynamics (CFD). To enable routine application of CFD to this class of problems, advanced methodologies are required that employ (a) automated grid generation, (b) adaptivity, (c) accurate discretizations and efficient solvers, and (d) advanced software techniques. Each of these ingredients contributes to increased accuracy, efficiency (in terms of human effort and computer time), and/or reliability of CFD software. In the long run, methodologies employing structured grid systems will remain a viable choice for routine simulation of flows in complex geometries only if genuinely automatic grid generation techniques for structured grids can be developed and if adaptivity is employed more routinely. More research in both these areas is urgently needed.

  11. Feasibility study for a numerical aerodynamic simulation facility. Volume 3: FMP language specification/user manual

    NASA Technical Reports Server (NTRS)

    Kenner, B. G.; Lincoln, N. R.

    1979-01-01

    The manual is intended to show the revisions and additions to the current STAR FORTRAN. The changes are made to incorporate an FMP (Flow Model Processor) for use in the Numerical Aerodynamic Simulation Facility (NASF) for the purpose of simulating fluid flow over three-dimensional bodies in wind tunnel environments and in free space. The FORTRAN programming language for the STAR-100 computer contains both CDC and unique STAR extensions to the standard FORTRAN. Several of the STAR FORTRAN extensions to standard FORTRAN allow the FORTRAN user to exploit the vector processing capabilities of the STAR computer. In STAR FORTRAN, vectors can be expressed with an explicit notation, functions are provided that return vector results, and special call statements enable access to any machine instruction.

  12. Technically Speaking: Why Should You Use Virtual Grower?

    USDA-ARS?s Scientific Manuscript database

    Virtual Grower is a free, easy-to-use software program that every grower who heats their greenhouse should install on their computer. The program enables growers to simulate their own greenhouse and predict how changes or investments could impact the growing environment, heating costs, and crop res...

  13. Mars Science Laboratory Workstation Test Set

    NASA Technical Reports Server (NTRS)

    Henriquez, David A.; Canham, Timothy K.; Chang, Johnny T.; Villaume, Nathaniel

    2009-01-01

    The Mars Science Laboratory-developed Workstation Test Set (WSTS) is a computer program that enables flight software development on virtual MSL avionics. The WSTS is a non-real-time flight avionics simulator that is designed to be completely software-based and to run on a workstation-class Linux PC.

  14. Simulation Based Exploration of Critical Zone Dynamics in Intensively Managed Landscapes

    NASA Astrophysics Data System (ADS)

    Kumar, P.

    2017-12-01

    The advent of high-resolution measurements of topographic and (vertical) vegetation features using aerial LiDAR is enabling us to resolve micro-scale (~1 m) landscape structural characteristics over large areas. The availability of hyperspectral measurements is further augmenting these LiDAR data by enabling the biogeochemical characterization of vegetation and soils at unprecedented spatial resolutions (~1-10 m). Such data have opened up novel opportunities for modeling Critical Zone processes and exploring questions that were not possible before. We show how an integrated 3-D model at 1 m grid resolution can enable us to resolve micro-topographic and ecological dynamics and their control on hydrologic and biogeochemical processes over large areas. We address the computational challenge of such detailed modeling by exploiting hybrid CPU and GPU computing technologies. We show results of moisture, biogeochemical, and vegetation dynamics from studies in the Critical Zone Observatory for Intensively Managed Landscapes (IMLCZO) in the Midwestern United States.

  15. Terascale direct numerical simulations of turbulent combustion using S3D

    NASA Astrophysics Data System (ADS)

    Chen, J. H.; Choudhary, A.; de Supinski, B.; DeVries, M.; Hawkes, E. R.; Klasky, S.; Liao, W. K.; Ma, K. L.; Mellor-Crummey, J.; Podhorszki, N.; Sankaran, R.; Shende, S.; Yoo, C. S.

    2009-01-01

    Computational science is paramount to the understanding of underlying processes in internal combustion engines of the future that will utilize non-petroleum-based alternative fuels, including carbon-neutral biofuels, and burn in new combustion regimes that will attain high efficiency while minimizing emissions of particulates and nitrogen oxides. Next-generation engines will likely operate at higher pressures, with greater amounts of dilution, and utilize alternative fuels that exhibit a wide range of chemical and physical properties. Therefore, there is a significant role for high-fidelity simulations, direct numerical simulations (DNS), specifically designed to capture key turbulence-chemistry interactions in these relatively uncharted combustion regimes and, in particular, to discriminate the effects of differences in fuel properties. In DNS, all of the relevant turbulence and flame scales are resolved numerically using high-order accurate numerical algorithms. As a consequence, terascale DNS are computationally intensive, require massive amounts of computing power and generate tens of terabytes of data. Recent results from terascale DNS of turbulent flames are presented here, illustrating their role in elucidating flame stabilization mechanisms in a lifted turbulent hydrogen/air jet flame in a hot air coflow, and the flame structure of a fuel-lean turbulent premixed jet flame. Computing at this scale requires close collaborations between computer and combustion scientists to provide optimized scalable algorithms and software for terascale simulations, efficient collective parallel I/O, tools for volume visualization of multiscale, multivariate data, and automation of the combustion workflow. The enabling computer science, applied here to combustion science, is also required in many other terascale physics and engineering simulations. In particular, performance monitoring is used to identify the performance of key kernels in the DNS code, S3D, and especially memory-intensive loops in the code. Through the careful application of loop transformations, data reuse in cache is exploited, thereby reducing memory bandwidth needs and hence improving S3D's nodal performance. To enhance collective parallel I/O in S3D, an MPI-I/O caching design is used to construct a two-stage write-behind method for improving the performance of write-only operations. The simulations generate tens of terabytes of data requiring analysis. Interactive exploration of the simulation data is enabled by multivariate time-varying volume visualization. The visualization highlights spatial and temporal correlations between multiple reactive scalar fields using an intuitive user interface based on parallel coordinates and time histograms. Finally, an automated combustion workflow is designed using Kepler to manage large-scale data movement, data morphing, and archiving, and to provide a graphical display of run-time diagnostics.

  16. The COPD Knowledge Base: enabling data analysis and computational simulation in translational COPD research.

    PubMed

    Cano, Isaac; Tényi, Ákos; Schueller, Christine; Wolff, Martin; Huertas Migueláñez, M Mercedes; Gomez-Cabrero, David; Antczak, Philipp; Roca, Josep; Cascante, Marta; Falciani, Francesco; Maier, Dieter

    2014-11-28

    Previously we generated a chronic obstructive pulmonary disease (COPD) specific knowledge base (http://www.copdknowledgebase.eu) from clinical and experimental data, text-mining results and public databases. This knowledge base allowed the retrieval of specific molecular networks together with integrated clinical and experimental data. The COPDKB has now been extended to integrate over 40 public data sources on functional interaction (e.g. signal transduction, transcriptional regulation, protein-protein interaction, gene-disease association). In addition we integrated COPD-specific expression and co-morbidity networks connecting over 6,000 genes/proteins with physiological parameters and disease states. Three mathematical models describing different aspects of systemic effects of COPD were connected to clinical and experimental data. We have completely redesigned the technical architecture of the user interface and now provide HTML and web browser-based access and form-based searches. A network search enables the use of interconnecting information and the generation of disease-specific sub-networks from general knowledge. Integration with the Synergy-COPD Simulation Environment enables multi-scale integrated simulation of individual computational models, while integration with a Clinical Decision Support System allows delivery into clinical practice. The COPD Knowledge Base is the only publicly available knowledge resource dedicated to COPD that combines genetic information with molecular, physiological and clinical data as well as mathematical modelling. Its integrated analysis functions provide overviews of clinical trends and connections, while its semantically mapped content enables complex analysis approaches. We plan to further extend the COPDKB by offering it as a repository to publish and semantically integrate data from relevant clinical trials. The COPDKB is freely available after registration at http://www.copdknowledgebase.eu.

  17. Simulating smokers' acceptance of modifications in a cessation program.

    PubMed Central

    Spoth, R

    1992-01-01

    Recent research has underscored the importance of assessing barriers to smokers' acceptance of cessation programs. This paper illustrates the use of computer simulations to gauge smokers' response to program modifications which may produce barriers to participation. It also highlights methodological issues encountered in conducting this work. Computer simulations were based on conjoint analysis, a consumer research method which enables measurement of smokers' relative preference for various modifications of cessation programs. Results from two studies are presented in this paper. The primary study used a randomly selected sample of 218 adult smokers who participated in a computer-assisted phone interview. Initially, the study assessed smokers' relative utility ratings of 30 features of cessation programs. Utility data were used in computer-simulated comparisons of a low-cost, self-help oriented program under development and five other existing programs. A baseline version of the program under development and two modifications (for example, use of a support group with a higher level of cost) were simulated. Both the baseline version and the modifications received a favorable response vis-à-vis comparison programs. Modifications requiring higher program costs were, however, associated with moderately reduced levels of favorable consumer response. The second study used a sample of 70 smokers who responded to an expanded set of smoking cessation program features focusing on program packaging. This secondary study incorporated in-person, computer-assisted interviews at a shopping mall, with smokers viewing an artist's mock-up of various program options on display. A similar pattern of responses to simulated program modifications emerged, with monetary cost apparently playing a key role. The significance of conjoint-based computer simulation as a tool in program development or dissemination, salient methodological issues, and implications for further research are discussed. PMID:1738813

  18. Simulating smokers' acceptance of modifications in a cessation program.

    PubMed

    Spoth, R

    1992-01-01

    Recent research has underscored the importance of assessing barriers to smokers' acceptance of cessation programs. This paper illustrates the use of computer simulations to gauge smokers' response to program modifications which may produce barriers to participation. It also highlights methodological issues encountered in conducting this work. Computer simulations were based on conjoint analysis, a consumer research method which enables measurement of smokers' relative preference for various modifications of cessation programs. Results from two studies are presented in this paper. The primary study used a randomly selected sample of 218 adult smokers who participated in a computer-assisted phone interview. Initially, the study assessed smokers' relative utility ratings of 30 features of cessation programs. Utility data were used in computer-simulated comparisons of a low-cost, self-help oriented program under development and five other existing programs. A baseline version of the program under development and two modifications (for example, use of a support group with a higher level of cost) were simulated. Both the baseline version and the modifications received a favorable response vis-à-vis comparison programs. Modifications requiring higher program costs were, however, associated with moderately reduced levels of favorable consumer response. The second study used a sample of 70 smokers who responded to an expanded set of smoking cessation program features focusing on program packaging. This secondary study incorporated in-person, computer-assisted interviews at a shopping mall, with smokers viewing an artist's mock-up of various program options on display. A similar pattern of responses to simulated program modifications emerged, with monetary cost apparently playing a key role. The significance of conjoint-based computer simulation as a tool in program development or dissemination, salient methodological issues, and implications for further research are discussed.

  19. NASA Center for Climate Simulation (NCCS) Advanced Technology AT5 Virtualized Infiniband Report

    NASA Technical Reports Server (NTRS)

    Thompson, John H.; Bledsoe, Benjamin C.; Wagner, Mark; Shakshober, John; Fromkin, Russ

    2013-01-01

    The NCCS is part of the Computational and Information Sciences and Technology Office (CISTO) of Goddard Space Flight Center's (GSFC) Sciences and Exploration Directorate. The NCCS's mission is to enable scientists to increase their understanding of the Earth, the solar system, and the universe by supplying state-of-the-art high performance computing (HPC) solutions. To accomplish this mission, the NCCS (https://www.nccs.nasa.gov) provides high performance compute engines, mass storage, and network solutions to meet the specialized needs of the Earth and space science user communities.

  20. Solving probability reasoning based on DNA strand displacement and probability modules.

    PubMed

    Zhang, Qiang; Wang, Xiaobiao; Wang, Xiaojun; Zhou, Changjun

    2017-12-01

    In computational biology, DNA strand displacement technology is used to simulate the computation process and has shown strong computing ability. Most researchers use it to solve logic problems, but it is only rarely used in probabilistic reasoning. To perform probabilistic reasoning, a conditional probability derivation model and a total probability model based on DNA strand displacement were established in this paper. The models were assessed through the game "read your mind." They have been shown to enable the application of probabilistic reasoning in genetic diagnosis. Copyright © 2017 Elsevier Ltd. All rights reserved.
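
    For reference, the two reasoning tasks that such strand-displacement circuits implement correspond to the following conventional calculations; the probabilities below are illustrative and are not taken from the paper:

```python
# Total probability and conditional probability (Bayes' rule) over a partition
p_a = {"A1": 0.5, "A2": 0.3, "A3": 0.2}          # prior P(A_i)
p_b_given_a = {"A1": 0.9, "A2": 0.4, "A3": 0.1}  # likelihood P(B | A_i)

# Total probability: P(B) = sum_i P(B | A_i) * P(A_i)
p_b = sum(p_b_given_a[a] * p_a[a] for a in p_a)

# Conditional probability derivation: P(A_i | B) = P(B | A_i) * P(A_i) / P(B)
p_a_given_b = {a: p_b_given_a[a] * p_a[a] / p_b for a in p_a}

print(f"P(B) = {p_b:.3f}")
for a, p in p_a_given_b.items():
    print(f"P({a} | B) = {p:.3f}")
```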

  1. Improved Interactive Medical-Imaging System

    NASA Technical Reports Server (NTRS)

    Ross, Muriel D.; Twombly, Ian A.; Senger, Steven

    2003-01-01

    An improved computational-simulation system for interactive medical imaging has been invented. The system displays high-resolution, three-dimensional-appearing images of anatomical objects based on data acquired by such techniques as computed tomography (CT) and magnetic-resonance imaging (MRI). The system enables users to manipulate the data to obtain a variety of views, for example, to display cross sections in specified planes or to rotate images about specified axes. Relative to prior such systems, this system offers enhanced capabilities for synthesizing images of surgical cuts and for collaboration by users at multiple, remote computing sites.

  2. An Integrated Development Environment for Adiabatic Quantum Programming

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Humble, Travis S; McCaskey, Alex; Bennink, Ryan S

    2014-01-01

    Adiabatic quantum computing is a promising route to the computational power afforded by quantum information processing. The recent availability of adiabatic hardware raises the question of how well quantum programs perform. Benchmarking behavior is challenging since the multiple steps to synthesize an adiabatic quantum program are highly tunable. We present an adiabatic quantum programming environment called JADE that provides control over all the steps taken during program development. JADE captures the workflow needed to rigorously benchmark performance while also allowing a variety of problem types, programming techniques, and processor configurations. We have also integrated JADE with a quantum simulation engine that enables program profiling using numerical calculation. The computational engine supports plug-ins for simulation methodologies tailored to various metrics and computing resources. We present the design, integration, and deployment of JADE and discuss its use for benchmarking adiabatic quantum programs.
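
    The core object being benchmarked, the adiabatic interpolation between a driver and a problem Hamiltonian, can be sketched numerically for a toy two-qubit instance. This is not JADE's interface, and the problem Hamiltonian below is an assumed example:

```python
import numpy as np

# Adiabatic schedule: H(s) = (1 - s) * H_driver + s * H_problem
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
i2 = np.eye(2)

# Transverse-field driver: -(sigma_x on qubit 1 + sigma_x on qubit 2)
h_driver = -(np.kron(sx, i2) + np.kron(i2, sx))
# Diagonal "problem" Hamiltonian encoding an assumed cost function
h_problem = np.diag([0.0, 2.0, 2.0, -1.0])

gaps = []
for s in np.linspace(0.0, 1.0, 101):
    h = (1.0 - s) * h_driver + s * h_problem
    evals = np.linalg.eigvalsh(h)       # sorted eigenvalues
    gaps.append(evals[1] - evals[0])    # gap between ground and first excited state

print(f"minimum spectral gap along the schedule: {min(gaps):.4f}")
```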

  3. Rapid Technology Assessment via Unified Deployment of Global Optical and Virtual Diagnostics

    NASA Technical Reports Server (NTRS)

    Jordan, Jeffrey D.; Watkins, A. Neal; Fleming, Gary A.; Leighty, Bradley D.; Schwartz, Richard J.; Ingram, JoAnne L.; Grinstead, Keith D., Jr.; Oglesby, Donald M.; Tyler, Charles

    2003-01-01

    This paper discusses recent developments in rapid technology assessment resulting from an active collaboration between researchers at the Air Force Research Laboratory (AFRL) at Wright Patterson Air Force Base (WPAFB) and the NASA Langley Research Center (LaRC). This program targets the unified development and deployment of global measurement technologies coupled with a virtual diagnostic interface to enable the comparative evaluation of experimental and computational results. Continuing efforts focus on the development of seamless data translation methods to enable integration of data sets of disparate file formats in a common platform. Results from a successful low-speed wind tunnel test at WPAFB in which global surface pressure distributions were acquired simultaneously with model deformation and geometry measurements are discussed and comparatively evaluated with numerical simulations. Intensity- and lifetime-based pressure-sensitive paint (PSP) and projection moire interferometry (PMI) results are presented within the context of rapid technology assessment to enable simulation-based R&D.

  4. A framework for WRF to WRF-IBM grid nesting to enable multiscale simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wiersema, David John; Lundquist, Katherine A.; Chow, Fotini Katapodes

    With advances in computational power, mesoscale models, such as the Weather Research and Forecasting (WRF) model, are often pushed to higher resolutions. As the model’s horizontal resolution is refined, the maximum resolved terrain slope will increase. Because WRF uses a terrain-following coordinate, this increase in resolved terrain slopes introduces additional grid skewness. At high resolutions and over complex terrain, this grid skewness can introduce large numerical errors that require methods, such as the immersed boundary method, to keep the model accurate and stable. Our implementation of the immersed boundary method in the WRF model, WRF-IBM, has proven effective at microscale simulations over complex terrain. WRF-IBM uses a non-conforming grid that extends beneath the model’s terrain. Boundary conditions at the immersed boundary, the terrain, are enforced by introducing a body force term to the governing equations at points directly beneath the immersed boundary. Nesting between a WRF parent grid and a WRF-IBM child grid requires a new framework for initialization and forcing of the child WRF-IBM grid. This framework will enable concurrent multi-scale simulations within the WRF model, improving the accuracy of high-resolution simulations and enabling simulations across a wide range of scales.

  5. Real-Time Agent-Based Modeling Simulation with in-situ Visualization of Complex Biological Systems: A Case Study on Vocal Fold Inflammation and Healing.

    PubMed

    Seekhao, Nuttiiya; Shung, Caroline; JaJa, Joseph; Mongeau, Luc; Li-Jessen, Nicole Y K

    2016-05-01

    We present an efficient and scalable scheme for implementing agent-based modeling (ABM) simulation with in situ visualization of large complex systems on heterogeneous computing platforms. The scheme is designed to make optimal use of the resources available on a heterogeneous platform consisting of a multicore CPU and a GPU, resulting in minimal to no resource idle time. Furthermore, the scheme was implemented under a client-server paradigm that enables remote users to visualize and analyze simulation data as it is being generated at each time step of the model. Performance of a simulation case study of vocal fold inflammation and wound healing with 3.8 million agents shows 35× and 7× speedups in execution time over single-core and multi-core CPU implementations, respectively. Each iteration of the model took less than 200 ms to simulate, visualize and send the results to the client. This enables users to monitor the simulation in real time and modify its course as needed.

  6. Schnek: A C++ library for the development of parallel simulation codes on regular grids

    NASA Astrophysics Data System (ADS)

    Schmitz, Holger

    2018-05-01

    A large number of algorithms across the field of computational physics are formulated on grids with a regular topology. We present Schnek, a library that enables fast development of parallel simulations on regular grids. Schnek contains a number of easy-to-use modules that greatly reduce the amount of administrative code for large-scale simulation codes. The library provides an interface for reading simulation setup files with a hierarchical structure. The structure of the setup file is translated into a hierarchy of simulation modules that the developer can specify. The reader parses and evaluates mathematical expressions and initialises variables or grid data. This enables developers to write modular and flexible simulation codes with minimal effort. Regular grids of arbitrary dimension are defined as well as mechanisms for defining physical domain sizes, grid staggering, and ghost cells on these grids. Ghost cells can be exchanged between neighbouring processes using MPI with a simple interface. The grid data can easily be written into HDF5 files using serial or parallel I/O.
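
    The ghost-cell exchange idea can be sketched outside the library, for example with mpi4py on a 1D domain decomposition; this illustrates the concept only and does not use Schnek's C++ interface:

```python
# Each rank owns an interior block plus one ghost cell on each side,
# filled from its neighbours.  Run with e.g.: mpiexec -n 4 python ghost.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
left = rank - 1 if rank > 0 else MPI.PROC_NULL
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

n_local = 8
u = np.full(n_local + 2, float(rank))   # layout: [ghost | interior ... | ghost]

# Send my rightmost interior cell right, receive my left ghost from the left
comm.Sendrecv(sendbuf=u[-2:-1], dest=right, recvbuf=u[0:1], source=left)
# Send my leftmost interior cell left, receive my right ghost from the right
comm.Sendrecv(sendbuf=u[1:2], dest=left, recvbuf=u[-1:], source=right)

print(f"rank {rank}: left ghost = {u[0]}, right ghost = {u[-1]}")
```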

  7. Simulation of dilute polymeric fluids in a three-dimensional contraction using a multiscale FENE model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Griebel, M., E-mail: griebel@ins.uni-bonn.de; Rüttgers, A., E-mail: ruettgers@ins.uni-bonn.de

    The multiscale FENE model is applied to a 3D square-square contraction flow problem. For this purpose, the stochastic Brownian configuration field method (BCF) has been coupled with our fully parallelized three-dimensional Navier-Stokes solver NaSt3DGPF. The robustness of the BCF method enables the numerical simulation of high Deborah number flows for which most macroscopic methods suffer from stability issues. The results of our simulations are compared with experimental measurements from the literature and show very good agreement. In particular, flow phenomena such as a strong vortex enhancement, streamline divergence and a flow inversion for highly elastic flows are reproduced. Due to their computational complexity, our simulations require massively parallel computations. Using a domain decomposition approach with MPI, the implementation achieves excellent scale-up results for up to 128 processors.
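
    A minimal sketch of the microscopic side of such a multiscale FENE computation is an Euler-Maruyama update of an ensemble of dumbbell configuration fields in a prescribed homogeneous flow. The non-dimensionalization, parameters, and the crude guard near maximum extension below are assumptions; production BCF solvers use more careful (e.g. semi-implicit) schemes:

```python
import numpy as np

def fene_force(q, b=50.0):
    """FENE connector force f(Q) Q with f(Q) = 1 / (1 - |Q|^2 / b),
    where b is the squared maximum spring extension."""
    q2 = np.sum(q * q, axis=-1, keepdims=True)
    return q / (1.0 - q2 / b)

def bcf_step(q, kappa, dt, wi, b=50.0, rng=None):
    """One Euler-Maruyama step for an ensemble of FENE dumbbell configuration
    fields in a homogeneous velocity gradient kappa (Weissenberg number wi)."""
    rng = np.random.default_rng() if rng is None else rng
    drift = q @ kappa.T - fene_force(q, b) / (2.0 * wi)
    q_new = q + dt * drift + np.sqrt(dt / wi) * rng.standard_normal(q.shape)
    # Crude guard against overshooting the maximum extension (illustrative only)
    bad = np.sum(q_new * q_new, axis=-1) >= b
    q_new[bad] = q[bad]
    return q_new

# Simple shear flow and an ensemble of 10,000 configuration fields
kappa = np.array([[0.0, 1.0, 0.0],
                  [0.0, 0.0, 0.0],
                  [0.0, 0.0, 0.0]])
rng = np.random.default_rng(1)
q = 0.1 * rng.standard_normal((10_000, 3))
for _ in range(2000):
    q = bcf_step(q, kappa, dt=1e-3, wi=2.0, rng=rng)

# Polymer contribution to the stress ~ ensemble average of Q (outer) F(Q)
tau = np.mean(q[:, :, None] * fene_force(q)[:, None, :], axis=0)
print("shear component tau_xy ~", tau[0, 1])
```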

  8. COMPUTATIONAL MITRAL VALVE EVALUATION AND POTENTIAL CLINICAL APPLICATIONS

    PubMed Central

    Chandran, Krishnan B.; Kim, Hyunggun

    2014-01-01

    The mitral valve (MV) apparatus consists of the two asymmetric leaflets, the saddle-shaped annulus, the chordae tendineae, and the papillary muscles. MV function over the cardiac cycle involves complex interaction between the MV apparatus components for efficient blood circulation. Common diseases of the MV include valvular stenosis, regurgitation, and prolapse. MV repair is the most popular and most reliable surgical treatment for early MV pathology. One of the unsolved problems in MV repair is to predict the optimal repair strategy for each patient. Although experimental studies have provided valuable information to improve repair techniques, computational simulations are increasingly playing an important role in understanding the complex MV dynamics, particularly with the availability of patient-specific real-time imaging modalities. This work presents a review of computational simulation studies of MV function employing finite element (FE) structural analysis and fluid-structure interaction (FSI) approaches reported in the literature to date. More recent studies towards potential applications of computational simulation approaches in the assessment of valvular repair techniques and potential pre-surgical planning of repair strategies are also discussed. It is anticipated that further advancements in computational techniques combined with the next generations of clinical imaging modalities will enable physiologically more realistic simulations. Such advancement in imaging and computation will allow for patient-specific, disease-specific, and case-specific MV evaluation and virtual prediction of MV repair. PMID:25134487

  9. Analyzing Spacecraft Telecommunication Systems

    NASA Technical Reports Server (NTRS)

    Kordon, Mark; Hanks, David; Gladden, Roy; Wood, Eric

    2004-01-01

    Multi-Mission Telecom Analysis Tool (MMTAT) is a C-language computer program for analyzing proposed spacecraft telecommunication systems. MMTAT utilizes parameterized input and computational models that can be run on standard desktop computers to perform fast and accurate analyses of telecommunication links. MMTAT is easy to use and can easily be integrated with other software applications and run as part of almost any computational simulation. It is distributed as either a stand-alone application program with a graphical user interface or a linkable library with a well-defined set of application programming interface (API) calls. As a stand-alone program, MMTAT provides both textual and graphical output. The graphs make it possible to understand, quickly and easily, how telecommunication performance varies with variations in input parameters. A delimited text file that can be read by any spreadsheet program is generated at the end of each run. The API in the linkable-library form of MMTAT enables the user to control simulation software and to change parameters during a simulation run. Results can be retrieved either at the end of a run or by use of a function call at any time step.
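
    A much-simplified flavour of the link analysis such a tool performs is a free-space link budget: received power from transmit power, antenna gains, and path loss, compared against a receiver sensitivity. The formula and all numbers below are generic assumptions, not MMTAT's parameterized models:

```python
import math

def free_space_path_loss_db(distance_m, freq_hz):
    """Free-space path loss in dB between isotropic antennas."""
    c = 299_792_458.0
    return 20.0 * math.log10(4.0 * math.pi * distance_m * freq_hz / c)

def link_margin_db(p_tx_dbm, g_tx_dbi, g_rx_dbi, distance_m, freq_hz,
                   sensitivity_dbm, misc_losses_db=0.0):
    """Received power minus receiver sensitivity for a simple link budget."""
    p_rx = (p_tx_dbm + g_tx_dbi + g_rx_dbi
            - free_space_path_loss_db(distance_m, freq_hz) - misc_losses_db)
    return p_rx - sensitivity_dbm

# Assumed numbers for a notional X-band deep-space downlink (illustrative only)
margin = link_margin_db(p_tx_dbm=43.0, g_tx_dbi=48.0, g_rx_dbi=74.0,
                        distance_m=2.0e11, freq_hz=8.4e9,
                        sensitivity_dbm=-170.0, misc_losses_db=3.0)
print(f"link margin: {margin:.1f} dB")
```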

  10. Computational Modeling of Ultrafast Pulse Propagation in Nonlinear Optical Materials

    NASA Technical Reports Server (NTRS)

    Goorjian, Peter M.; Agrawal, Govind P.; Kwak, Dochan (Technical Monitor)

    1996-01-01

    There is an emerging technology of photonic (or optoelectronic) integrated circuits (PICs or OEICs). In PICs, optical and electronic components are grown together on the same chip. To build such devices and subsystems, one needs to model the entire chip. Accurate computer modeling of electromagnetic wave propagation in semiconductors is necessary for the successful development of PICs. More specifically, these computer codes would enable the modeling of such devices, including their subsystems, such as semiconductor lasers and semiconductor amplifiers in which there is femtosecond pulse propagation. Here, the computer simulations are made by solving the full vector, nonlinear Maxwell's equations, coupled with the semiconductor Bloch equations, without any approximations. The carrier wave is retained in the description of the optical pulse (i.e., the envelope approximation is not made in Maxwell's equations), and the rotating wave approximation is not made in the Bloch equations. These coupled equations are solved to simulate the propagation of femtosecond optical pulses in semiconductor materials. The simulations describe the dynamics of the optical pulses as well as the interband and intraband carrier dynamics.

  11. TopoDrive and ParticleFlow--Two Computer Models for Simulation and Visualization of Ground-Water Flow and Transport of Fluid Particles in Two Dimensions

    USGS Publications Warehouse

    Hsieh, Paul A.

    2001-01-01

    This report serves as a user's guide for two computer models: TopoDrive and ParticleFlow. These two-dimensional models are designed to simulate two ground-water processes: topography-driven flow and advective transport of fluid particles. To simulate topography-driven flow, the user may specify the shape of the water table, which bounds the top of the vertical flow section. To simulate transport of fluid particles, the model domain is a rectangle with overall flow from left to right. In both cases, the flow is under steady state, and the distribution of hydraulic conductivity may be specified by the user. The models compute hydraulic head, ground-water flow paths, and the movement of fluid particles. An interactive visual interface enables the user to easily and quickly explore model behavior, and thereby better understand ground-water flow processes. In this regard, TopoDrive and ParticleFlow are not intended to be comprehensive modeling tools, but are designed for modeling at the exploratory or conceptual level, for visual demonstration, and for educational purposes.
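
    The advective particle transport these models visualize can be sketched by stepping particles through a steady velocity field; the analytic field below is an assumed stand-in for the head-derived velocities the programs actually compute:

```python
import numpy as np

def velocity(x, y):
    """Assumed analytic steady velocity field: overall left-to-right flow
    with vertical shear and a weak vertical component (illustration only)."""
    u = 1.0 + 0.5 * np.sin(np.pi * y)   # horizontal component
    v = 0.1 * np.cos(np.pi * x)         # vertical component
    return u, v

def advect(particles, dt=0.01, n_steps=500):
    """Advective transport of fluid particles by explicit Euler stepping."""
    path = [particles.copy()]
    for _ in range(n_steps):
        u, v = velocity(particles[:, 0], particles[:, 1])
        particles[:, 0] += dt * u
        particles[:, 1] += dt * v
        path.append(particles.copy())
    return np.array(path)

# Five particles released along the left edge of a unit-height section
start = np.column_stack([np.zeros(5), np.linspace(0.1, 0.9, 5)])
paths = advect(start)
print("final particle positions:\n", paths[-1])
```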

  12. Opportunities for Breakthroughs in Large-Scale Computational Simulation and Design

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia; Alter, Stephen J.; Atkins, Harold L.; Bey, Kim S.; Bibb, Karen L.; Biedron, Robert T.; Carpenter, Mark H.; Cheatwood, F. McNeil; Drummond, Philip J.; Gnoffo, Peter A.

    2002-01-01

    Opportunities for breakthroughs in the large-scale computational simulation and design of aerospace vehicles are presented. Computational fluid dynamics tools to be used within multidisciplinary analysis and design methods are emphasized. The opportunities stem from speedups and robustness improvements in the underlying unit operations associated with simulation (geometry modeling, grid generation, physical modeling, analysis, etc.). Further, an improved programming environment can synergistically integrate these unit operations to leverage the gains. The speedups result from reducing the problem setup time through geometry modeling and grid generation operations, and reducing the solution time through the operation counts associated with solving the discretized equations to a sufficient accuracy. The opportunities are addressed only at a general level here, but an extensive list of references containing further details is included. The opportunities discussed are being addressed through the Fast Adaptive Aerospace Tools (FAAST) element of the Advanced Systems Concept to Test (ASCoT) and the third Generation Reusable Launch Vehicles (RLV) projects at NASA Langley Research Center. The overall goal is to enable greater inroads into the design process with large-scale simulations.

  13. Acid-Base Disorders--A Computer Simulation.

    ERIC Educational Resources Information Center

    Maude, David L.

    1985-01-01

    Describes and lists a program for Apple Pascal Version 1.1 which investigates the behavior of the bicarbonate-carbon dioxide buffer system in acid-base disorders. Designed specifically for the preclinical medical student, the program has proven easy to use and enables students to use blood gas parameters to arrive at diagnoses. (DH)
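
    The bicarbonate-carbon dioxide buffer behaviour such a program explores is governed by the Henderson-Hasselbalch relation; a minimal sketch (in Python rather than Apple Pascal, with illustrative blood-gas values rather than values from the original program) is:

```python
import math

def blood_ph(hco3_meq_l, pco2_mmhg):
    """Henderson-Hasselbalch relation for the bicarbonate/CO2 buffer:
    pH = 6.1 + log10([HCO3-] / (0.03 * pCO2))."""
    return 6.1 + math.log10(hco3_meq_l / (0.03 * pco2_mmhg))

# Illustrative blood-gas values (HCO3- in mEq/L, pCO2 in mmHg)
cases = {
    "normal":               (24.0, 40.0),
    "metabolic acidosis":   (12.0, 30.0),
    "respiratory acidosis": (26.0, 60.0),
}
for name, (hco3, pco2) in cases.items():
    print(f"{name:22s} pH = {blood_ph(hco3, pco2):.2f}")
```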

  14. A Computer Program for the Generation of ARIMA Data

    ERIC Educational Resources Information Center

    Green, Samuel B.; Noles, Keith O.

    1977-01-01

    The autoregressive integrated moving average (ARIMA) model has been applied to time series data in psychological and educational research. A program is described that generates ARIMA data of a known order. The program enables researchers to explore statistical properties of ARIMA data and simulate systems producing time dependent observations.…
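
    Generating ARIMA data of a known order can be sketched directly: simulate an ARMA(p, q) recursion for the differenced series and integrate it d times. The orders and coefficients below are assumed examples, not those used by the described program:

```python
import numpy as np

def simulate_arima(n, ar=(), ma=(), d=0, sigma=1.0, seed=0):
    """Generate a series from an ARIMA(p, d, q) model of known order:
    an ARMA(p, q) process in the differences, integrated d times."""
    rng = np.random.default_rng(seed)
    p, q = len(ar), len(ma)
    burn = 200                              # discard start-up transient
    e = sigma * rng.standard_normal(n + burn)
    x = np.zeros(n + burn)
    for t in range(max(p, q), n + burn):
        x[t] = (e[t]
                + sum(ar[i] * x[t - 1 - i] for i in range(p))
                + sum(ma[j] * e[t - 1 - j] for j in range(q)))
    x = x[burn:]
    for _ in range(d):                      # integrate d times
        x = np.cumsum(x)
    return x

series = simulate_arima(500, ar=(0.7,), ma=(0.4,), d=1)
print(series[:5])
```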

  15. Techniques of Australian forest planning

    Treesearch

    Australian Forestry Council

    1978-01-01

    Computer modeling has been extensively adopted for Australian forest planning over the last ten years. It has been confined almost entirely to the plantations of fast-growing species for which adequate inventory, growth, and experimental data are available. Stand simulation models have replaced conventional yield tables and enabled a wide range of alternative...

  16. Cyber Strategic Inquiry: Enabling Change through a Strategic Simulation and Megacommunity Concept

    DTIC Science & Technology

    2009-02-01

    malicious software embedded in thumb drives and CDs that thwarted protections, such as antivirus software, on computers. In the scenario, these... Executives for National Security • The Carlyle Group • Cassat Corporation • Cisco Systems, Inc. • Cyveillance • General Dynamics • General Motors

  17. Numerical Propulsion System Simulation: An Overview

    NASA Technical Reports Server (NTRS)

    Lytle, John K.

    2000-01-01

    The cost of implementing new technology in aerospace propulsion systems is becoming prohibitively expensive and time consuming. One of the main contributors to the high cost and lengthy time is the need to perform many large-scale hardware tests and the inability to integrate all appropriate subsystems early in the design process. The NASA Glenn Research Center is developing the technologies required to enable simulations of full aerospace propulsion systems in sufficient detail to resolve critical design issues early in the design process, before hardware is built. This concept, called the Numerical Propulsion System Simulation (NPSS), is focused on the integration of multiple disciplines such as aerodynamics, structures and heat transfer with computing and communication technologies to capture complex physical processes in a timely and cost-effective manner. The vision for NPSS, as illustrated, is to be a "numerical test cell" that enables full engine simulation overnight on cost-effective computing platforms. There are several key elements within NPSS that are required to achieve this capability: 1) clear data interfaces through the development and/or use of data exchange standards, 2) modular and flexible program construction through the use of object-oriented programming, 3) integrated multiple-fidelity analysis (zooming) techniques that capture the appropriate physics at the appropriate fidelity for the engine systems, 4) multidisciplinary coupling techniques and finally 5) high performance parallel and distributed computing. The current state of development in these five areas focuses on air-breathing gas turbine engines and is reported in this paper. However, many of the technologies are generic and can be readily applied to rocket-based systems and combined cycles currently being considered for low-cost access-to-space applications. Recent accomplishments include: (1) the development of an industry-standard engine cycle analysis program and plug 'n play architecture, called NPSS Version 1; (2) a full engine simulation that combines a 3D low-pressure subsystem with a 0D high-pressure core simulation, demonstrating the ability to integrate analyses at different levels of detail and to aerodynamically couple components, the fan/booster and low-pressure turbine, through a 3D computational fluid dynamics simulation; (3) simulation of all of the turbomachinery in a modern turbofan engine on a parallel computing platform for rapid and cost-effective execution, a capability that can also be used to generate a full compressor map, requiring both design and off-design simulations; (4) three levels of coupling that characterize the multidisciplinary analysis under NPSS: loosely coupled, process coupled and tightly coupled, where the loosely coupled and process coupled approaches require a common geometry definition to link CAD to analysis tools, and the tightly coupled approach is currently validating the use of an arbitrary Lagrangian/Eulerian formulation for rotating turbomachinery, with validation covering both centrifugal and axial compression systems and results reported in the paper; and (5) the demonstration of significant computing cost/performance reduction for turbine engine applications using PC clusters. The NPSS Project is supported under the NASA High Performance Computing and Communications Program.

  18. An interactive physics-based unmanned ground vehicle simulator leveraging open source gaming technology: progress in the development and application of the virtual autonomous navigation environment (VANE) desktop

    NASA Astrophysics Data System (ADS)

    Rohde, Mitchell M.; Crawford, Justin; Toschlog, Matthew; Iagnemma, Karl D.; Kewlani, Guarav; Cummins, Christopher L.; Jones, Randolph A.; Horner, David A.

    2009-05-01

    It is widely recognized that simulation is pivotal to vehicle development, whether manned or unmanned. There are few dedicated choices, however, for those wishing to perform realistic, end-to-end simulations of unmanned ground vehicles (UGVs). The Virtual Autonomous Navigation Environment (VANE), under development by US Army Engineer Research and Development Center (ERDC), provides such capabilities but utilizes a High Performance Computing (HPC) Computational Testbed (CTB) and is not intended for on-line, real-time performance. A product of the VANE HPC research is a real-time desktop simulation application under development by the authors that provides a portal into the HPC environment as well as interaction with wider-scope semi-automated force simulations (e.g. OneSAF). This VANE desktop application, dubbed the Autonomous Navigation Virtual Environment Laboratory (ANVEL), enables analysis and testing of autonomous vehicle dynamics and terrain/obstacle interaction in real-time with the capability to interact within the HPC constructive geo-environmental CTB for high fidelity sensor evaluations. ANVEL leverages rigorous physics-based vehicle and vehicle-terrain interaction models in conjunction with high-quality, multimedia visualization techniques to form an intuitive, accurate engineering tool. The system provides an adaptable and customizable simulation platform that allows developers a controlled, repeatable testbed for advanced simulations. ANVEL leverages several key technologies not common to traditional engineering simulators, including techniques from the commercial video-game industry. These enable ANVEL to run on inexpensive commercial, off-the-shelf (COTS) hardware. In this paper, the authors describe key aspects of ANVEL and its development, as well as several initial applications of the system.

  19. Data management and analysis for the Earth System Grid

    NASA Astrophysics Data System (ADS)

    Williams, D. N.; Ananthakrishnan, R.; Bernholdt, D. E.; Bharathi, S.; Brown, D.; Chen, M.; Chervenak, A. L.; Cinquini, L.; Drach, R.; Foster, I. T.; Fox, P.; Hankin, S.; Henson, V. E.; Jones, P.; Middleton, D. E.; Schwidder, J.; Schweitzer, R.; Schuler, R.; Shoshani, A.; Siebenlist, F.; Sim, A.; Strand, W. G.; Wilhelmi, N.; Su, M.

    2008-07-01

    The international climate community is expected to generate hundreds of petabytes of simulation data within the next five to seven years. This data must be accessed and analyzed by thousands of analysts worldwide in order to provide accurate and timely estimates of the likely impact of climate change on physical, biological, and human systems. Climate change is thus not only a scientific challenge of the first order but also a major technological challenge. In order to address this technological challenge, the Earth System Grid Center for Enabling Technologies (ESG-CET) has been established within the U.S. Department of Energy's Scientific Discovery through Advanced Computing (SciDAC)-2 program, with support from the offices of Advanced Scientific Computing Research and Biological and Environmental Research. ESG-CET's mission is to provide climate researchers worldwide with access to the data, information, models, analysis tools, and computational capabilities required to make sense of enormous climate simulation datasets. Its specific goals are to (1) make data more useful to climate researchers by developing Grid technology that enhances data usability; (2) meet specific distributed database, data access, and data movement needs of national and international climate projects; (3) provide a universal and secure web-based data access portal for broad multi-model data collections; and (4) provide a wide-range of Grid-enabled climate data analysis tools and diagnostic methods to international climate centers and U.S. government agencies. Building on the successes of the previous Earth System Grid (ESG) project, which has enabled thousands of researchers to access tens of terabytes of data from a small number of ESG sites, ESG-CET is working to integrate a far larger number of distributed data providers, high-bandwidth wide-area networks, and remote computers in a highly collaborative problem-solving environment.

  20. A new climate modeling framework for convection-resolving simulation at continental scale

    NASA Astrophysics Data System (ADS)

    Charpilloz, Christophe; di Girolamo, Salvatore; Arteaga, Andrea; Fuhrer, Oliver; Hoefler, Torsten; Schulthess, Thomas; Schär, Christoph

    2017-04-01

    Major uncertainties remain in our understanding of the processes that govern the water cycle in a changing climate and their representation in weather and climate models. Of particular concern are heavy precipitation events of convective origin (thunderstorms and rain showers). The aim of the crCLIM project [1] is to propose a new climate modeling framework that alleviates the I/O bottleneck in large-scale, convection-resolving climate simulations and thus to enable new analysis techniques for climate scientists. Due to the large computational costs, convection-resolving simulations are currently restricted to small computational domains or very short time scales, unless the largest available supercomputer systems, such as hybrid CPU-GPU architectures, are used [3]. Hence, the COSMO model has been adapted to run on these architectures for research and production purposes [2]. However, the amount of generated data also increases, and storing this data becomes infeasible, making the analysis of simulation results impractical. To circumvent this problem and enable high-resolution models in climate science, we propose a data-virtualization layer (DVL) that re-runs simulations on demand and transparently manages the data for the analysis; that is, we trade off computational effort (time) for storage (space). This approach also requires a bit-reproducible version of the COSMO model that produces identical results on different architectures (CPUs and GPUs) [4], which will be coupled with a performance model in order to enable optimal re-runs depending on the requirements of the re-run and the available resources. In this contribution, we discuss the strategy to develop the DVL, a first performance model, the challenge of bit-reproducibility, and the first results of the crCLIM project. [1] http://www.c2sm.ethz.ch/research/crCLIM.html [2] O. Fuhrer, C. Osuna, X. Lapillonne, T. Gysi, M. Bianco, and T. Schulthess. "Towards gpu-accelerated operational weather forecasting." In The GPU Technology Conference, GTC. 2013. [3] D. Leutwyler, O. Fuhrer, X. Lapillonne, D. Lüthi, and C. Schär. "Towards European-scale convection-resolving climate simulations with GPUs: a study with COSMO 4.19." Geoscientific Model Development 9, no. 9 (2016): 3393. [4] A. Arteaga, O. Fuhrer, and T. Hoefler. "Designing bit-reproducible portable high-performance applications." In Parallel and Distributed Processing Symposium, 2014 IEEE 28th International, pp. 1235-1244. IEEE, 2014.

  1. Long-range interactions and parallel scalability in molecular simulations

    NASA Astrophysics Data System (ADS)

    Patra, Michael; Hyvönen, Marja T.; Falck, Emma; Sabouri-Ghomi, Mohsen; Vattulainen, Ilpo; Karttunen, Mikko

    2007-01-01

    Typical biomolecular systems such as cellular membranes, DNA, and protein complexes are highly charged. Thus, efficient and accurate treatment of electrostatic interactions is of great importance in computational modeling of such systems. We have employed the GROMACS simulation package to perform extensive benchmarking of different commonly used electrostatic schemes on a range of computer architectures (Pentium-4, IBM Power 4, and Apple/IBM G5) for single-processor and parallel performance up to 8 nodes. We have also tested the scalability on four different networks, namely Infiniband, GigaBit Ethernet, Fast Ethernet, and a nearly uniform memory architecture, i.e., one in which communication between CPUs is possible by directly reading from or writing to other CPUs' local memory. It turns out that the particle-mesh Ewald method (PME) performs surprisingly well and offers competitive performance unless parallel runs on PC hardware with older network infrastructure are needed. Lipid bilayers of sizes 128, 512 and 2048 lipid molecules were used as the test systems representing typical cases encountered in biomolecular simulations. Our results enable an accurate prediction of computational speed on most current computing systems, both for serial and parallel runs. These results should be helpful in, for example, choosing the most suitable configuration for a small departmental computer cluster.

  2. Delivering Insight The History of the Accelerated Strategic Computing Initiative

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Larzelere II, A R

    2007-01-03

    The history of the Accelerated Strategic Computing Initiative (ASCI) tells of the development of computational simulation into a third fundamental piece of the scientific method, on a par with theory and experiment. ASCI did not invent the idea, nor was it alone in bringing it to fruition. But ASCI provided the wherewithal - hardware, software, environment, funding, and, most of all, the urgency - that made it happen. On October 1, 2005, the Initiative completed its tenth year of funding. The advances made by ASCI over its first decade are truly incredible. Lawrence Livermore, Los Alamos, and Sandia National Laboratories, along with leadership provided by the Department of Energy's Defense Programs Headquarters, fundamentally changed computational simulation and how it is used to enable scientific insight. To do this, astounding advances were made in simulation applications, computing platforms, and user environments. ASCI dramatically changed existing - and forged new - relationships, both among the Laboratories and with outside partners. By its tenth anniversary, despite daunting challenges, ASCI had accomplished all of the major goals set at its beginning. The history of ASCI is about the vision, leadership, endurance, and partnerships that made these advances possible.

  3. Modeling Materials: Design for Planetary Entry, Electric Aircraft, and Beyond

    NASA Technical Reports Server (NTRS)

    Thompson, Alexander; Lawson, John W.

    2014-01-01

    NASA missions push the limits of what is possible. The development of high-performance materials must keep pace with the agency's demanding, cutting-edge applications. Researchers at NASA's Ames Research Center are performing multiscale computational modeling to accelerate development times and further the design of next-generation aerospace materials. Multiscale modeling combines several computationally intensive techniques ranging from the atomic level to the macroscale, passing output from one level as input to the next level. These methods are applicable to a wide variety of materials systems. For example: (a) Ultra-high-temperature ceramics for hypersonic aircraft -- we utilized the full range of multiscale modeling to characterize thermal protection materials for faster, safer air- and spacecraft; (b) Planetary entry heat shields for space vehicles -- we computed thermal and mechanical properties of ablative composites by combining several methods, from atomistic simulations to macroscale computations; (c) Advanced batteries for electric aircraft -- we performed large-scale molecular dynamics simulations of advanced electrolytes for ultra-high-energy-capacity batteries to enable long-distance electric aircraft service; and (d) Shape-memory alloys for high-efficiency aircraft -- we used high-fidelity electronic structure calculations to determine phase diagrams in shape-memory transformations. Advances in high-performance computing have been critical to the development of multiscale materials modeling. We used nearly one million processor hours on NASA's Pleiades supercomputer to characterize electrolytes with a fidelity that would be otherwise impossible. For this and other projects, Pleiades enables us to push the physics and accuracy of our calculations to new levels.

  4. 3D printing in chemical engineering and catalytic technology: structured catalysts, mixers and reactors.

    PubMed

    Parra-Cabrera, Cesar; Achille, Clement; Kuhn, Simon; Ameloot, Rob

    2018-01-02

    Computer-aided fabrication technologies combined with simulation and data processing approaches are changing our way of manufacturing and designing functional objects. Also in the field of catalytic technology and chemical engineering the impact of additive manufacturing, also referred to as 3D printing, is steadily increasing thanks to a rapidly decreasing equipment threshold. Although still in an early stage, the rapid and seamless transition between digital data and physical objects enabled by these fabrication tools will benefit both research and manufacture of reactors and structured catalysts. Additive manufacturing closes the gap between theory and experiment, by enabling accurate fabrication of geometries optimized through computational fluid dynamics and the experimental evaluation of their properties. This review highlights the research using 3D printing and computational modeling as digital tools for the design and fabrication of reactors and structured catalysts. The goal of this contribution is to stimulate interactions at the crossroads of chemistry and materials science on the one hand and digital fabrication and computational modeling on the other.

  5. Higher Order Time Integration Schemes for the Unsteady Navier-Stokes Equations on Unstructured Meshes

    NASA Technical Reports Server (NTRS)

    Jothiprasad, Giridhar; Mavriplis, Dimitri J.; Caughey, David A.

    2002-01-01

    The rapid increase in available computational power over the last decade has enabled higher resolution flow simulations and more widespread use of unstructured grid methods for complex geometries. While much of this effort has been focused on steady-state calculations in the aerodynamics community, the need to accurately predict off-design conditions, which may involve substantial amounts of flow separation, points to the need to efficiently simulate unsteady flow fields. Accurate unsteady flow simulations can easily require several orders of magnitude more computational effort than a corresponding steady-state simulation. For this reason, techniques for improving the efficiency of unsteady flow simulations are required in order to make such calculations feasible in the foreseeable future. The purpose of this work is to investigate possible reductions in computer time due to the choice of an efficient time-integration scheme from a series of schemes differing in the order of time-accuracy, and by the use of more efficient techniques to solve the nonlinear equations which arise while using implicit time-integration schemes. This investigation is carried out in the context of a two-dimensional unstructured mesh laminar Navier-Stokes solver.
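
    As a minimal illustration of how the order of time accuracy affects such calculations, the sketch below compares a first-order (BDF1, implicit Euler) and a second-order (BDF2) implicit scheme on a scalar model ODE. It is not the unstructured Navier-Stokes solver of the paper; the model problem, step sizes, and seeding of the two-step scheme are assumptions chosen only to expose the convergence rates.

      import numpy as np

      def bdf1_step(y_n, dt, lam):
          # Implicit Euler: (y_{n+1} - y_n) / dt = lam * y_{n+1}
          return y_n / (1.0 - dt * lam)

      def bdf2_step(y_n, y_nm1, dt, lam):
          # BDF2: (3 y_{n+1} - 4 y_n + y_{n-1}) / (2 dt) = lam * y_{n+1}
          return (4.0 * y_n - y_nm1) / (3.0 - 2.0 * dt * lam)

      def integrate(order, dt, lam=-1.0, t_end=1.0):
          n = int(round(t_end / dt))
          if order == 1:
              y = [1.0]
              for _ in range(n):
                  y.append(bdf1_step(y[-1], dt, lam))
          else:
              y = [1.0, np.exp(lam * dt)]  # seed the two-step scheme with the exact first value
              for _ in range(n - 1):
                  y.append(bdf2_step(y[-1], y[-2], dt, lam))
          return y[-1]

      exact = np.exp(-1.0)
      for dt in (0.1, 0.05, 0.025):
          print(dt, abs(integrate(1, dt) - exact), abs(integrate(2, dt) - exact))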

  6. Runtime visualization of the human arterial tree.

    PubMed

    Insley, Joseph A; Papka, Michael E; Dong, Suchuan; Karniadakis, George; Karonis, Nicholas T

    2007-01-01

    Large-scale simulation codes typically execute for extended periods of time and often on distributed computational resources. Because these simulations can run for hours, or even days, scientists like to get feedback about the state of the computation and the validity of its results as it runs. It is also important that these capabilities be made available with little impact on the performance and stability of the simulation. Visualizing and exploring data in the early stages of the simulation can help scientists identify problems early, potentially avoiding a situation where a simulation runs for several days, only to discover that an error with an input parameter caused both time and resources to be wasted. We describe an application that aids in the monitoring and analysis of a simulation of the human arterial tree. The application provides researchers with high-level feedback about the state of the ongoing simulation and enables them to investigate particular areas of interest in greater detail. The application also offers monitoring information about the amount of data produced and data transfer performance among the various components of the application.

  7. The NEST Dry-Run Mode: Efficient Dynamic Analysis of Neuronal Network Simulation Code.

    PubMed

    Kunkel, Susanne; Schenck, Wolfram

    2017-01-01

    NEST is a simulator for spiking neuronal networks that commits to a general purpose approach: It allows for high flexibility in the design of network models, and its applications range from small-scale simulations on laptops to brain-scale simulations on supercomputers. Hence, developers need to test their code for various use cases and ensure that changes to code do not impair scalability. However, running a full set of benchmarks on a supercomputer takes up precious compute-time resources and can entail long queuing times. Here, we present the NEST dry-run mode, which enables comprehensive dynamic code analysis without requiring access to high-performance computing facilities. A dry-run simulation is carried out by a single process, which performs all simulation steps except communication as if it was part of a parallel environment with many processes. We show that measurements of memory usage and runtime of neuronal network simulations closely match the corresponding dry-run data. Furthermore, we demonstrate the successful application of the dry-run mode in the areas of profiling and performance modeling.
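
    The dry-run idea can be pictured as a communication stub: a single process keeps the data structures and loop counts of one rank in a large parallel job while fabricating the traffic it would have received. The sketch below is a generic illustration of that pattern, not NEST's actual implementation; the class, its interface, and the synthesized spike counts are assumptions.

      import numpy as np

      class DryRunComm:
          # Stand-in for an MPI communicator: one process pretends to be a single
          # rank of a much larger job and synthesizes the remote contributions.
          def __init__(self, rank, total_ranks, seed=0):
              self.rank, self.total_ranks = rank, total_ranks
              self.rng = np.random.default_rng(seed)

          def allgather(self, local_spikes):
              # Buffer sizes and loop counts now match a large job, so memory use
              # and per-rank runtime can be profiled without a supercomputer.
              fake_remote = [self.rng.poisson(lam=1.0, size=len(local_spikes))
                             for _ in range(self.total_ranks - 1)]
              return [np.asarray(local_spikes)] + fake_remote

      comm = DryRunComm(rank=0, total_ranks=1024)
      buffers = comm.allgather(local_spikes=np.arange(100))
      print(len(buffers), "per-rank buffers handled by a single process")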

  8. The NEST Dry-Run Mode: Efficient Dynamic Analysis of Neuronal Network Simulation Code

    PubMed Central

    Kunkel, Susanne; Schenck, Wolfram

    2017-01-01

    NEST is a simulator for spiking neuronal networks that commits to a general purpose approach: It allows for high flexibility in the design of network models, and its applications range from small-scale simulations on laptops to brain-scale simulations on supercomputers. Hence, developers need to test their code for various use cases and ensure that changes to code do not impair scalability. However, running a full set of benchmarks on a supercomputer takes up precious compute-time resources and can entail long queuing times. Here, we present the NEST dry-run mode, which enables comprehensive dynamic code analysis without requiring access to high-performance computing facilities. A dry-run simulation is carried out by a single process, which performs all simulation steps except communication as if it was part of a parallel environment with many processes. We show that measurements of memory usage and runtime of neuronal network simulations closely match the corresponding dry-run data. Furthermore, we demonstrate the successful application of the dry-run mode in the areas of profiling and performance modeling. PMID:28701946

  9. Freud: a software suite for high-throughput simulation analysis

    NASA Astrophysics Data System (ADS)

    Harper, Eric; Spellings, Matthew; Anderson, Joshua; Glotzer, Sharon

    Computer simulation is an indispensable tool for the study of a wide variety of systems. As simulations scale to fill petascale and exascale supercomputing clusters, so too does the size of the data produced, as well as the difficulty in analyzing these data. We present Freud, an analysis software suite for efficient analysis of simulation data. Freud makes no assumptions about the system being analyzed, allowing for general analysis methods to be applied to nearly any type of simulation. Freud includes standard analysis methods such as the radial distribution function, as well as new methods including the potential of mean force and torque and local crystal environment analysis. Freud combines a Python interface with fast, parallel C++ analysis routines to run efficiently on laptops, workstations, and supercomputing clusters. Data analysis on clusters reduces data transfer requirements, a prohibitive cost for petascale computing. Used in conjunction with simulation software, Freud allows for smart simulations that adapt to the current state of the system, enabling the study of phenomena such as nucleation and growth, intelligent investigation of phases and phase transitions, and determination of effective pair potentials.
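
    To make this kind of analysis concrete, the sketch below computes a radial distribution function with plain NumPy for a small periodic system. It is only a naive O(N^2) reference implementation; it does not reproduce Freud's own API or its parallel C++ back end, and the box size and particle count are arbitrary assumptions.

      import numpy as np

      def radial_distribution_function(points, box_length, r_max, n_bins=100):
          # Naive O(N^2) RDF for points in a cubic periodic box; Freud provides an
          # optimized, parallel version of this kind of analysis.
          n = len(points)
          edges = np.linspace(0.0, r_max, n_bins + 1)
          counts = np.zeros(n_bins)
          for i in range(n - 1):
              d = points[i + 1:] - points[i]
              d -= box_length * np.round(d / box_length)     # minimum-image convention
              r = np.linalg.norm(d, axis=1)
              counts += np.histogram(r[r < r_max], bins=edges)[0]
          density = n / box_length ** 3
          shell_volumes = 4.0 / 3.0 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
          ideal_pairs = density * shell_volumes * n / 2.0    # expected pair counts for an ideal gas
          centers = 0.5 * (edges[1:] + edges[:-1])
          return centers, counts / ideal_pairs

      points = np.random.default_rng(1).uniform(0.0, 10.0, size=(500, 3))
      r, g = radial_distribution_function(points, box_length=10.0, r_max=4.0)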

  10. Community Petascale Project for Accelerator Science and Simulation: Advancing Computational Science for Future Accelerators and Accelerator Technologies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spentzouris, P.; Cary, J.

    The design and performance optimization of particle accelerators are essential for the success of the DOE scientific program in the next decade. Particle accelerators are very complex systems whose accurate description involves a large number of degrees of freedom and requires the inclusion of many physics processes. Building on the success of the SciDAC-1 Accelerator Science and Technology project, the SciDAC-2 Community Petascale Project for Accelerator Science and Simulation (ComPASS) is developing a comprehensive set of interoperable components for beam dynamics, electromagnetics, electron cooling, and laser/plasma acceleration modelling. ComPASS is providing accelerator scientists the tools required to enable the necessary accelerator simulation paradigm shift from high-fidelity single physics process modeling (covered under SciDAC-1) to high-fidelity multiphysics modeling. Our computational frameworks have been used to model the behavior of a large number of accelerators and accelerator R&D experiments, assisting both their design and performance optimization. As parallel computational applications, the ComPASS codes have been shown to make effective use of thousands of processors. ComPASS is in the first year of executing its plan to develop the next-generation HPC accelerator modeling tools. ComPASS aims to develop an integrated simulation environment that will utilize existing and new accelerator physics modules with petascale capabilities, by employing modern computing and solver technologies. The ComPASS vision is to deliver to accelerator scientists a virtual accelerator and virtual prototyping modeling environment, with the necessary multiphysics, multiscale capabilities. The plan for this development includes delivering accelerator modeling applications appropriate for each stage of the ComPASS software evolution. Such applications are already being used to address challenging problems in accelerator design and optimization. The ComPASS organization for software development and applications accounts for the natural domain areas (beam dynamics, electromagnetics, and advanced acceleration), and all areas depend on the enabling technologies activities, such as solvers and component technology, to deliver the desired performance and integrated simulation environment. The ComPASS applications focus on computationally challenging problems important for design or performance optimization to all major HEP, NP, and BES accelerator facilities. With the cost and complexity of particle accelerators rising, the use of computation to optimize their designs and find improved operating regimes becomes essential, potentially leading to significant cost savings with modest investment.

  11. Flexible Inhibitor Fluid-Structure Interaction Simulation in RSRM.

    NASA Astrophysics Data System (ADS)

    Wasistho, Bono

    2005-11-01

    We employ our tightly coupled fluid/structure/combustion simulation code 'Rocstar-3' for solid propellant rocket motors to study 3D flows past rigid and flexible inhibitors in the Reusable Solid Rocket Motor (RSRM). We perform high resolution simulations of a section of the rocket near the center joint slot at 100 seconds after ignition, using inflow conditions based on less detailed 3D simulations of the full RSRM. Our simulations include both inviscid and turbulent flows (using LES dynamic subgrid-scale model), and explore the interaction between the inhibitor and the resulting fluid flow. The response of the solid components is computed by an implicit finite element solver. The internal mesh motion scheme in our block-structured fluid solver enables our code to handle significant changes in geometry. We compute turbulent statistics and determine the compound instabilities originating from the natural hydrodynamic instabilities and the inhibitor motion. The ultimate goal is to study the effect of inhibitor flexing on the turbulent field.

  12. Closed Loop Interactions between Spiking Neural Network and Robotic Simulators Based on MUSIC and ROS.

    PubMed

    Weidel, Philipp; Djurfeldt, Mikael; Duarte, Renato C; Morrison, Abigail

    2016-01-01

    In order to properly assess the function and computational properties of simulated neural systems, it is necessary to account for the nature of the stimuli that drive the system. However, providing stimuli that are rich and yet both reproducible and amenable to experimental manipulations is technically challenging, and even more so if a closed-loop scenario is required. In this work, we present a novel approach to solve this problem, connecting robotics and neural network simulators. We implement a middleware solution that bridges the Robotic Operating System (ROS) to the Multi-Simulator Coordinator (MUSIC). This enables any robotic and neural simulators that implement the corresponding interfaces to be efficiently coupled, allowing real-time performance for a wide range of configurations. This work extends the toolset available for researchers in both neurorobotics and computational neuroscience, and creates the opportunity to perform closed-loop experiments of arbitrary complexity to address questions in multiple areas, including embodiment, agency, and reinforcement learning.

  13. Closed Loop Interactions between Spiking Neural Network and Robotic Simulators Based on MUSIC and ROS

    PubMed Central

    Weidel, Philipp; Djurfeldt, Mikael; Duarte, Renato C.; Morrison, Abigail

    2016-01-01

    In order to properly assess the function and computational properties of simulated neural systems, it is necessary to account for the nature of the stimuli that drive the system. However, providing stimuli that are rich and yet both reproducible and amenable to experimental manipulations is technically challenging, and even more so if a closed-loop scenario is required. In this work, we present a novel approach to solve this problem, connecting robotics and neural network simulators. We implement a middleware solution that bridges the Robotic Operating System (ROS) to the Multi-Simulator Coordinator (MUSIC). This enables any robotic and neural simulators that implement the corresponding interfaces to be efficiently coupled, allowing real-time performance for a wide range of configurations. This work extends the toolset available for researchers in both neurorobotics and computational neuroscience, and creates the opportunity to perform closed-loop experiments of arbitrary complexity to address questions in multiple areas, including embodiment, agency, and reinforcement learning. PMID:27536234

  14. Development of an Efficient CFD Model for Nuclear Thermal Thrust Chamber Assembly Design

    NASA Technical Reports Server (NTRS)

    Cheng, Gary; Ito, Yasushi; Ross, Doug; Chen, Yen-Sen; Wang, Ten-See

    2007-01-01

    The objective of this effort is to develop an efficient and accurate computational methodology to predict both detailed thermo-fluid environments and global characteristics of the internal ballistics for a hypothetical solid-core nuclear thermal thrust chamber assembly (NTTCA). Several numerical and multi-physics thermo-fluid models, such as real fluid, chemically reacting, turbulence, conjugate heat transfer, porosity, and power generation, were incorporated into an unstructured-grid, pressure-based computational fluid dynamics solver as the underlying computational methodology. The numerical simulations of the detailed thermo-fluid environment of a single flow element provide a mechanism to estimate the thermal stress and possible occurrence of mid-section corrosion of the solid core. In addition, the numerical results of the detailed simulation were employed to fine-tune the porosity model to mimic the pressure drop and thermal load of the coolant flow through a single flow element. The use of the tuned porosity model enables an efficient simulation of the entire NTTCA system and evaluation of its performance during the design cycle.

  15. The ReaxFF reactive force-field: Development, applications, and future directions

    DOE PAGES

    Senftle, Thomas; Hong, Sungwook; Islam, Md Mahbubul; ...

    2016-03-04

    The reactive force-field (ReaxFF) interatomic potential is a powerful computational tool for exploring, developing and optimizing material properties. Methods based on the principles of quantum mechanics (QM), while offering valuable theoretical guidance at the electronic level, are often too computationally intense for simulations that consider the full dynamic evolution of a system. Alternatively, empirical interatomic potentials that are based on classical principles require significantly fewer computational resources, which enables simulations to better describe dynamic processes over longer timeframes and on larger scales. Such methods, however, typically require a predefined connectivity between atoms, precluding simulations that involve reactive events. The ReaxFF method was developed to help bridge this gap. Approaching the gap from the classical side, ReaxFF casts the empirical interatomic potential within a bond-order formalism, thus implicitly describing chemical bonding without expensive QM calculations. This article provides an overview of the development, application, and future directions of the ReaxFF method.

  16. Programs for Testing Processor-in-Memory Computing Systems

    NASA Technical Reports Server (NTRS)

    Katz, Daniel S.

    2006-01-01

    The Multithreaded Microbenchmarks for Processor-In-Memory (PIM) Compilers, Simulators, and Hardware are computer programs arranged in a series for use in testing the performances of PIM computing systems, including compilers, simulators, and hardware. The programs at the beginning of the series test basic functionality; the programs at subsequent positions in the series test increasingly complex functionality. The programs are intended to be used while designing a PIM system, and can be used to verify that compilers, simulators, and hardware work correctly. The programs can also be used to enable designers of these system components to examine tradeoffs in implementation. Finally, these programs can be run on non-PIM hardware (either single-threaded or multithreaded) using the POSIX pthreads standard to verify that the benchmarks themselves operate correctly. [POSIX (Portable Operating System Interface for UNIX) is a set of standards that define how programs and operating systems interact with each other. pthreads is a library of pre-emptive thread routines that comply with one of the POSIX standards.]

  17. A Computational Framework for Bioimaging Simulation.

    PubMed

    Watabe, Masaki; Arjunan, Satya N V; Fukushima, Seiya; Iwamoto, Kazunari; Kozuka, Jun; Matsuoka, Satomi; Shindo, Yuki; Ueda, Masahiro; Takahashi, Koichi

    2015-01-01

    Using bioimaging technology, biologists have attempted to identify and document analytical interpretations that underlie biological phenomena in biological cells. Theoretical biology aims at distilling those interpretations into knowledge in the mathematical form of biochemical reaction networks and understanding how higher level functions emerge from the combined action of biomolecules. However, there still remain formidable challenges in bridging the gap between bioimaging and mathematical modeling. Generally, measurements using fluorescence microscopy systems are influenced by systematic effects that arise from the stochastic nature of biological cells, the imaging apparatus, and optical physics. Such systematic effects are always present in all bioimaging systems and hinder quantitative comparison between the cell model and bioimages. Computational tools for such a comparison are still unavailable. Thus, in this work, we present a computational framework for handling the parameters of the cell models and the optical physics governing bioimaging systems. Simulation using this framework can generate digital images of cell simulation results after accounting for the systematic effects. We then demonstrate that such a framework enables comparison at the level of photon-counting units.
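
    A toy forward model illustrates the idea of layering systematic effects onto ground-truth simulation output before comparing with experimental images. The point-spread-function width, photon budget, and camera parameters below are illustrative assumptions and do not reproduce the authors' framework.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def simulate_image(positions, shape=(128, 128), psf_sigma_px=2.0,
                         photons_per_molecule=500, background=10, camera_offset=100, seed=0):
          # Toy forward model: ideal molecule map -> PSF blur -> Poisson shot noise
          # -> constant camera offset. Real frameworks model optics and detectors
          # in far greater detail.
          rng = np.random.default_rng(seed)
          ideal = np.zeros(shape)
          for x, y in positions:
              ideal[int(y), int(x)] += photons_per_molecule
          blurred = gaussian_filter(ideal, psf_sigma_px) + background
          return rng.poisson(blurred) + camera_offset

      molecules = np.random.default_rng(1).uniform(0, 128, size=(50, 2))
      image = simulate_image(molecules)
      print(image.shape, image.max())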

  18. FACE-IT. A Science Gateway for Food Security Research

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Montella, Raffaele; Kelly, David; Xiong, Wei

    Progress in sustainability science is hindered by challenges in creating and managing complex data acquisition, processing, simulation, post-processing, and intercomparison pipelines. To address these challenges, we developed the Framework to Advance Climate, Economic, and Impact Investigations with Information Technology (FACE-IT) for crop and climate impact assessments. This integrated data processing and simulation framework enables data ingest from geospatial archives; data regridding, aggregation, and other processing prior to simulation; large-scale climate impact simulations with agricultural and other models, leveraging high-performance and cloud computing; and post-processing to produce aggregated yields and ensemble variables needed for statistics, for model intercomparison, and to connect biophysical models to global and regional economic models. FACE-IT leverages the capabilities of the Globus Galaxies platform to enable the capture of workflows and outputs in well-defined, reusable, and comparable forms. We describe FACE-IT and applications within the Agricultural Model Intercomparison and Improvement Project and the Center for Robust Decision-making on Climate and Energy Policy.

  19. Terascale Cluster for Advanced Turbulent Combustion Simulations

    DTIC Science & Technology

    2008-07-25

    ... the system. We have given the name CATS (for Combustion And Turbulence Simulator) to the terascale system that was obtained through this grant. CATS ... InfiniBand interconnect. CATS includes an interactive login node and a file server, each holding in excess of 1 terabyte of file storage. The 35 active ... compute nodes of CATS enable us to run up to 140-core parallel MPI batch jobs; one node is reserved to run the scheduler. CATS is operated and ...

  20. Partnership For Edge Physics Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parashar, Manish

    In this effort, we will extend our prior work as part of CPES (i.e., DART and DataSpaces) to support in-situ tight coupling between application codes that exploits data locality and core-level parallelism to maximize on-chip data exchange and reuse. This will be accomplished by mapping coupled simulations so that the data exchanges are more localized within the nodes. Coupled simulation workflows can more effectively utilize the resources available on emerging HEC platforms if they can be mapped and executed to exploit data locality as well as the communication patterns between application components. Scheduling and running such workflows requires an extended framework that should provide a unified hybrid abstraction to enable coordination and data sharing across computation tasks that run on the heterogeneous multi-core-based systems, and develop a data-locality-based dynamic task scheduling approach to increase on-chip or intra-node data exchanges and in-situ execution. This effort will extend our prior work as part of CPES (i.e., DART and DataSpaces), which provided a simple virtual shared-space abstraction hosted at the staging nodes, to support application coordination, data sharing and active data processing services. Moreover, it will transparently manage the low-level operations associated with the inter-application data exchange, such as data redistributions, and will enable running coupled simulation workflows on multi-core computing platforms.

  1. Variance decomposition in stochastic simulators.

    PubMed

    Le Maître, O P; Knio, O M; Moraes, A

    2015-06-28

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.
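
    The core idea of giving each reaction channel its own driving noise can be sketched with a first-reaction-method Gillespie simulation in which every channel draws from an independent random stream, so one channel's randomness can be frozen while the others are resampled. The sketch below uses a birth-death model as in the paper, but the exact Poisson-process reformulation and the Sobol-Hoeffding variance estimators are not reproduced, and the rate constants are arbitrary.

      import numpy as np

      def birth_death_path(seeds, k_birth=1.0, k_death=0.1, x0=0, t_end=50.0):
          # Gillespie simulation of 0 -> X (rate k_birth) and X -> 0 (rate k_death*X),
          # with an independent random stream per reaction channel so the stochasticity
          # of each channel can be held fixed or resampled separately.
          rngs = [np.random.default_rng(s) for s in seeds]   # one stream per channel
          t, x = 0.0, x0
          while True:
              propensities = np.array([k_birth, k_death * x])
              waits = [rngs[i].exponential(1.0 / propensities[i]) if propensities[i] > 0 else np.inf
                       for i in range(2)]
              i = int(np.argmin(waits))
              if t + waits[i] > t_end:
                  return x
              t += waits[i]
              x += 1 if i == 0 else -1

      # Vary only the death channel's randomness while the birth channel is held fixed:
      samples = [birth_death_path(seeds=(42, s)) for s in range(200)]
      print(np.var(samples))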

  2. MiRTE: Mixed Reality Triage and Evacuation game for Mass Casualty information systems design, testing and training.

    PubMed

    Yu, Xunyi; Ganz, Aura

    2011-01-01

    In this paper we introduce a Mixed Reality Triage and Evacuation game, MiRTE, which is used in the development, testing and training of Mass Casualty Incident (MCI) information systems for first responders. Using the Source game engine from Valve software, MiRTE creates immersive virtual environments to simulate various incident scenarios, and enables interactions between multiple players/first responders. What distinguishes it from a pure computer simulation game is that it can interface with external mass casualty incident management systems, such as DIORAMA. The game will enable system developers to specify technical requirements of the underlying technology, and to test different design alternatives. After the information system hardware and software are completed, the game can simulate various algorithms such as localization technologies, and interface with an actual user interface on PCs and smartphones. We implemented and tested the game with the DIORAMA system.

  3. Variance decomposition in stochastic simulators

    NASA Astrophysics Data System (ADS)

    Le Maître, O. P.; Knio, O. M.; Moraes, A.

    2015-06-01

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.

  4. A framework for optimization and quantification of uncertainty and sensitivity for developing carbon capture systems

    DOE PAGES

    Eslick, John C.; Ng, Brenda; Gao, Qianwen; ...

    2014-12-31

    Under the auspices of the U.S. Department of Energy’s Carbon Capture Simulation Initiative (CCSI), a Framework for Optimization and Quantification of Uncertainty and Sensitivity (FOQUS) has been developed. This tool enables carbon capture systems to be rapidly synthesized and rigorously optimized, in an environment that accounts for and propagates uncertainties in parameters and models. FOQUS currently enables (1) the development of surrogate algebraic models utilizing the ALAMO algorithm, which can be used for superstructure optimization to identify optimal process configurations, (2) simulation-based optimization utilizing derivative free optimization (DFO) algorithms with detailed black-box process models, and (3) rigorous uncertainty quantification through PSUADE. FOQUS utilizes another CCSI technology, the Turbine Science Gateway, to manage the thousands of simulated runs necessary for optimization and UQ. Thus, this computational framework has been demonstrated for the design and analysis of a solid sorbent based carbon capture system.
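
    The simulation-based optimization step can be pictured as handing a black-box process simulation to a derivative-free optimizer. The sketch below uses SciPy's Nelder-Mead method on a made-up two-variable cost surface; the objective, variables, and tolerances are assumptions, and the sketch does not use FOQUS, ALAMO, PSUADE, or the Turbine gateway.

      import numpy as np
      from scipy.optimize import minimize

      def capture_cost(design):
          # Stand-in for an expensive black-box process simulation: returns a scalar
          # objective (e.g., increase in cost of electricity) for a design vector.
          sorbent_flow, regen_temp = design
          return (sorbent_flow - 3.0) ** 2 + 0.5 * (regen_temp - 430.0) ** 2 / 100.0 + 12.0

      result = minimize(capture_cost, x0=np.array([5.0, 400.0]),
                        method="Nelder-Mead",   # derivative-free, suits noisy simulators
                        options={"xatol": 1e-3, "fatol": 1e-3, "maxiter": 500})
      print(result.x, result.fun)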

  5. SU-F-J-178: A Computer Simulation Model Observer for Task-Based Image Quality Assessment in Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dolly, S; Mutic, S; Anastasio, M

    Purpose: Traditionally, image quality in radiation therapy is assessed subjectively or by utilizing physically-based metrics. Some model observers exist for task-based medical image quality assessment, but almost exclusively for diagnostic imaging tasks. As opposed to disease diagnosis, the task for image observers in radiation therapy is to utilize the available images to design and deliver a radiation dose which maximizes patient disease control while minimizing normal tissue damage. The purpose of this study was to design and implement a new computer simulation model observer to enable task-based image quality assessment in radiation therapy. Methods: A modular computer simulation framework was developed to resemble the radiotherapy observer by simulating an end-to-end radiation therapy treatment. Given images and the ground-truth organ boundaries from a numerical phantom as inputs, the framework simulates an external beam radiation therapy treatment and quantifies patient treatment outcomes using the previously defined therapeutic operating characteristic (TOC) curve. As a preliminary demonstration, TOC curves were calculated for various CT acquisition and reconstruction parameters, with the goal of assessing and optimizing simulation CT image quality for radiation therapy. Sources of randomness and bias within the system were analyzed. Results: The relationship between CT imaging dose and patient treatment outcome was objectively quantified in terms of a singular value, the area under the TOC (AUTOC) curve. The AUTOC decreases more rapidly for low-dose imaging protocols. AUTOC variation introduced by the dose optimization algorithm was approximately 0.02%, at the 95% confidence interval. Conclusion: A model observer has been developed and implemented to assess image quality based on radiation therapy treatment efficacy. It enables objective determination of appropriate imaging parameter values (e.g. imaging dose). Framework flexibility allows for incorporation of additional modules to include any aspect of the treatment process, and therefore has great potential for both assessment and optimization within radiation therapy.

  6. QCAD simulation and optimization of semiconductor double quantum dots

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nielsen, Erik; Gao, Xujiao; Kalashnikova, Irina

    2013-12-01

    We present the Quantum Computer Aided Design (QCAD) simulator that targets modeling quantum devices, particularly silicon double quantum dots (DQDs) developed for quantum qubits. The simulator has three differentiating features: (i) its core contains nonlinear Poisson, effective mass Schrödinger, and Configuration Interaction solvers that have massively parallel capability for high simulation throughput, and can be run individually or combined self-consistently for 1D/2D/3D quantum devices; (ii) the core solvers show superior convergence even at near-zero-Kelvin temperatures, which is critical for modeling quantum computing devices; (iii) it couples with the optimization engine Dakota that enables optimization of gate voltages in DQDs for multiple desired targets. The Poisson solver includes Maxwell-Boltzmann and Fermi-Dirac statistics, supports Dirichlet, Neumann, interface charge, and Robin boundary conditions, and includes the effect of dopant incomplete ionization. The solver has shown robust nonlinear convergence even in the milli-Kelvin temperature range, and has been extensively used to quickly obtain the semiclassical electrostatic potential in DQD devices. The self-consistent Schrödinger-Poisson solver has achieved robust and monotonic convergence behavior for 1D/2D/3D quantum devices at very low temperatures by using a predictor-corrector iteration scheme. The QCAD simulator enables the calculation of dot-to-gate capacitances, and comparison with experiment and between solvers. It is observed that computed capacitances are in the right ballpark when compared to experiment, and quantum confinement increases capacitance when the number of electrons is fixed in a quantum dot. In addition, the coupling of QCAD with Dakota makes it possible to rapidly identify which device layouts are more likely to lead to few-electron quantum dots. Very efficient QCAD simulations on a large number of fabricated and proposed Si DQDs have made it possible to provide fast feedback for design comparison and optimization.

  7. Least Squares Shadowing Sensitivity Analysis of Chaotic Flow Around a Two-Dimensional Airfoil

    NASA Technical Reports Server (NTRS)

    Blonigan, Patrick J.; Wang, Qiqi; Nielsen, Eric J.; Diskin, Boris

    2016-01-01

    Gradient-based sensitivity analysis has proven to be an enabling technology for many applications, including design of aerospace vehicles. However, conventional sensitivity analysis methods break down when applied to long-time averages of chaotic systems. This breakdown is a serious limitation because many aerospace applications involve physical phenomena that exhibit chaotic dynamics, most notably high-resolution large-eddy and direct numerical simulations of turbulent aerodynamic flows. A recently proposed methodology, Least Squares Shadowing (LSS), avoids this breakdown and advances the state of the art in sensitivity analysis for chaotic flows. The first application of LSS to a chaotic flow simulated with a large-scale computational fluid dynamics solver is presented. The LSS sensitivity computed for this chaotic flow is verified and shown to be accurate, but the computational cost of the current LSS implementation is high.
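
    The breakdown that motivates LSS is easy to reproduce on a small chaotic system: a finite-difference estimate of the sensitivity of a long-time average fluctuates with the perturbation size instead of converging. The sketch below illustrates this on the Lorenz system; it demonstrates only the problem and does not implement the Least Squares Shadowing method itself, and the integrator, time horizon, and step sizes are assumptions.

      import numpy as np

      def lorenz_average_z(rho, t_end=200.0, dt=0.005, sigma=10.0, beta=8.0 / 3.0):
          # Long-time average of z for the Lorenz system, integrated with forward Euler.
          x, y, z = 1.0, 1.0, 1.0
          zs = []
          for _ in range(int(t_end / dt)):
              dx = sigma * (y - x)
              dy = x * (rho - z) - y
              dz = x * y - beta * z
              x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
              zs.append(z)
          return np.mean(zs[len(zs) // 2:])   # discard the transient

      # Finite-difference sensitivity d<z>/d(rho) drifts with the step size, which is
      # exactly the breakdown that Least Squares Shadowing is designed to avoid.
      for eps in (1.0, 0.1, 0.01):
          s = (lorenz_average_z(28.0 + eps) - lorenz_average_z(28.0 - eps)) / (2.0 * eps)
          print(eps, s)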

  8. Subject-enabled analytics model on measurement statistics in health risk expert system for public health informatics.

    PubMed

    Chung, Chi-Jung; Kuo, Yu-Chen; Hsieh, Yun-Yu; Li, Tsai-Chung; Lin, Cheng-Chieh; Liang, Wen-Miin; Liao, Li-Na; Li, Chia-Ing; Lin, Hsueh-Chun

    2017-11-01

    This study applied open source technology to establish a subject-enabled analytics model that can enhance measurement statistics of case studies with the public health data in cloud computing. The infrastructure of the proposed model comprises three domains: 1) the health measurement data warehouse (HMDW) for the case study repository, 2) the self-developed modules of online health risk information statistics (HRIStat) for cloud computing, and 3) the prototype of a Web-based process automation system in statistics (PASIS) for the health risk assessment of case studies with subject-enabled evaluation. The system design employed freeware including Java applications, MySQL, and R packages to drive a health risk expert system (HRES). In the design, the HRIStat modules enforce the typical analytics methods for biomedical statistics, and the PASIS interfaces enable process automation of the HRES for cloud computing. The Web-based model supports two modes, step-by-step analysis and an automated computing process, for preliminary evaluation and real-time computation, respectively. The proposed model was evaluated by computing prior research studies in relation to the epidemiological measurement of diseases that were caused by either heavy metal exposures in the environment or clinical complications in hospital. The simulation validity was verified against commercial statistics software. The model was installed in a stand-alone computer and in a cloud-server workstation to verify computing performance for a data amount of more than 230K sets. Both setups reached an efficiency of about 10^5 sets per second. The Web-based PASIS interface can be used for cloud computing, and the HRIStat module can be flexibly expanded with advanced subjects for measurement statistics. The analytics procedure of the HRES prototype is capable of providing assessment criteria prior to estimating the potential risk to public health.

  9. Optimization of the molecular dynamics method for simulations of DNA and ion transport through biological nanopores.

    PubMed

    Wells, David B; Bhattacharya, Swati; Carr, Rogan; Maffeo, Christopher; Ho, Anthony; Comer, Jeffrey; Aksimentiev, Aleksei

    2012-01-01

    Molecular dynamics (MD) simulations have become a standard method for the rational design and interpretation of experimental studies of DNA translocation through nanopores. The MD method, however, offers a multitude of algorithms, parameters, and other protocol choices that can affect the accuracy of the resulting data as well as computational efficiency. In this chapter, we examine the most popular choices offered by the MD method, seeking an optimal set of parameters that enable the most computationally efficient and accurate simulations of DNA and ion transport through biological nanopores. In particular, we examine the influence of short-range cutoff, integration timestep and force field parameters on the temperature and concentration dependence of bulk ion conductivity, ion pairing, ion solvation energy, DNA structure, DNA-ion interactions, and the ionic current through a nanopore.
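
    One of the quantities such simulations are tuned to reproduce, the ionic current through the pore, is commonly computed from per-ion displacements along the pore axis. The sketch below shows that bookkeeping on synthetic numbers; the charges, displacements, and box length are illustrative assumptions rather than data from the chapter.

      import numpy as np

      def ionic_current(charges, z_displacements, lz_nm, dt_ns):
          # Ionic current from per-ion displacements along the pore axis:
          # I = (1 / (dt * Lz)) * sum_i q_i * dz_i  (q in e, dz and Lz in nm, dt in ns)
          e = 1.602176634e-19                     # elementary charge, C
          current_e_per_ns = np.sum(charges * z_displacements) / (dt_ns * lz_nm)
          return current_e_per_ns * e / 1e-9      # convert e/ns to amperes

      charges = np.array([1.0, -1.0, 1.0])        # e.g., K+, Cl-, K+
      dz = np.array([0.4, -0.3, 0.1])             # nm moved along z during the interval
      print(ionic_current(charges, dz, lz_nm=10.0, dt_ns=1.0), "A")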

  10. Atmospheric simulation using a liquid crystal wavefront-controlling device

    NASA Astrophysics Data System (ADS)

    Brooks, Matthew R.; Goda, Matthew E.

    2004-10-01

    Test and evaluation of laser warning devices is important due to the increased use of laser devices in aerial applications. This research develops an atmospheric aberrating system that enables in-lab testing of various detectors and sensors. This system employs laser light at 632.8 nm from a Helium-Neon source and a spatial light modulator (SLM) to cause phase changes using a birefringent liquid crystal material. Measuring outgoing radiation from the SLM using a CCD targetboard and Shack-Hartmann wavefront sensor reveals an acceptable resemblance of system output to expected atmospheric theory. Over three turbulence scenarios, an error analysis reveals that the turbulence data match theory. A wave optics computer simulation is created analogous to the lab-bench design. Phase data, intensity data, and the computer simulation affirm the lab-bench results so that the aberrating SLM system can be operated confidently.

  11. Fast Particle Methods for Multiscale Phenomena Simulations

    NASA Technical Reports Server (NTRS)

    Koumoutsakos, P.; Wray, A.; Shariff, K.; Pohorille, Andrew

    2000-01-01

    We are developing particle methods oriented at improving computational modeling capabilities of multiscale physical phenomena in: (i) high Reynolds number unsteady vortical flows, (ii) particle laden and interfacial flows, (iii) molecular dynamics studies of nanoscale droplets and studies of the structure, functions, and evolution of the earliest living cell. The unifying computational approach involves particle methods implemented in parallel computer architectures. The inherent adaptivity, robustness and efficiency of particle methods make them a multidisciplinary computational tool capable of bridging the gap of micro-scale and continuum flow simulations. Using efficient tree data structures, multipole expansion algorithms, and improved particle-grid interpolation, particle methods allow for simulations using millions of computational elements, making possible the resolution of a wide range of length and time scales of these important physical phenomena. The current challenges in these simulations are in: (i) the proper formulation of particle methods in the molecular and continuous level for the discretization of the governing equations, (ii) the resolution of the wide range of time and length scales governing the phenomena under investigation, (iii) the minimization of numerical artifacts that may interfere with the physics of the systems under consideration, and (iv) the parallelization of processes such as tree traversal and grid-particle interpolations. We are conducting simulations using vortex methods, molecular dynamics and smooth particle hydrodynamics, exploiting their unifying concepts such as: the solution of the N-body problem in parallel computers, highly accurate particle-particle and grid-particle interpolations, parallel FFTs and the formulation of processes such as diffusion in the context of particle methods. This approach enables us to transcend among seemingly unrelated areas of research.

  12. The Personal Motion Platform

    NASA Technical Reports Server (NTRS)

    Park, Brian Vandellyn

    1993-01-01

    The Neutral Body Posture experienced in microgravity creates a biomechanical equilibrium by enabling the internal forces within the body to find their own balance. A patented reclining chair based on this posture provides a minimal stress environment for interfacing with computer systems for extended periods. When the chair is mounted on a 3 or 6 axis motion platform, a generic motion simulator for simulated digital environments is created. The Personal Motion Platform provides motional feedback to the occupant in synchronization with their movements inside the digital world, which enhances the simulation experience. Existing HMD-based simulation systems can be integrated into the turnkey system. Future developments are discussed.

  13. Towards a comprehensive framework for cosimulation of dynamic models with an emphasis on time stepping

    NASA Astrophysics Data System (ADS)

    Hoepfer, Matthias

    Over the last two decades, computer modeling and simulation have evolved as the tools of choice for the design and engineering of dynamic systems. With increased system complexities, modeling and simulation become essential enablers for the design of new systems. Some of the advantages that modeling and simulation-based system design allows for are the replacement of physical tests to ensure product performance, reliability and quality, the shortening of design cycles due to the reduced need for physical prototyping, the design for mission scenarios, the invoking of currently nonexistent technologies, and the reduction of technological and financial risks. Traditionally, dynamic systems are modeled in a monolithic way. Such monolithic models include all the data, relations and equations necessary to represent the underlying system. With increased complexity of these models, the monolithic model approach reaches certain limits regarding, for example, model handling and maintenance. Furthermore, while the available computer power has been steadily increasing according to Moore's Law (a doubling in computational power roughly every two years), the ever-increasing complexities of new models have negated the increased resources available. Lastly, modern systems and design processes are interdisciplinary, enforcing the necessity to make models more flexible to be able to incorporate different modeling and design approaches. The solution to bypassing the shortcomings of monolithic models is co-simulation. In a very general sense, co-simulation addresses the issue of linking together different dynamic sub-models to a model which represents the overall, integrated dynamic system. It is therefore an important enabler for the design of interdisciplinary, interconnected, highly complex dynamic systems. While a basic co-simulation setup can be very easy, complications can arise when sub-models display behaviors such as algebraic loops, singularities, or constraints. This work frames the co-simulation approach to modeling and simulation. It lays out the general approach to dynamic system co-simulation, and gives a comprehensive overview of what co-simulation is and what it is not. It creates a taxonomy of the requirements and limits of co-simulation, and the issues arising with co-simulating sub-models. Possible solutions towards resolving the stated problems are investigated to a certain depth. A particular focus is given to the issue of time stepping. It will be shown that for dynamic models, the selection of the simulation time step is a crucial issue with respect to computational expense, simulation accuracy, and error control. The reasons for this are discussed in depth, and a time stepping algorithm for co-simulation with unknown dynamic sub-models is proposed. Motivations and suggestions for the further treatment of selected issues are presented.
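
    A minimal co-simulation master illustrates the time-stepping issue discussed above: two sub-models exchange coupling inputs frozen at the start of a macro step, and the step is rejected and halved when the exchanged outputs change too much. This sketch is a generic Jacobi-type coupling with an assumed sub-model interface (output/step/rollback); it is not the algorithm proposed in the thesis, and the first-order test models and tolerances are arbitrary.

      class FirstOrderSubModel:
          # Minimal sub-model x' = a * (u - x), advanced with implicit Euler;
          # supports rollback so a rejected macro step can be retried.
          def __init__(self, a, x0):
              self.a, self.x, self._saved = a, x0, x0

          def output(self):
              return self.x

          def step(self, t, h, u):
              self._saved = self.x
              self.x = (self.x + h * self.a * u) / (1.0 + h * self.a)
              return self.x

          def rollback(self):
              self.x = self._saved

      def cosim_master(sub_a, sub_b, t_end, h0=0.1, tol=0.05, h_min=1e-4):
          # Jacobi coupling: both sub-models advance using the other's output frozen
          # at the start of the macro step; h adapts to the observed coupling change.
          t, h = 0.0, h0
          ya, yb = sub_a.output(), sub_b.output()
          while t < t_end:
              ya_new = sub_a.step(t, h, u=yb)
              yb_new = sub_b.step(t, h, u=ya)
              change = max(abs(ya_new - ya), abs(yb_new - yb))
              if change > tol and h > h_min:
                  sub_a.rollback(); sub_b.rollback()
                  h = max(h / 2.0, h_min)     # reject and retry with a smaller step
                  continue
              t += h
              ya, yb = ya_new, yb_new
              if change < tol / 4.0:
                  h = min(2.0 * h, h0)        # accept and cautiously enlarge the step
          return ya, yb

      print(cosim_master(FirstOrderSubModel(1.0, 0.0), FirstOrderSubModel(2.0, 1.0), t_end=5.0))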

  14. Construction of Interaction Layer on Socio-Environmental Simulation

    NASA Astrophysics Data System (ADS)

    Torii, Daisuke; Ishida, Toru

    In this study, we propose a method to construct a system based on a legacy socio-environmental simulator that enables the design of more realistic interaction models in socio-environmental simulations. First, to provide a computational model suitable for agent interactions, an interaction layer is constructed and connected from outside of the legacy socio-environmental simulator. Next, to configure the agents' interaction abilities, a connection description for controlling the flow of information in the connection area is provided. As a concrete example, we realized an interaction layer with Q, a scenario description language, and connected it to CORMAS, a socio-environmental simulator. Finally, we discuss the capability of our method, using this system, in the Fire-Fighter domain.

  15. Using Modeling and Simulation to Complement Testing for Increased Understanding of Weapon Subassembly Response.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wong, Michael K.; Davidson, Megan

    As part of Sandia’s nuclear deterrence mission, the B61-12 Life Extension Program (LEP) aims to modernize the aging weapon system. Modernization requires requalification and Sandia is using high performance computing to perform advanced computational simulations to better understand, evaluate, and verify weapon system performance in conjunction with limited physical testing. The Nose Bomb Subassembly (NBSA) of the B61-12 is responsible for producing a fuzing signal upon ground impact. The fuzing signal is dependent upon electromechanical impact sensors producing valid electrical fuzing signals at impact. Computer generated models were used to assess the timing between the impact sensor’s response to the deceleration of impact and damage to major components and system subassemblies. The modeling and simulation team worked alongside the physical test team to design a large-scale reverse ballistic test to not only assess system performance, but to also validate their computational models. The reverse ballistic test conducted at Sandia’s sled test facility sent a rocket sled with a representative target into a stationary B61-12 (NBSA) to characterize the nose crush and functional response of NBSA components. Data obtained from data recorders and high-speed photometrics were integrated with previously generated computer models in order to refine and validate the model’s ability to reliably simulate real-world effects. Large-scale tests are impractical to conduct for every single impact scenario. By creating reliable computer models, we can perform simulations that identify trends and produce estimates of outcomes over the entire range of required impact conditions. Sandia’s HPCs enable geometric resolution that was unachievable before, allowing for more fidelity and detail, and creating simulations that can provide insight to support evaluation of requirements and performance margins. As computing resources continue to improve, researchers at Sandia are hoping to improve these simulations so they provide increasingly credible analysis of the system response and performance over the full range of conditions.

  16. Accurate and general treatment of electrostatic interaction in Hamiltonian adaptive resolution simulations

    NASA Astrophysics Data System (ADS)

    Heidari, M.; Cortes-Huerto, R.; Donadio, D.; Potestio, R.

    2016-10-01

    In adaptive resolution simulations the same system is concurrently modeled with different resolution in different subdomains of the simulation box, thereby enabling an accurate description in a small but relevant region, while the rest is treated with a computationally parsimonious model. In this framework, electrostatic interaction, whose accurate treatment is a crucial aspect in the realistic modeling of soft matter and biological systems, represents a particularly acute problem due to the intrinsic long-range nature of Coulomb potential. In the present work we propose and validate the usage of a short-range modification of Coulomb potential, the Damped shifted force (DSF) model, in the context of the Hamiltonian adaptive resolution simulation (H-AdResS) scheme. This approach, which is here validated on bulk water, ensures a reliable reproduction of the structural and dynamical properties of the liquid, and enables a seamless embedding in the H-AdResS framework. The resulting dual-resolution setup is implemented in the LAMMPS simulation package, and its customized version employed in the present work is made publicly available.
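
    For reference, the damped shifted force pair energy can be written in a few lines. The sketch below follows the commonly used Fennell-Gezelter form of the DSF potential, with the damping parameter, cutoff, and reduced charge units chosen arbitrarily; it is independent of the H-AdResS/LAMMPS implementation described above.

      import math

      def dsf_pair_energy(qi, qj, r, alpha=0.2, r_cut=1.2):
          # Damped shifted force (DSF) Coulomb pair energy (Fennell-Gezelter form,
          # reduced units): the erfc-damped term is shifted so that both the energy
          # and the force go smoothly to zero at the cutoff r_cut.
          if r >= r_cut:
              return 0.0
          erfc_rc = math.erfc(alpha * r_cut)
          force_shift = (erfc_rc / r_cut ** 2
                         + 2.0 * alpha / math.sqrt(math.pi)
                         * math.exp(-(alpha * r_cut) ** 2) / r_cut)
          return qi * qj * (math.erfc(alpha * r) / r
                            - erfc_rc / r_cut
                            + force_shift * (r - r_cut))

      print(dsf_pair_energy(1.0, -1.0, 0.9))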

  17. Multiscale Hy3S: hybrid stochastic simulation for supercomputers.

    PubMed

    Salis, Howard; Sotiropoulos, Vassilios; Kaznessis, Yiannis N

    2006-02-24

    Stochastic simulation has become a useful tool to both study natural biological systems and design new synthetic ones. By capturing the intrinsic molecular fluctuations of "small" systems, these simulations produce a more accurate picture of single cell dynamics, including interesting phenomena missed by deterministic methods, such as noise-induced oscillations and transitions between stable states. However, the computational cost of the original stochastic simulation algorithm can be high, motivating the use of hybrid stochastic methods. Hybrid stochastic methods partition the system into multiple subsets and describe each subset as a different representation, such as a jump Markov, Poisson, continuous Markov, or deterministic process. By applying valid approximations and self-consistently merging disparate descriptions, a method can be considerably faster, while retaining accuracy. In this paper, we describe Hy3S, a collection of multiscale simulation programs. Building on our previous work on developing novel hybrid stochastic algorithms, we have created the Hy3S software package to enable scientists and engineers to both study and design extremely large well-mixed biological systems with many thousands of reactions and chemical species. We have added adaptive stochastic numerical integrators to permit the robust simulation of dynamically stiff biological systems. In addition, Hy3S has many useful features, including embarrassingly parallelized simulations with MPI; special discrete events, such as transcriptional and translation elongation and cell division; mid-simulation perturbations in both the number of molecules of species and reaction kinetic parameters; combinatorial variation of both initial conditions and kinetic parameters to enable sensitivity analysis; use of NetCDF optimized binary format to quickly read and write large datasets; and a simple graphical user interface, written in Matlab, to help users create biological systems and analyze data. We demonstrate the accuracy and efficiency of Hy3S with examples, including a large-scale system benchmark and a complex bistable biochemical network with positive feedback. The software itself is open-sourced under the GPL license and is modular, allowing users to modify it for their own purposes. Hy3S is a powerful suite of simulation programs for simulating the stochastic dynamics of networks of biochemical reactions. Its first public version enables computational biologists to more efficiently investigate the dynamics of realistic biological systems.
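
    For orientation, the sketch below implements the plain direct-method SSA that hybrid approaches such as those in Hy3S build on (fast reaction subsets are then replaced by continuous approximations). The birth-death network, rate constants, and function names are illustrative assumptions; this is not Hy3S code.

      import numpy as np

      def gillespie_ssa(x0, stoich, rates, propensity_fn, t_end, seed=0):
          # Direct-method stochastic simulation algorithm for a well-mixed system.
          rng = np.random.default_rng(seed)
          t, x = 0.0, np.array(x0, dtype=float)
          times, states = [t], [x.copy()]
          while t < t_end:
              a = propensity_fn(x, rates)
              a0 = a.sum()
              if a0 == 0.0:
                  break
              t += rng.exponential(1.0 / a0)          # time to the next reaction
              j = rng.choice(len(a), p=a / a0)        # which reaction fires
              x += stoich[j]
              times.append(t); states.append(x.copy())
          return np.array(times), np.array(states)

      # Birth-death example:  0 -> X (k1),  X -> 0 (k2 * X)
      stoich = np.array([[+1.0], [-1.0]])
      propensities = lambda x, k: np.array([k[0], k[1] * x[0]])
      t, s = gillespie_ssa([0], stoich, rates=(10.0, 0.5), propensity_fn=propensities, t_end=20.0)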

  18. Fast Numerical Simulation of Focused Ultrasound Treatments During Respiratory Motion With Discontinuous Motion Boundaries.

    PubMed

    Schwenke, Michael; Georgii, Joachim; Preusser, Tobias

    2017-07-01

    Focused ultrasound (FUS) is rapidly gaining clinical acceptance for several target tissues in the human body. Yet FUS treatment of liver targets has not entered clinical practice due to the high complexity of the procedure (noninvasiveness, target motion, complex anatomy, blood cooling effects, shielding by ribs, and limited image-based monitoring). To reduce the complexity, numerical FUS simulations can be utilized for both treatment planning and execution. These use cases demand highly accurate and computationally efficient simulations. We propose a numerical method for the simulation of abdominal FUS treatments during respiratory motion of the organs and target. In particular, a novel approach is proposed to simulate the heating during motion by solving Pennes' bioheat equation in a computational reference space, i.e., the equation is mathematically transformed to the reference. The approach allows for motion discontinuities, e.g., the sliding of the liver along the abdominal wall. Implementing the solver completely on the graphics processing unit and combining it with an atlas-based ultrasound simulation approach yields a simulation performance faster than real time (less than 50 s of computing time for 100 s of treatment time) on a modern off-the-shelf laptop. The simulation method is incorporated into a treatment planning demonstration application that allows simulation of real patient cases including respiratory motion. The high performance of the presented simulation method opens the door to clinical applications. The methods bear the potential to enable the application of FUS for moving organs.
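
    The heating model at the core of such simulations is Pennes' bioheat equation; the sketch below solves a 1D version with an explicit finite-difference scheme and a crude focal heat source. The tissue properties and source strength are textbook-order-of-magnitude assumptions, and the reference-space transformation, motion handling, and GPU implementation of the paper are not reproduced.

      import numpy as np

      def pennes_bioheat_1d(n=101, dx=1e-3, dt=0.05, t_end=10.0,
                            k=0.5, rho=1050.0, c=3600.0,
                            w_b=0.01, rho_b=1050.0, c_b=3800.0,
                            t_arterial=37.0, q_fus=None):
          # Explicit finite-difference solution of Pennes' bioheat equation in 1D:
          # rho*c*dT/dt = k*d2T/dx2 - w_b*rho_b*c_b*(T - T_a) + Q
          T = np.full(n, 37.0)
          q = np.zeros(n) if q_fus is None else q_fus
          for _ in range(int(t_end / dt)):
              lap = np.zeros(n)
              lap[1:-1] = (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dx ** 2
              T = T + dt / (rho * c) * (k * lap - w_b * rho_b * c_b * (T - t_arterial) + q)
              T[0] = T[-1] = 37.0               # body-temperature boundaries
          return T

      # Focal heat source of ~5e6 W/m^3 over a few millimetres around the focus:
      q = np.zeros(101); q[45:56] = 5.0e6
      print(pennes_bioheat_1d(q_fus=q).max())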

  19. XVis: Visualization for the Extreme-Scale Scientific-Computation Ecosystem: Mid-year report FY17 Q2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moreland, Kenneth D.; Pugmire, David; Rogers, David

    The XVis project brings together the key elements of research to enable scientific discovery at extreme scale. Scientific computing will no longer be purely about how fast computations can be performed. Energy constraints, processor changes, and I/O limitations necessitate significant changes in both the software applications used in scientific computation and the ways in which scientists use them. Components for modeling, simulation, analysis, and visualization must work together in a computational ecosystem, rather than working independently as they have in the past. This project provides the necessary research and infrastructure for scientific discovery in this new computational ecosystem by addressing four interlocking challenges: emerging processor technology, in situ integration, usability, and proxy analysis.

  20. XVis: Visualization for the Extreme-Scale Scientific-Computation Ecosystem: Year-end report FY17.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moreland, Kenneth D.; Pugmire, David; Rogers, David

    The XVis project brings together the key elements of research to enable scientific discovery at extreme scale. Scientific computing will no longer be purely about how fast computations can be performed. Energy constraints, processor changes, and I/O limitations necessitate significant changes in both the software applications used in scientific computation and the ways in which scientists use them. Components for modeling, simulation, analysis, and visualization must work together in a computational ecosystem, rather than working independently as they have in the past. This project provides the necessary research and infrastructure for scientific discovery in this new computational ecosystem by addressing four interlocking challenges: emerging processor technology, in situ integration, usability, and proxy analysis.

  1. XVis: Visualization for the Extreme-Scale Scientific-Computation Ecosystem. Mid-year report FY16 Q2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moreland, Kenneth D.; Sewell, Christopher; Childs, Hank

    The XVis project brings together the key elements of research to enable scientific discovery at extreme scale. Scientific computing will no longer be purely about how fast computations can be performed. Energy constraints, processor changes, and I/O limitations necessitate significant changes in both the software applications used in scientific computation and the ways in which scientists use them. Components for modeling, simulation, analysis, and visualization must work together in a computational ecosystem, rather than working independently as they have in the past. This project provides the necessary research and infrastructure for scientific discovery in this new computational ecosystem by addressing four interlocking challenges: emerging processor technology, in situ integration, usability, and proxy analysis.

  2. XVis: Visualization for the Extreme-Scale Scientific-Computation Ecosystem: Year-end report FY15 Q4.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moreland, Kenneth D.; Sewell, Christopher; Childs, Hank

    The XVis project brings together the key elements of research to enable scientific discovery at extreme scale. Scientific computing will no longer be purely about how fast computations can be performed. Energy constraints, processor changes, and I/O limitations necessitate significant changes in both the software applications used in scientific computation and the ways in which scientists use them. Components for modeling, simulation, analysis, and visualization must work together in a computational ecosystem, rather than working independently as they have in the past. This project provides the necessary research and infrastructure for scientific discovery in this new computational ecosystem by addressing four interlocking challenges: emerging processor technology, in situ integration, usability, and proxy analysis.

  3. Advanced computations in plasma physics

    NASA Astrophysics Data System (ADS)

    Tang, W. M.

    2002-05-01

    Scientific simulation in tandem with theory and experiment is an essential tool for understanding complex plasma behavior. In this paper we review recent progress and future directions for advanced simulations in magnetically confined plasmas with illustrative examples chosen from magnetic confinement research areas such as microturbulence, magnetohydrodynamics, magnetic reconnection, and others. Significant recent progress has been made in both particle and fluid simulations of fine-scale turbulence and large-scale dynamics, giving increasingly good agreement between experimental observations and computational modeling. This was made possible by innovative advances in analytic and computational methods for developing reduced descriptions of physics phenomena spanning widely disparate temporal and spatial scales together with access to powerful new computational resources. In particular, the fusion energy science community has made excellent progress in developing advanced codes for which computer run-time and problem size scale well with the number of processors on massively parallel machines (MPP's). A good example is the effective usage of the full power of multi-teraflop (multi-trillion floating point computations per second) MPP's to produce three-dimensional, general geometry, nonlinear particle simulations which have accelerated progress in understanding the nature of turbulence self-regulation by zonal flows. It should be emphasized that these calculations, which typically utilized billions of particles for thousands of time-steps, would not have been possible without access to powerful present generation MPP computers and the associated diagnostic and visualization capabilities. In general, results from advanced simulations provide great encouragement for being able to include increasingly realistic dynamics to enable deeper physics insights into plasmas in both natural and laboratory environments. The associated scientific excitement should serve to stimulate improved cross-cutting collaborations with other fields and also to help attract bright young talent to plasma science.

  4. Efficient simulation and model reformulation of two-dimensional electrochemical thermal behavior of lithium-ion batteries

    DOE PAGES

    Northrop, Paul W. C.; Pathak, Manan; Rife, Derek; ...

    2015-03-09

    Lithium-ion batteries are an important technology to facilitate efficient energy storage and enable a shift from petroleum-based energy to more environmentally benign sources. Such systems can be utilized most efficiently if good understanding of performance can be achieved for a range of operating conditions. Mathematical models can be useful to predict battery behavior to allow for optimization of design and control. An analytical solution is ideally preferred to solve the equations of a mathematical model, as it eliminates the error that arises when using numerical techniques and is usually computationally cheap. An analytical solution provides insight into the behavior of the system and also explicitly shows the effects of different parameters on the behavior. However, most engineering models, including the majority of battery models, cannot be solved analytically due to non-linearities in the equations and state-dependent transport and kinetic parameters. The numerical method used to solve the system of equations describing battery operation can have a significant impact on the computational cost of the simulation. In this paper, a reformulation of the porous electrode pseudo three-dimensional (P3D) model, which significantly reduces the computational cost of lithium-ion battery simulation while maintaining high accuracy, is discussed. This reformulation enables the use of the P3D model in applications that would otherwise be too computationally expensive to justify its use, such as online control, optimization, and parameter estimation. Furthermore, the P3D model has proven to be robust enough to allow for the inclusion of additional physical phenomena as understanding improves. In this study, the reformulated model is used to allow for more complicated physical phenomena to be considered for study, including thermal effects.

  5. Utilizing fast multipole expansions for efficient and accurate quantum-classical molecular dynamics simulations

    NASA Astrophysics Data System (ADS)

    Schwörer, Magnus; Lorenzen, Konstantin; Mathias, Gerald; Tavan, Paul

    2015-03-01

    Recently, a novel approach to hybrid quantum mechanics/molecular mechanics (QM/MM) molecular dynamics (MD) simulations has been suggested [Schwörer et al., J. Chem. Phys. 138, 244103 (2013)]. Here, the forces acting on the atoms are calculated by grid-based density functional theory (DFT) for a solute molecule and by a polarizable molecular mechanics (PMM) force field for a large solvent environment composed of several 10³-10⁵ molecules as negative gradients of a DFT/PMM hybrid Hamiltonian. The electrostatic interactions are efficiently described by a hierarchical fast multipole method (FMM). Adopting recent progress of this FMM technique [Lorenzen et al., J. Chem. Theory Comput. 10, 3244 (2014)], which particularly entails a strictly linear scaling of the computational effort with the system size, and adapting this revised FMM approach to the computation of the interactions between the DFT and PMM fragments of a simulation system, here, we show how one can further enhance the efficiency and accuracy of such DFT/PMM-MD simulations. The resulting gain of total performance, as measured for alanine dipeptide (DFT) embedded in water (PMM) by the product of the gains in efficiency and accuracy, amounts to about one order of magnitude. We also demonstrate that the jointly parallelized implementation of the DFT and PMM-MD parts of the computation enables the efficient use of high-performance computing systems. The associated software is available online.

  6. Applying GIS and high performance agent-based simulation for managing an Old World Screwworm fly invasion of Australia.

    PubMed

    Welch, M C; Kwan, P W; Sajeev, A S M

    2014-10-01

    Agent-based modelling has proven to be a promising approach for developing rich simulations for complex phenomena that provide decision support functions across a broad range of areas including biological, social and agricultural sciences. This paper demonstrates how high performance computing technologies, namely General-Purpose Computing on Graphics Processing Units (GPGPU), and commercial Geographic Information Systems (GIS) can be applied to develop a national scale, agent-based simulation of an incursion of Old World Screwworm fly (OWS fly) into the Australian mainland. The development of this simulation model leverages the combination of massively data-parallel processing capabilities supported by NVIDIA's Compute Unified Device Architecture (CUDA) and the advanced spatial visualisation capabilities of GIS. These technologies have enabled the implementation of an individual-based, stochastic lifecycle and dispersal algorithm for the OWS fly invasion. The simulation model draws upon a wide range of biological data as input to stochastically determine the reproduction and survival of the OWS fly through the different stages of its lifecycle and dispersal of gravid females. Through this model, a highly efficient computational platform has been developed for studying the effectiveness of control and mitigation strategies and their associated economic impact on livestock industries.
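
    For intuition, a hedged sketch of an individual-based, stochastic lifecycle-and-dispersal step of the kind described above, written with numpy as a stand-in for the GPGPU/GIS pipeline; the stage names, survival and maturation rates, and dispersal kernel are illustrative assumptions, not OWS fly data.

```python
# Toy individual-based lifecycle/dispersal step; all rates are illustrative.
import numpy as np

rng = np.random.default_rng(0)
N = 10_000
stage = np.zeros(N, dtype=np.int8)                  # 0=egg, 1=larva, 2=pupa, 3=adult
pos = rng.uniform(0.0, 100.0, size=(N, 2))          # agent positions on a toy map
survival = np.array([0.8, 0.7, 0.9, 0.95])          # per-step survival by stage
advance = np.array([0.3, 0.2, 0.25, 0.0])           # per-step maturation probability

def step(stage, pos):
    alive = rng.random(stage.size) < survival[stage]           # stochastic survival
    stage, pos = stage[alive], pos[alive]
    matured = rng.random(stage.size) < advance[stage]          # stochastic maturation
    stage = np.minimum(stage + matured, 3).astype(np.int8)
    adults = stage == 3
    pos[adults] += rng.normal(0.0, 2.0, size=(int(adults.sum()), 2))  # dispersal walk
    return stage, pos

for _ in range(30):
    stage, pos = step(stage, pos)
print("survivors:", stage.size, "adults:", int((stage == 3).sum()))
```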

  7. Probabilistic Approach to Enable Extreme-Scale Simulations under Uncertainty and System Faults. Final Technical Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Knio, Omar

    2017-05-05

    The current project develops a novel approach that uses a probabilistic description to capture the current state of knowledge about the computational solution. To effectively spread the computational effort over multiple nodes, the global computational domain is split into many subdomains. Computational uncertainty in the solution translates into uncertain boundary conditions for the equation system to be solved on those subdomains, and many independent, concurrent subdomain simulations are used to account for this boundary condition uncertainty. By relying on the fact that solutions on neighboring subdomains must agree with each other, a more accurate estimate for the global solution can be achieved. Statistical approaches in this update process make it possible to account for the effect of system faults in the probabilistic description of the computational solution, and the associated uncertainty is reduced through successive iterations. By combining all of these elements, the probabilistic reformulation allows splitting the computational work over very many independent tasks for good scalability, while being robust to system faults.
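
    A loose, illustrative sketch of the idea (not the algorithm of the cited report): a 1-D Poisson problem is split at an interface whose unknown value is carried as an ensemble of samples; each subdomain is solved independently for each sample, and the ensemble is relaxed toward the value at which the two subdomain solutions agree in slope.

```python
# Hedged sketch: ensemble treatment of an uncertain interface value in a two-subdomain
# split of u'' = f on [0, 1] with u(0) = u(1) = 0. Illustrative only.
import numpy as np

n, f = 101, -1.0                       # grid points per subdomain, constant forcing
h = 0.5 / (n - 1)                      # each subdomain spans half of [0, 1]

def solve_half(u_left, u_right):
    """Solve u'' = f on one subdomain with Dirichlet end values (small dense solve)."""
    A = (np.diag(np.full(n - 2, -2.0)) + np.diag(np.ones(n - 3), 1)
         + np.diag(np.ones(n - 3), -1))
    d = np.full(n - 2, f * h * h)
    d[0] -= u_left
    d[-1] -= u_right
    return np.concatenate(([u_left], np.linalg.solve(A, d), [u_right]))

rng = np.random.default_rng(1)
iface = rng.normal(0.0, 0.1, size=64)              # ensemble of uncertain interface values
for _ in range(20):
    mismatch = np.empty_like(iface)
    for i, g in enumerate(iface):
        uL = solve_half(0.0, g)                    # left subdomain,  x in [0, 0.5]
        uR = solve_half(g, 0.0)                    # right subdomain, x in [0.5, 1]
        mismatch[i] = ((uL[-1] - uL[-2]) - (uR[1] - uR[0])) / h   # slope disagreement
    # relax each sample toward zero slope mismatch; keep a little exploration noise
    iface = iface - 0.2 * mismatch + rng.normal(0.0, 0.01, size=iface.size)
print("interface estimate:", iface.mean(), " (continuum value: 0.125)")
```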

  8. Autonomic Closure for Turbulent Flows Using Approximate Bayesian Computation

    NASA Astrophysics Data System (ADS)

    Doronina, Olga; Christopher, Jason; Hamlington, Peter; Dahm, Werner

    2017-11-01

    Autonomic closure is a new technique for achieving fully adaptive and physically accurate closure of coarse-grained turbulent flow governing equations, such as those solved in large eddy simulations (LES). Although autonomic closure has been shown in recent a priori tests to more accurately represent unclosed terms than do dynamic versions of traditional LES models, the computational cost of the approach makes it challenging to implement for simulations of practical turbulent flows at realistically high Reynolds numbers. The optimization step used in the approach introduces large matrices that must be inverted and is highly memory intensive. In order to reduce memory requirements, here we propose to use approximate Bayesian computation (ABC) in place of the optimization step, thereby yielding a computationally-efficient implementation of autonomic closure that trades memory-intensive for processor-intensive computations. The latter challenge can be overcome as co-processors such as general purpose graphical processing units become increasingly available on current generation petascale and exascale supercomputers. In this work, we outline the formulation of ABC-enabled autonomic closure and present initial results demonstrating the accuracy and computational cost of the approach.
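
    The ABC ingredient referenced above can be illustrated with a plain rejection sampler on a toy inference problem; the actual ABC-enabled closure formulation is considerably more involved, and everything below is an illustrative assumption.

```python
# Hedged sketch: rejection ABC for the mean of a Gaussian, using only forward
# simulation and a summary-statistic distance (no likelihood evaluation).
import numpy as np

rng = np.random.default_rng(0)
observed = rng.normal(2.0, 1.0, size=500)           # "data" with unknown mean 2.0
s_obs = observed.mean()                             # summary statistic

def simulate(theta, n=500):
    return rng.normal(theta, 1.0, size=n)

accepted = []
epsilon = 0.1                                       # ABC tolerance
while len(accepted) < 500:
    theta = rng.uniform(-5.0, 5.0)                  # draw a candidate from the prior
    if abs(simulate(theta).mean() - s_obs) < epsilon:
        accepted.append(theta)                      # keep parameters that reproduce the data
posterior = np.array(accepted)
print("ABC posterior mean ~", posterior.mean())
```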

  9. Loci-STREAM Version 0.9

    NASA Technical Reports Server (NTRS)

    Wright, Jeffrey; Thakur, Siddharth

    2006-01-01

    Loci-STREAM is an evolving computational fluid dynamics (CFD) software tool for simulating possibly chemically reacting, possibly unsteady flows in diverse settings, including rocket engines, turbomachines, oil refineries, etc. Loci-STREAM implements a pressure- based flow-solving algorithm that utilizes unstructured grids. (The benefit of low memory usage by pressure-based algorithms is well recognized by experts in the field.) The algorithm is robust for flows at all speeds from zero to hypersonic. The flexibility of arbitrary polyhedral grids enables accurate, efficient simulation of flows in complex geometries, including those of plume-impingement problems. The present version - Loci-STREAM version 0.9 - includes an interface with the Portable, Extensible Toolkit for Scientific Computation (PETSc) library for access to enhanced linear-equation-solving programs therein that accelerate convergence toward a solution. The name "Loci" reflects the creation of this software within the Loci computational framework, which was developed at Mississippi State University for the primary purpose of simplifying the writing of complex multidisciplinary application programs to run in distributed-memory computing environments including clusters of personal computers. Loci has been designed to relieve application programmers of the details of programming for distributed-memory computers.

  10. Efficient coarse simulation of a growing avascular tumor

    PubMed Central

    Kavousanakis, Michail E.; Liu, Ping; Boudouvis, Andreas G.; Lowengrub, John; Kevrekidis, Ioannis G.

    2013-01-01

    The subject of this work is the development and implementation of algorithms which accelerate the simulation of early stage tumor growth models. Among the different computational approaches used for the simulation of tumor progression, discrete stochastic models (e.g., cellular automata) have been widely used to describe processes occurring at the cell and subcell scales (e.g., cell-cell interactions and signaling processes). To describe macroscopic characteristics (e.g., morphology) of growing tumors, large numbers of interacting cells must be simulated. However, the high computational demands of stochastic models make the simulation of large-scale systems impractical. Alternatively, continuum models, which can describe behavior at the tumor scale, often rely on phenomenological assumptions in place of rigorous upscaling of microscopic models. This limits their predictive power. In this work, we circumvent the derivation of closed macroscopic equations for the growing cancer cell populations; instead, we construct, based on the so-called “equation-free” framework, a computational superstructure, which wraps around the individual-based cell-level simulator and accelerates the computations required for the study of the long-time behavior of systems involving many interacting cells. The microscopic model, e.g., a cellular automaton, which simulates the evolution of cancer cell populations, is executed for relatively short time intervals, at the end of which coarse-scale information is obtained. These coarse variables evolve on slower time scales than each individual cell in the population, enabling the application of forward projection schemes, which extrapolate their values at later times. This technique is referred to as coarse projective integration. Increasing the ratio of projection times to microscopic simulator execution times enhances the computational savings. Crucial accuracy issues arising for growing tumors with radial symmetry are addressed by applying the coarse projective integration scheme in a cotraveling (cogrowing) frame. As a proof of principle, we demonstrate that the application of this scheme yields highly accurate solutions, while preserving the computational savings of coarse projective integration. PMID:22587128
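
    A hedged sketch of coarse projective integration on a toy stochastic birth-death population (not the tumor model above): the microscopic simulator runs for a short burst, the time derivative of a coarse variable is estimated from that burst, and the coarse state is projected forward over a larger outer step without simulating it.

```python
# Toy coarse projective integration; birth/death rates and step counts are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def micro_step(n_cells, birth=0.02, death=0.01):
    """One microscopic step: each cell divides or dies stochastically."""
    return n_cells + rng.binomial(n_cells, birth) - rng.binomial(n_cells, death)

def micro_burst(n_cells, steps=20):
    history = [n_cells]
    for _ in range(steps):
        n_cells = micro_step(n_cells)
        history.append(n_cells)
    return np.array(history)

n, t = 1000, 0
for _ in range(10):                      # outer projective loop
    hist = micro_burst(n, steps=20)      # short microscopic burst
    slope = (hist[-1] - hist[0]) / 20.0  # estimated coarse time derivative
    n = int(hist[-1] + slope * 80)       # project 80 steps ahead without simulating them
    t += 100
print("coarse population estimate after", t, "effective steps:", n)
```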

  11. A Study on Fast Gates for Large-Scale Quantum Simulation with Trapped Ions

    PubMed Central

    Taylor, Richard L.; Bentley, Christopher D. B.; Pedernales, Julen S.; Lamata, Lucas; Solano, Enrique; Carvalho, André R. R.; Hope, Joseph J.

    2017-01-01

    Large-scale digital quantum simulations require thousands of fundamental entangling gates to construct the simulated dynamics. Despite success in a variety of small-scale simulations, quantum information processing platforms have hitherto failed to demonstrate the combination of precise control and scalability required to systematically outmatch classical simulators. We analyse how fast gates could enable trapped-ion quantum processors to achieve the requisite scalability to outperform classical computers without error correction. We analyze the performance of a large-scale digital simulator, and find that a fidelity of around 70% is realizable for π-pulse infidelities below 10⁻⁵ in traps subject to realistic rates of heating and dephasing. This scalability relies on fast gates: entangling gates faster than the trap period. PMID:28401945

  12. A Study on Fast Gates for Large-Scale Quantum Simulation with Trapped Ions.

    PubMed

    Taylor, Richard L; Bentley, Christopher D B; Pedernales, Julen S; Lamata, Lucas; Solano, Enrique; Carvalho, André R R; Hope, Joseph J

    2017-04-12

    Large-scale digital quantum simulations require thousands of fundamental entangling gates to construct the simulated dynamics. Despite success in a variety of small-scale simulations, quantum information processing platforms have hitherto failed to demonstrate the combination of precise control and scalability required to systematically outmatch classical simulators. We analyse how fast gates could enable trapped-ion quantum processors to achieve the requisite scalability to outperform classical computers without error correction. We analyze the performance of a large-scale digital simulator, and find that a fidelity of around 70% is realizable for π-pulse infidelities below 10⁻⁵ in traps subject to realistic rates of heating and dephasing. This scalability relies on fast gates: entangling gates faster than the trap period.

  13. Rapid Monte Carlo Simulation of Gravitational Wave Galaxies

    NASA Astrophysics Data System (ADS)

    Breivik, Katelyn; Larson, Shane L.

    2015-01-01

    With the detection of gravitational waves on the horizon, astrophysical catalogs produced by gravitational wave observatories can be used to characterize the populations of sources and validate different galactic population models. Efforts to simulate gravitational wave catalogs and source populations generally focus on population synthesis models that require extensive time and computational power to produce a single simulated galaxy. Monte Carlo simulations of gravitational wave source populations can also be used to generate observation catalogs from the gravitational wave source population. Monte Carlo simulations have the advantages of flexibility and speed, enabling rapid galactic realizations as a function of galactic binary parameters with less time and fewer computational resources required. We present a Monte Carlo method for rapid galactic simulations of gravitational wave binary populations.
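
    As a hedged illustration of the Monte Carlo approach (the distributions and scalings below are assumptions, not the paper's), a galactic realization can be built by drawing binary parameters from simple analytic distributions and deriving per-source gravitational-wave quantities:

```python
# Toy Monte Carlo galactic realization of compact binaries; all distributions and
# the strain scaling are illustrative assumptions (physical constants dropped).
import numpy as np

rng = np.random.default_rng(42)
n_binaries = 100_000

log_period = rng.uniform(np.log10(120.0), np.log10(1e5), size=n_binaries)  # period [s]
m1 = rng.uniform(0.2, 1.0, size=n_binaries)          # component masses [M_sun]
m2 = rng.uniform(0.2, 1.0, size=n_binaries)
r = rng.exponential(2.5, size=n_binaries) + 0.1      # crude radial distance [kpc]

f_gw = 2.0 / 10**log_period                          # GW frequency = 2 / orbital period
chirp_mass = (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2
h = chirp_mass ** (5.0 / 3.0) * f_gw ** (2.0 / 3.0) / r   # relative strain scaling

catalog = np.column_stack([f_gw, chirp_mass, r, h])
print("catalogue shape:", catalog.shape, "loudest relative strain:", h.max())
```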

  14. 20170312 - In Silico Dynamics: computer simulation in a ...

    EPA Pesticide Factsheets

    Abstract: Utilizing cell biological information to predict higher order biological processes is a significant challenge in predictive toxicology. This is especially true for highly dynamical systems such as the embryo, where morphogenesis, growth and differentiation require precisely orchestrated interactions between diverse cell populations. In patterning the embryo, genetic signals set up spatial information that cells then translate into a coordinated biological response. This can be modeled as ‘biowiring diagrams’ representing genetic signals and responses. Because the hallmark of multicellular organization resides in the ability of cells to interact with one another via well-conserved signaling pathways, multiscale computational (in silico) models that enable these interactions provide a platform to translate cellular and molecular perturbations into higher order predictions. Just as ‘the Cell’ is the fundamental unit of biology, so too should it be the computational unit (‘Agent’) for modeling embryogenesis. As such, we constructed multicellular agent-based models (ABM) with ‘CompuCell3D’ (www.compucell3d.org) to simulate kinematics of complex cell signaling networks and enable critical tissue events for use in predictive toxicology. Seeding the ABMs with HTS/HCS data from ToxCast demonstrated the potential to predict, quantitatively, the higher order impacts of chemical disruption at the cellular or biochemical level.

  15. In Silico Dynamics: computer simulation in a Virtual Embryo ...

    EPA Pesticide Factsheets

    Abstract: Utilizing cell biological information to predict higher order biological processes is a significant challenge in predictive toxicology. This is especially true for highly dynamical systems such as the embryo, where morphogenesis, growth and differentiation require precisely orchestrated interactions between diverse cell populations. In patterning the embryo, genetic signals set up spatial information that cells then translate into a coordinated biological response. This can be modeled as ‘biowiring diagrams’ representing genetic signals and responses. Because the hallmark of multicellular organization resides in the ability of cells to interact with one another via well-conserved signaling pathways, multiscale computational (in silico) models that enable these interactions provide a platform to translate cellular and molecular perturbations into higher order predictions. Just as ‘the Cell’ is the fundamental unit of biology, so too should it be the computational unit (‘Agent’) for modeling embryogenesis. As such, we constructed multicellular agent-based models (ABM) with ‘CompuCell3D’ (www.compucell3d.org) to simulate kinematics of complex cell signaling networks and enable critical tissue events for use in predictive toxicology. Seeding the ABMs with HTS/HCS data from ToxCast demonstrated the potential to predict, quantitatively, the higher order impacts of chemical disruption at the cellular or biochemical level.

  16. Development of the Glenn-HT Computer Code to Enable Time-Filtered Navier-Stokes (TFNS) Simulations and Application to Film Cooling on a Flat Plate Through Long Cooling Tubes

    NASA Technical Reports Server (NTRS)

    Ameri, Ali; Shyam, Vikram; Rigby, David; Poinsatte, Philip; Thurman, Douglas; Steinthorsson, Erlendur

    2014-01-01

    Computational fluid dynamics (CFD) analysis using the Reynolds-averaged Navier-Stokes (RANS) formulation for turbomachinery-related flows has enabled improved engine component designs. RANS methodology has limitations related to its inability to accurately describe the spectrum of flow phenomena encountered in engines. Examples of flows that are difficult to compute accurately with RANS include phenomena such as laminar-turbulent transition, turbulent mixing of streams, and separated flows. Large eddy simulation (LES) can improve accuracy but at a considerably higher cost. In recent years, hybrid schemes which take advantage of both unsteady RANS and LES have been proposed. This study investigated an alternative scheme, the time-filtered Navier-Stokes (TFNS) method, applied to compressible flows. The method developed by Shih and Liu was implemented in the Glenn-HT code and applied to film cooling flows. In this report, the method and its implementation are briefly described. The film effectiveness results obtained for film cooling from a row of 30 holes with a pitch of 3.0 diameters, emitting air at a nominal density ratio of unity and four blowing ratios of 0.5, 1.0, 1.5 and 2.0, are shown. Flow features under those conditions are also described.

  17. Bilayer avalanche spin-diode logic

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Friedman, Joseph S., E-mail: joseph.friedman@u-psud.fr; Querlioz, Damien; Fadel, Eric R.

    2015-11-15

    A novel spintronic computing paradigm is proposed and analyzed in which InSb p-n bilayer avalanche spin-diodes are cascaded to efficiently perform complex logic operations. This spin-diode logic family uses control wires to generate magnetic fields that modulate the resistance of the spin-diodes, and currents through these devices control the resistance of cascaded devices. Electromagnetic simulations are performed to demonstrate the cascading mechanism, and guidelines are provided for the development of this innovative computing technology. This cascading scheme permits compact logic circuits with switching speeds determined by electromagnetic wave propagation rather than electron motion, enabling high-performance spintronic computing.

  18. Computation of transmitted and received B1 fields in magnetic resonance imaging.

    PubMed

    Milles, Julien; Zhu, Yue Min; Chen, Nan-Kuei; Panych, Lawrence P; Gimenez, Gérard; Guttmann, Charles R G

    2006-05-01

    Computation of B1 fields is a key issue for determination and correction of intensity nonuniformity in magnetic resonance images. This paper presents a new method for computing transmitted and received B1 fields. Our method combines a modified MRI acquisition protocol and an estimation technique based on the Levenberg-Marquardt algorithm and spatial filtering. It enables accurate estimation of transmitted and received B1 fields for both homogeneous and heterogeneous objects. The method is validated using numerical simulations and experimental data from phantom and human scans. The experimental results are in agreement with theoretical expectations.
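
    A hedged sketch of the estimation ingredient named above: a per-pixel Levenberg-Marquardt fit of a simple flip-angle signal model S_k = A sin(alpha_k * b1t), where A lumps proton density together with the received B1 and b1t scales the transmitted B1. The model, angles, and noise level are illustrative assumptions, not the paper's acquisition protocol.

```python
# Toy per-pixel Levenberg-Marquardt fit of an assumed flip-angle signal model.
import numpy as np
from scipy.optimize import least_squares

nominal_angles = np.deg2rad([30.0, 60.0, 90.0, 120.0])   # nominal flip angles

def residuals(params, signals):
    A, b1t = params
    return A * np.sin(nominal_angles * b1t) - signals

# synthetic "measured" signals for one pixel with a true transmit scale of 0.85
rng = np.random.default_rng(3)
signals = 100.0 * np.sin(nominal_angles * 0.85) + rng.normal(0.0, 1.0, size=4)

fit = least_squares(residuals, x0=[50.0, 1.0], args=(signals,), method="lm")
A_hat, b1t_hat = fit.x
print(f"estimated transmit B1 scale: {b1t_hat:.3f} (true 0.85)")
```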

  19. Intermediate quantum maps for quantum computation

    NASA Astrophysics Data System (ADS)

    Giraud, O.; Georgeot, B.

    2005-10-01

    We study quantum maps displaying spectral statistics intermediate between Poisson and Wigner-Dyson. It is shown that they can be simulated on a quantum computer with a small number of gates, and efficiently yield information about fidelity decay or spectral statistics. We study their matrix elements and entanglement production and show that they converge with time to distributions which differ from random matrix predictions. A randomized version of these maps can be implemented even more economically and yields pseudorandom operators with original properties, enabling, for example, one to produce fractal random vectors. These algorithms are within reach of present-day quantum computers.

  20. Quantum Computing Architectural Design

    NASA Astrophysics Data System (ADS)

    West, Jacob; Simms, Geoffrey; Gyure, Mark

    2006-03-01

    Large scale quantum computers will invariably require scalable architectures in addition to high fidelity gate operations. Quantum computing architectural design (QCAD) addresses the problems of actually implementing fault-tolerant algorithms given physical and architectural constraints beyond those of basic gate-level fidelity. Here we introduce a unified framework for QCAD that enables the scientist to study the impact of varying error correction schemes, architectural parameters including layout and scheduling, and physical operations native to a given architecture. Our software package, aptly named QCAD, provides compilation, manipulation/transformation, multi-paradigm simulation, and visualization tools. We demonstrate various features of the QCAD software package through several examples.

  1. Models, Databases, and Simulation Tools Needed for the Realization of Integrated Computational Materials Engineering. Proceedings of the Symposium Held at Materials Science and Technology 2010

    NASA Technical Reports Server (NTRS)

    Arnold, Steven M. (Editor); Wong, Terry T. (Editor)

    2011-01-01

    Topics covered include: An Annotative Review of Multiscale Modeling and its Application to Scales Inherent in the Field of ICME; and A Multiscale, Nonlinear, Modeling Framework Enabling the Design and Analysis of Composite Materials and Structures.

  2. Tango

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parker, Jeffrey

    Tango enables the accelerated numerical solution of the multiscale problem of self-consistent transport and turbulence. Fast turbulence results in fluxes of heat and particles that slowly change the mean profiles of temperature and density. The fluxes are computed by separate turbulence simulation codes; Tango solves for the self-consistent change in mean temperature or density given those fluxes.
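
    A cartoon of the coupling loop described above (not Tango's actual numerics): a mock turbulence model returns a flux for the current profile, and an explicit transport step advances the mean profile on the slow time scale using that flux.

```python
# Toy transport-turbulence coupling loop; the flux model and parameters are illustrative.
import numpy as np

nx = 32
dx, dt = 1.0 / (nx - 1), 1e-4
x = np.linspace(0.0, 1.0, nx)
T = 1.0 - x**2                                   # initial mean temperature profile
source = 5.0 * np.exp(-((x - 0.1) / 0.05) ** 2)  # localized heating near the core

def turbulent_flux(T):
    """Stand-in for a turbulence code: critical-gradient-like diffusive flux."""
    grad = np.gradient(T, dx)
    chi = 0.1 + 0.4 * np.maximum(-grad - 1.0, 0.0)   # diffusivity stiffens above a threshold
    return -chi * grad

for _ in range(20_000):                          # slow transport time scale
    flux = turbulent_flux(T)                     # "call" the turbulence model
    T = T + dt * (-np.gradient(flux, dx) + source)
    T[-1] = 0.0                                  # fixed edge temperature
print("relaxed core temperature estimate:", T[0])
```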

  3. Fluid-Structure Interaction Analysis of Ruptured Mitral Chordae Tendineae.

    PubMed

    Toma, Milan; Bloodworth, Charles H; Pierce, Eric L; Einstein, Daniel R; Cochran, Richard P; Yoganathan, Ajit P; Kunzelman, Karyn S

    2017-03-01

    The chordal structure is a part of mitral valve geometry that has been commonly neglected or simplified in computational modeling due to its complexity. However, these simplifications cannot be used when investigating the roles of individual chordae tendineae in mitral valve closure. For the first time, advancements in imaging, computational techniques, and hardware technology make it possible to create models of the mitral valve without simplifications to its complex geometry, and to quickly run validated computer simulations that more realistically capture its function. Such simulations can then be used for a detailed analysis of chordae-related diseases. In this work, a comprehensive model of a subject-specific mitral valve with detailed chordal structure is used to analyze the distinct role played by individual chordae in closure of the mitral valve leaflets. Mitral closure was simulated for 51 possible chordal rupture points. Resultant regurgitant orifice area and strain change in the chordae at the papillary muscle tips were then calculated to examine the role of each ruptured chorda in the mitral valve closure. For certain subclassifications of chordae, regurgitant orifice area was found to trend positively with ruptured chordal diameter, and strain changes correlated negatively with regurgitant orifice area. Further advancements in clinical imaging modalities, coupled with the next generation of computational techniques will enable more physiologically realistic simulations.

  4. Fluid-Structure Interaction Analysis of Ruptured Mitral Chordae Tendineae

    PubMed Central

    Toma, Milan; Bloodworth, Charles H.; Pierce, Eric L.; Einstein, Daniel R.; Cochran, Richard P.; Yoganathan, Ajit P.; Kunzelman, Karyn S.

    2016-01-01

    The chordal structure is a part of mitral valve geometry that has been commonly neglected or simplified in computational modeling due to its complexity. However, these simplifications cannot be used when investigating the roles of individual chordae tendineae in mitral valve closure. For the first time, advancements in imaging, computational techniques, and hardware technology make it possible to create models of the mitral valve without simplifications to its complex geometry, and to quickly run validated computer simulations that more realistically capture its function. Such simulations can then be used for a detailed analysis of chordae-related diseases. In this work, a comprehensive model of a subject-specific mitral valve with detailed chordal structure is used to analyze the distinct role played by individual chordae in closure of the mitral valve leaflets. Mitral closure was simulated for 51 possible chordal rupture points. Resultant regurgitant orifice area and strain change in the chordae at the papillary muscle tips were then calculated to examine the role of each ruptured chorda in the mitral valve closure. For certain subclassifications of chordae, regurgitant orifice area was found to trend positively with ruptured chordal diameter, and strain changes correlated negatively with regurgitant orifice area. Further advancements in clinical imaging modalities, coupled with the next generation of computational techniques will enable more physiologically realistic simulations. PMID:27624659

  5. Fast simulation tool for ultraviolet radiation at the earth's surface

    NASA Astrophysics Data System (ADS)

    Engelsen, Ola; Kylling, Arve

    2005-04-01

    FastRT is a fast, yet accurate, UV simulation tool that computes downward surface UV doses, UV indices, and irradiances in the spectral range 290 to 400 nm with a resolution as small as 0.05 nm. It computes a full UV spectrum within a few milliseconds on a standard PC, and enables the user to convolve the spectrum with user-defined and built-in spectral response functions including the International Commission on Illumination (CIE) erythemal response function used for UV index calculations. The program accounts for the main radiative input parameters, i.e., instrumental characteristics, solar zenith angle, ozone column, aerosol loading, clouds, surface albedo, and surface altitude. FastRT is based on look-up tables of carefully selected entries of atmospheric transmittances and spherical albedos, and exploits the smoothness of these quantities with respect to atmospheric, surface, geometrical, and spectral parameters. An interactive site, http://nadir.nilu.no/~olaeng/fastrt/fastrt.html, enables the public to run the FastRT program with most input options. This page also contains updated information about FastRT and links to freely downloadable source codes and binaries.
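
    A hedged sketch of the look-up-table idea: transmittance is precomputed on a coarse (solar zenith angle, ozone column) grid with an expensive model, mocked here, and arbitrary queries are then answered by fast interpolation. The grid ranges and the mock transmittance function are illustrative assumptions, not FastRT's tables.

```python
# Toy look-up-table-plus-interpolation scheme; the transmittance model is a mock.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

sza = np.linspace(0.0, 80.0, 9)          # solar zenith angle grid [deg]
ozone = np.linspace(200.0, 500.0, 7)     # ozone column grid [DU]

def expensive_transmittance(sza_deg, o3_du):
    """Stand-in for a full radiative-transfer run."""
    mu = np.cos(np.deg2rad(sza_deg))
    return np.exp(-0.003 * o3_du / np.maximum(mu, 0.05))

# build the table once (the expensive part), then answer queries by interpolation
table = expensive_transmittance(sza[:, None], ozone[None, :])
interp = RegularGridInterpolator((sza, ozone), table)

query = [43.0, 315.0]                    # arbitrary geometry/atmosphere
print("interpolated:", interp([query])[0], " direct:", expensive_transmittance(*query))
```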

  6. Final report for the DOE Early Career Award #DE-SC0003912

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jayaraman, Arthi

    This DOE-supported early career project was aimed at developing computational models, theory, and simulation methods that would then be used to predict assembly and morphology in polymer nanocomposites. In particular, the focus was on composites in active layers of devices, containing conducting polymers that act as electron donors and nanoscale additives that act as electron acceptors. During the course of this work, we developed first-of-their-kind molecular models to represent conducting polymers, enabling simulations at experimentally relevant length and time scales. By comparison with experimentally observed morphologies we validated these models. Furthermore, using these models and molecular dynamics simulations on graphics processing units (GPUs), we predicted the molecular-level design features in polymers and additives that lead to morphologies with optimal features for charge carrier behavior in solar cells. Additionally, we computationally predicted new design rules for better dispersion of additives in polymers, which have been confirmed through experiments. Achieving dispersion in polymer nanocomposites is valuable for controlling the macroscopic properties of the composite. The results obtained during the course of this DOE-funded project enable the optimal design of higher-efficiency organic electronic and photovoltaic devices and improve everyday life through the engineering of these higher-efficiency devices.

  7. Hierarchical Order Parameters for Macromolecular Assembly Simulations I: Construction and Dynamical Properties of Order Parameters

    PubMed Central

    Singharoy, Abhishek; Sereda, Yuriy

    2012-01-01

    Macromolecular assemblies often display a hierarchical organization of macromolecules or their sub-assemblies. To model this, we have formulated a space warping method that enables capturing overall macromolecular structure and dynamics via a set of coarse-grained order parameters (OPs). This article is the first of two describing the construction and computational implementation of an additional class of OPs that has built into them the hierarchical architecture of macromolecular assemblies. To accomplish this, first, the system is divided into subsystems, each of which is described via a representative set of OPs. Then, a global set of variables is constructed from these subsystem-centered OPs to capture overall system organization. Dynamical properties of the resulting OPs are compared to those of our previous nonhierarchical ones, and implied conceptual and computational advantages are discussed for a 100ns, 2 million atom solvated Human Papillomavirus-like particle simulation. In the second article, the hierarchical OPs are shown to enable a multiscale analysis that starts with the N-atom Liouville equation and yields rigorous Langevin equations of stochastic OP dynamics. The latter is demonstrated via a force-field based simulation algorithm that probes key structural transition pathways, simultaneously accounting for all-atom details and overall structure. PMID:22661911

  8. Fast simulation of Proton Induced X-Ray Emission Tomography using CUDA

    NASA Astrophysics Data System (ADS)

    Beasley, D. G.; Marques, A. C.; Alves, L. C.; da Silva, R. C.

    2013-07-01

    A new 3D Proton Induced X-Ray Emission Tomography (PIXE-T) and Scanning Transmission Ion Microscopy Tomography (STIM-T) simulation software has been developed in Java and uses NVIDIA™ Compute Unified Device Architecture (CUDA) to calculate the X-ray attenuation for large detector areas. A challenge with PIXE-T is to get sufficient counts while retaining a small beam spot size; therefore, a high geometric efficiency is required. However, as the detector solid angle increases, the calculations required for accurate reconstruction of the data increase substantially. To overcome this limitation, the CUDA parallel computing platform was used, which enables general-purpose programming of NVIDIA graphics processing units (GPUs) to perform computations traditionally handled by the central processing unit (CPU). For simulation performance evaluation, the results of a CPU- and a CUDA-based simulation of a phantom are presented. Furthermore, a comparison with the simulation code in the PIXE-Tomography reconstruction software DISRA (A. Sakellariou, D.N. Jamieson, G.J.F. Legge, 2001) is also shown. Compared to a CPU implementation, the CUDA-based simulation is approximately 30× faster.

  9. Simulating Vibrations in a Complex Loaded Structure

    NASA Technical Reports Server (NTRS)

    Cao, Tim T.

    2005-01-01

    The Dynamic Response Computation (DIRECT) computer program simulates vibrations induced in a complex structure by applied dynamic loads. Developed to enable rapid analysis of launch- and landing-induced vibrations and stresses in a space shuttle, DIRECT also can be used to analyze dynamic responses of other structures - for example, the response of a building to an earthquake, or the response of an oil-drilling platform and attached tanks to large ocean waves. For a space-shuttle simulation, the required input to DIRECT includes mathematical models of the space shuttle and its payloads, and a set of forcing functions that simulates launch and landing loads. DIRECT can accommodate multiple levels of payload attachment and substructure as well as nonlinear dynamic responses of structural interfaces. DIRECT combines the shuttle and payload models into a single structural model, to which the forcing functions are then applied. The resulting equations of motion are reduced to an optimum set and decoupled into a unique format for simulating dynamics. During the simulation, maximum vibrations, loads, and stresses are monitored and recorded for subsequent analysis to identify structural deficiencies in the shuttle and/or payloads.

  10. Argonne Discovery Yields Self-Healing Diamond-Like Carbon

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cunningham, Greg; Jones, Katie Elyce

    We report that large-scale reactive molecular dynamics simulations carried out on the US Department of Energy’s IBM Blue Gene/Q Mira supercomputer at the Argonne Leadership Computing Facility, along with experiments conducted by researchers in Argonne’s Energy Systems Division, enabled the design of a “self-healing” anti-wear coating that drastically reduces friction and related degradation in engines and moving machinery. Now, the computational work advanced for this purpose is being used to identify the friction-fighting potential of other catalysts.

  11. Argonne Discovery Yields Self-Healing Diamond-Like Carbon

    DOE PAGES

    Cunningham, Greg; Jones, Katie Elyce

    2016-10-27

    We report that large-scale reactive molecular dynamics simulations carried out on the US Department of Energy’s IBM Blue Gene/Q Mira supercomputer at the Argonne Leadership Computing Facility, along with experiments conducted by researchers in Argonne’s Energy Systems Division, enabled the design of a “self-healing” anti-wear coating that drastically reduces friction and related degradation in engines and moving machinery. Now, the computational work advanced for this purpose is being used to identify the friction-fighting potential of other catalysts.

  12. Programs for Testing an SSME-Monitoring System

    NASA Technical Reports Server (NTRS)

    Lang, Andre; Cecil, Jimmie; Heusinger, Ralph; Freestone, Kathleen; Blue, Lisa; Wilkerson, DeLisa; McMahon, Leigh Anne; Hall, Richard B.; Varnavas, Kosta; Smith, Keary

    2007-01-01

    A suite of computer programs has been developed for special test equipment (STE) that is used in verification testing of the Health Management Computer Integrated Rack Assembly (HMCIRA), a ground-based system of analog and digital electronic hardware and software for "flight-like" testing for development of components of an advanced health-management system for the space shuttle main engine (SSME). The STE software enables the STE to simulate the analog input and the data flow of an SSME test firing from start to finish.

  13. Internet-based system for simulation-based medical planning for cardiovascular disease.

    PubMed

    Steele, Brooke N; Draney, Mary T; Ku, Joy P; Taylor, Charles A

    2003-06-01

    Current practice in vascular surgery utilizes only diagnostic and empirical data to plan treatments, which does not enable quantitative a priori prediction of the outcomes of interventions. We have previously described simulation-based medical planning methods to model blood flow in arteries and plan medical treatments based on physiologic models. An important consideration for the design of these patient-specific modeling systems is the accessibility to physicians with modest computational resources. We describe a simulation-based medical planning environment developed for the World Wide Web (WWW) using the Virtual Reality Modeling Language (VRML) and the Java programming language.

  14. An Object Oriented Extensible Architecture for Affordable Aerospace Propulsion Systems

    NASA Technical Reports Server (NTRS)

    Follen, Gregory J.; Lytle, John K. (Technical Monitor)

    2002-01-01

    Driven by a need to explore and develop propulsion systems whose simulation exceeded current computing capabilities, NASA Glenn embarked on a novel strategy leading to the development of an architecture that enables propulsion simulations never thought possible before. Full-engine, three-dimensional computational fluid dynamics (CFD) propulsion system simulations were deemed impossible due to the impracticality of the hardware and software computing systems required. However, with a software paradigm shift and an embracing of parallel and distributed processing, an architecture was designed to meet the needs of future propulsion system modeling. The author suggests that the architecture designed at the NASA Glenn Research Center for propulsion system modeling has potential for impacting the direction of development of affordable weapons systems currently under consideration by the Applied Vehicle Technology Panel (AVT). This paper discusses the salient features of the NPSS architecture, including its interface layer, object layer, implementation for accessing legacy codes, numerical zooming infrastructure, and computing layer. The computing layer focuses on the use and deployment of these propulsion simulations on parallel and distributed computing platforms, which has been the focus of NASA Ames. Additional features of the object-oriented architecture that support multidisciplinary (MD) coupling, computer-aided design (CAD) access, and MD coupling objects will be discussed. Included will be a discussion of the successes, challenges, and benefits of implementing this architecture.

  15. Modeling cardiovascular hemodynamics using the lattice Boltzmann method on massively parallel supercomputers

    NASA Astrophysics Data System (ADS)

    Randles, Amanda Elizabeth

    Accurate and reliable modeling of cardiovascular hemodynamics has the potential to improve understanding of the localization and progression of heart diseases, which are currently the most common cause of death in Western countries. However, building a detailed, realistic model of human blood flow is a formidable mathematical and computational challenge. The simulation must combine the motion of the fluid, the intricate geometry of the blood vessels, continual changes in flow and pressure driven by the heartbeat, and the behavior of suspended bodies such as red blood cells. Such simulations can provide insight into factors like endothelial shear stress that act as triggers for the complex biomechanical events that can lead to atherosclerotic pathologies. Currently, it is not possible to measure endothelial shear stress in vivo, making these simulations a crucial component to understanding and potentially predicting the progression of cardiovascular disease. In this thesis, an approach for efficiently modeling the fluid movement coupled to the cell dynamics in real-patient geometries, while accounting for the additional force from the expansion and contraction of the heart, will be presented and examined. First, a novel method to couple a mesoscopic lattice Boltzmann fluid model to the microscopic molecular dynamics model of cell movement is elucidated. A treatment of red blood cells as extended structures, a method to handle highly irregular geometries through topology-driven graph partitioning, and an efficient molecular dynamics load balancing scheme are introduced. These result in a large-scale simulation of the cardiovascular system, with a realistic description of the complex human arterial geometry, from centimeters down to the spatial resolution of red blood cells. The computational methods developed to enable scaling of the application to 294,912 processors are discussed, thus empowering the simulation of a full heartbeat. Second, further extensions to enable the modeling of fluids in vessels with smaller diameters and a method for introducing the deformational forces exerted on the arterial flows from the movement of the heart, by borrowing concepts from cosmodynamics, are presented. These additional forces have a great impact on the endothelial shear stress. Third, the fluid model is extended to not only recover Navier-Stokes hydrodynamics, but also a wider range of Knudsen numbers, which is especially important in micro- and nano-scale flows. The tradeoffs of many optimization methods, such as the use of deep halo-level ghost cells that, alongside hybrid programming models, reduce the impact of such higher-order models and enable efficient modeling of extreme regimes of computational fluid dynamics, are discussed. Fourth, the extension of these models to other research questions, like clogging in microfluidic devices and determining the severity of coarctation of the aorta, is presented. Through this work, a validation of these methods is shown by taking real patient data and the measured pressure value before the narrowing of the aorta and predicting the pressure drop across the coarctation. Comparison with the measured pressure drop in vivo highlights the accuracy and potential impact of such patient-specific simulations. Finally, a method to enable the simulation of longer trajectories in time by discretizing both spatially and temporally is presented. In this method, a serial coarse iterator is used to initialize data at discrete time steps for a fine model that runs in parallel. This coarse solver is based on a larger time step and typically a coarser discretization in space. Iterative refinement enables the compute-intensive fine iterator to be modeled with temporal parallelization. The algorithm consists of a series of predictor-corrector iterations that complete when the results have converged to within a certain tolerance. Combined, these developments allow large fluid models to be simulated for longer time durations than previously possible.
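
    The temporal-discretization idea at the end of the abstract above is a predictor-corrector scheme in the spirit of parareal; a minimal sketch on a scalar decay ODE (illustrative only, not the thesis's lattice Boltzmann setting) is:

```python
# Toy parareal-style iteration: a cheap coarse propagator predicts, expensive fine
# propagators (independent per time slice, hence parallelizable) correct.
import numpy as np

lam, T, N = -1.0, 2.0, 8          # decay rate, time horizon, number of time slices
dT = T / N

def coarse(y, dt):                # one forward-Euler step per slice
    return y + dt * lam * y

def fine(y, dt, substeps=100):    # many small forward-Euler steps per slice
    h = dt / substeps
    for _ in range(substeps):
        y = y + h * lam * y
    return y

y = np.zeros(N + 1)
y[0] = 1.0
for n in range(N):                # initial prediction with the coarse propagator alone
    y[n + 1] = coarse(y[n], dT)

for k in range(4):                # parareal corrections
    fine_vals = [fine(y[n], dT) for n in range(N)]    # independent per slice
    y_new = y.copy()
    for n in range(N):
        y_new[n + 1] = coarse(y_new[n], dT) + fine_vals[n] - coarse(y[n], dT)
    y = y_new

print("parareal:", y[-1], " fine serial:", fine(1.0, T, substeps=800),
      " exact:", np.exp(lam * T))
```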

  16. Design by Dragging: An Interface for Creative Forward and Inverse Design with Simulation Ensembles

    PubMed Central

    Coffey, Dane; Lin, Chi-Lun; Erdman, Arthur G.; Keefe, Daniel F.

    2014-01-01

    We present an interface for exploring large design spaces as encountered in simulation-based engineering, design of visual effects, and other tasks that require tuning parameters of computationally-intensive simulations and visually evaluating results. The goal is to enable a style of design with simulations that feels as-direct-as-possible so users can concentrate on creative design tasks. The approach integrates forward design via direct manipulation of simulation inputs (e.g., geometric properties, applied forces) in the same visual space with inverse design via “tugging” and reshaping simulation outputs (e.g., scalar fields from finite element analysis (FEA) or computational fluid dynamics (CFD)). The interface includes algorithms for interpreting the intent of users’ drag operations relative to parameterized models, morphing arbitrary scalar fields output from FEA and CFD simulations, and in-place interactive ensemble visualization. The inverse design strategy can be extended to use multi-touch input in combination with an as-rigid-as-possible shape manipulation to support rich visual queries. The potential of this new design approach is confirmed via two applications: medical device engineering of a vacuum-assisted biopsy device and visual effects design using a physically based flame simulation. PMID:24051845

  17. Real-time haptic cutting of high-resolution soft tissues.

    PubMed

    Wu, Jun; Westermann, Rüdiger; Dick, Christian

    2014-01-01

    We present our systematic efforts in advancing the computational performance of physically accurate soft tissue cutting simulation, which is at the core of surgery simulators in general. We demonstrate a real-time performance of 15 simulation frames per second for haptic soft tissue cutting of a deformable body at an effective resolution of 170,000 finite elements. This is achieved by the following innovative components: (1) a linked octree discretization of the deformable body, which allows for fast and robust topological modifications of the simulation domain, (2) a composite finite element formulation, which thoroughly reduces the number of simulation degrees of freedom and thus enables to carefully balance simulation performance and accuracy, (3) a highly efficient geometric multigrid solver for solving the linear systems of equations arising from implicit time integration, (4) an efficient collision detection algorithm that effectively exploits the composition structure, and (5) a stable haptic rendering algorithm for computing the feedback forces. Considering that our method increases the finite element resolution for physically accurate real-time soft tissue cutting simulation by an order of magnitude, our technique has a high potential to significantly advance the realism of surgery simulators.

  18. Large Scale Geologic Controls on Hydraulic Stimulation

    NASA Astrophysics Data System (ADS)

    McLennan, J. D.; Bhide, R.

    2014-12-01

    When simulating hydraulic fracturing, the analyst has historically prescribed a single planar fracture. Originally (in the 1950s through the 1970s) this was necessitated by computational restrictions. In the latter part of the twentieth century, hydraulic fracture simulation evolved to incorporate vertical propagation controlled by modulus, fluid loss, and the minimum principal stress. With improvements in software, computational capacity, and recognition that in-situ discontinuities are relevant, fully three-dimensional hydraulic fracture simulation is now becoming possible. Advances in simulation capabilities enable coupling structural geologic data (three-dimensional representation of stresses, natural fractures, and stratigraphy) with decision-making processes for stimulation - volumes, rates, fluid types, completion zones. Without this interaction between simulation capabilities and geological information, low-permeability formation exploitation may linger on the fringes of real economic viability. Comparative simulations have been undertaken in varying structural environments where the stress contrast and the frequency of natural discontinuities cause varying patterns of multiple, hydraulically generated or reactivated flow paths. Stress conditions and the nature of the discontinuities are selected as variables and are used to simulate how fracturing can vary in different structural regimes. The basis of the simulations is commercial distinct element software (Itasca Corporation's 3DEC).

  19. The Virtual Mouse Brain: A Computational Neuroinformatics Platform to Study Whole Mouse Brain Dynamics.

    PubMed

    Melozzi, Francesca; Woodman, Marmaduke M; Jirsa, Viktor K; Bernard, Christophe

    2017-01-01

    Connectome-based modeling of large-scale brain network dynamics enables causal in silico interrogation of the brain's structure-function relationship, necessitating the close integration of diverse neuroinformatics fields. Here we extend the open-source simulation software The Virtual Brain (TVB) to whole mouse brain network modeling based on individual diffusion magnetic resonance imaging (dMRI)-based or tracer-based detailed mouse connectomes. We provide practical examples on how to use The Virtual Mouse Brain (TVMB) to simulate brain activity, such as seizure propagation and the switching behavior of the resting state dynamics in health and disease. TVMB enables theoretically driven experimental planning and ways to test predictions in the numerous strains of mice available to study brain function in normal and pathological conditions.

  20. Physics-based subsurface visualization of human tissue.

    PubMed

    Sharp, Richard; Adams, Jacob; Machiraju, Raghu; Lee, Robert; Crane, Robert

    2007-01-01

    In this paper, we present a framework for simulating light transport in three-dimensional tissue with inhomogeneous scattering properties. Our approach employs a computational model to simulate light scattering in tissue through the finite element solution of the diffusion equation. Although our model handles both visible and nonvisible wavelengths, we especially focus on the interaction of near infrared (NIR) light with tissue. Since most human tissue is permeable to NIR light, tools to noninvasively image tumors, blood vasculature, and monitor blood oxygenation levels are being constructed. We apply this model to a numerical phantom to visually reproduce the images generated by these real-world tools. Therefore, in addition to enabling inverse design of detector instruments, our computational tools produce physically-accurate visualizations of subsurface structures.

  1. COSMOABC: Likelihood-free inference via Population Monte Carlo Approximate Bayesian Computation

    NASA Astrophysics Data System (ADS)

    Ishida, E. E. O.; Vitenti, S. D. P.; Penna-Lima, M.; Cisewski, J.; de Souza, R. S.; Trindade, A. M. M.; Cameron, E.; Busti, V. C.; COIN Collaboration

    2015-11-01

    Approximate Bayesian Computation (ABC) enables parameter inference for complex physical systems in cases where the true likelihood function is unknown, unavailable, or computationally too expensive. It relies on the forward simulation of mock data and comparison between observed and synthetic catalogues. Here we present COSMOABC, a Python ABC sampler featuring a Population Monte Carlo variation of the original ABC algorithm, which uses an adaptive importance sampling scheme. The code is very flexible and can be easily coupled to an external simulator, while allowing the user to incorporate arbitrary distance and prior functions. As an example of practical application, we coupled COSMOABC with the NUMCOSMO library and demonstrate how it can be used to estimate posterior probability distributions over cosmological parameters based on measurements of galaxy cluster number counts, without computing the likelihood function. COSMOABC is published under the GPLv3 license on PyPI and GitHub and documentation is available at http://goo.gl/SmB8EX.
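
    As a rough illustration of the likelihood-free idea summarized above, the sketch below implements a plain rejection-ABC loop in Python with a toy Gaussian simulator. It is not COSMOABC's actual API; the simulator, prior, distance function, and tolerance are placeholder assumptions, and the Population Monte Carlo variant would additionally shrink the tolerance over iterations and importance-reweight the accepted particles.

    ```python
    import numpy as np

    def abc_rejection(observed, simulate, sample_prior, distance, epsilon, n_accept):
        """Minimal rejection ABC: keep parameter draws whose mock data lie
        within `epsilon` of the observation under the chosen distance."""
        accepted = []
        while len(accepted) < n_accept:
            theta = sample_prior()          # draw a candidate from the prior
            mock = simulate(theta)          # forward-simulate a mock catalogue
            if distance(mock, observed) < epsilon:
                accepted.append(theta)      # retain as an approximate posterior sample
        return np.array(accepted)

    # Toy problem: infer the mean of a Gaussian with known unit variance.
    rng = np.random.default_rng(0)
    observed = rng.normal(3.0, 1.0, size=200)

    posterior = abc_rejection(
        observed,
        simulate=lambda mu: rng.normal(mu, 1.0, size=200),
        sample_prior=lambda: rng.uniform(-10.0, 10.0),
        distance=lambda a, b: abs(a.mean() - b.mean()),
        epsilon=0.1,
        n_accept=300)
    print(posterior.mean(), posterior.std())
    ```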

  2. Computer-assisted orthognathic surgery combined with fibular osteomyocutaneous flap reconstruction to correct facial asymmetry and maxillary defects secondary to maxillectomy in childhood.

    PubMed

    Zhang, Lei; Sun, Hao; Yu, Hong-bo; Yuan, Hao; Shen, Guo-fang; Wang, Xu-dong

    2013-05-01

    Maxillectomy in childhood not only causes composite primary defects but also secondary malformation of the middle and lower face. In the case presented, we introduced computer-assisted planning and simulation of orthognathic surgery combined with fibular osteomyocutaneous flap reconstruction to correct complex craniofacial deformities. Virtual orthognathic surgery and maxillary reconstruction surgery were undertaken preoperatively. LeFort I osteotomy, with bilateral sagittal split ramus osteotomy and lower border ostectomy, was performed to correct malocclusion and facial asymmetry. Maxillary reconstruction was accomplished using a fibular osteomyocutaneous flap. The patient recovered uneventfully with an adequate aesthetic appearance on 3D computed tomography. Our experience indicates that orthognathic surgery combined with fibular osteomyocutaneous flap reconstruction can be used to correct complex facial asymmetry and maxillary defects secondary to maxillectomy. Computer-assisted simulation enables precise execution of the reconstruction. It shortens the free flap ischemia time and reduces the risks associated with microsurgery.

  3. Space station Simulation Computer System (SCS) study for NASA/MSFC. Volume 2: Concept document

    NASA Technical Reports Server (NTRS)

    1989-01-01

    The Simulation Computer System (SCS) concept document describes and establishes requirements for the functional performance of the SCS system, including interface, logistic, and qualification requirements. The SCS is the computational, communications, and display segment of the Marshall Space Flight Center (MSFC) Payload Training Complex (PTC). The PTC is the MSFC facility that will train onboard and ground operations personnel to operate the payloads and experiments on board the international Space Station Freedom. The SCS concept document defines the requirements to be satisfied through the implementation of the system capability; this information provides the operational basis for defining the requirements to be allocated to the system components and enables the system organization to assess whether or not the completed system complies with the requirements of the system.

  4. Design of a Variational Multiscale Method for Turbulent Compressible Flows

    NASA Technical Reports Server (NTRS)

    Diosady, Laslo Tibor; Murman, Scott M.

    2013-01-01

    A spectral-element framework is presented for the simulation of subsonic compressible high-Reynolds-number flows. The focus of the work is maximizing the efficiency of the computational schemes to enable unsteady simulations with a large number of spatial and temporal degrees of freedom. A collocation scheme is combined with optimized computational kernels to provide a residual evaluation with computational cost independent of order of accuracy up to 16th order. The optimized residual routines are used to develop a low-memory implicit scheme based on a matrix-free Newton-Krylov method. A preconditioner based on the finite-difference diagonalized ADI scheme is developed which maintains the low memory of the matrix-free implicit solver, while providing improved convergence properties. Emphasis on low memory usage throughout the solver development is leveraged to implement a coupled space-time DG solver which may offer further efficiency gains through adaptivity in both space and time.
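
    To make the matrix-free implicit solver idea concrete, the following is a minimal Jacobian-free Newton-Krylov sketch using SciPy, not the spectral-element solver itself; the residual function, finite-difference step, and tolerances are illustrative assumptions, and the ADI-based preconditioner described in the abstract is omitted. The key point is that GMRES only needs the action of the Jacobian on a vector, which can be approximated by differencing the residual, so no Jacobian matrix is ever stored.

    ```python
    import numpy as np
    from scipy.sparse.linalg import LinearOperator, gmres

    def jfnk(residual, u0, newton_tol=1e-8, max_newton=20, fd_eps=1e-7):
        """Jacobian-free Newton-Krylov: each Newton step solves J du = -R(u)
        with GMRES, approximating J @ v by a finite difference of the residual."""
        u = u0.copy()
        for _ in range(max_newton):
            r = residual(u)
            if np.linalg.norm(r) < newton_tol:
                break
            def jv(v, u=u, r=r):
                # Action of the Jacobian on v, without ever forming J.
                return (residual(u + fd_eps * v) - r) / fd_eps
            J = LinearOperator((u.size, u.size), matvec=jv, dtype=u.dtype)
            du, _ = gmres(J, -r)            # Krylov solve with default tolerances
            u = u + du
        return u

    # Toy nonlinear system solved component-wise: R(u) = u^3 + u - b.
    b = np.linspace(1.0, 2.0, 8)
    u = jfnk(lambda u: u**3 + u - b, np.zeros_like(b))
    print(np.max(np.abs(u**3 + u - b)))     # residual after convergence
    ```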

  5. Equation-free multiscale computation: algorithms and applications.

    PubMed

    Kevrekidis, Ioannis G; Samaey, Giovanni

    2009-01-01

    In traditional physicochemical modeling, one derives evolution equations at the (macroscopic, coarse) scale of interest; these are used to perform a variety of tasks (simulation, bifurcation analysis, optimization) using an arsenal of analytical and numerical techniques. For many complex systems, however, although one observes evolution at a macroscopic scale of interest, accurate models are only given at a more detailed (fine-scale, microscopic) level of description (e.g., lattice Boltzmann, kinetic Monte Carlo, molecular dynamics). Here, we review a framework for computer-aided multiscale analysis, which enables macroscopic computational tasks (over extended spatiotemporal scales) using only appropriately initialized microscopic simulation on short time and length scales. The methodology bypasses the derivation of macroscopic evolution equations when these equations conceptually exist but are not available in closed form-hence the term equation-free. We selectively discuss basic algorithms and underlying principles and illustrate the approach through representative applications. We also discuss potential difficulties and outline areas for future research.
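
    A minimal sketch of one of the basic equation-free algorithms, coarse projective integration: lift the coarse state to a consistent fine-scale state, run the microscopic simulator for a short burst, restrict back to the coarse variables, estimate their time derivative, and extrapolate forward with a large step. The microscopic model below is a stand-in toy (an ensemble of noisy relaxing particles), not any specific lattice Boltzmann, kinetic Monte Carlo, or molecular dynamics code.

    ```python
    import numpy as np

    def coarse_projective_step(U, lift, micro_step, restrict,
                               n_micro, dt_micro, dt_coarse):
        """One equation-free step: lift the coarse state, run a short burst of
        the fine-scale simulator, restrict, estimate dU/dt, and project forward."""
        u = lift(U)                              # coarse -> consistent fine state
        for _ in range(n_micro):                 # short microscopic burst
            u = micro_step(u, dt_micro)
        U_burst = restrict(u)                    # fine -> coarse observable
        dUdt = (U_burst - U) / (n_micro * dt_micro)
        return U_burst + dt_coarse * dUdt        # large projective (extrapolation) step

    # Toy fine-scale model: an ensemble of noisy particles relaxing toward zero;
    # the coarse variable is the ensemble mean, which obeys roughly dU/dt = -U.
    rng = np.random.default_rng(1)
    lift = lambda U: U + 0.01 * rng.standard_normal(10_000)
    micro_step = lambda u, dt: u - dt * u + 0.01 * np.sqrt(dt) * rng.standard_normal(u.size)
    restrict = lambda u: u.mean()

    U = 1.0
    for _ in range(20):
        U = coarse_projective_step(U, lift, micro_step, restrict,
                                   n_micro=10, dt_micro=1e-3, dt_coarse=0.05)
    print(U)   # decays roughly like exp(-t) of the unavailable coarse equation
    ```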

  6. Computing Radiative Transfer in a 3D Medium

    NASA Technical Reports Server (NTRS)

    Von Allmen, Paul; Lee, Seungwon

    2012-01-01

    A package of software computes the time-dependent propagation of a narrow laser beam in an arbitrary three- dimensional (3D) medium with absorption and scattering, using the transient-discrete-ordinates method and a direct integration method. Unlike prior software that utilizes a Monte Carlo method, this software enables simulation at very small signal-to-noise ratios. The ability to simulate propagation of a narrow laser beam in a 3D medium is an improvement over other discrete-ordinate software. Unlike other direct-integration software, this software is not limited to simulation of propagation of thermal radiation with broad angular spread in three dimensions or of a laser pulse with narrow angular spread in two dimensions. Uses for this software include (1) computing scattering of a pulsed laser beam on a material having given elastic scattering and absorption profiles, and (2) evaluating concepts for laser-based instruments for sensing oceanic turbulence and related measurements of oceanic mixed-layer depths. With suitable augmentation, this software could be used to compute radiative transfer in ultrasound imaging in biological tissues, radiative transfer in the upper Earth crust for oil exploration, and propagation of laser pulses in telecommunication applications.

  7. Supervised learning from human performance at the computationally hard problem of optimal traffic signal control on a network of junctions

    PubMed Central

    Box, Simon

    2014-01-01

    Optimal switching of traffic lights on a network of junctions is a computationally intractable problem. In this research, road traffic networks containing signallized junctions are simulated. A computer game interface is used to enable a human ‘player’ to control the traffic light settings on the junctions within the simulation. A supervised learning approach, based on simple neural network classifiers, can be used to capture the human player's strategies in the game and thus develop a human-trained machine control (HuTMaC) system that approaches human levels of performance. Experiments conducted within the simulation compare the performance of HuTMaC to two well-established traffic-responsive control systems that are widely deployed in the developed world and also to a temporal difference learning-based control method. In all experiments, HuTMaC outperforms the other control methods in terms of average delay and variance over delay. The conclusion is that these results add weight to the suggestion that HuTMaC may be a viable alternative, or supplemental method, to approximate optimization for some practical engineering control problems where the optimal strategy is computationally intractable. PMID:26064570
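
    A rough illustration of the supervised-learning step described above: train a simple neural network classifier on logged (traffic-state features, human signal choice) pairs and reuse it as a controller. The feature layout, the synthetic "human" labels, and the network size below are placeholder assumptions, not the paper's actual game interface or data.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    # Hypothetical training log from human play: each row holds queue lengths on
    # the four approaches to a junction; the label is the stage the human chose.
    rng = np.random.default_rng(0)
    X = rng.integers(0, 30, size=(5000, 4)).astype(float)      # queues N, E, S, W
    y = (X[:, [0, 2]].sum(axis=1) > X[:, [1, 3]].sum(axis=1)).astype(int)
    # label 1 -> green to the N/S stage, label 0 -> green to the E/W stage

    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
    clf.fit(X, y)                                # capture the "human" strategy

    state = np.array([[12.0, 3.0, 9.0, 5.0]])    # current queues at one junction
    print("chosen stage:", clf.predict(state)[0])
    ```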

  8. Performance of hybrid programming models for multiscale cardiac simulations: preparing for petascale computation.

    PubMed

    Pope, Bernard J; Fitch, Blake G; Pitman, Michael C; Rice, John J; Reumann, Matthias

    2011-10-01

    Future multiscale and multiphysics models that support research into human disease, translational medical science, and treatment can utilize the power of high-performance computing (HPC) systems. We anticipate that computationally efficient multiscale models will require the use of sophisticated hybrid programming models, mixing distributed message-passing processes [e.g., the message-passing interface (MPI)] with multithreading (e.g., OpenMP, Pthreads). The objective of this study is to compare the performance of such hybrid programming models when applied to the simulation of a realistic physiological multiscale model of the heart. Our results show that the hybrid models perform favorably when compared to an implementation using only the MPI and, furthermore, that OpenMP in combination with the MPI provides a satisfactory compromise between performance and code complexity. Having the ability to use threads within MPI processes enables the sophisticated use of all processor cores for both computation and communication phases. Considering that HPC systems in 2012 will have two orders of magnitude more cores than what was used in this study, we believe that faster than real-time multiscale cardiac simulations can be achieved on these systems.

  9. Supervised learning from human performance at the computationally hard problem of optimal traffic signal control on a network of junctions.

    PubMed

    Box, Simon

    2014-12-01

    Optimal switching of traffic lights on a network of junctions is a computationally intractable problem. In this research, road traffic networks containing signallized junctions are simulated. A computer game interface is used to enable a human 'player' to control the traffic light settings on the junctions within the simulation. A supervised learning approach, based on simple neural network classifiers, can be used to capture the human player's strategies in the game and thus develop a human-trained machine control (HuTMaC) system that approaches human levels of performance. Experiments conducted within the simulation compare the performance of HuTMaC to two well-established traffic-responsive control systems that are widely deployed in the developed world and also to a temporal difference learning-based control method. In all experiments, HuTMaC outperforms the other control methods in terms of average delay and variance over delay. The conclusion is that these results add weight to the suggestion that HuTMaC may be a viable alternative, or supplemental method, to approximate optimization for some practical engineering control problems where the optimal strategy is computationally intractable.

  10. Equation-based languages – A new paradigm for building energy modeling, simulation and optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wetter, Michael; Bonvini, Marco; Nouidui, Thierry S.

    Most of the state-of-the-art building simulation programs implement models in imperative programming languages. This complicates modeling and excludes the use of certain efficient methods for simulation and optimization. In contrast, equation-based modeling languages declare relations among variables, thereby allowing the use of computer algebra to enable much simpler schematic modeling and to generate efficient code for simulation and optimization. We contrast the two approaches in this paper. We explain how such manipulations support new use cases. In the first of two examples, we couple models of the electrical grid, multiple buildings, HVAC systems and controllers to test a controller that adjusts building room temperatures and PV inverter reactive power to maintain power quality. In the second example, we contrast the computing time for solving an optimal control problem for a room-level model predictive controller with and without symbolic manipulations. As a result, exploiting the equation-based language led to a 2,200 times faster solution.

  11. A streaming multi-GPU implementation of image simulation algorithms for scanning transmission electron microscopy

    DOE PAGES

    Pryor, Alan; Ophus, Colin; Miao, Jianwei

    2017-10-25

    Simulation of atomic-resolution image formation in scanning transmission electron microscopy can require significant computation times using traditional methods. A recently developed method, termed plane-wave reciprocal-space interpolated scattering matrix (PRISM), demonstrates potential for significant acceleration of such simulations with negligible loss of accuracy. In this paper, we present a software package called Prismatic for parallelized simulation of image formation in scanning transmission electron microscopy (STEM) using both the PRISM and multislice methods. By distributing the workload between multiple CUDA-enabled GPUs and multicore processors, accelerations as high as 1000 × for PRISM and 15 × for multislice are achieved relative to traditional multislice implementations using a single 4-GPU machine. We demonstrate a potentially important application of Prismatic, using it to compute images for atomic electron tomography at sufficient speeds to include in the reconstruction pipeline. Prismatic is freely available both as an open-source CUDA/C++ package with a graphical user interface and as a Python package, PyPrismatic.

  12. A streaming multi-GPU implementation of image simulation algorithms for scanning transmission electron microscopy.

    PubMed

    Pryor, Alan; Ophus, Colin; Miao, Jianwei

    2017-01-01

    Simulation of atomic-resolution image formation in scanning transmission electron microscopy can require significant computation times using traditional methods. A recently developed method, termed plane-wave reciprocal-space interpolated scattering matrix (PRISM), demonstrates potential for significant acceleration of such simulations with negligible loss of accuracy. Here, we present a software package called Prismatic for parallelized simulation of image formation in scanning transmission electron microscopy (STEM) using both the PRISM and multislice methods. By distributing the workload between multiple CUDA-enabled GPUs and multicore processors, accelerations as high as 1000 × for PRISM and 15 × for multislice are achieved relative to traditional multislice implementations using a single 4-GPU machine. We demonstrate a potentially important application of Prismatic, using it to compute images for atomic electron tomography at sufficient speeds to include in the reconstruction pipeline. Prismatic is freely available both as an open-source CUDA/C++ package with a graphical user interface and as a Python package, PyPrismatic.

  13. A streaming multi-GPU implementation of image simulation algorithms for scanning transmission electron microscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pryor, Alan; Ophus, Colin; Miao, Jianwei

    Simulation of atomic-resolution image formation in scanning transmission electron microscopy can require significant computation times using traditional methods. A recently developed method, termed plane-wave reciprocal-space interpolated scattering matrix (PRISM), demonstrates potential for significant acceleration of such simulations with negligible loss of accuracy. In this paper, we present a software package called Prismatic for parallelized simulation of image formation in scanning transmission electron microscopy (STEM) using both the PRISM and multislice methods. By distributing the workload between multiple CUDA-enabled GPUs and multicore processors, accelerations as high as 1000 × for PRISM and 15 × for multislice are achieved relative to traditional multislice implementations using a single 4-GPU machine. We demonstrate a potentially important application of Prismatic, using it to compute images for atomic electron tomography at sufficient speeds to include in the reconstruction pipeline. Prismatic is freely available both as an open-source CUDA/C++ package with a graphical user interface and as a Python package, PyPrismatic.

  14. Equation-based languages – A new paradigm for building energy modeling, simulation and optimization

    DOE PAGES

    Wetter, Michael; Bonvini, Marco; Nouidui, Thierry S.

    2016-04-01

    Most of the state-of-the-art building simulation programs implement models in imperative programming languages. This complicates modeling and excludes the use of certain efficient methods for simulation and optimization. In contrast, equation-based modeling languages declare relations among variables, thereby allowing the use of computer algebra to enable much simpler schematic modeling and to generate efficient code for simulation and optimization. We contrast the two approaches in this paper. We explain how such manipulations support new use cases. In the first of two examples, we couple models of the electrical grid, multiple buildings, HVAC systems and controllers to test a controller that adjusts building room temperatures and PV inverter reactive power to maintain power quality. In the second example, we contrast the computing time for solving an optimal control problem for a room-level model predictive controller with and without symbolic manipulations. As a result, exploiting the equation-based language led to a 2,200 times faster solution.

  15. XVIS: Visualization for the Extreme-Scale Scientific-Computation Ecosystem Final Scientific/Technical Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Geveci, Berk; Maynard, Robert

    The XVis project brings together the key elements of research to enable scientific discovery at extreme scale. Scientific computing will no longer be purely about how fast computations can be performed. Energy constraints, processor changes, and I/O limitations necessitate significant changes in both the software applications used in scientific computation and the ways in which scientists use them. Components for modeling, simulation, analysis, and visualization must work together in a computational ecosystem, rather than working independently as they have in the past. The XVis project brought together collaborators from predominant DOE projects for visualization on accelerators and combined their respective features into a new visualization toolkit called VTK-m.

  16. High Altitude Balloon Flight Path Prediction and Site Selection Based On Computer Simulations

    NASA Astrophysics Data System (ADS)

    Linford, Joel

    2010-10-01

    Interested in the upper atmosphere, the Weber State University Physics department has developed the High Altitude Reconnaissance Balloon for Outreach and Research (HARBOR) team. HARBOR enables Weber State University to take a variety of measurements from ground level to altitudes as high as 100,000 feet. The flight paths of these balloons can extend as far as 100 miles from the launch zone, making the choice of where and when to fly critical. To ensure the ability to recover the packages in a reasonable amount of time, days and times are carefully selected using computer simulations, limiting flight tracks to approximately 40 miles from the launch zone. The computer simulations take atmospheric data collected by the National Oceanic and Atmospheric Administration (NOAA) to plot what flights might have looked like in the past, and to predict future flights. Using these simulations, a launch zone has been selected in Duchesne, Utah, which has hosted eight successful flights over the course of the last three years, all of which have been recovered. Several secondary launch zones in western Wyoming, southern Idaho, and northern Utah are also being considered.

  17. RFA Guardian: Comprehensive Simulation of Radiofrequency Ablation Treatment of Liver Tumors.

    PubMed

    Voglreiter, Philip; Mariappan, Panchatcharam; Pollari, Mika; Flanagan, Ronan; Blanco Sequeiros, Roberto; Portugaller, Rupert Horst; Fütterer, Jurgen; Schmalstieg, Dieter; Kolesnik, Marina; Moche, Michael

    2018-01-15

    The RFA Guardian is a comprehensive application for high-performance patient-specific simulation of radiofrequency ablation of liver tumors. We address a wide range of usage scenarios. These include pre-interventional planning, sampling of the parameter space for uncertainty estimation, treatment evaluation and, in the worst case, failure analysis. The RFA Guardian is the first of its kind that exhibits sufficient performance for simulating treatment outcomes during the intervention. We achieve this by combining a large number of high-performance image processing, biomechanical simulation and visualization techniques into a generalized technical workflow. Further, we wrap the feature set into a single, integrated application, which exploits all available resources of standard consumer hardware, including massively parallel computing on graphics processing units. This allows us to predict or reproduce treatment outcomes on a single personal computer with high computational performance and high accuracy. The resulting low demand for infrastructure enables easy and cost-efficient integration into the clinical routine. We present a number of evaluation cases from the clinical practice where users performed the whole technical workflow from patient-specific modeling to final validation and highlight the opportunities arising from our fast, accurate prediction techniques.

  18. Streaming parallel GPU acceleration of large-scale filter-based spiking neural networks.

    PubMed

    Slażyński, Leszek; Bohte, Sander

    2012-01-01

    The arrival of graphics processing (GPU) cards suitable for massively parallel computing promises affordable large-scale neural network simulation previously only available at supercomputing facilities. While the raw numbers suggest that GPUs may outperform CPUs by at least an order of magnitude, the challenge is to develop fine-grained parallel algorithms to fully exploit the particulars of GPUs. Computation in a neural network is inherently parallel and thus a natural match for GPU architectures: given inputs, the internal state for each neuron can be updated in parallel. We show that for filter-based spiking neurons, like the Spike Response Model, the additive nature of membrane potential dynamics enables additional update parallelism. This also reduces the accumulation of numerical errors when using single precision computation, the native precision of GPUs. We further show that optimizing simulation algorithms and data structures to the GPU's architecture has a large pay-off: for example, matching iterative neural updating to the memory architecture of the GPU speeds up this simulation step by a factor of three to five. With such optimizations, we can simulate plausible spiking neural networks of up to 50 000 neurons in better than real time, processing over 35 million spiking events per second.
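
    To illustrate the kind of data-parallel neuron update the abstract describes, here is a vectorized NumPy analogue (single precision, CPU) of additive leaky membrane-potential dynamics with threshold-and-reset spiking. The time constants, network size, and random connectivity are illustrative assumptions, and the model is a leaky integrate-and-fire style stand-in rather than the authors' filter-based Spike Response Model GPU implementation.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n, dt = 2000, 1e-3                          # neurons, time step (s)
    tau_m, v_thresh = 20e-3, 1.0                # membrane time constant (s), threshold

    # Sparse random excitatory weights; float32 mirrors the GPU's native precision.
    W = (rng.random((n, n), dtype=np.float32) < 0.01).astype(np.float32) * 0.05
    v = np.zeros(n, dtype=np.float32)           # membrane potentials
    i_ext = (1.5 * rng.random(n)).astype(np.float32)   # constant external drive

    total_spikes = 0
    for step in range(500):
        spikes = v >= v_thresh                  # neurons crossing threshold
        total_spikes += int(spikes.sum())
        v[spikes] = 0.0                         # reset fired neurons
        syn = W @ spikes.astype(np.float32)     # summed synaptic input per neuron
        # Additive per-neuron update: leaky decay plus external and synaptic input.
        # Each neuron is independent here, so the whole line is data-parallel.
        v += (dt / tau_m) * (-v + i_ext + syn)

    print("mean firing rate (Hz):", total_spikes / (n * 500 * dt))
    ```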

  19. Designing and using computer simulations in medical education and training: an introduction.

    PubMed

    Friedl, Karl E; O'Neil, Harold F

    2013-10-01

    Computer-based technologies informed by the science of learning are becoming increasingly prevalent in education and training. For the Department of Defense (DoD), this presents a great potential advantage to the effective preparation of a new generation of technologically enabled service members. Military medicine has broad education and training challenges ranging from first aid and personal protective skills for every service member to specialized combat medic training; many of these challenges can be met with gaming and simulation technologies that this new generation has embraced. However, comprehensive use of medical games and simulation to augment expert mentorship is still limited to elite medical provider training programs, but can be expected to become broadly used in the training of first responders and allied health care providers. The purpose of this supplement is to review the use of computer games and simulation to teach and assess medical knowledge and skills. This review and other DoD research policy sources will form the basis for development of a research and development road map and guidelines for use of this technology in military medicine. Reprint & Copyright © 2013 Association of Military Surgeons of the U.S.

  20. An evaluation of noise reduction algorithms for particle-based fluid simulations in multi-scale applications

    NASA Astrophysics Data System (ADS)

    Zimoń, M. J.; Prosser, R.; Emerson, D. R.; Borg, M. K.; Bray, D. J.; Grinberg, L.; Reese, J. M.

    2016-11-01

    Filtering of particle-based simulation data can lead to reduced computational costs and enable more efficient information transfer in multi-scale modelling. This paper compares the effectiveness of various signal processing methods to reduce numerical noise and capture the structures of nano-flow systems. In addition, a novel combination of these algorithms is introduced, showing the potential of hybrid strategies to improve further the de-noising performance for time-dependent measurements. The methods were tested on velocity and density fields, obtained from simulations performed with molecular dynamics and dissipative particle dynamics. Comparisons between the algorithms are given in terms of performance, quality of the results and sensitivity to the choice of input parameters. The results provide useful insights on strategies for the analysis of particle-based data and the reduction of computational costs in obtaining ensemble solutions.
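
    As a small illustration of the kind of de-noising compared in the paper, the sketch below applies a Savitzky-Golay filter to a noisy one-dimensional velocity profile; this is just one possible smoother, and the synthetic signal, noise level, and filter parameters are assumptions rather than the authors' benchmark cases.

    ```python
    import numpy as np
    from scipy.signal import savgol_filter

    # Synthetic particle-averaged velocity profile: smooth field plus thermal noise.
    rng = np.random.default_rng(0)
    x = np.linspace(0.0, 1.0, 400)
    u_true = np.sin(2.0 * np.pi * x) * (1.0 - x)
    u_noisy = u_true + 0.15 * rng.standard_normal(x.size)

    # Savitzky-Golay filter: a local polynomial fit in a sliding window.
    u_filtered = savgol_filter(u_noisy, window_length=31, polyorder=3)

    rms = lambda e: float(np.sqrt(np.mean(e**2)))
    print("RMS error, raw:     ", rms(u_noisy - u_true))
    print("RMS error, filtered:", rms(u_filtered - u_true))
    ```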

  1. Contact dynamics math model

    NASA Technical Reports Server (NTRS)

    Glaese, John R.; Tobbe, Patrick A.

    1986-01-01

    The Space Station Mechanism Test Bed consists of a hydraulically driven, computer controlled six degree of freedom (DOF) motion system with which docking, berthing, and other mechanisms can be evaluated. Measured contact forces and moments are provided to the simulation host computer to enable representation of orbital contact dynamics. This report describes the development of a generalized math model which represents the relative motion between two rigid orbiting vehicles. The model allows motion in six DOF for each body, with no vehicle size limitation. The rotational and translational equations of motion are derived. The method used to transform the forces and moments from the sensor location to the vehicles' centers of mass is also explained. Two math models of docking mechanisms, a simple translational spring and the Remote Manipulator System end effector, are presented along with simulation results. The translational spring model is used in an attempt to verify the simulation with compensated hardware in the loop results.
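
    The transformation of measured loads from the sensor location to a vehicle's center of mass, mentioned above, amounts to carrying the force over unchanged and adding the moment of that force about the new reference point. A minimal sketch with made-up numbers (not the test bed's actual geometry) follows.

    ```python
    import numpy as np

    def transfer_loads(force, moment_at_sensor, r_sensor, r_cm):
        """Move a force/moment pair from the sensor location to the center of
        mass: the force is unchanged, M_cm = M_sensor + (r_sensor - r_cm) x F."""
        force = np.asarray(force, dtype=float)
        lever = np.asarray(r_sensor, dtype=float) - np.asarray(r_cm, dtype=float)
        moment_at_cm = np.asarray(moment_at_sensor, dtype=float) + np.cross(lever, force)
        return force, moment_at_cm

    # Illustrative contact load measured at a docking-interface sensor (N, N*m, m).
    F = np.array([120.0, -40.0, 10.0])
    M_s = np.array([2.0, 5.0, -1.0])
    F_cm, M_cm = transfer_loads(F, M_s, r_sensor=[0.0, 0.0, 1.8], r_cm=[0.0, 0.1, 0.2])
    print(F_cm, M_cm)
    ```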

  2. Fully-Coupled Thermo-Electrical Modeling and Simulation of Transition Metal Oxide Memristors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mamaluy, Denis; Gao, Xujiao; Tierney, Brian David

    2016-11-01

    Transition metal oxide (TMO) memristors have recently attracted special attention from the semiconductor industry and academia. Memristors are one of the strongest candidates to replace flash memory, and possibly DRAM and SRAM in the near future. Moreover, memristors have a high potential to enable beyond-CMOS technology advances in novel architectures for high performance computing (HPC). The utility of memristors has been demonstrated in reprogrammable logic (cross-bar switches), brain-inspired computing and in non-CMOS complementary logic. Indeed, the potential use of memristors as logic devices is especially important considering the inevitable end of CMOS technology scaling that is anticipated by 2025. In order to aid the on-going Sandia memristor fabrication effort with a memristor design tool and establish a clear physical picture of resistance switching in TMO memristors, we have created and validated with experimental data a simulation tool we name the Memristor Charge Transport (MCT) Simulator.

  3. LB3D: A parallel implementation of the Lattice-Boltzmann method for simulation of interacting amphiphilic fluids

    NASA Astrophysics Data System (ADS)

    Schmieschek, S.; Shamardin, L.; Frijters, S.; Krüger, T.; Schiller, U. D.; Harting, J.; Coveney, P. V.

    2017-08-01

    We introduce the lattice-Boltzmann code LB3D, version 7.1. Building on a parallel program and supporting tools which have enabled research utilising high performance computing resources for nearly two decades, LB3D version 7 provides a subset of the research code functionality as an open source project. Here, we describe the theoretical basis of the algorithm as well as computational aspects of the implementation. The software package is validated against simulations of meso-phases resulting from self-assembly in ternary fluid mixtures comprising immiscible and amphiphilic components such as water-oil-surfactant systems. The impact of the surfactant species on the dynamics of spinodal decomposition is tested, and quantitative measurement of the permeability of a body centred cubic (BCC) model porous medium for a simple binary mixture is described. Single-core performance and scaling behaviour of the code are reported for simulations on current supercomputer architectures.

  4. An Empirical Model for Vane-Type Vortex Generators in a Navier-Stokes Code

    NASA Technical Reports Server (NTRS)

    Dudek, Julianne C.

    2005-01-01

    An empirical model which simulates the effects of vane-type vortex generators in ducts was incorporated into the Wind-US Navier-Stokes computational fluid dynamics code. The model enables the effects of the vortex generators to be simulated without defining the details of the geometry within the grid, and makes it practical for researchers to evaluate multiple combinations of vortex generator arrangements. The model determines the strength of each vortex based on the generator geometry and the local flow conditions. Validation results are presented for flow in a straight pipe with a counter-rotating vortex generator arrangement, and the results are compared with experimental data and computational simulations using a gridded vane generator. Results are also presented for vortex generator arrays in two S-duct diffusers, along with accompanying experimental data. The effects of grid resolution and turbulence model are also examined.

  5. GPU Particle Tracking and MHD Simulations with Greatly Enhanced Computational Speed

    NASA Astrophysics Data System (ADS)

    Ziemba, T.; O'Donnell, D.; Carscadden, J.; Cash, M.; Winglee, R.; Harnett, E.

    2008-12-01

    GPUs are intrinsically highly parallelized systems that provide more than an order of magnitude greater computing speed than CPU-based systems, for less cost than a high-end workstation. Recent advancements in GPU technologies allow for full IEEE float specifications with performance up to several hundred GFLOPs per GPU, and new software architectures have recently become available to ease the transition from graphics-based to scientific applications. This allows for a cheap alternative to standard supercomputing methods and should shorten the time to discovery. 3-D particle tracking and MHD codes have been developed using NVIDIA's CUDA and have demonstrated a speed-up of nearly a factor of 20 over equivalent CPU versions of the codes. Such a speed-up enables new applications to be developed, including real-time running of radiation belt simulations and real-time running of global magnetospheric simulations, both of which could provide important space weather prediction tools.

  6. Scalable Analysis Methods and In Situ Infrastructure for Extreme Scale Knowledge Discovery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duque, Earl P.N.; Whitlock, Brad J.

    High performance computers have for many years been on a trajectory that gives them extraordinary compute power with the addition of more and more compute cores. At the same time, other system parameters such as the amount of memory per core and bandwidth to storage have remained constant or have barely increased. This creates an imbalance in the computer, giving it the ability to compute a lot of data that it cannot reasonably save out due to time and storage constraints. While technologies have been invented to mitigate this problem (burst buffers, etc.), software has been adapting to employ in situ libraries which perform data analysis and visualization on simulation data while it is still resident in memory. This avoids the need to ever have to pay the costs of writing many terabytes of data files. Instead, in situ enables the creation of more concentrated data products such as statistics, plots, and data extracts, which are all far smaller than the full-sized volume data. With the increasing popularity of in situ, multiple in situ infrastructures have been created, each with its own mechanism for integrating with a simulation. To make it easier to instrument a simulation with multiple in situ infrastructures and include custom analysis algorithms, this project created the SENSEI framework.

  7. Computing Cluster for Large Scale Turbulence Simulations and Applications in Computational Aeroacoustics

    NASA Astrophysics Data System (ADS)

    Lele, Sanjiva K.

    2002-08-01

    Funds were received in April 2001 under the Department of Defense DURIP program for construction of a 48 processor high performance computing cluster. This report details the hardware which was purchased and how it has been used to enable and enhance research activities directly supported by, and of interest to, the Air Force Office of Scientific Research and the Department of Defense. The report is divided into two major sections. The first section after this summary describes the computer cluster, its setup, and some cluster performance benchmark results. The second section explains ongoing research efforts which have benefited from the cluster hardware, and presents highlights of those efforts since installation of the cluster.

  8. High-Throughput Thermodynamic Modeling and Uncertainty Quantification for ICME

    NASA Astrophysics Data System (ADS)

    Otis, Richard A.; Liu, Zi-Kui

    2017-05-01

    One foundational component of the integrated computational materials engineering (ICME) and Materials Genome Initiative is the computational thermodynamics based on the calculation of phase diagrams (CALPHAD) method. The CALPHAD method pioneered by Kaufman has enabled the development of thermodynamic, atomic mobility, and molar volume databases of individual phases in the full space of temperature, composition, and sometimes pressure for technologically important multicomponent engineering materials, along with sophisticated computational tools for using the databases. In this article, our recent efforts will be presented in terms of developing new computational tools for high-throughput modeling and uncertainty quantification based on high-throughput, first-principles calculations and the CALPHAD method along with their potential propagations to downstream ICME modeling and simulations.

  9. Tinker-OpenMM: Absolute and relative alchemical free energies using AMOEBA on GPUs.

    PubMed

    Harger, Matthew; Li, Daniel; Wang, Zhi; Dalby, Kevin; Lagardère, Louis; Piquemal, Jean-Philip; Ponder, Jay; Ren, Pengyu

    2017-09-05

    The capabilities of polarizable force fields for alchemical free energy calculations have been limited by the high computational cost and complexity of the underlying potential energy functions. In this work, we present a GPU-based general alchemical free energy simulation platform for the polarizable potential AMOEBA. Tinker-OpenMM, the OpenMM implementation of the AMOEBA simulation engine, has been modified to enable both absolute and relative alchemical simulations on GPUs, which leads to a ∼200-fold improvement in simulation speed over a single CPU core. We show that free energy values calculated using this platform agree with the results of Tinker simulations for the hydration of organic compounds and binding of host-guest systems within the statistical errors. In addition to absolute binding, we designed a relative alchemical approach for computing relative binding affinities of ligands to the same host, where a special path was applied to avoid numerical instability due to polarization between the different ligands that bind to the same site. This scheme is general and does not require ligands to have similar scaffolds. We show that relative hydration and binding free energies calculated using this approach match those computed from the absolute free energy approach. © 2017 Wiley Periodicals, Inc.
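
    For reference, a relative binding free energy of the kind computed here follows from the standard thermodynamic cycle: the difference between the binding free energies of two ligands equals the difference between the alchemical transformation carried out in the bound (complex) state and the same transformation carried out in solvent. The notation below is generic, not Tinker-OpenMM's:

    ```latex
    \Delta\Delta G_{\mathrm{bind}}
      = \Delta G_{\mathrm{bind}}(L_2) - \Delta G_{\mathrm{bind}}(L_1)
      = \Delta G_{L_1 \to L_2}^{\mathrm{complex}} - \Delta G_{L_1 \to L_2}^{\mathrm{solvent}}
    ```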

  10. Characterization of cardiac flow in heart disease patients by computational fluid dynamics and 4D flow MRI

    NASA Astrophysics Data System (ADS)

    Lantz, Jonas; Gupta, Vikas; Henriksson, Lilian; Karlsson, Matts; Persson, Ander; Carhall, Carljohan; Ebbers, Tino

    2017-11-01

    In this study, cardiac blood flow was simulated using Computational Fluid Dynamics and compared to in vivo flow measurements by 4D Flow MRI. In total, nine patients with various heart diseases were studied. Geometry and heart wall motion for the simulations were obtained from clinical CT measurements, with 0.3x0.3x0.3 mm spatial resolution and 20 time frames covering one heartbeat. The CFD simulations included pulmonary veins, left atrium and ventricle, mitral and aortic valve, and ascending aorta. Mesh sizes were on the order of 6-16 million cells, depending on the size of the heart, in order to resolve both papillary muscles and trabeculae. The computed flow field agreed visually very well with 4D Flow MRI, with characteristic vortices and flow structures seen in both techniques. Regression analysis showed that peak flow rate as well as stroke volume had an excellent agreement for the two techniques. We demonstrated the feasibility, and more importantly, fidelity of cardiac flow simulations by comparing CFD results to in vivo measurements. Both qualitative and quantitative results agreed well with the 4D Flow MRI measurements. Also, the developed simulation methodology enables "what if" scenarios, such as optimization of valve replacement and other surgical procedures. Funded by the Wallenberg Foundation.

  11. Multivariable extrapolation of grand canonical free energy landscapes

    NASA Astrophysics Data System (ADS)

    Mahynski, Nathan A.; Errington, Jeffrey R.; Shen, Vincent K.

    2017-12-01

    We derive an approach for extrapolating the free energy landscape of multicomponent systems in the grand canonical ensemble, obtained from flat-histogram Monte Carlo simulations, from one set of temperature and chemical potentials to another. This is accomplished by expanding the landscape in a Taylor series at each value of the order parameter which defines its macrostate phase space. The coefficients in each Taylor polynomial are known exactly from fluctuation formulas, which may be computed by measuring the appropriate moments of extensive variables that fluctuate in this ensemble. Here we derive the expressions necessary to define these coefficients up to arbitrary order. In principle, this enables a single flat-histogram simulation to provide complete thermodynamic information over a broad range of temperatures and chemical potentials. Using this, we also show how to combine a small number of simulations, each performed at different conditions, in a thermodynamically consistent fashion to accurately compute properties at arbitrary temperatures and chemical potentials. This method may significantly increase the computational efficiency of biased grand canonical Monte Carlo simulations, especially for multicomponent mixtures. Although approximate, this approach is amenable to high-throughput and data-intensive investigations where it is preferable to have a large quantity of reasonably accurate simulation data, rather than a smaller amount with a higher accuracy.
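
    Schematically, the extrapolation described above expands the logarithm of the macrostate probability distribution at each value of the order parameter N in the perturbed field, written here for the inverse temperature β (an analogous expansion applies to the chemical potentials); each coefficient is given exactly by moments of the fluctuating extensive variables (e.g., energy and particle numbers) recorded at that N during the original simulation. This is a sketch of the idea, not the paper's exact notation:

    ```latex
    \ln \Pi(N;\beta') \;\approx\; \ln \Pi(N;\beta)
      + \left.\frac{\partial \ln \Pi}{\partial \beta}\right|_{N}\,(\beta'-\beta)
      + \frac{1}{2}\left.\frac{\partial^{2} \ln \Pi}{\partial \beta^{2}}\right|_{N}\,(\beta'-\beta)^{2}
      + \cdots
    ```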

  12. Evaluation of Airframe Noise Reduction Concepts via Simulations Using a Lattice Boltzmann Approach

    NASA Technical Reports Server (NTRS)

    Fares, Ehab; Casalino, Damiano; Khorrami, Mehdi R.

    2015-01-01

    Unsteady computations are presented for a high-fidelity, 18% scale, semi-span Gulfstream aircraft model in landing configuration, i.e. flap deflected at 39 degree and main landing gear deployed. The simulations employ the lattice Boltzmann solver PowerFLOW® to simultaneously capture the flow physics and acoustics in the near field. Sound propagation to the far field is obtained using a Ffowcs Williams and Hawkings acoustic analogy approach. In addition to the baseline geometry, which was presented previously, various noise reduction concepts for the flap and main landing gear are simulated. In particular, care is taken to fully resolve the complex geometrical details associated with these concepts in order to capture the resulting intricate local flow field thus enabling accurate prediction of their acoustic behavior. To determine aeroacoustic performance, the farfield noise predicted with the concepts applied is compared to high-fidelity simulations of the untreated baseline configurations. To assess the accuracy of the computed results, the aerodynamic and aeroacoustic impact of the noise reduction concepts is evaluated numerically and compared to experimental results for the same model. The trends and effectiveness of the simulated noise reduction concepts compare well with measured values and demonstrate that the computational approach is capable of capturing the primary effects of the acoustic treatment on a full aircraft model.

  13. User Guidelines and Best Practices for CASL VUQ Analysis Using Dakota

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, Brian M.; Coleman, Kayla; Gilkey, Lindsay N.

    Sandia’s Dakota software (available at http://dakota.sandia.gov) supports science and engineering transformation through advanced exploration of simulations. Specifically, it manages and analyzes ensembles of simulations to provide broader and deeper perspective for analysts and decision makers. This enables them to enhance understanding of risk, improve products, and assess simulation credibility. In its simplest mode, Dakota can automate typical parameter variation studies through a generic interface to a physics-based computational model. This can lend efficiency and rigor to manual parameter perturbation studies already being conducted by analysts. However, Dakota also delivers advanced parametric analysis techniques enabling design exploration, optimization, model calibration, risk analysis, and quantification of margins and uncertainty with such models. It directly supports verification and validation activities. Dakota algorithms enrich complex science and engineering models, enabling an analyst to answer crucial questions of: Sensitivity - Which are the most important input factors or parameters entering the simulation, and how do they influence key outputs? Uncertainty - What is the uncertainty or variability in simulation output, given uncertainties in input parameters? How safe, reliable, robust, or variable is my system? (Quantification of margins and uncertainty, QMU.) Optimization - What parameter values yield the best performing design or operating condition, given constraints? Calibration - What models and/or parameters best match experimental data? In general, Dakota is the Consortium for Advanced Simulation of Light Water Reactors (CASL) delivery vehicle for verification, validation, and uncertainty quantification (VUQ) algorithms. It permits ready application of the VUQ methods described above to simulation codes by CASL researchers, code developers, and application engineers.

  14. Multiyear Plan for Validation of EnergyPlus Multi-Zone HVAC System Modeling using ORNL's Flexible Research Platform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Im, Piljae; Bhandari, Mahabir S.; New, Joshua Ryan

    This document describes the Oak Ridge National Laboratory (ORNL) multiyear experimental plan for validation and uncertainty characterization of whole-building energy simulation for a multi-zone research facility using a traditional rooftop unit (RTU) as a baseline heating, ventilating, and air conditioning (HVAC) system. The project’s overarching objective is to increase the accuracy of energy simulation tools by enabling empirical validation of key inputs and algorithms. Doing so is required to inform the design of increasingly integrated building systems and to enable accountability for performance gaps between design and operation of a building. The project will produce documented data sets that can be used to validate key functionality in different energy simulation tools and to identify errors and inadequate assumptions in simulation engines so that developers can correct them. ASHRAE Standard 140, Method of Test for the Evaluation of Building Energy Analysis Computer Programs (ASHRAE 2004), currently consists primarily of tests to compare different simulation programs with one another. This project will generate sets of measured data to enable empirical validation, incorporate these test data sets in an extended version of Standard 140, and apply these tests to the Department of Energy’s (DOE) EnergyPlus software (EnergyPlus 2016) to initiate the correction of any significant deficiencies. The fitness-for-purpose of the key algorithms in EnergyPlus will be established and demonstrated, and vendors of other simulation programs will be able to demonstrate the validity of their products. The data set will be equally applicable to validation of other simulation engines as well.

  15. Reference Computational Meshing Strategy for Computational Fluid Dynamics Simulation of Departure from Nucleate Boiling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pointer, William David

    The objective of this effort is to establish a strategy and process for generation of suitable computational mesh for computational fluid dynamics simulations of departure from nucleate boiling in a 5 by 5 fuel rod assembly held in place by PWR mixing vane spacer grids. This mesh generation process will support ongoing efforts to develop, demonstrate and validate advanced multi-phase computational fluid dynamics methods that enable more robust identification of dryout conditions and DNB occurrence. Building upon prior efforts and experience, multiple computational meshes were developed using the native mesh generation capabilities of the commercial CFD code STAR-CCM+. These meshes were used to simulate two test cases from the Westinghouse 5 by 5 rod bundle facility. The sensitivity of predicted quantities of interest to the mesh resolution was then established using two evaluation methods, the Grid Convergence Index method and the Least Squares method. This evaluation suggests that the Least Squares method can reliably establish the uncertainty associated with local parameters such as vector velocity components at a point in the domain or surface averaged quantities such as outlet velocity magnitude. However, neither method is suitable for characterization of uncertainty in global extrema such as peak fuel surface temperature, primarily because such parameters are not necessarily associated with a fixed point in space. This shortcoming is significant because the current generation algorithm for identification of DNB event conditions relies on identification of such global extrema. Ongoing efforts to identify DNB based on local surface conditions will address this challenge.

  16. Automated Knowledge Discovery from Simulators

    NASA Technical Reports Server (NTRS)

    Burl, Michael C.; DeCoste, D.; Enke, B. L.; Mazzoni, D.; Merline, W. J.; Scharenbroich, L.

    2006-01-01

    In this paper, we explore one aspect of knowledge discovery from simulators, the landscape characterization problem, where the aim is to identify regions in the input/ parameter/model space that lead to a particular output behavior. Large-scale numerical simulators are in widespread use by scientists and engineers across a range of government agencies, academia, and industry; in many cases, simulators provide the only means to examine processes that are infeasible or impossible to study otherwise. However, the cost of simulation studies can be quite high, both in terms of the time and computational resources required to conduct the trials and the manpower needed to sift through the resulting output. Thus, there is strong motivation to develop automated methods that enable more efficient knowledge extraction.

  17. Virtual Observation System for Earth System Model: An Application to ACME Land Model Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Dali; Yuan, Fengming; Hernandez, Benjamin

    Investigating and evaluating physical-chemical-biological processes within an Earth system model (ESM) can be very challenging due to the complexity of both model design and software implementation. A virtual observation system (VOS) is presented to enable interactive observation of these processes during system simulation. Based on advanced computing technologies, such as compiler-based software analysis, automatic code instrumentation, and high-performance data transport, the VOS provides run-time observation capability, in-situ data analytics for Earth system model simulation, and model behavior adjustment opportunities through simulation steering. A VOS for a terrestrial land model simulation within the Accelerated Climate Modeling for Energy model is also presented to demonstrate the implementation details and system innovations.

  18. Virtual Observation System for Earth System Model: An Application to ACME Land Model Simulations

    DOE PAGES

    Wang, Dali; Yuan, Fengming; Hernandez, Benjamin; ...

    2017-01-01

    Investigating and evaluating physical-chemical-biological processes within an Earth system model (ESM) can be very challenging due to the complexity of both model design and software implementation. A virtual observation system (VOS) is presented to enable interactive observation of these processes during system simulation. Based on advanced computing technologies, such as compiler-based software analysis, automatic code instrumentation, and high-performance data transport, the VOS provides run-time observation capability, in-situ data analytics for Earth system model simulation, and model behavior adjustment opportunities through simulation steering. A VOS for a terrestrial land model simulation within the Accelerated Climate Modeling for Energy model is also presented to demonstrate the implementation details and system innovations.

  19. A web portal for hydrodynamical, cosmological simulations

    NASA Astrophysics Data System (ADS)

    Ragagnin, A.; Dolag, K.; Biffi, V.; Cadolle Bel, M.; Hammer, N. J.; Krukau, A.; Petkova, M.; Steinborn, D.

    2017-07-01

    This article describes a data centre hosting a web portal for accessing and sharing the output of large, cosmological, hydro-dynamical simulations with a broad scientific community. It also allows users to receive related scientific data products by directly processing the raw simulation data on a remote computing cluster. The data centre has a multi-layer structure: a web portal, a job control layer, a computing cluster and an HPC storage system. The outer layer enables users to choose an object from the simulations. Objects can be selected by visually inspecting 2D maps of the simulation data, by performing highly compounded and elaborated queries, or graphically by plotting arbitrary combinations of properties. The user can then run analysis tools on a chosen object; these services operate directly on the raw simulation data. The job control layer is responsible for handling and performing the analysis jobs, which are executed on a computing cluster. The innermost layer is formed by an HPC storage system which hosts the large, raw simulation data. The following services are available for the users: (I) CLUSTERINSPECT visualizes properties of member galaxies of a selected galaxy cluster; (II) SIMCUT returns the raw data of a sub-volume around a selected object from a simulation, containing all the original, hydro-dynamical quantities; (III) SMAC creates idealized 2D maps of various physical quantities and observables of a selected object; (IV) PHOX generates virtual X-ray observations with specifications of various current and upcoming instruments.

  20. Advanced Computation in Plasma Physics

    NASA Astrophysics Data System (ADS)

    Tang, William

    2001-10-01

    Scientific simulation in tandem with theory and experiment is an essential tool for understanding complex plasma behavior. This talk will review recent progress and future directions for advanced simulations in magnetically-confined plasmas with illustrative examples chosen from areas such as microturbulence, magnetohydrodynamics, magnetic reconnection, and others. Significant recent progress has been made in both particle and fluid simulations of fine-scale turbulence and large-scale dynamics, giving increasingly good agreement between experimental observations and computational modeling. This was made possible by innovative advances in analytic and computational methods for developing reduced descriptions of physics phenomena spanning widely disparate temporal and spatial scales together with access to powerful new computational resources. In particular, the fusion energy science community has made excellent progress in developing advanced codes for which computer run-time and problem size scale well with the number of processors on massively parallel machines (MPP's). A good example is the effective usage of the full power of multi-teraflop MPP's to produce 3-dimensional, general geometry, nonlinear particle simulations which have accelerated progress in understanding the nature of turbulence self-regulation by zonal flows. It should be emphasized that these calculations, which typically utilized billions of particles for tens of thousands time-steps, would not have been possible without access to powerful present generation MPP computers and the associated diagnostic and visualization capabilities. In general, results from advanced simulations provide great encouragement for being able to include increasingly realistic dynamics to enable deeper physics insights into plasmas in both natural and laboratory environments. The associated scientific excitement should serve to stimulate improved cross-cutting collaborations with other fields and also to help attract bright young talent to plasma science.

  1. Enhanced Contact Graph Routing (ECGR) MACHETE Simulation Model

    NASA Technical Reports Server (NTRS)

    Segui, John S.; Jennings, Esther H.; Clare, Loren P.

    2013-01-01

    Contact Graph Routing (CGR) for Delay/Disruption Tolerant Networking (DTN) space-based networks makes use of the predictable nature of node contacts to make real-time routing decisions given unpredictable traffic patterns. The contact graph will have been disseminated to all nodes before the start of route computation. CGR was designed for space-based networking environments where future contact plans are known or are independently computable (e.g., using known orbital dynamics). For each data item (known as a bundle in DTN), a node independently performs route selection by examining possible paths to the destination. Route computation could conceivably run thousands of times a second, so computational load is important. This work refers to the simulation software model of Enhanced Contact Graph Routing (ECGR) for DTN Bundle Protocol in JPL's MACHETE simulation tool. The simulation model was used for performance analysis of CGR and led to several performance enhancements. The simulation model was used to demonstrate the improvements of ECGR over CGR as well as other routing methods in space network scenarios. ECGR moved to using earliest arrival time because it is a global monotonically increasing metric that guarantees the safety properties needed for the solution's correctness since route re-computation occurs at each node to accommodate unpredicted changes (e.g., traffic pattern, link quality). Furthermore, using earliest arrival time enabled the use of the standard Dijkstra algorithm for path selection. The Dijkstra algorithm for path selection has a well-known inexpensive computational cost. These enhancements have been integrated into the open source CGR implementation. The ECGR model is also useful for route metric experimentation and comparisons with other DTN routing protocols particularly when combined with MACHETE's space networking models and Delay Tolerant Link State Routing (DTLSR) model.
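
    The earliest-arrival-time metric mentioned above is what makes a standard Dijkstra search applicable: contacts are expanded in order of the earliest time a bundle could reach each node. The sketch below is a simplified Python illustration over a made-up contact plan; it ignores contact capacities, queueing, and the per-bundle re-computation that a real CGR/ECGR implementation performs.

    ```python
    import heapq

    def earliest_arrival(contacts, source, dest, t0):
        """Dijkstra over a contact plan keyed on earliest arrival time.
        contacts: list of (from_node, to_node, t_start, t_end, owlt)."""
        best = {source: t0}
        heap = [(t0, source)]
        while heap:
            t_arr, node = heapq.heappop(heap)
            if node == dest:
                return t_arr
            if t_arr > best.get(node, float("inf")):
                continue                      # stale queue entry
            for frm, to, t_start, t_end, owlt in contacts:
                if frm != node:
                    continue
                t_tx = max(t_arr, t_start)    # wait until the contact opens
                if t_tx > t_end:
                    continue                  # contact closes before we can transmit
                t_new = t_tx + owlt           # add the one-way link delay
                if t_new < best.get(to, float("inf")):
                    best[to] = t_new
                    heapq.heappush(heap, (t_new, to))
        return None                           # destination unreachable

    # Hypothetical contact plan: ground station -> relay -> lander, times in seconds.
    plan = [("gs", "relay", 0, 100, 1.2),
            ("relay", "lander", 150, 300, 0.4),
            ("gs", "lander", 500, 600, 2.0)]
    print(earliest_arrival(plan, "gs", "lander", t0=10.0))   # 150.4, via the relay
    ```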

  2. Towards data warehousing and mining of protein unfolding simulation data.

    PubMed

    Berrar, Daniel; Stahl, Frederic; Silva, Candida; Rodrigues, J Rui; Brito, Rui M M; Dubitzky, Werner

    2005-10-01

    The prediction of protein structure and the precise understanding of protein folding and unfolding processes remain one of the greatest challenges in structural biology and bioinformatics. Computer simulations based on molecular dynamics (MD) are at the forefront of the effort to gain a deeper understanding of these complex processes. Currently, these MD simulations are usually on the order of tens of nanoseconds, generate a large amount of conformational data and are computationally expensive. More and more groups run such simulations and generate a myriad of data, which raises new challenges in managing and analyzing these data. Because of the vast range of proteins researchers want to study and simulate, the computational effort needed to generate data, the large data volumes involved, and the different types of analyses scientists need to perform, it is desirable to provide a public repository allowing researchers to pool and share protein unfolding data. To adequately organize, manage, and analyze the data generated by unfolding simulation studies, we designed a data warehouse system that is embedded in a grid environment to facilitate the seamless sharing of available computer resources and thus enable many groups to share complex molecular dynamics simulations on a more regular basis. To gain insight into the conformational fluctuations and stability of the monomeric forms of the amyloidogenic protein transthyretin (TTR), molecular dynamics unfolding simulations of the monomer of human TTR have been conducted. Trajectory data and meta-data of the wild-type (WT) protein and the highly amyloidogenic variant L55P-TTR represent the test case for the data warehouse. Web and grid services, especially pre-defined data mining services that can run on or 'near' the data repository of the data warehouse, are likely to play a pivotal role in the analysis of molecular dynamics unfolding data.

  3. Fast multipurpose Monte Carlo simulation for proton therapy using multi- and many-core CPU architectures.

    PubMed

    Souris, Kevin; Lee, John Aldo; Sterpin, Edmond

    2016-04-01

    Accuracy in proton therapy treatment planning can be improved using Monte Carlo (MC) simulations. However, the long computation time of such methods hinders their use in clinical routine. This work aims to develop a fast multipurpose Monte Carlo simulation tool for proton therapy using massively parallel central processing unit (CPU) architectures. A new Monte Carlo code, called MCsquare (many-core Monte Carlo), has been designed and optimized for the last generation of Intel Xeon processors and Intel Xeon Phi coprocessors. These massively parallel architectures offer the flexibility and the computational power suitable for MC methods. The class-II condensed history algorithm of MCsquare provides a fast and yet accurate method of simulating heavy charged particles such as protons, deuterons, and alphas inside voxelized geometries. Hard ionizations, with energy losses above a user-specified threshold, are simulated individually while soft events are regrouped in a multiple scattering theory. Elastic and inelastic nuclear interactions are sampled from ICRU 63 differential cross sections, thereby allowing for the computation of prompt gamma emission profiles. MCsquare has been benchmarked with the gate/geant4 Monte Carlo application for homogeneous and heterogeneous geometries. Comparisons with gate/geant4 for various geometries show deviations within 2%-1 mm. In spite of the limited memory bandwidth of the coprocessor, simulation time is below 25 s for 10^7 primary 200 MeV protons in average soft tissues using all Xeon Phi and CPU resources embedded in a single desktop unit. MCsquare exploits the flexibility of CPU architectures to provide a multipurpose MC simulation tool. Optimized code enables the use of accurate MC calculation within a reasonable computation time, adequate for clinical practice. MCsquare also simulates prompt gamma emission and can thus also be used for in vivo range verification.
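
    The split between individually simulated hard ionizations and grouped soft events can be illustrated with a deliberately simplified Python loop. The stopping-power and event-rate functions below are invented placeholders, not MCsquare's physics models.

      import numpy as np

      rng = np.random.default_rng(0)

      def transport_proton(E0_mev, step_cm=0.05, threshold_mev=0.1):
          """Toy class-II (mixed) condensed-history loop, illustration only.

          Sub-threshold ('soft') losses are applied continuously via a made-up
          restricted stopping power; losses above threshold_mev ('hard' ionizations)
          are sampled as discrete events along each step.
          """
          def restricted_stopping_power(E):   # hypothetical MeV/cm
              return 12.0 / max(E, 0.5)

          def hard_event_rate(E):             # hypothetical hard events per cm
              return 0.8 / max(E, 0.5)

          E, depth, history = E0_mev, 0.0, []
          while E > threshold_mev:
              soft_loss = restricted_stopping_power(E) * step_cm       # grouped soft events
              n_hard = rng.poisson(hard_event_rate(E) * step_cm)       # discrete hard ionizations
              hard_loss = sum(threshold_mev + rng.exponential(threshold_mev) for _ in range(n_hard))
              loss = min(E, soft_loss + hard_loss)
              E -= loss
              depth += step_cm
              history.append((depth, loss))
          return history

      print("toy proton stopped after", len(transport_proton(50.0)), "steps")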

  4. BioNSi: A Discrete Biological Network Simulator Tool.

    PubMed

    Rubinstein, Amir; Bracha, Noga; Rudner, Liat; Zucker, Noga; Sloin, Hadas E; Chor, Benny

    2016-08-05

    Modeling and simulation of biological networks is an effective and widely used research methodology. The Biological Network Simulator (BioNSi) is a tool for modeling biological networks and simulating their discrete-time dynamics, implemented as a Cytoscape App. BioNSi includes a visual representation of the network that enables researchers to construct, set the parameters, and observe network behavior under various conditions. To construct a network instance in BioNSi, only partial, qualitative biological data suffices. The tool is aimed for use by experimental biologists and requires no prior computational or mathematical expertise. BioNSi is freely available at http://bionsi.wix.com/bionsi , where a complete user guide and a step-by-step manual can also be found.
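
    A toy Python sketch of discrete-time network dynamics in the spirit described above is given below; the update rule, node levels, and example network are illustrative assumptions and do not reproduce BioNSi's exact semantics.

      def simulate_discrete_network(levels, edges, max_level=9, steps=10):
          """Toy discrete-time network simulation (not BioNSi's update rule).

          levels: dict node -> integer activity level
          edges:  list of (source, target, weight), weight +1 (activation) or -1 (inhibition)
          At each step every node moves one level toward the net influence of its regulators.
          """
          history = [dict(levels)]
          for _ in range(steps):
              new = dict(levels)
              for node in levels:
                  influence = sum(w * levels[src] for src, tgt, w in edges if tgt == node)
                  if influence > 0:
                      new[node] = min(max_level, levels[node] + 1)
                  elif influence < 0:
                      new[node] = max(0, levels[node] - 1)
              levels = new
              history.append(dict(levels))
          return history

      # Example: a two-gene negative feedback loop
      hist = simulate_discrete_network({"A": 5, "B": 0}, [("A", "B", +1), ("B", "A", -1)])
      print(hist[-1])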

  5. Physics-based interactive volume manipulation for sharing surgical process.

    PubMed

    Nakao, Megumi; Minato, Kotaro

    2010-05-01

    This paper presents a new set of techniques by which surgeons can interactively manipulate patient-specific volumetric models for sharing the surgical process. To handle physical interaction between the surgical tools and organs, we propose a simple surface-constraint-based manipulation algorithm to consistently simulate common surgical manipulations such as grasping, holding and retraction. Our computational model is capable of simulating soft-tissue deformation and incision in real time. We also present visualization techniques in order to rapidly visualize time-varying, volumetric information on the deformed image. This paper demonstrates the success of the proposed methods in enabling the simulation of surgical processes, and the ways in which this simulation facilitates preoperative planning and rehearsal.

  6. MicroHH 1.0: a computational fluid dynamics code for direct numerical simulation and large-eddy simulation of atmospheric boundary layer flows

    NASA Astrophysics Data System (ADS)

    van Heerwaarden, Chiel C.; van Stratum, Bart J. H.; Heus, Thijs; Gibbs, Jeremy A.; Fedorovich, Evgeni; Mellado, Juan Pedro

    2017-08-01

    This paper describes MicroHH 1.0, a new and open-source (www.microhh.org) computational fluid dynamics code for the simulation of turbulent flows in the atmosphere. It is designed primarily for direct numerical simulation but also supports large-eddy simulation (LES). The paper covers the description of the governing equations, their numerical implementation, and the parameterizations included in the code. Furthermore, the paper presents the validation of the dynamical core in the form of convergence and conservation tests, and comparison of simulations of channel flows and slope flows against well-established test cases. The full numerical model, including the associated parameterizations for LES, has been tested for a set of cases under stable and unstable conditions, under the Boussinesq and anelastic approximations, and with dry and moist convection under stationary and time-varying boundary conditions. The paper presents performance tests showing good scaling from 256 to 32 768 processes. The graphics processing unit (GPU)-enabled version of the code can reach a speedup of more than an order of magnitude for simulations that fit in the memory of a single GPU.

  7. OpenGeoSys: Performance-Oriented Computational Methods for Numerical Modeling of Flow in Large Hydrogeological Systems

    NASA Astrophysics Data System (ADS)

    Naumov, D.; Fischer, T.; Böttcher, N.; Watanabe, N.; Walther, M.; Rink, K.; Bilke, L.; Shao, H.; Kolditz, O.

    2014-12-01

    OpenGeoSys (OGS) is a scientific open source code for numerical simulation of thermo-hydro-mechanical-chemical processes in porous and fractured media. Its basic concept is to provide a flexible numerical framework for solving multi-field problems in geoscience and hydrology, e.g., CO2 storage, geothermal power plant forecast simulation, salt water intrusion, and water resources management. Advances in computational mathematics have revolutionized the variety and nature of the problems that environmental scientists and engineers can now address, and intensive code development in recent years has enabled the solution of much larger numerical problems and applications. However, simulating environmental processes along the water cycle at large scales, such as for complete catchments or reservoirs, remains a computationally challenging task. Therefore, we started a new OGS code development with a focus on execution speed and parallelization. In the new version, a local data structure concept improves the instruction and data cache performance by a tight bundling of data with an element-wise numerical integration loop. Dedicated analysis methods enable the investigation of memory-access patterns in the local and global assembler routines, which leads to further data structure optimization for an additional performance gain. The concept is presented together with a technical code analysis of the recent development and a large case study including transient flow simulation in the unsaturated/saturated zone of the Thuringian Syncline, Germany. The analysis is performed on a high-resolution mesh (up to 50M elements) with embedded fault structures.
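
    The local data structure idea, bundling each element's data with its integration and assembly step, can be sketched in Python as follows. OGS itself is a C++ code; this toy 1D example only illustrates the element-wise gather/compute/scatter pattern, and the local matrix is a made-up stand-in.

      import numpy as np
      from scipy.sparse import coo_matrix

      def assemble_global_matrix(nodes, elements, local_matrix):
          """Element-wise assembly sketch: keep each element's data (connectivity,
          coordinates, local matrix) together, then scatter into a global sparse matrix.
          Illustrates the data-locality idea; not OpenGeoSys code.
          """
          rows, cols, vals = [], [], []
          for conn in elements:                     # tight loop over elements
              coords = nodes[conn]                  # gather local node coordinates
              k_local = local_matrix(coords)        # dense local integration
              for a, i in enumerate(conn):          # scatter local -> global
                  for b, j in enumerate(conn):
                      rows.append(i); cols.append(j); vals.append(k_local[a, b])
          n = len(nodes)
          return coo_matrix((vals, (rows, cols)), shape=(n, n)).tocsr()

      # Example: 1D linear elements with a toy local "stiffness" matrix
      nodes = np.linspace(0.0, 1.0, 5).reshape(-1, 1)
      elements = [[i, i + 1] for i in range(4)]
      toy_local = lambda coords: np.array([[1.0, -1.0], [-1.0, 1.0]]) / abs(coords[1, 0] - coords[0, 0])
      K = assemble_global_matrix(nodes, elements, toy_local)
      print(K.toarray())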

  8. Firefighting Trainer

    NASA Technical Reports Server (NTRS)

    1984-01-01

    Firefighting trainees conduct fire control exercises using a prototype simulator known as the Emergency Management Computer Aided Training System (EMCAT). Developed by Marshall Space Flight Center (MSFC) in response to a request from the Huntsville (AL) Fire Department, EMCAT enables a trainee to assume the role of fireground commander and make quick decisions on the best use of firefighting personnel and equipment.

  9. Institutional Computing Executive Group Review of Multi-programmatic & Institutional Computing, Fiscal Year 2005 and 2006

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Langer, S; Rotman, D; Schwegler, E

    The Institutional Computing Executive Group (ICEG) review of FY05-06 Multiprogrammatic and Institutional Computing (M and IC) activities is presented in the attached report. In summary, we find that the M and IC staff does an outstanding job of acquiring and supporting a wide range of institutional computing resources to meet the programmatic and scientific goals of LLNL. The responsiveness and high quality of support given to users and the programs investing in M and IC reflects the dedication and skill of the M and IC staff. M and IC has successfully managed serial capacity, parallel capacity, and capability computing resources. Serial capacity computing supports a wide range of scientific projects which require access to a few high performance processors within a shared memory computer. Parallel capacity computing supports scientific projects that require a moderate number of processors (up to roughly 1000) on a parallel computer. Capability computing supports parallel jobs that push the limits of simulation science. M and IC has worked closely with Stockpile Stewardship, and together they have made LLNL a premier institution for computational and simulation science. Such a standing is vital to the continued success of laboratory science programs and to the recruitment and retention of top scientists. This report provides recommendations to build on M and IC's accomplishments and improve simulation capabilities at LLNL. We recommend that the institution fully fund (1) operation of the atlas cluster purchased in FY06 to support a few large projects; (2) operation of the thunder and zeus clusters to enable 'mid-range' parallel capacity simulations during normal operation and a limited number of large simulations during dedicated application time; (3) operation of the new yana cluster to support a wide range of serial capacity simulations; (4) improvements to the reliability and performance of the Lustre parallel file system; (5) support for the new GDO petabyte-class storage facility on the green network for use in data intensive external collaborations; and (6) continued support for visualization and other methods for analyzing large simulations. We also recommend that M and IC begin planning in FY07 for the next upgrade of its parallel clusters. LLNL investments in M and IC have resulted in a world-class simulation capability leading to innovative science. We thank the LLNL management for its continued support and thank the M and IC staff for its vision and dedicated efforts to make it all happen.

  10. CFD Vision 2030 Study: A Path to Revolutionary Computational Aerosciences

    NASA Technical Reports Server (NTRS)

    Slotnick, Jeffrey; Khodadoust, Abdollah; Alonso, Juan; Darmofal, David; Gropp, William; Lurie, Elizabeth; Mavriplis, Dimitri

    2014-01-01

    This report documents the results of a study to address the long range, strategic planning required by NASA's Revolutionary Computational Aerosciences (RCA) program in the area of computational fluid dynamics (CFD), including future software and hardware requirements for High Performance Computing (HPC). Specifically, the "Vision 2030" CFD study is to provide a knowledge-based forecast of the future computational capabilities required for turbulent, transitional, and reacting flow simulations across a broad Mach number regime, and to lay the foundation for the development of a future framework and/or environment where physics-based, accurate predictions of complex turbulent flows, including flow separation, can be accomplished routinely and efficiently in cooperation with other physics-based simulations to enable multi-physics analysis and design. Specific technical requirements from the aerospace industrial and scientific communities were obtained to determine critical capability gaps, anticipated technical challenges, and impediments to achieving the target CFD capability in 2030. A preliminary development plan and roadmap were created to help focus investments in technology development to help achieve the CFD vision in 2030.

  11. Computer-Based Technologies in Dentistry: Types and Applications

    PubMed Central

    Albuha Al-Mussawi, Raja’a M.; Farid, Farzaneh

    2016-01-01

    During dental education, dental students learn how to examine patients, make diagnoses, plan treatment and perform dental procedures perfectly and efficiently. However, progress in computer-based technologies, including virtual reality (VR) simulators, augmented reality (AR) and computer aided design/computer aided manufacturing (CAD/CAM) systems, has resulted in new modalities for instruction and practice of dentistry. Virtual reality dental simulators enable repeated, objective and assessable practice in various controlled situations. Superimposition of three-dimensional (3D) virtual images on actual images in AR allows surgeons to simultaneously visualize the surgical site and superimpose informative 3D images of invisible regions on the surgical site to serve as a guide. The use of CAD/CAM systems for designing and manufacturing of dental appliances and prostheses has been well established. This article reviews computer-based technologies, their application in dentistry and their potentials and limitations in promoting dental education, training and practice. Practitioners will be able to choose from a broader spectrum of options in their field of practice by becoming familiar with new modalities of training and practice. PMID:28392819

  12. Computer-Based Technologies in Dentistry: Types and Applications.

    PubMed

    Albuha Al-Mussawi, Raja'a M; Farid, Farzaneh

    2016-06-01

    During dental education, dental students learn how to examine patients, make diagnoses, plan treatment and perform dental procedures perfectly and efficiently. However, progress in computer-based technologies, including virtual reality (VR) simulators, augmented reality (AR) and computer aided design/computer aided manufacturing (CAD/CAM) systems, has resulted in new modalities for instruction and practice of dentistry. Virtual reality dental simulators enable repeated, objective and assessable practice in various controlled situations. Superimposition of three-dimensional (3D) virtual images on actual images in AR allows surgeons to simultaneously visualize the surgical site and superimpose informative 3D images of invisible regions on the surgical site to serve as a guide. The use of CAD/CAM systems for designing and manufacturing of dental appliances and prostheses has been well established. This article reviews computer-based technologies, their application in dentistry and their potentials and limitations in promoting dental education, training and practice. Practitioners will be able to choose from a broader spectrum of options in their field of practice by becoming familiar with new modalities of training and practice.

  13. Using a computer simulation for teaching communication skills: A blinded multisite mixed methods randomized controlled trial.

    PubMed

    Kron, Frederick W; Fetters, Michael D; Scerbo, Mark W; White, Casey B; Lypson, Monica L; Padilla, Miguel A; Gliva-McConvey, Gayle A; Belfore, Lee A; West, Temple; Wallace, Amelia M; Guetterman, Timothy C; Schleicher, Lauren S; Kennedy, Rebecca A; Mangrulkar, Rajesh S; Cleary, James F; Marsella, Stacy C; Becker, Daniel M

    2017-04-01

    To assess advanced communication skills among second-year medical students exposed either to a computer simulation (MPathic-VR) featuring virtual humans, or to a multimedia computer-based learning module, and to understand each group's experiences and learning preferences. A single-blinded, mixed methods, randomized, multisite trial compared MPathic-VR (N=210) to computer-based learning (N=211). Primary outcomes: communication scores during repeat interactions with MPathic-VR's intercultural and interprofessional communication scenarios and scores on a subsequent advanced communication skills objective structured clinical examination (OSCE). Multivariate analysis of variance was used to compare outcomes. Secondary outcomes: student attitude surveys and qualitative assessments of their experiences with MPathic-VR or computer-based learning. MPathic-VR-trained students improved their intercultural and interprofessional communication performance between their first and second interactions with each scenario. They also achieved significantly higher composite scores on the OSCE than computer-based learning-trained students. Attitudes and experiences were more positive among students trained with MPathic-VR, who valued its providing immediate feedback, teaching nonverbal communication skills, and preparing them for emotion-charged patient encounters. MPathic-VR was effective in training advanced communication skills and in enabling knowledge transfer into a more realistic clinical situation. MPathic-VR's virtual human simulation offers an effective and engaging means of advanced communication training. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  14. Using a computer simulation for teaching communication skills: A blinded multisite mixed methods randomized controlled trial

    PubMed Central

    Kron, Frederick W.; Fetters, Michael D.; Scerbo, Mark W.; White, Casey B.; Lypson, Monica L.; Padilla, Miguel A.; Gliva-McConvey, Gayle A.; Belfore, Lee A.; West, Temple; Wallace, Amelia M.; Guetterman, Timothy C.; Schleicher, Lauren S.; Kennedy, Rebecca A.; Mangrulkar, Rajesh S.; Cleary, James F.; Marsella, Stacy C.; Becker, Daniel M.

    2016-01-01

    Objectives To assess advanced communication skills among second-year medical students exposed either to a computer simulation (MPathic-VR) featuring virtual humans, or to a multimedia computer-based learning module, and to understand each group’s experiences and learning preferences. Methods A single-blinded, mixed methods, randomized, multisite trial compared MPathic-VR (N=210) to computer-based learning (N=211). Primary outcomes: communication scores during repeat interactions with MPathic-VR’s intercultural and interprofessional communication scenarios and scores on a subsequent advanced communication skills objective structured clinical examination (OSCE). Multivariate analysis of variance was used to compare outcomes. Secondary outcomes: student attitude surveys and qualitative assessments of their experiences with MPathic-VR or computer-based learning. Results MPathic-VR-trained students improved their intercultural and interprofessional communication performance between their first and second interactions with each scenario. They also achieved significantly higher composite scores on the OSCE than computer-based learning-trained students. Attitudes and experiences were more positive among students trained with MPathic-VR, who valued its providing immediate feedback, teaching nonverbal communication skills, and preparing them for emotion-charged patient encounters. Conclusions MPathic-VR was effective in training advanced communication skills and in enabling knowledge transfer into a more realistic clinical situation. Practice Implications MPathic-VR’s virtual human simulation offers an effective and engaging means of advanced communication training. PMID:27939846

  15. NASA HPCC Technology for Aerospace Analysis and Design

    NASA Technical Reports Server (NTRS)

    Schulbach, Catherine H.

    1999-01-01

    The Computational Aerosciences (CAS) Project is part of NASA's High Performance Computing and Communications Program. Its primary goal is to accelerate the availability of high-performance computing technology to the US aerospace community, thus providing the US aerospace community with key tools necessary to reduce design cycle times and increase fidelity in order to improve safety, efficiency and capability of future aerospace vehicles. A complementary goal is to hasten the emergence of a viable commercial market within the aerospace community for the advantage of the domestic computer hardware and software industry. The CAS Project selects representative aerospace problems (especially design) and uses them to focus efforts on advancing aerospace algorithms and applications, systems software, and computing machinery to demonstrate vast improvements in system performance and capability over the life of the program. Recent demonstrations have served to assess the benefits of possible performance improvements while reducing the risk of adopting high-performance computing technology. This talk will discuss past accomplishments in providing technology to the aerospace community, present efforts, and future goals. For example, the times to do full combustor and compressor simulations (of aircraft engines) have been reduced by factors of 320:1 and 400:1, respectively. While this has enabled new capabilities in engine simulation, the goal of an overnight, dynamic, multi-disciplinary, 3-dimensional simulation of an aircraft engine is still years away and will require new generations of high-end technology.

  16. In Silico Simulation of a Clinical Trial Concerning Tumour Response to Radiotherapy

    NASA Astrophysics Data System (ADS)

    Dionysiou, Dimitra D.; Stamatakos, Georgios S.; Athanaileas, Theodoras E.; Merrychtas, Andreas; Kaklamani, Dimitra; Varvarigou, Theodora; Uzunoglu, Nikolaos

    2008-11-01

    The aim of this paper is to demonstrate how multilevel tumour growth and response to therapeutic treatment models can be used in order to simulate clinical trials, with the long-term intention of both better designing clinical studies and understanding their outcome based on basic biological science. For this purpose, an already developed computer simulation model of glioblastoma multiforme response to radiotherapy has been used and a clinical study concerning glioblastoma multiforme response to radiotherapy has been simulated. In order to facilitate the simulation of such virtual trials, a toolkit enabling the user-friendly execution of the simulations on grid infrastructures has been designed and developed. The results of the conducted virtual trial are in agreement with the outcome of the real clinical study.

  17. Petascale Kinetic Simulations in Space Sciences: New Simulations and Data Discovery Techniques and Physics Results

    NASA Astrophysics Data System (ADS)

    Karimabadi, Homa

    2012-03-01

    Recent advances in simulation technology and hardware are enabling breakthrough science where many longstanding problems can now be addressed for the first time. In this talk, we focus on kinetic simulations of the Earth's magnetosphere and magnetic reconnection process which is the key mechanism that breaks the protective shield of the Earth's dipole field, allowing the solar wind to enter the Earth's magnetosphere. This leads to the so-called space weather where storms on the Sun can affect space-borne and ground-based technological systems on Earth. The talk will consist of three parts: (a) overview of a new multi-scale simulation technique where each computational grid is updated based on its own unique timestep, (b) Presentation of a new approach to data analysis that we refer to as Physics Mining which entails combining data mining and computer vision algorithms with scientific visualization to extract physics from the resulting massive data sets. (c) Presentation of several recent discoveries in studies of space plasmas including the role of vortex formation and resulting turbulence in magnetized plasmas.
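
    The per-grid time-stepping idea in part (a) can be caricatured with a short Python loop in which each block advances with its own timestep and synchronizes at a common horizon. The data layout and the decay example are assumptions for illustration, not the production multi-scale particle scheme described in the talk.

      def advance_multiscale(blocks, t_end):
          """Toy local time-stepping loop: each block advances with its own dt and
          all blocks synchronize at t_end (illustrative sketch only).

          blocks: list of dicts with keys 'state', 'dt', and 'step' (a callable that
          advances the state by one local timestep).
          """
          for block in blocks:
              t = 0.0
              while t < t_end:
                  dt = min(block["dt"], t_end - t)     # do not step past the sync time
                  block["state"] = block["step"](block["state"], dt)
                  t += dt
          return blocks

      # Example: a coarse block and a fine block both integrating dx/dt = -x
      decay = lambda x, dt: x * (1.0 - dt)
      blocks = [{"state": 1.0, "dt": 0.1, "step": decay},
                {"state": 1.0, "dt": 0.01, "step": decay}]
      print([b["state"] for b in advance_multiscale(blocks, 1.0)])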

  18. Constant-pH Molecular Dynamics Simulations for Large Biomolecular Systems

    DOE PAGES

    Radak, Brian K.; Chipot, Christophe; Suh, Donghyuk; ...

    2017-11-07

    We report that an increasingly important endeavor is to develop computational strategies that enable molecular dynamics (MD) simulations of biomolecular systems with spontaneous changes in protonation states under conditions of constant pH. The present work describes our efforts to implement the powerful constant-pH MD simulation method, based on a hybrid nonequilibrium MD/Monte Carlo (neMD/MC) technique within the highly scalable program NAMD. The constant-pH hybrid neMD/MC method has several appealing features: it samples the correct semigrand canonical ensemble rigorously, the computational cost increases linearly with the number of titratable sites, and it is applicable to explicit solvent simulations. The present implementation of the constant-pH hybrid neMD/MC in NAMD is designed to handle a wide range of biomolecular systems with no constraints on the choice of force field. Furthermore, the sampling efficiency can be adaptively improved on-the-fly by adjusting algorithmic parameters during the simulation. Finally, illustrative examples emphasizing medium- and large-scale applications on next-generation supercomputing architectures are provided.
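
    A schematic Python version of the Monte Carlo acceptance step for a protonation-state switch is given below. The functional form (nonequilibrium switching work plus a Henderson-Hasselbalch pH term) is a common way to write such criteria and is only an illustration, not the exact expression used in the NAMD implementation.

      import math, random

      def accept_protonation_switch(work_kcal, pH, pKa, deprotonating, temperature=300.0):
          """Schematic Metropolis acceptance for a constant-pH neMD/MC move (illustrative).

          work_kcal:      nonequilibrium work of the alchemical switching trajectory (kcal/mol)
          deprotonating:  True if the proposed move removes the proton
          """
          kB = 0.0019872041                       # kcal/(mol K)
          beta = 1.0 / (kB * temperature)
          # Henderson-Hasselbalch weight: deprotonation is favored when pH > pKa
          ph_term = math.log(10.0) * (pH - pKa) * (1.0 if deprotonating else -1.0)
          log_p = -beta * work_kcal + ph_term
          return math.log(random.random()) < min(0.0, log_p)

      # Example: attempt to deprotonate a site with pKa 4.8 at pH 7, given 1.2 kcal/mol of work
      print(accept_protonation_switch(1.2, pH=7.0, pKa=4.8, deprotonating=True))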

  19. A Method for Combining Experimentation and Molecular Dynamics Simulation to Improve Cohesive Zone Models for Metallic Microstructures

    NASA Technical Reports Server (NTRS)

    Hochhalter, J. D.; Glaessgen, E. H.; Ingraffea, A. R.; Aquino, W. A.

    2009-01-01

    Fracture processes within a material begin at the nanometer length scale at which the formation, propagation, and interaction of fundamental damage mechanisms occur. Physics-based modeling of these atomic processes quickly becomes computationally intractable as the system size increases. Thus, a multiscale modeling method, based on the aggregation of fundamental damage processes occurring at the nanoscale within a cohesive zone model, is under development and will enable computationally feasible and physically meaningful microscale fracture simulation in polycrystalline metals. This method employs atomistic simulation to provide an optimization loop with an initial prediction of a cohesive zone model (CZM). This initial CZM is then applied at the crack front region within a finite element model. The optimization procedure iterates upon the CZM until the finite element model acceptably reproduces the near-crack-front displacement fields obtained from experimental observation. With this approach, a comparison can be made between the original CZM predicted by atomistic simulation and the converged CZM that is based on experimental observation. Comparison of the two CZMs gives insight into how atomistic simulation scales.
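
    The optimization loop described above can be sketched in Python as a least-squares fit of CZM parameters to measured displacements. Here run_fe_model is a hypothetical stand-in for the finite element solver, and the toy linear "model" exists only so the sketch runs end to end.

      import numpy as np
      from scipy.optimize import minimize

      def calibrate_czm(czm_initial, run_fe_model, u_experiment):
          """Sketch of the CZM calibration loop (not the authors' code).

          czm_initial:  initial cohesive-zone parameters predicted by atomistic simulation,
                        e.g. [peak_traction, critical_separation]
          run_fe_model: hypothetical callable returning near-crack-front displacements
          u_experiment: displacement field observed experimentally at the same points
          """
          def mismatch(params):
              u_fe = run_fe_model(params)
              return np.sum((u_fe - u_experiment) ** 2)   # least-squares residual

          result = minimize(mismatch, x0=np.asarray(czm_initial), method="Nelder-Mead")
          return result.x                                  # experiment-consistent CZM

      # Toy stand-in for the FE model so the sketch is runnable
      true_params = np.array([2.0, 0.5])
      fake_fe = lambda p: p[0] * np.linspace(0, 1, 20) + p[1]
      u_obs = fake_fe(true_params)
      print(calibrate_czm([1.0, 1.0], fake_fe, u_obs))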

  20. Constant-pH Molecular Dynamics Simulations for Large Biomolecular Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Radak, Brian K.; Chipot, Christophe; Suh, Donghyuk

    We report that an increasingly important endeavor is to develop computational strategies that enable molecular dynamics (MD) simulations of biomolecular systems with spontaneous changes in protonation states under conditions of constant pH. The present work describes our efforts to implement the powerful constant-pH MD simulation method, based on a hybrid nonequilibrium MD/Monte Carlo (neMD/MC) technique within the highly scalable program NAMD. The constant-pH hybrid neMD/MC method has several appealing features: it samples the correct semigrand canonical ensemble rigorously, the computational cost increases linearly with the number of titratable sites, and it is applicable to explicit solvent simulations. The present implementation of the constant-pH hybrid neMD/MC in NAMD is designed to handle a wide range of biomolecular systems with no constraints on the choice of force field. Furthermore, the sampling efficiency can be adaptively improved on-the-fly by adjusting algorithmic parameters during the simulation. Finally, illustrative examples emphasizing medium- and large-scale applications on next-generation supercomputing architectures are provided.

  1. Spatial interpretation of NASA's Marshall Space Flight Center Payload Operations Control Center using virtual reality technology

    NASA Technical Reports Server (NTRS)

    Lindsey, Patricia F.

    1993-01-01

    In its search for higher level computer interfaces and more realistic electronic simulations for measurement and spatial analysis in human factors design, NASA at MSFC is evaluating the functionality of virtual reality (VR) technology. Virtual reality simulation generates a three dimensional environment in which the participant appears to be enveloped. It is a type of interactive simulation in which humans are not only involved, but included. Virtual reality technology is still in the experimental phase, but it appears to be the next logical step after computer aided three-dimensional animation in transferring the viewer from a passive to an active role in experiencing and evaluating an environment. There is great potential for using this new technology when designing environments for more successful interaction, both with the environment and with another participant in a remote location. At the University of North Carolina, a VR simulation of the planned Sitterson Hall revealed a flaw in the building's design that had not been observed during examination of the more traditional building plan simulation methods on paper and on a computer-aided design (CAD) workstation. The virtual environment enables multiple participants in remote locations to come together and interact with one another and with the environment. Each participant is capable of seeing herself and the other participants and of interacting with them within the simulated environment.

  2. Embedded ensemble propagation for improving performance, portability, and scalability of uncertainty quantification on emerging computational architectures

    DOE PAGES

    Phipps, Eric T.; D'Elia, Marta; Edwards, Harold C.; ...

    2017-04-18

    In this study, quantifying simulation uncertainties is a critical component of rigorous predictive simulation. A key component of this is forward propagation of uncertainties in simulation input data to output quantities of interest. Typical approaches involve repeated sampling of the simulation over the uncertain input data, and can require numerous samples when accurately propagating uncertainties from large numbers of sources. Often simulation processes from sample to sample are similar and much of the data generated from each sample evaluation could be reused. We explore a new method for implementing sampling methods that simultaneously propagates groups of samples together in an embedded fashion, which we call embedded ensemble propagation. We show how this approach takes advantage of properties of modern computer architectures to improve performance by enabling reuse between samples, reducing memory bandwidth requirements, improving memory access patterns, improving opportunities for fine-grained parallelization, and reducing communication costs. We describe a software technique for implementing embedded ensemble propagation based on the use of C++ templates and describe its integration with various scientific computing libraries within Trilinos. We demonstrate improved performance, portability and scalability for the approach applied to the simulation of partial differential equations on a variety of CPU, GPU, and accelerator architectures, including up to 131,072 cores on a Cray XK7 (Titan).
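
    A numpy caricature of embedded ensemble propagation is shown below: an ensemble axis is carried through a 1D heat-equation update so that all samples share the stencil work and memory traffic. The real implementation relies on C++ templates within Trilinos, so this sketch only illustrates the idea.

      import numpy as np

      def heat_step_ensemble(u, kappa, dx, dt):
          """Advance an ensemble of 1D heat-equation samples in one fused update.

          u:     array of shape (n_samples, n_cells) -- the embedded ensemble axis
          kappa: array of shape (n_samples, 1), one uncertain diffusivity per sample
          """
          lap = np.zeros_like(u)
          lap[:, 1:-1] = (u[:, 2:] - 2.0 * u[:, 1:-1] + u[:, :-2]) / dx**2
          return u + dt * kappa * lap

      # Example: 32 diffusivity samples propagated together through 100 steps
      n_samples, n_cells, dx, dt = 32, 64, 1.0 / 64, 5e-5
      u = np.tile(np.sin(np.pi * np.linspace(0, 1, n_cells)), (n_samples, 1))
      kappa = np.random.default_rng(1).uniform(0.5, 1.5, size=(n_samples, 1))
      for _ in range(100):
          u = heat_step_ensemble(u, kappa, dx, dt)
      print(u.mean(axis=1)[:4])   # mean temperature of the first few samples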

  3. Controlling the error on target motion through real-time mesh adaptation: Applications to deep brain stimulation.

    PubMed

    Bui, Huu Phuoc; Tomar, Satyendra; Courtecuisse, Hadrien; Audette, Michel; Cotin, Stéphane; Bordas, Stéphane P A

    2018-05-01

    An error-controlled mesh refinement procedure for needle insertion simulations is presented. As an example, the procedure is applied for simulations of electrode implantation for deep brain stimulation. We take into account the brain shift phenomenon occurring when a craniotomy is performed. We observe that the error in the computation of the displacement and stress fields is localised around the needle tip and the needle shaft during needle insertion simulation. By suitably and adaptively refining the mesh in this region, our approach makes it possible to control, and thus to reduce, the error whilst maintaining a coarser mesh in other parts of the domain. Through academic and practical examples we demonstrate that our adaptive approach, as compared with a uniform coarse mesh, increases the accuracy of the displacement and stress fields around the needle shaft and, for a given accuracy, saves computational time with respect to a uniformly finer mesh. This facilitates real-time simulations. The proposed methodology has direct implications in increasing the accuracy, and controlling the computational expense of the simulation of percutaneous procedures such as biopsy, brachytherapy, regional anaesthesia, or cryotherapy. Moreover, the proposed approach can be helpful in the development of robotic surgeries because the simulation taking place in the control loop of a robot needs to be accurate, and to occur in real time. Copyright © 2018 John Wiley & Sons, Ltd.
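
    A 1D Python sketch of error-driven local refinement is shown below; the error indicator, tolerance, and "needle tip" location are invented for illustration, and the authors' method operates on 3D finite element meshes in real time.

      import numpy as np

      def refine_where_error_high(nodes, error_indicator, tol):
          """One pass of error-controlled refinement (1D sketch of the idea):
          elements whose error indicator exceeds tol are bisected, the rest of
          the mesh stays coarse.
          """
          new_nodes = [nodes[0]]
          for left, right in zip(nodes[:-1], nodes[1:]):
              if error_indicator(0.5 * (left + right)) > tol:
                  new_nodes.append(0.5 * (left + right))   # bisect the offending element
              new_nodes.append(right)
          return np.array(new_nodes)

      # Example: the error is concentrated near a "needle tip" at x = 0.7
      tip = 0.7
      indicator = lambda x: np.exp(-((x - tip) / 0.05) ** 2)
      mesh = np.linspace(0.0, 1.0, 11)
      for _ in range(3):                                   # a few adaptive passes
          mesh = refine_where_error_high(mesh, indicator, tol=0.1)
      print(len(mesh), "nodes, smallest element", np.diff(mesh).min())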

  4. Community Petascale Project for Accelerator Science And Simulation: Advancing Computational Science for Future Accelerators And Accelerator Technologies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spentzouris, Panagiotis; /Fermilab; Cary, John

    The design and performance optimization of particle accelerators are essential for the success of the DOE scientific program in the next decade. Particle accelerators are very complex systems whose accurate description involves a large number of degrees of freedom and requires the inclusion of many physics processes. Building on the success of the SciDAC-1 Accelerator Science and Technology project, the SciDAC-2 Community Petascale Project for Accelerator Science and Simulation (ComPASS) is developing a comprehensive set of interoperable components for beam dynamics, electromagnetics, electron cooling, and laser/plasma acceleration modelling. ComPASS is providing accelerator scientists the tools required to enable the necessary accelerator simulation paradigm shift from high-fidelity single physics process modeling (covered under SciDAC-1) to high-fidelity multiphysics modeling. Our computational frameworks have been used to model the behavior of a large number of accelerators and accelerator R&D experiments, assisting both their design and performance optimization. As parallel computational applications, the ComPASS codes have been shown to make effective use of thousands of processors.

  5. A Computational Framework for Bioimaging Simulation

    PubMed Central

    Watabe, Masaki; Arjunan, Satya N. V.; Fukushima, Seiya; Iwamoto, Kazunari; Kozuka, Jun; Matsuoka, Satomi; Shindo, Yuki; Ueda, Masahiro; Takahashi, Koichi

    2015-01-01

    Using bioimaging technology, biologists have attempted to identify and document analytical interpretations that underlie biological phenomena in biological cells. Theoretical biology aims at distilling those interpretations into knowledge in the mathematical form of biochemical reaction networks and understanding how higher level functions emerge from the combined action of biomolecules. However, there still remain formidable challenges in bridging the gap between bioimaging and mathematical modeling. Generally, measurements using fluorescence microscopy systems are influenced by systematic effects that arise from the stochastic nature of biological cells, the imaging apparatus, and optical physics. Such systematic effects are always present in all bioimaging systems and hinder quantitative comparison between the cell model and bioimages. Computational tools for such a comparison are still unavailable. Thus, in this work, we present a computational framework for handling the parameters of the cell models and the optical physics governing bioimaging systems. Simulation using this framework can generate digital images of cell simulation results after accounting for the systematic effects. We then demonstrate that such a framework enables comparison at the level of photon-counting units. PMID:26147508

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Collins, William D; Johansen, Hans; Evans, Katherine J

    We present a survey of physical and computational techniques that have the potential to contribute to the next generation of high-fidelity, multi-scale climate simulations. Examples of the climate science problems that can be investigated with more depth include the capture of remote forcings of localized hydrological extreme events, an accurate representation of cloud features over a range of spatial and temporal scales, and parallel, large ensembles of simulations to more effectively explore model sensitivities and uncertainties. Numerical techniques, such as adaptive mesh refinement, implicit time integration, and separate treatment of fast physical time scales are enabling improved accuracy and fidelity in simulation of dynamics and allow more complete representations of climate features at the global scale. At the same time, partnerships with computer science teams have focused on taking advantage of evolving computer architectures, such as many-core processors and GPUs, so that these approaches which were previously considered prohibitively costly have become both more efficient and scalable. In combination, progress in these three critical areas is poised to transform climate modeling in the coming decades.

  7. Towards validated chemistry at extreme conditions: reactive MD simulations of shocked Polyvinyl Nitrate and Nitromethane

    NASA Astrophysics Data System (ADS)

    Islam, Md Mahbubul; Strachan, Alejandro

    A detailed atomistic-level understanding of the ultrafast chemistry of detonation processes of high energy materials is crucial to understand their performance and safety. Recent advances in laser shocks and ultra-fast spectroscopy are yielding the first direct experimental evidence of chemistry at extreme conditions. At the same time, reactive molecular dynamics (MD) in current high-performance computing platforms enable an atomic description of shock-induced chemistry with length and timescales approaching those of experiments. We use MD simulations with the reactive force field ReaxFF to investigate the shock-induced chemical decomposition mechanisms of polyvinyl nitrate (PVN) and nitromethane (NM). The effect of shock pressure on chemical reaction mechanisms and kinetics of both materials is investigated. For direct comparison of our simulation results with experimentally derived IR absorption data, we performed spectral analysis using atomistic velocities at various shock conditions. The combination of reactive MD simulations and ultrafast spectroscopy enables the validation of ReaxFF at extreme conditions and contributes to the interpretation of the experimental data relating changes in spectral features to atomic processes. Supported by the Office of Naval Research MURI program.

  8. THE VIRTUAL INSTRUMENT: SUPPORT FOR GRID-ENABLED MCELL SIMULATIONS

    PubMed Central

    Casanova, Henri; Berman, Francine; Bartol, Thomas; Gokcay, Erhan; Sejnowski, Terry; Birnbaum, Adam; Dongarra, Jack; Miller, Michelle; Ellisman, Mark; Faerman, Marcio; Obertelli, Graziano; Wolski, Rich; Pomerantz, Stuart; Stiles, Joel

    2010-01-01

    Ensembles of widely distributed, heterogeneous resources, or Grids, have emerged as popular platforms for large-scale scientific applications. In this paper we present the Virtual Instrument project, which provides an integrated application execution environment that enables end-users to run and interact with running scientific simulations on Grids. This work is performed in the specific context of MCell, a computational biology application. While MCell provides the basis for running simulations, its capabilities are currently limited in terms of scale, ease-of-use, and interactivity. These limitations preclude usage scenarios that are critical for scientific advances. Our goal is to create a scientific “Virtual Instrument” from MCell by allowing its users to transparently access Grid resources while being able to steer running simulations. In this paper, we motivate the Virtual Instrument project and discuss a number of relevant issues and accomplishments in the area of Grid software development and application scheduling. We then describe our software design and report on the current implementation. We verify and evaluate our design via experiments with MCell on a real-world Grid testbed. PMID:20689618

  9. Direct simulation Monte Carlo modeling of relaxation processes in polyatomic gases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pfeiffer, M., E-mail: mpfeiffer@irs.uni-stuttgart.de; Nizenkov, P., E-mail: nizenkov@irs.uni-stuttgart.de; Mirza, A., E-mail: mirza@irs.uni-stuttgart.de

    2016-02-15

    Relaxation processes of polyatomic molecules are modeled and implemented in an in-house Direct Simulation Monte Carlo code in order to enable the simulation of atmospheric entry maneuvers at Mars and Saturn’s Titan. The description of rotational and vibrational relaxation processes is derived from basic quantum-mechanics using a rigid rotator and a simple harmonic oscillator, respectively. Strategies regarding the vibrational relaxation process are investigated, where good agreement for the relaxation time according to the Landau-Teller expression is found for both methods, the established prohibiting double relaxation method and the new proposed multi-mode relaxation. Differences and application areas of these two methods are discussed. Consequently, two numerical methods used for sampling of energy values from multi-dimensional distribution functions are compared. The proposed random-walk Metropolis algorithm enables the efficient treatment of multiple vibrational modes within a time step with reasonable computational effort. The implemented model is verified and validated by means of simple reservoir simulations and the comparison to experimental measurements of a hypersonic, carbon-dioxide flow around a flat-faced cylinder.

  10. Direct simulation Monte Carlo modeling of relaxation processes in polyatomic gases

    NASA Astrophysics Data System (ADS)

    Pfeiffer, M.; Nizenkov, P.; Mirza, A.; Fasoulas, S.

    2016-02-01

    Relaxation processes of polyatomic molecules are modeled and implemented in an in-house Direct Simulation Monte Carlo code in order to enable the simulation of atmospheric entry maneuvers at Mars and Saturn's Titan. The description of rotational and vibrational relaxation processes is derived from basic quantum-mechanics using a rigid rotator and a simple harmonic oscillator, respectively. Strategies regarding the vibrational relaxation process are investigated, where good agreement for the relaxation time according to the Landau-Teller expression is found for both methods, the established prohibiting double relaxation method and the new proposed multi-mode relaxation. Differences and applications areas of these two methods are discussed. Consequently, two numerical methods used for sampling of energy values from multi-dimensional distribution functions are compared. The proposed random-walk Metropolis algorithm enables the efficient treatment of multiple vibrational modes within a time step with reasonable computational effort. The implemented model is verified and validated by means of simple reservoir simulations and the comparison to experimental measurements of a hypersonic, carbon-dioxide flow around a flat-faced cylinder.
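
    The random-walk Metropolis idea for sampling vibrational states can be illustrated with a short Python routine over independent simple-harmonic-oscillator modes. The characteristic temperatures and the single-mode proposal below are assumptions for illustration, not the paper's DSMC implementation.

      import numpy as np

      rng = np.random.default_rng(2)

      def sample_vibrational_levels(theta_vib, T, n_samples=5000, burn_in=500):
          """Random-walk Metropolis over the vibrational quantum numbers of several modes.

          theta_vib: characteristic vibrational temperatures of the modes (K)
          Target distribution: independent Boltzmann weights exp(-v_i * theta_i / T).
          """
          theta = np.asarray(theta_vib, dtype=float)
          v = np.zeros(len(theta), dtype=int)               # start in the ground state
          samples = []
          for step in range(n_samples + burn_in):
              mode = rng.integers(len(theta))               # perturb one mode at a time
              proposal = v.copy()
              proposal[mode] += rng.choice([-1, 1])
              if proposal[mode] >= 0:
                  log_ratio = -(proposal[mode] - v[mode]) * theta[mode] / T
                  if np.log(rng.random()) < log_ratio:
                      v = proposal
              if step >= burn_in:
                  samples.append(v.copy())
          return np.array(samples)

      # Example: a toy three-mode molecule at 5000 K
      levels = sample_vibrational_levels([945.0, 945.0, 3382.0], T=5000.0)
      print("mean quantum numbers per mode:", levels.mean(axis=0))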

  11. Simulation of a Geiger-Mode Imaging LADAR System for Performance Assessment

    PubMed Central

    Kim, Seongjoon; Lee, Impyeong; Kwon, Yong Joon

    2013-01-01

    As LADAR systems applications gradually become more diverse, new types of systems are being developed. When developing new systems, simulation studies are an essential prerequisite. A simulator enables performance predictions and optimal system parameters at the design level, as well as providing sample data for developing and validating application algorithms. The purpose of the study is to propose a method for simulating a Geiger-mode imaging LADAR system. We develop simulation software to assess system performance and generate sample data for the applications. The simulation is based on three aspects of modeling—the geometry, radiometry and detection. The geometric model computes the ranges to the reflection points of the laser pulses. The radiometric model generates the return signals, including the noises. The detection model determines the flight times of the laser pulses based on the nature of the Geiger-mode detector. We generated sample data using the simulator with the system parameters and analyzed the detection performance by comparing the simulated points to the reference points. The proportion of the outliers in the simulated points reached 25.53%, indicating the need for efficient outlier elimination algorithms. In addition, the false alarm rate and dropout rate of the designed system were computed as 1.76% and 1.06%, respectively. PMID:23823970
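
    The three-stage structure (geometry, radiometry, detection) can be sketched for a single pixel in Python. All numbers and noise models below are placeholders, not the calibrated system parameters used in the study.

      import numpy as np

      rng = np.random.default_rng(3)
      C = 299792458.0                                        # speed of light, m/s

      def simulate_geiger_return(true_range_m, signal_photons, noise_rate_hz, gate_s=2e-6):
          """Single-pixel sketch of the geometry/radiometry/detection chain (illustrative)."""
          # Geometry: time of flight to the reflection point and back
          t_signal = 2.0 * true_range_m / C
          # Radiometry: Poisson-distributed signal and uniformly arriving noise photons
          n_sig = rng.poisson(signal_photons)
          n_noise = rng.poisson(noise_rate_hz * gate_s)
          arrivals = np.concatenate([
              rng.normal(t_signal, 1e-9, size=n_sig),        # signal photons, ~1 ns jitter
              rng.uniform(0.0, gate_s, size=n_noise),        # dark counts / background
          ])
          # Detection: a Geiger-mode pixel fires on the first photon inside the range gate
          if arrivals.size == 0:
              return None                                    # dropout (no detection)
          return 0.5 * C * arrivals.min()                    # reported range (outliers possible)

      ranges = [simulate_geiger_return(150.0, signal_photons=2.0, noise_rate_hz=2e5) for _ in range(10)]
      print(ranges)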

  12. Development of a Multi-Disciplinary Computing Environment (MDICE)

    NASA Technical Reports Server (NTRS)

    Kingsley, Gerry; Siegel, John M., Jr.; Harrand, Vincent J.; Lawrence, Charles; Luker, Joel J.

    1999-01-01

    The growing need for and importance of multi-component and multi-disciplinary engineering analysis has been understood for many years. For many applications, loose (or semi-implicit) coupling is optimal, and allows the use of various legacy codes without requiring major modifications. For this purpose, CFDRC and NASA LeRC have developed a computational environment to enable coupling between various flow analysis codes at several levels of fidelity. This has been referred to as the Visual Computing Environment (VCE), and is being successfully applied to the analysis of several aircraft engine components. Recently, CFDRC and AFRL/VAAC (WL) have extended the framework and scope of VCE to enable complex multi-disciplinary simulations. The chosen initial focus is on aeroelastic aircraft applications. The developed software is referred to as MDICE-AE, an extensible system suitable for integration of several engineering analysis disciplines. This paper describes the methodology, basic architecture, chosen software technologies, salient library modules, and the current status of and plans for MDICE. A fluid-structure interaction application is described in a separate companion paper.

  13. A network-based distributed, media-rich computing and information environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Phillips, R.L.

    1995-12-31

    Sunrise is a Los Alamos National Laboratory (LANL) project started in October 1993. It is intended to be a prototype National Information Infrastructure development project. A main focus of Sunrise is to tie together enabling technologies (networking, object-oriented distributed computing, graphical interfaces, security, multi-media technologies, and data-mining technologies) with several specific applications. A diverse set of application areas was chosen to ensure that the solutions developed in the project are as generic as possible. Some of the application areas are materials modeling, medical records and image analysis, transportation simulations, and K-12 education. This paper provides a description of Sunrise and a view of the architecture and objectives of this evolving project. The primary objectives of Sunrise are three-fold: (1) To develop common information-enabling tools for advanced scientific research and its applications to industry; (2) To enhance the capabilities of important research programs at the Laboratory; (3) To define a new way of collaboration between computer science and industrially-relevant research.

  14. Software for Acoustic Rendering

    NASA Technical Reports Server (NTRS)

    Miller, Joel D.

    2003-01-01

    SLAB is a software system that can be run on a personal computer to simulate an acoustic environment in real time. SLAB was developed to enable computational experimentation in which one can exert low-level control over a variety of signal-processing parameters, related to spatialization, for conducting psychoacoustic studies. Among the parameters that can be manipulated are the number and position of reflections, the fidelity (that is, the number of taps in finite-impulse-response filters), the system latency, and the update rate of the filters. Another goal in the development of SLAB was to provide an inexpensive means of dynamic synthesis of virtual audio over headphones, without need for special-purpose signal-processing hardware. SLAB has a modular, object-oriented design that affords the flexibility and extensibility needed to accommodate a variety of computational experiments and signal-flow structures. SLAB's spatial renderer has a fixed signal-flow architecture corresponding to a set of parallel signal paths from each source to a listener. This fixed architecture can be regarded as a compromise that optimizes efficiency at the expense of complete flexibility. Such a compromise is necessary, given the design goal of enabling computational psychoacoustic experimentation on inexpensive personal computers.

  15. A Facility and Architecture for Autonomy Research

    NASA Technical Reports Server (NTRS)

    Pisanich, Greg; Clancy, Daniel (Technical Monitor)

    2002-01-01

    Autonomy is a key enabling factor in the advancement of remote robotic exploration. There is currently a large gap between autonomy software at the research level and software that is ready for insertion into near-term space missions. The Mission Simulation Facility (MSF) will bridge this gap by providing a simulation framework and suite of simulation tools to support research in autonomy for remote exploration. This system will allow developers of autonomy software to test their models in a high-fidelity simulation and evaluate their system's performance against a set of integrated, standardized simulations. The Mission Simulation ToolKit (MST) uses a distributed architecture with a communication layer that is built on top of the standardized High Level Architecture (HLA). This architecture enables the use of existing high fidelity models, allows mixing simulation components from various computing platforms and enforces the use of a standardized high-level interface among components. The components needed to achieve a realistic simulation can be grouped into four categories: environment generation (terrain, environmental features), robotic platform behavior (robot dynamics), instrument models (camera/spectrometer/etc.), and data analysis. The MST will provide basic components in these areas but allows users to easily plug in any refined model by means of a communication protocol. Finally, a description file defines the robot and environment parameters for easy configuration and ensures that all the simulation models share the same information.

  16. Integrating interactive computational modeling in biology curricula.

    PubMed

    Helikar, Tomáš; Cutucache, Christine E; Dahlquist, Lauren M; Herek, Tyler A; Larson, Joshua J; Rogers, Jim A

    2015-03-01

    While the use of computer tools to simulate complex processes such as computer circuits is normal practice in fields like engineering, the majority of life sciences/biological sciences courses continue to rely on the traditional textbook and memorization approach. To address this issue, we explored the use of the Cell Collective platform as a novel, interactive, and evolving pedagogical tool to foster student engagement, creativity, and higher-level thinking. Cell Collective is a Web-based platform used to create and simulate dynamical models of various biological processes. Students can create models of cells, diseases, or pathways themselves or explore existing models. This technology was implemented in both undergraduate and graduate courses as a pilot study to determine the feasibility of such software at the university level. First, a new (In Silico Biology) class was developed to enable students to learn biology by "building and breaking it" via computer models and their simulations. This class and technology also provide a non-intimidating way to incorporate mathematical and computational concepts into a class with students who have a limited mathematical background. Second, we used the technology to mediate the use of simulations and modeling modules as a learning tool for traditional biological concepts, such as T cell differentiation or cell cycle regulation, in existing biology courses. Results of this pilot application suggest that there is promise in the use of computational modeling and software tools such as Cell Collective to provide new teaching methods in biology and contribute to the implementation of the "Vision and Change" call to action in undergraduate biology education by providing a hands-on approach to biology.

  17. Fluid-Structure Interaction Analysis of Papillary Muscle Forces Using a Comprehensive Mitral Valve Model with 3D Chordal Structure.

    PubMed

    Toma, Milan; Jensen, Morten Ø; Einstein, Daniel R; Yoganathan, Ajit P; Cochran, Richard P; Kunzelman, Karyn S

    2016-04-01

    Numerical models of native heart valves are being used to study valve biomechanics to aid design and development of repair procedures and replacement devices. These models have evolved from simple two-dimensional approximations to complex three-dimensional, fully coupled fluid-structure interaction (FSI) systems. Such simulations are useful for predicting the mechanical and hemodynamic loading on implanted valve devices. A current challenge for improving the accuracy of these predictions is choosing and implementing modeling boundary conditions. In order to address this challenge, we are utilizing an advanced in vitro system to validate FSI conditions for the mitral valve system. Explanted ovine mitral valves were mounted in an in vitro setup, and structural data for the mitral valve was acquired with µCT. Experimental data from the in vitro ovine mitral valve system were used to validate the computational model. As the valve closes, the hemodynamic data, high speed leaflet dynamics, and force vectors from the in vitro system were compared to the results of the FSI simulation computational model. The total force of 2.6 N per papillary muscle is matched by the computational model. In vitro and in vivo force measurements enable validating and adjusting material parameters to improve the accuracy of computational models. The simulations can then be used to answer questions that are otherwise not possible to investigate experimentally. This work is important to maximize the validity of computational models of not just the mitral valve, but any biomechanical aspect using computational simulation in designing medical devices.

  18. Toothguide Trainer tests with color vision deficiency simulation monitor.

    PubMed

    Borbély, Judit; Varsányi, Balázs; Fejérdy, Pál; Hermann, Péter; Jakstat, Holger A

    2010-01-01

    The aim of this study was to evaluate whether simulated severe red and green color vision deficiency (CVD) influenced color matching results and to investigate whether training with Toothguide Trainer (TT) computer program enabled better color matching results. A total of 31 color normal dental students participated in the study. Every participant had to pass the Ishihara Test. Participants with a red/green color vision deficiency were excluded. A lecture on tooth color matching was given, and individual training with TT was performed. To measure the individual tooth color matching results in normal and color deficient display modes, the TT final exam was displayed on a calibrated monitor that served as a hardware-based method of simulating protanopy and deuteranopy. Data from the TT final exams were collected in normal and in severe red and green CVD-simulating monitor display modes. Color difference values for each participant in each display mode were computed (∑ΔE*ab), and the respective means and standard deviations were calculated. The Student's t-test was used in statistical evaluation. Participants made larger ΔE*ab errors in severe color vision deficient display modes than in the normal monitor mode. TT tests showed significant (p<0.05) difference in the tooth color matching results of severe green color vision deficiency simulation mode compared to normal vision mode. Students' shade matching results were significantly better after training (p=0.009). Computer-simulated severe color vision deficiency mode resulted in significantly worse color matching quality compared to normal color vision mode. Toothguide Trainer computer program improved color matching results. Copyright © 2010 Elsevier Ltd. All rights reserved.
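
    The color-matching error summed in the study above is the CIELAB color difference ΔE*ab. A minimal sketch of the CIE76 formula follows; the L*a*b* values are hypothetical examples, not data from the study.

    ```python
    import math

    def delta_e_ab(lab1, lab2):
        """CIE76 color difference between two CIELAB triples (L*, a*, b*)."""
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(lab1, lab2)))

    # Hypothetical example: selected shade tab vs. target tooth color.
    selected = (72.0, 1.5, 18.0)
    target = (70.0, 2.0, 21.0)
    print(f"dE*ab = {delta_e_ab(selected, target):.2f}")
    ```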

  19. Variance decomposition in stochastic simulators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Le Maître, O. P., E-mail: olm@limsi.fr; Knio, O. M., E-mail: knio@duke.edu; Moraes, A., E-mail: alvaro.moraesgutierrez@kaust.edu.sa

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.
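
    As a rough illustration of the reformulation described above (each reaction channel driven by its own unit-rate Poisson stream, here via Anderson's modified next-reaction method) combined with a pick-freeze Sobol estimator, the sketch below estimates the birth channel's first-order variance contribution in a birth-death model. The rates, sample sizes, and estimator details are illustrative assumptions, not the authors' implementation.

    ```python
    # Birth-death model with per-channel random streams (random time change)
    # and a pick-freeze estimate of the birth channel's first-order Sobol index.
    import numpy as np

    K_BIRTH, K_DEATH, X0, T_END = 10.0, 0.5, 0, 3.0   # illustrative parameters

    def simulate(rng_birth, rng_death):
        """Return X(T_END) for one realization driven by two independent streams."""
        x, t = X0, 0.0
        T = np.zeros(2)                                   # internal channel times
        P = np.array([rng_birth.exponential(), rng_death.exponential()])
        while True:
            a = np.array([K_BIRTH, K_DEATH * x])          # channel propensities
            dt = np.where(a > 0, (P - T) / np.maximum(a, 1e-300), np.inf)
            k = int(np.argmin(dt))
            if t + dt[k] > T_END:
                return x
            t += dt[k]
            T += a * dt[k]                                # advance internal times
            P[k] += (rng_birth if k == 0 else rng_death).exponential()
            x += 1 if k == 0 else -1

    def sobol_first_order_birth(n_outer=100, n_inner=30, seed=0):
        """Variance of conditional means (birth stream frozen) over total variance."""
        ss = np.random.SeedSequence(seed)
        cond_means, samples = [], []
        for _ in range(n_outer):
            birth_seed = ss.spawn(1)[0]
            inner = [simulate(np.random.default_rng(birth_seed),      # frozen stream
                              np.random.default_rng(ss.spawn(1)[0]))  # fresh stream
                     for _ in range(n_inner)]
            cond_means.append(np.mean(inner))
            samples.extend(inner)
        return float(np.var(cond_means) / np.var(samples))

    if __name__ == "__main__":
        print("estimated S_birth:", round(sobol_first_order_birth(), 3))
    ```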

  20. Multiscale Mathematics for Biomass Conversion to Renewable Hydrogen

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Plechac, Petr; Vlachos, Dionisios; Katsoulakis, Markos

    2013-09-05

    The overall objective of this project is to develop multiscale models for understanding and eventually designing complex processes for renewables. To the best of our knowledge, our work is the first attempt at modeling complex reacting systems, whose performance relies on underlying multiscale mathematics. Our specific application lies at the heart of biofuels initiatives of DOE and entails modeling of catalytic systems, to enable economic, environmentally benign, and efficient conversion of biomass into either hydrogen or valuable chemicals. Specific goals include: (i) Development of rigorous spatio-temporal coarse-grained kinetic Monte Carlo (KMC) mathematics and simulation for microscopic processes encountered in biomass transformation. (ii) Development of hybrid multiscale simulation that links stochastic simulation to a deterministic partial differential equation (PDE) model for an entire reactor. (iii) Development of hybrid multiscale simulation that links KMC simulation with quantum density functional theory (DFT) calculations. (iv) Development of parallelization of models of (i)-(iii) to take advantage of Petaflop computing and enable real world applications of complex, multiscale models. In this NCE period, we continued addressing these objectives and completed the proposed work. Main initiatives, key results, and activities are outlined.

  1. Investigating the impact of the cielo cray XE6 architecture on scientific application codes.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rajan, Mahesh; Barrett, Richard; Pedretti, Kevin Thomas Tauke

    2010-12-01

    Cielo, a Cray XE6, is the Department of Energy NNSA Advanced Simulation and Computing (ASC) campaign's newest capability machine. Rated at 1.37 PFLOPS, it consists of 8,944 dual-socket oct-core AMD Magny-Cours compute nodes, linked using Cray's Gemini interconnect. Its primary mission objective is to enable a suite of the ASC applications implemented using MPI to scale to tens of thousands of cores. Cielo is an evolutionary improvement to a successful architecture previously available to many of our codes, thus enabling a basis for understanding the capabilities of this new architecture. Using three codes strategically important to the ASC campaign, and supplemented with some micro-benchmarks that expose the fundamental capabilities of the XE6, we report on the performance characteristics and capabilities of Cielo.

  2. CAD-centric Computation Management System for a Virtual TBM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ramakanth Munipalli; K.Y. Szema; P.Y. Huang

    HyPerComp Inc. in research collaboration with TEXCEL has set out to build a Virtual Test Blanket Module (VTBM) computational system to address the need in contemporary fusion research for simulating the integrated behavior of the blanket, divertor and plasma facing components in a fusion environment. Physical phenomena to be considered in a VTBM will include fluid flow, heat transfer, mass transfer, neutronics, structural mechanics and electromagnetics. We seek to integrate well established (third-party) simulation software in various disciplines mentioned above. The integrated modeling process will enable user groups to interoperate using a common modeling platform at various stages of the analysis. Since CAD is at the core of the simulation (as opposed to computational meshes, which are different for each problem), VTBM will have a well developed CAD interface, governing CAD model editing, cleanup, parameter extraction, model deformation (based on simulation), and CAD-based data interpolation. In Phase-I, we built the CAD-hub of the proposed VTBM and demonstrated its use in modeling a liquid breeder blanket module with coupled MHD and structural mechanics using HIMAG and ANSYS. A complete graphical user interface of the VTBM was created, which will form the foundation of any future development. Conservative data interpolation via CAD (as opposed to mesh-based transfer) and the regeneration of CAD models based upon computed deflections are among the other highlights of Phase-I activity.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sanfilippo, Antonio P.; Riensche, Roderick M.; Haack, Jereme N.

    “Gamification”, the application of gameplay to real-world problems, enables the development of human computation systems that support decision-making through the integration of social and machine intelligence. One of gamification’s major benefits is the creation of a problem-solving environment in which the influence of cognitive and cultural biases on human judgment can be curtailed through collaborative and competitive reasoning. By reducing biases in human judgment, gamification allows human computation systems to exploit human creativity relatively unhindered by human error. Operationally, gamification uses simulation to harvest human behavioral data that provide valuable insights for the solution of real-world problems.

  4. A Neurocomputational Model of Goal-Directed Navigation in Insect-Inspired Artificial Agents

    PubMed Central

    Goldschmidt, Dennis; Manoonpong, Poramate; Dasgupta, Sakyasingha

    2017-01-01

    Despite their small size, insect brains are able to produce robust and efficient navigation in complex environments. Specifically in social insects, such as ants and bees, these navigational capabilities are guided by orientation directing vectors generated by a process called path integration. During this process, they integrate compass and odometric cues to estimate their current location as a vector, called the home vector for guiding them back home on a straight path. They further acquire and retrieve path integration-based vector memories globally to the nest or based on visual landmarks. Although existing computational models reproduced similar behaviors, a neurocomputational model of vector navigation including the acquisition of vector representations has not been described before. Here we present a model of neural mechanisms in a modular closed-loop control—enabling vector navigation in artificial agents. The model consists of a path integration mechanism, reward-modulated global learning, random search, and action selection. The path integration mechanism integrates compass and odometric cues to compute a vectorial representation of the agent's current location as neural activity patterns in circular arrays. A reward-modulated learning rule enables the acquisition of vector memories by associating the local food reward with the path integration state. A motor output is computed based on the combination of vector memories and random exploration. In simulation, we show that the neural mechanisms enable robust homing and localization, even in the presence of external sensory noise. The proposed learning rules lead to goal-directed navigation and route formation performed under realistic conditions. Consequently, we provide a novel approach for vector learning and navigation in a simulated, situated agent linking behavioral observations to their possible underlying neural substrates. PMID:28446872
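
    As a loose illustration of the circular-array path integration described above, the sketch below accumulates compass- and odometry-weighted activity in an array of heading-tuned units and decodes the home vector from the population vector. The array size and encoding are illustrative assumptions, not the published network model.

    ```python
    # Minimal path-integration sketch: heading-tuned units accumulate speed input
    # weighted by a compass cue; the population vector encodes the displacement.
    import numpy as np

    N = 36                                               # units in the circular array
    preferred = np.linspace(0.0, 2 * np.pi, N, endpoint=False)

    def path_integrate(headings, speeds):
        """Accumulate activity and decode the home vector (points back to the nest)."""
        activity = np.zeros(N)
        for theta, v in zip(headings, speeds):
            activity += v * np.cos(theta - preferred)    # compass x odometry input
        # The population vector is proportional to the net displacement.
        disp = (2.0 / N) * np.array([activity @ np.cos(preferred),
                                     activity @ np.sin(preferred)])
        return -disp                                     # home vector

    rng = np.random.default_rng(1)
    headings = rng.uniform(0, 2 * np.pi, 500)
    speeds = rng.uniform(0.5, 1.5, 500)
    true_disp = np.array([np.sum(speeds * np.cos(headings)),
                          np.sum(speeds * np.sin(headings))])
    print("decoded home vector  :", np.round(path_integrate(headings, speeds), 3))
    print("negative displacement:", np.round(-true_disp, 3))
    ```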

  5. A Neurocomputational Model of Goal-Directed Navigation in Insect-Inspired Artificial Agents.

    PubMed

    Goldschmidt, Dennis; Manoonpong, Poramate; Dasgupta, Sakyasingha

    2017-01-01

    Despite their small size, insect brains are able to produce robust and efficient navigation in complex environments. Specifically in social insects, such as ants and bees, these navigational capabilities are guided by orientation directing vectors generated by a process called path integration. During this process, they integrate compass and odometric cues to estimate their current location as a vector, called the home vector for guiding them back home on a straight path. They further acquire and retrieve path integration-based vector memories globally to the nest or based on visual landmarks. Although existing computational models reproduced similar behaviors, a neurocomputational model of vector navigation including the acquisition of vector representations has not been described before. Here we present a model of neural mechanisms in a modular closed-loop control-enabling vector navigation in artificial agents. The model consists of a path integration mechanism, reward-modulated global learning, random search, and action selection. The path integration mechanism integrates compass and odometric cues to compute a vectorial representation of the agent's current location as neural activity patterns in circular arrays. A reward-modulated learning rule enables the acquisition of vector memories by associating the local food reward with the path integration state. A motor output is computed based on the combination of vector memories and random exploration. In simulation, we show that the neural mechanisms enable robust homing and localization, even in the presence of external sensory noise. The proposed learning rules lead to goal-directed navigation and route formation performed under realistic conditions. Consequently, we provide a novel approach for vector learning and navigation in a simulated, situated agent linking behavioral observations to their possible underlying neural substrates.

  6. High Performance Computing Facility Operational Assessment 2015: Oak Ridge Leadership Computing Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barker, Ashley D.; Bernholdt, David E.; Bland, Arthur S.

    Oak Ridge National Laboratory’s (ORNL’s) Leadership Computing Facility (OLCF) continues to surpass its operational target goals: supporting users; delivering fast, reliable systems; creating innovative solutions for high-performance computing (HPC) needs; and managing risks, safety, and security aspects associated with operating one of the most powerful computers in the world. The results can be seen in the cutting-edge science delivered by users and the praise from the research community. Calendar year (CY) 2015 was filled with outstanding operational results and accomplishments: a very high rating from users on overall satisfaction that ties the highest-ever mark set in CY 2014; the greatest number of core-hours delivered to research projects; the largest percentage of capability usage since the OLCF began tracking the metric in 2009; and success in delivering on the allocation of 60, 30, and 10% of core hours offered for the INCITE (Innovative and Novel Computational Impact on Theory and Experiment), ALCC (Advanced Scientific Computing Research Leadership Computing Challenge), and Director’s Discretionary programs, respectively. These accomplishments, coupled with the extremely high utilization rate, represent the fulfillment of the promise of Titan: maximum use by maximum-size simulations. The impact of all of these successes and more is reflected in the accomplishments of OLCF users, with publications this year in notable journals Nature, Nature Materials, Nature Chemistry, Nature Physics, Nature Climate Change, ACS Nano, Journal of the American Chemical Society, and Physical Review Letters, as well as many others. The achievements included in the 2015 OLCF Operational Assessment Report reflect first-ever or largest simulations in their communities; for example Titan enabled engineers in Los Angeles and the surrounding region to design and begin building improved critical infrastructure by enabling the highest-resolution Cybershake map for Southern California to date. The Titan system provides the largest extant heterogeneous architecture for computing and computational science. Usage is high, delivering on the promise of a system well-suited for capability simulations for science. This success is due in part to innovations in tracking and reporting the activity on the compute nodes, and using this information to further enable and optimize applications, extending and balancing workload across the entire node. The OLCF continues to invest in innovative processes, tools, and resources necessary to meet continuing user demand. The facility’s leadership in data analysis and workflows was featured at the Department of Energy (DOE) booth at SC15, for the second year in a row, highlighting work with researchers from the National Library of Medicine coupled with unique computational and data resources serving experimental and observational data across facilities. Effective operations of the OLCF play a key role in the scientific missions and accomplishments of its users. Building on the exemplary year of 2014, as shown by the 2014 Operational Assessment Report (OAR) review committee response in Appendix A, this OAR delineates the policies, procedures, and innovations implemented by the OLCF to continue delivering a multi-petaflop resource for cutting-edge research. This report covers CY 2015, which, unless otherwise specified, denotes January 1, 2015, through December 31, 2015.

  7. Towards an integrative computational model for simulating tumor growth and response to radiation therapy

    NASA Astrophysics Data System (ADS)

    Marrero, Carlos Sosa; Aubert, Vivien; Ciferri, Nicolas; Hernández, Alfredo; de Crevoisier, Renaud; Acosta, Oscar

    2017-11-01

    Understanding the response to irradiation in cancer radiotherapy (RT) may help devise new strategies with improved local tumor control. Computational models may help unravel the underlying radiosensitivity mechanisms that shape the dose-response relationship. Extensive simulations allow a wide range of parameters to be evaluated, providing insights into tumor response and generating useful data for planning modified treatments. We propose in this paper a computational model of tumor growth and radiation response that allows a whole RT protocol to be simulated. Proliferation of tumor cells, the cell life cycle, oxygen diffusion, radiosensitivity, RT response, and resorption of killed cells were implemented in a multiscale framework. The model was developed in C++, using the Multi-formalism Modeling and Simulation Library (M2SL). Radiosensitivity parameters extracted from the literature enabled us to simulate prostate tissue on a regular (voxel-wise) grid. Histopathological specimens with different aggressiveness levels, extracted from patients after prostatectomy, were used to initialize the in silico simulations. Results on tumor growth show good agreement with data from in vitro studies. Moreover, a standard fractionation of 2 Gy/fraction, with a total dose of 80 Gy as in a real RT treatment, was applied with varying radiosensitivity and oxygen diffusion parameters. As expected, the strong influence of these parameters was observed by measuring the percentage of surviving tumor cells after RT. This work paves the way to further models for simulating increased doses in modified hypofractionated schemes and for developing new patient-specific combined therapies.
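
    The abstract does not state which dose-response law is used; a common choice is the linear-quadratic (LQ) survival model, sketched below for the 2 Gy/fraction, 80 Gy schedule mentioned above. The alpha/beta radiosensitivity values are hypothetical, not the parameters of the cited model.

    ```python
    # Linear-quadratic survival sketch for a fractionated RT schedule
    # (2 Gy per fraction to 80 Gy total, i.e., 40 fractions).
    import math

    def surviving_fraction(dose_per_fraction, n_fractions, alpha, beta):
        """LQ model: SF = exp(-n * (alpha*d + beta*d^2))."""
        d = dose_per_fraction
        return math.exp(-n_fractions * (alpha * d + beta * d * d))

    d, total_dose = 2.0, 80.0
    n = int(total_dose / d)
    for alpha, beta in [(0.15, 0.05), (0.30, 0.03)]:   # hypothetical radiosensitivities
        sf = surviving_fraction(d, n, alpha, beta)
        print(f"alpha={alpha}, beta={beta}: surviving fraction = {sf:.2e}")
    ```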

  8. Using Discrete Event Simulation to predict KPI's at a Projected Emergency Room.

    PubMed

    Concha, Pablo; Neriz, Liliana; Parada, Danilo; Ramis, Francisco

    2015-01-01

    Discrete Event Simulation (DES) is a powerful tool in the design of clinical facilities. DES enables facilities to be built or adapted to achieve the expected Key Performance Indicators (KPIs), such as average waiting times according to acuity, average stay times, and others. Our computational model was built and validated using expert judgment and supporting statistical data. One scenario studied resulted in a 50% decrease in the average cycle time of patients compared to the original model, mainly by modifying the patient attention model.
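
    The abstract does not name the simulation software used; the sketch below illustrates the general DES approach with the open-source SimPy library, using hypothetical arrival and treatment rates and bed capacity rather than the validated model's data.

    ```python
    # Minimal discrete-event simulation of an emergency-room queue using SimPy.
    import random
    import simpy

    ARRIVAL_MEAN = 10.0      # minutes between arrivals (hypothetical)
    TREATMENT_MEAN = 25.0    # minutes of treatment (hypothetical)
    N_BEDS = 3
    waits = []

    def patient(env, beds):
        arrived = env.now
        with beds.request() as req:
            yield req                                    # wait for a free bed
            waits.append(env.now - arrived)
            yield env.timeout(random.expovariate(1.0 / TREATMENT_MEAN))

    def arrivals(env, beds):
        while True:
            yield env.timeout(random.expovariate(1.0 / ARRIVAL_MEAN))
            env.process(patient(env, beds))

    random.seed(42)
    env = simpy.Environment()
    beds = simpy.Resource(env, capacity=N_BEDS)
    env.process(arrivals(env, beds))
    env.run(until=8 * 60)                                # one 8-hour shift
    print(f"patients treated: {len(waits)}, mean wait: {sum(waits)/len(waits):.1f} min")
    ```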

  9. An open-source job management framework for parameter-space exploration: OACIS

    NASA Astrophysics Data System (ADS)

    Murase, Y.; Uchitane, T.; Ito, N.

    2017-11-01

    We present an open-source software framework for parameter-space exploration, named OACIS, which is useful for managing vast numbers of simulation jobs and results in a systematic way. Recent developments in high-performance computers have enabled comprehensive exploration of parameter spaces; in such cases, however, manual management of the workflow is practically impossible. OACIS was developed to reduce the cost of these repetitive tasks by automating job submission and data management. In this article, an overview of OACIS as well as a getting-started guide are presented.

  10. The effect of radiation on the long term productivity of a plant based CELSS

    NASA Technical Reports Server (NTRS)

    Thompson, B. G.; Lake, B. H.

    1987-01-01

    Mutations occur at a higher rate in space than under terrestrial conditions, primarily due to an increase in radiation levels. These mutations may affect the productivity of plants found in a controlled ecological life support system (CELSS). Computer simulations of plants with different ploidies, modes of reproduction, lethality thresholds, viability thresholds and susceptibilities to radiation-induced mutations were performed under space normal and solar flare conditions. These simulations identified plant characteristics that would enable plants to retain high productivities over time in a CELSS.

  11. The Centre of High-Performance Scientific Computing, Geoverbund, ABC/J - Geosciences enabled by HPSC

    NASA Astrophysics Data System (ADS)

    Kollet, Stefan; Görgen, Klaus; Vereecken, Harry; Gasper, Fabian; Hendricks-Franssen, Harrie-Jan; Keune, Jessica; Kulkarni, Ketan; Kurtz, Wolfgang; Sharples, Wendy; Shrestha, Prabhakar; Simmer, Clemens; Sulis, Mauro; Vanderborght, Jan

    2016-04-01

    The Centre of High-Performance Scientific Computing (HPSC TerrSys) was founded in 2011 to establish a centre of competence in high-performance scientific computing in terrestrial systems and the geosciences, enabling fundamental and applied geoscientific research in the Geoverbund ABC/J (geoscientific research alliance of the Universities of Aachen, Cologne, Bonn and the Research Centre Jülich, Germany). The specific goals of HPSC TerrSys are to achieve relevance at the national and international level in (i) the development and application of HPSC technologies in the geoscientific community; (ii) student education; (iii) HPSC services and support also to the wider geoscientific community; and in (iv) the industry and public sectors via e.g., useful applications and data products. A key feature of HPSC TerrSys is the Simulation Laboratory Terrestrial Systems, which is located at the Jülich Supercomputing Centre (JSC) and provides extensive capabilities with respect to porting, profiling, tuning and performance monitoring of geoscientific software in JSC's supercomputing environment. We will present a summary of success stories of HPSC applications including integrated terrestrial model development, parallel profiling and its application from watersheds to the continent; massively parallel data assimilation using physics-based models and ensemble methods; quasi-operational terrestrial water and energy monitoring; and convection permitting climate simulations over Europe. The success stories stress the need for a formalized education of students in the application of HPSC technologies in the future.

  12. Development of the Glenn-Heat-Transfer (Glenn-HT) Computer Code to Enable Time-Filtered Navier Stokes (TFNS) Simulations and Application to Film Cooling on a Flat Plate Through Long Cooling Tubes

    NASA Technical Reports Server (NTRS)

    Ameri, Ali A.; Shyam, Vikram; Rigby, David; Poinsatte, Phillip; Thurman, Douglas; Steinthorsson, Erlendur

    2014-01-01

    Computational fluid dynamics (CFD) analysis using the Reynolds-averaged Navier-Stokes (RANS) formulation for turbomachinery-related flows has enabled improved engine component designs. RANS methodology has limitations that are related to its inability to accurately describe the spectrum of flow phenomena encountered in engines. Examples of flows that are difficult to compute accurately with RANS include phenomena such as laminar/turbulent transition, turbulent mixing due to mixing of streams, and separated flows. Large eddy simulation (LES) can improve accuracy but at a considerably higher cost. In recent years, hybrid schemes that take advantage of both unsteady RANS and LES have been proposed. This study investigated an alternative scheme, the time-filtered Navier-Stokes (TFNS) method applied to compressible flows. The method developed by Shih and Liu was implemented in the Glenn-Heat-Transfer (Glenn-HT) code and applied to film-cooling flows. In this report the method and its implementation are briefly described. The film effectiveness results obtained for film cooling from a row of 30° holes with a pitch of 3.0 diameters emitting air at a nominal density ratio of unity and two blowing ratios of 0.5 and 1.0 are shown. Flow features under those conditions are also described.

  13. Development of the Glenn Heat-Transfer (Glenn-HT) Computer Code to Enable Time-Filtered Navier-Stokes (TFNS) Simulations and Application to Film Cooling on a Flat Plate Through Long Cooling Tubes

    NASA Technical Reports Server (NTRS)

    Ameri, Ali; Shyam, Vikram; Rigby, David; Poinsatte, Phillip; Thurman, Douglas; Steinthorsson, Erlendur

    2014-01-01

    Computational fluid dynamics (CFD) analysis using the Reynolds-averaged Navier-Stokes (RANS) formulation for turbomachinery-related flows has enabled improved engine component designs. RANS methodology has limitations that are related to its inability to accurately describe the spectrum of flow phenomena encountered in engines. Examples of flows that are difficult to compute accurately with RANS include phenomena such as laminar/turbulent transition, turbulent mixing due to mixing of streams, and separated flows. Large eddy simulation (LES) can improve accuracy but at a considerably higher cost. In recent years, hybrid schemes that take advantage of both unsteady RANS and LES have been proposed. This study investigated an alternative scheme, the time-filtered Navier-Stokes (TFNS) method applied to compressible flows. The method developed by Shih and Liu was implemented in the Glenn-Heat-Transfer (Glenn-HT) code and applied to film-cooling flows. In this report the method and its implementation are briefly described. The film effectiveness results obtained for film cooling from a row of 30° holes with a pitch of 3.0 diameters emitting air at a nominal density ratio of unity and two blowing ratios of 0.5 and 1.0 are shown. Flow features under those conditions are also described.

  14. A high performance scientific cloud computing environment for materials simulations

    NASA Astrophysics Data System (ADS)

    Jorissen, K.; Vila, F. D.; Rehr, J. J.

    2012-09-01

    We describe the development of a scientific cloud computing (SCC) platform that offers high performance computation capability. The platform consists of a scientific virtual machine prototype containing a UNIX operating system and several materials science codes, together with essential interface tools (an SCC toolset) that offers functionality comparable to local compute clusters. In particular, our SCC toolset provides automatic creation of virtual clusters for parallel computing, including tools for execution and monitoring performance, as well as efficient I/O utilities that enable seamless connections to and from the cloud. Our SCC platform is optimized for the Amazon Elastic Compute Cloud (EC2). We present benchmarks for prototypical scientific applications and demonstrate performance comparable to local compute clusters. To facilitate code execution and provide user-friendly access, we have also integrated cloud computing capability in a JAVA-based GUI. Our SCC platform may be an alternative to traditional HPC resources for materials science or quantum chemistry applications.

  15. Propulsion System Modeling and Simulation

    NASA Technical Reports Server (NTRS)

    Tai, Jimmy C. M.; McClure, Erin K.; Mavris, Dimitri N.; Burg, Cecile

    2002-01-01

    The Aerospace Systems Design Laboratory at the School of Aerospace Engineering at the Georgia Institute of Technology has developed a core competency that enables propulsion technology managers to make technology investment decisions substantiated by propulsion and airframe technology system studies. This method assists the designer/manager in selecting appropriate technology concepts while accounting for the presence of risk and uncertainty as well as interactions between disciplines. This capability is incorporated into a single design simulation system that is described in this paper. This propulsion system design environment is created with commercially available software called iSIGHT, which is a generic computational framework, and with analysis programs for engine cycle, engine flowpath, mission, and economic analyses. iSIGHT is used to integrate these analysis tools within a single computer platform and facilitate information transfer amongst the various codes. The resulting modeling and simulation (M&S) environment in conjunction with the response surface method provides the designer/decision-maker an analytical means to examine the entire design space from either a subsystem and/or system perspective. The results of this paper will enable managers to analytically play what-if games to gain insight into the benefits (and/or degradation) of changing engine cycle design parameters. Furthermore, the propulsion design space will be explored probabilistically to show the feasibility and viability of the propulsion system integrated with a vehicle.
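
    As a rough illustration of the response surface method mentioned above, the sketch below fits a quadratic surrogate to sampled design points with NumPy least squares. The design variables and responses are synthetic placeholders, not outputs of the iSIGHT-based environment.

    ```python
    # Quadratic response-surface sketch: fit y ~ b0 + b1*x1 + b2*x2 + b3*x1^2
    # + b4*x2^2 + b5*x1*x2 to sampled design points (synthetic data).
    import numpy as np

    rng = np.random.default_rng(0)
    x1 = rng.uniform(20, 40, 30)        # e.g., overall pressure ratio (hypothetical)
    x2 = rng.uniform(5, 12, 30)         # e.g., bypass ratio (hypothetical)
    y = 0.55 + 0.004 * x1 + 0.01 * x2 - 5e-5 * x1**2 + rng.normal(0, 1e-3, 30)

    A = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)

    def surrogate(p1, p2):
        """Cheap polynomial stand-in for the expensive cycle/mission analyses."""
        return float(np.dot(coef, [1.0, p1, p2, p1**2, p2**2, p1 * p2]))

    print("fitted coefficients:", np.round(coef, 5))
    print("predicted response at (30, 8):", round(surrogate(30, 8), 4))
    ```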

  16. A Computational Approach for Modeling Neutron Scattering Data from Lipid Bilayers

    DOE PAGES

    Carrillo, Jan-Michael Y.; Katsaras, John; Sumpter, Bobby G.; ...

    2017-01-12

    Biological cell membranes are responsible for a range of structural and dynamical phenomena crucial to a cell's well-being and its associated functions. Due to the complexity of cell membranes, lipid bilayer systems are often used as biomimetic models. These systems have led to significant insights into vital membrane phenomena such as domain formation, passive permeation and protein insertion. Experimental observations of membrane structure and dynamics are, however, limited in resolution, both spatially and temporally. Importantly, computer simulations are starting to play a more prominent role in interpreting experimental results, enabling a molecular understanding of lipid membranes. Particularly, the synergy between scattering experiments and simulations offers opportunities for new discoveries in membrane physics, as the length and time scales probed by molecular dynamics (MD) simulations parallel those of experiments. We also describe a coarse-grained MD simulation approach that mimics neutron scattering data from large unilamellar lipid vesicles over a range of bilayer rigidity. Specifically, we simulate vesicle form factors and membrane thickness fluctuations determined from small angle neutron scattering (SANS) and neutron spin echo (NSE) experiments, respectively. Our simulations accurately reproduce trends from experiments and lay the groundwork for investigations of more complex membrane systems.

  17. Real-time simulation of contact and cutting of heterogeneous soft-tissues.

    PubMed

    Courtecuisse, Hadrien; Allard, Jérémie; Kerfriden, Pierre; Bordas, Stéphane P A; Cotin, Stéphane; Duriez, Christian

    2014-02-01

    This paper presents a numerical method for interactive (real-time) simulations, which considerably improves the accuracy of the response of heterogeneous soft-tissue models undergoing contact, cutting and other topological changes. We provide an integrated methodology able to deal with the ill-conditioning issues associated with material heterogeneities, with contact boundary conditions, which are one of the main sources of inaccuracy, and with cutting, which is one of the most challenging issues in interactive simulations. Our approach is based on an implicit time integration of a non-linear finite element model. To enable real-time computations, we propose a new preconditioning technique, based on an asynchronous update at low frequency. The preconditioner is not only used to improve the computation of the deformation of the tissues, but also to simulate the contact response of homogeneous and heterogeneous bodies with the same accuracy. We also address the problem of cutting the heterogeneous structures and propose a method to update the preconditioner according to the topological modifications. Finally, we apply our approach to three challenging demonstrators: (i) a simulation of cataract surgery, (ii) a simulation of laparoscopic hepatectomy, and (iii) a brain tumor surgery. Copyright © 2013 Elsevier B.V. All rights reserved.

  18. Nonlinear dynamic simulation of single- and multi-spool core engines

    NASA Technical Reports Server (NTRS)

    Schobeiri, T.; Lippke, C.; Abouelkheir, M.

    1993-01-01

    In this paper a new computational method for accurate simulation of the nonlinear dynamic behavior of single- and multi-spool core engines, turbofan engines, and power generation gas turbine engines is presented. In order to perform the simulation, a modularly structured computer code has been developed which includes individual mathematical modules representing various engine components. The generic structure of the code enables the dynamic simulation of arbitrary engine configurations ranging from single-spool thrust generation to multi-spool thrust/power generation engines under adverse dynamic operating conditions. For precise simulation of turbine and compressor components, row-by-row calculation procedures were implemented that account for the specific turbine and compressor cascade and blade geometry and characteristics. The dynamic behavior of the subject engine is calculated by solving a number of systems of partial differential equations, which describe the unsteady behavior of the individual components. In order to ensure the capability, accuracy, robustness, and reliability of the code, comprehensive critical performance assessment and validation tests were performed. As representatives, three different transient cases with single- and multi-spool thrust and power generation engines were simulated. The transient cases range from operating with a prescribed fuel schedule, to extreme load changes, to generator and turbine shut down.

  19. Exploiting MIC architectures for the simulation of channeling of charged particles in crystals

    NASA Astrophysics Data System (ADS)

    Bagli, Enrico; Karpusenko, Vadim

    2016-08-01

    Coherent effects of ultra-relativistic particles in crystals are an area of science under development. DYNECHARM++ is a toolkit for the simulation of coherent interactions between high-energy charged particles and complex crystal structures. The particle trajectory in a crystal is computed through numerical integration of the equation of motion. The code was revised and improved in order to exploit parallelization across multiple cores and vectorization of single instructions on multiple data. An Intel Xeon Phi card was adopted for the performance measurements. The computation time was shown to scale linearly as a function of the number of physical and virtual cores. Enabling the compiler's auto-vectorization flag yielded a threefold speedup. The performance of the card was compared to that of a dual Xeon system.
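
    As a loose illustration of the trajectory integration and vectorization discussed above, the sketch below advances many particles in a harmonic approximation of a planar channel potential with a vectorized velocity-Verlet loop. The potential, units, and parameters are illustrative assumptions, not DYNECHARM++ internals.

    ```python
    # Vectorized velocity-Verlet integration of transverse motion in a harmonic
    # "channel" potential (arbitrary units).
    import numpy as np

    N_PART, N_STEPS, DT = 10_000, 2_000, 1e-3
    K = 40.0                                   # channel stiffness (illustrative)

    def force(x):
        return -K * x                          # restoring force toward the channel center

    rng = np.random.default_rng(7)
    x = rng.uniform(-0.4, 0.4, N_PART)         # transverse positions
    v = rng.normal(0.0, 0.5, N_PART)           # transverse velocities

    f = force(x)
    for _ in range(N_STEPS):                   # the update is vectorized over all particles
        x += v * DT + 0.5 * f * DT**2
        f_new = force(x)
        v += 0.5 * (f + f_new) * DT
        f = f_new

    energy = 0.5 * v**2 + 0.5 * K * x**2
    print("mean transverse energy:", round(float(energy.mean()), 4))
    ```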

  20. Ionosphere research with a HF/MF cubesat radio instrument

    NASA Astrophysics Data System (ADS)

    Kallio, Esa; Aikio, Anita; Alho, Markku; Fontell, Mathias; Harri, Ari-Matti; Kauristie, Kirsti; Kestilä, Antti; Koskimaa, Petri; Mäkelä, Jakke; Mäkelä, Miika; Turunen, Esa; Vanhamäki, Heikki; Verronen, Pekka

    2017-04-01

    New technology provides new possibilities for studying geospace and the 3D ionosphere using spacecraft and computer simulations. CubeSats, a type of nanosatellite, provide a cost-effective way to make in-situ measurements in the ionosphere. Moreover, combining CubeSat observations with ground-based observations gives a new view of auroras and associated electromagnetic phenomena. In particular, joint and active CubeSat and ground-based observation campaigns make it possible to study the 3D structure of the ionosphere, and using several CubeSats to form satellite constellations enables much higher temporal resolution. At the same time, increasing computational capacity has made it possible to perform simulations in which properties of the ionosphere, such as the propagation of electromagnetic waves in the medium-frequency, MF (0.3-3 MHz), and high-frequency, HF (3-30 MHz), ranges, are based on a 3D ionospheric model and on first-principles modelling. Electromagnetic waves at those frequencies are strongly affected by ionospheric electrons and, consequently, those frequencies can be used for studying the plasma. On the other hand, even though the ionosphere enables long-range telecommunication at MF and HF frequencies, the frequent occurrence of spatiotemporal variations in the ionosphere disturbs communication channels, especially at high latitudes. The study of MF and HF waves in the ionosphere therefore has both strong science and technology interests. We introduce recently developed simulation models as well as measurement principles and techniques to investigate the arctic ionosphere with a polar-orbiting CubeSat whose novel AM radio instrument measures HF and MF waves. The CubeSat, which also contains a white-light aurora camera, is planned to be launched in late 2017 (http://www.suomi100satelliitti.fi/eng). The new models are (1) a 3D ray tracing model and (2) a 3D full kinetic electromagnetic simulation. We also describe how combining the CubeSat measurements with ground-based measurements provides new possibilities for studying the 3D ionosphere.
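
    The MF/HF propagation discussed above is governed by the local electron plasma frequency. The sketch below computes it from electron density and evaluates the simple unmagnetized, collisionless refractive index; the densities are illustrative ionospheric values, not mission data.

    ```python
    # Plasma frequency and simple (unmagnetized, collisionless) refractive index
    # for MF/HF waves.
    import math

    E = 1.602176634e-19        # elementary charge [C]
    ME = 9.1093837015e-31      # electron mass [kg]
    EPS0 = 8.8541878128e-12    # vacuum permittivity [F/m]

    def plasma_frequency(n_e):
        """Electron plasma frequency in Hz for electron density n_e [m^-3]."""
        return math.sqrt(n_e * E**2 / (EPS0 * ME)) / (2 * math.pi)

    def refractive_index_sq(f_wave, n_e):
        """n^2 = 1 - (f_p / f)^2; negative values mean the wave is reflected."""
        return 1.0 - (plasma_frequency(n_e) / f_wave) ** 2

    for n_e in (1e11, 1e12):   # illustrative E/F-region electron densities
        print(f"n_e = {n_e:.0e} m^-3: f_p = {plasma_frequency(n_e)/1e6:.2f} MHz, "
              f"n^2 at 5 MHz = {refractive_index_sq(5e6, n_e):.3f}")
    ```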

  1. Fast Realistic MRI Simulations Based on Generalized Multi-Pool Exchange Tissue Model.

    PubMed

    Liu, Fang; Velikina, Julia V; Block, Walter F; Kijowski, Richard; Samsonov, Alexey A

    2017-02-01

    We present MRiLab, a new comprehensive simulator for large-scale realistic MRI simulations on a regular PC equipped with a modern graphical processing unit (GPU). MRiLab combines realistic tissue modeling with numerical virtualization of an MRI system and scanning experiment to enable assessment of a broad range of MRI approaches including advanced quantitative MRI methods inferring microstructure on a sub-voxel level. A flexible representation of tissue microstructure is achieved in MRiLab by employing the generalized tissue model with multiple exchanging water and macromolecular proton pools rather than a system of independent proton isochromats typically used in previous simulators. The computational power needed for simulation of the biologically relevant tissue models in large 3D objects is gained using parallelized execution on GPU. Three simulated and one actual MRI experiments were performed to demonstrate the ability of the new simulator to accommodate a wide variety of voxel composition scenarios and demonstrate detrimental effects of simplified treatment of tissue micro-organization adapted in previous simulators. GPU execution allowed  ∼ 200× improvement in computational speed over standard CPU. As a cross-platform, open-source, extensible environment for customizing virtual MRI experiments, MRiLab streamlines the development of new MRI methods, especially those aiming to infer quantitatively tissue composition and microstructure.
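
    As a much-reduced illustration of the multi-pool exchange idea, the sketch below integrates a two-pool longitudinal exchange model (free water pool "a", macromolecular pool "b") with explicit Euler steps. The pool sizes, relaxation times, and exchange rate are illustrative assumptions, not MRiLab parameters.

    ```python
    # Two-pool longitudinal exchange sketch integrated with explicit Euler steps.
    T1A, T1B = 1.2, 0.25        # s, longitudinal relaxation times (illustrative)
    M0A, M0B = 1.0, 0.15        # equilibrium magnetizations (illustrative)
    KAB = 2.0                   # s^-1, exchange rate a -> b (illustrative)
    KBA = KAB * M0A / M0B       # detailed balance

    def relax(ma, mb, duration, dt=1e-4):
        """Recover (Ma, Mb) toward equilibrium with cross-pool exchange."""
        for _ in range(int(duration / dt)):
            dma = (M0A - ma) / T1A - KAB * ma + KBA * mb
            dmb = (M0B - mb) / T1B - KBA * mb + KAB * ma
            ma, mb = ma + dt * dma, mb + dt * dmb
        return ma, mb

    # Recovery 0.5 s after a saturation pulse applied to pool b only.
    print(relax(ma=M0A, mb=0.0, duration=0.5))
    ```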

  2. Fast Realistic MRI Simulations Based on Generalized Multi-Pool Exchange Tissue Model

    PubMed Central

    Velikina, Julia V.; Block, Walter F.; Kijowski, Richard; Samsonov, Alexey A.

    2017-01-01

    We present MRiLab, a new comprehensive simulator for large-scale realistic MRI simulations on a regular PC equipped with a modern graphical processing unit (GPU). MRiLab combines realistic tissue modeling with numerical virtualization of an MRI system and scanning experiment to enable assessment of a broad range of MRI approaches including advanced quantitative MRI methods inferring microstructure on a sub-voxel level. A flexible representation of tissue microstructure is achieved in MRiLab by employing the generalized tissue model with multiple exchanging water and macromolecular proton pools rather than a system of independent proton isochromats typically used in previous simulators. The computational power needed for simulation of the biologically relevant tissue models in large 3D objects is gained using parallelized execution on GPU. Three simulated and one actual MRI experiments were performed to demonstrate the ability of the new simulator to accommodate a wide variety of voxel composition scenarios and demonstrate detrimental effects of simplified treatment of tissue micro-organization adapted in previous simulators. GPU execution allowed ∼200× improvement in computational speed over standard CPU. As a cross-platform, open-source, extensible environment for customizing virtual MRI experiments, MRiLab streamlines the development of new MRI methods, especially those aiming to infer quantitatively tissue composition and microstructure. PMID:28113746

  3. Following the Ions through a Mass Spectrometer with Atmospheric Pressure Interface: Simulation of Complete Ion Trajectories from Ion Source to Mass Analyzer.

    PubMed

    Zhou, Xiaoyu; Ouyang, Zheng

    2016-07-19

    Ion trajectory simulation is an important and useful tool in instrumentation development for mass spectrometry. Accurate simulation of the ion motion through a mass spectrometer with an atmospheric pressure ionization source has been extremely challenging, due to the complexity of the gas hydrodynamic flow field across a wide pressure range as well as the computational burden. In this study, we developed a method of generating the gas flow field for an entire mass spectrometer with an atmospheric pressure interface. In combination with the electric force, this enables, for the first time, simulation of ion trajectories from an atmospheric pressure ion source to a mass analyzer in vacuum. A stage-by-stage ion repopulation method has also been implemented for the simulation, which helped avoid an intolerable computational burden in high-pressure regions while allowing statistically meaningful results to be obtained for the mass analyzer. The method was also shown to be suitable for identifying a joint point at which the separately solved high- and low-pressure flow fields can be combined. Experimental characterization was performed to validate the new simulation method. Good agreement was obtained between simulated and experimental results for ion transfer through an atmospheric pressure interface with a curtain gas.

  4. Challenges in Reproducibility, Replicability, and Comparability of Computational Models and Tools for Neuronal and Glial Networks, Cells, and Subcellular Structures.

    PubMed

    Manninen, Tiina; Aćimović, Jugoslava; Havela, Riikka; Teppola, Heidi; Linne, Marja-Leena

    2018-01-01

    The possibility to replicate and reproduce published research results is one of the biggest challenges in all areas of science. In computational neuroscience, there are thousands of models available. However, it is rarely possible to reimplement the models based on the information in the original publication, let alone rerun the models just because the model implementations have not been made publicly available. We evaluate and discuss the comparability of a versatile choice of simulation tools: tools for biochemical reactions and spiking neuronal networks, and relatively new tools for growth in cell cultures. The replicability and reproducibility issues are considered for computational models that are equally diverse, including the models for intracellular signal transduction of neurons and glial cells, in addition to single glial cells, neuron-glia interactions, and selected examples of spiking neuronal networks. We also address the comparability of the simulation results with one another to comprehend if the studied models can be used to answer similar research questions. In addition to presenting the challenges in reproducibility and replicability of published results in computational neuroscience, we highlight the need for developing recommendations and good practices for publishing simulation tools and computational models. Model validation and flexible model description must be an integral part of the tool used to simulate and develop computational models. Constant improvement on experimental techniques and recording protocols leads to increasing knowledge about the biophysical mechanisms in neural systems. This poses new challenges for computational neuroscience: extended or completely new computational methods and models may be required. Careful evaluation and categorization of the existing models and tools provide a foundation for these future needs, for constructing multiscale models or extending the models to incorporate additional or more detailed biophysical mechanisms. Improving the quality of publications in computational neuroscience, enabling progressive building of advanced computational models and tools, can be achieved only through adopting publishing standards which underline replicability and reproducibility of research results.

  5. Challenges in Reproducibility, Replicability, and Comparability of Computational Models and Tools for Neuronal and Glial Networks, Cells, and Subcellular Structures

    PubMed Central

    Manninen, Tiina; Aćimović, Jugoslava; Havela, Riikka; Teppola, Heidi; Linne, Marja-Leena

    2018-01-01

    The possibility to replicate and reproduce published research results is one of the biggest challenges in all areas of science. In computational neuroscience, there are thousands of models available. However, it is rarely possible to reimplement the models based on the information in the original publication, let alone rerun the models just because the model implementations have not been made publicly available. We evaluate and discuss the comparability of a versatile choice of simulation tools: tools for biochemical reactions and spiking neuronal networks, and relatively new tools for growth in cell cultures. The replicability and reproducibility issues are considered for computational models that are equally diverse, including the models for intracellular signal transduction of neurons and glial cells, in addition to single glial cells, neuron-glia interactions, and selected examples of spiking neuronal networks. We also address the comparability of the simulation results with one another to comprehend if the studied models can be used to answer similar research questions. In addition to presenting the challenges in reproducibility and replicability of published results in computational neuroscience, we highlight the need for developing recommendations and good practices for publishing simulation tools and computational models. Model validation and flexible model description must be an integral part of the tool used to simulate and develop computational models. Constant improvement on experimental techniques and recording protocols leads to increasing knowledge about the biophysical mechanisms in neural systems. This poses new challenges for computational neuroscience: extended or completely new computational methods and models may be required. Careful evaluation and categorization of the existing models and tools provide a foundation for these future needs, for constructing multiscale models or extending the models to incorporate additional or more detailed biophysical mechanisms. Improving the quality of publications in computational neuroscience, enabling progressive building of advanced computational models and tools, can be achieved only through adopting publishing standards which underline replicability and reproducibility of research results. PMID:29765315

  6. Simplified energy-balance model for pragmatic multi-dimensional device simulation

    NASA Astrophysics Data System (ADS)

    Chang, Duckhyun; Fossum, Jerry G.

    1997-11-01

    To pragmatically account for non-local carrier heating and hot-carrier effects such as velocity overshoot and impact ionization in multi-dimensional numerical device simulation, a new simplified energy-balance (SEB) model is developed and implemented in FLOODS[16] as a pragmatic option. In the SEB model, the energy-relaxation length is estimated from a pre-process drift-diffusion simulation using the carrier-velocity distribution predicted throughout the device domain, and is used without change in a subsequent simpler hydrodynamic (SHD) simulation. The new SEB model was verified by comparison of two-dimensional SHD and full HD DC simulations of a submicron MOSFET. The SHD simulations yield detailed distributions of carrier temperature, carrier velocity, and impact-ionization rate, which agree well with the full HD simulation results obtained with FLOODS. The most noteworthy feature of the new SEB/SHD model is its computational efficiency, which results from reduced Newton iteration counts caused by the enhanced linearity. Relative to full HD, SHD simulation times can be shorter by as much as an order of magnitude since larger voltage steps for DC sweeps and larger time steps for transient simulations can be used. The improved computational efficiency can enable pragmatic three-dimensional SHD device simulation as well, for which the SEB implementation would be straightforward as it is in FLOODS or any robust HD simulator.

  7. Construction of multi-functional open modulized Matlab simulation toolbox for imaging ladar system

    NASA Astrophysics Data System (ADS)

    Wu, Long; Zhao, Yuan; Tang, Meng; He, Jiang; Zhang, Yong

    2011-06-01

    Ladar system simulation uses computer models of the ladar system to predict its performance. This paper reviews developments in laser imaging radar simulation in domestic and international studies, covering computer simulations of ladar systems for different application requirements. The LadarSim and FOI-LadarSIM simulation facilities of Utah State University and the Swedish Defence Research Agency are introduced in detail. Domestic research in imaging ladar system simulation has been limited in scale, non-unified in design, and mostly aimed at simple functional simulation based on ladar ranging equations. A laser imaging radar simulation with an open and modularized structure is therefore proposed, with unified modules for the ladar system, laser emitter, atmosphere models, target models, signal receiver, parameter settings, and system controller. A unified Matlab toolbox and standard control modules have been built with regulated function inputs and outputs and with communication protocols between hardware modules. A simulation of an ICCD gain-modulated imaging ladar system viewing a space shuttle was carried out with the toolbox. The simulation results show that the models and parameter settings of the Matlab toolbox can reproduce the actual detection process precisely. The unified control module and pre-defined parameter settings simplify the simulation of imaging ladar detection, the open structure enables the toolbox to be modified for specialized requirements, and the modularization gives simulations flexibility.

  8. Creating A Data Base For Design Of An Impeller

    NASA Technical Reports Server (NTRS)

    Prueger, George H.; Chen, Wei-Chung

    1993-01-01

    Report describes use of Taguchi method of parametric design to create data base facilitating optimization of design of impeller in centrifugal pump. Data base enables systematic design analysis covering all significant design parameters. Reduces time and cost of parametric optimization of design: for particular impeller considered, one can cover 4,374 designs by computational simulations of performance for only 18 cases.
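
    The 4,374 candidate designs and 18 simulated cases quoted above are consistent with a full factorial over one two-level and seven three-level design parameters sampled by a Taguchi L18 orthogonal array; the short sketch below only checks that arithmetic. The factor names and level counts are hypothetical assumptions consistent with those totals.

    ```python
    # Full-factorial count vs. the 18-run Taguchi design cited in the abstract.
    from math import prod

    levels = {"shroud_type": 2, "blade_count": 3, "inlet_angle": 3, "exit_angle": 3,
              "blade_height": 3, "hub_radius": 3, "tip_radius": 3, "wrap_angle": 3}

    full_factorial = prod(levels.values())      # 2 * 3**7 = 4374 designs
    l18_runs = 18
    print(f"full factorial      : {full_factorial} simulations")
    print(f"L18 orthogonal array: {l18_runs} simulations "
          f"({full_factorial // l18_runs}x fewer)")
    ```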

  9. Overcoming spatio-temporal limitations using dynamically scaled in vitro PC-MRI - A flow field comparison to true-scale computer simulations of idealized, stented and patient-specific left main bifurcations.

    PubMed

    Beier, Susann; Ormiston, John; Webster, Mark; Cater, John; Norris, Stuart; Medrano-Gracia, Pau; Young, Alistair; Gilbert, Kathleen; Cowan, Brett

    2016-08-01

    The majority of patients with angina or heart failure have coronary artery disease. Left main bifurcations are particularly susceptible to pathological narrowing. Flow is a major factor in atheroma development, but limitations in imaging technology such as spatio-temporal resolution, signal-to-noise ratio (SNRv), and imaging artefacts prevent in vivo investigations. Computational fluid dynamics (CFD) modelling is a common numerical approach to study flow, but it requires cautious and rigorous application for meaningful results. Left main bifurcation angles of 40°, 80° and 110° were found to represent the spread of an atlas built from 100 computed tomography angiograms. Three left mains with these bifurcation angles were reconstructed with 1) idealized, 2) stented, and 3) patient-specific geometry. These were then scaled up approximately 7× and 3D printed as large phantoms. Their flow was reproduced using a blood-analogous, dynamically scaled steady flow circuit, enabling in vitro phase-contrast magnetic resonance (PC-MRI) measurements. After threshold segmentation the image data were registered to true-scale CFD of the same coronary geometry using a coherent point drift algorithm, yielding a small covariance error (σ² < 5.8×10⁻⁴). Natural-neighbour interpolation of the CFD data onto the PC-MRI grid enabled direct flow field comparison, showing very good agreement in magnitude (error 2-12%) and directional changes (r² 0.87-0.91), and stent-induced flow alterations were measurable for the first time. PC-MRI over-estimated velocities close to the wall, possibly due to partial voluming. Bifurcation shape determined the development of slow-flow regions, which created lower-SNRv regions and increased discrepancies. These can likely be minimised in the future by testing different similarity parameters to reduce acquisition error and improve correlation further. It was demonstrated that in vitro large-phantom acquisition correlates with true-scale coronary flow simulations when dynamically scaled, and thus can overcome current PC-MRI spatio-temporal limitations. This novel method enables experimental assessment of stent-induced flow alterations, and in the future may improve CFD coronary flow simulations by providing sophisticated boundary conditions and enabling investigations of stenosis phantoms.
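
    A sketch of the dynamic-scaling step described above: the phantom flow is chosen so that its Reynolds number matches the true-scale coronary flow despite the roughly 7× geometric scale-up and a different working fluid. All fluid properties and the coronary baseline below are illustrative assumptions, not the study's values.

    ```python
    # Reynolds-number matching for a 7x scaled-up phantom (steady flow).
    import math

    SCALE = 7.0
    NU_BLOOD = 3.3e-6          # kinematic viscosity of blood [m^2/s] (assumed)
    NU_ANALOG = 1.2e-5         # blood-analogue fluid in the circuit [m^2/s] (assumed)
    D_VESSEL = 4.0e-3          # left main diameter [m] (assumed)
    U_VESSEL = 0.25            # mean coronary velocity [m/s] (assumed)

    re_target = U_VESSEL * D_VESSEL / NU_BLOOD

    d_phantom = SCALE * D_VESSEL
    u_phantom = re_target * NU_ANALOG / d_phantom
    q_phantom = u_phantom * math.pi * (d_phantom / 2) ** 2

    print(f"target Re            : {re_target:.0f}")
    print(f"phantom mean velocity: {u_phantom * 100:.1f} cm/s")
    print(f"phantom flow rate    : {q_phantom * 6e7:.0f} mL/min")
    ```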

  10. Simulation of the concomitant process of nucleation-growth-coarsening of Al2Cu particles in a 319 foundry aluminum alloy

    NASA Astrophysics Data System (ADS)

    Martinez, R.; Larouche, D.; Cailletaud, G.; Guillot, I.; Massinon, D.

    2015-06-01

    The precipitation of Al2Cu particles in a 319 T7 aluminum alloy has been modeled. A theoretical approach enables the concomitant computation of nucleation, growth and coarsening. The framework is based on an implicit finite-difference scheme. The continuity equation is discretized in time and space to obtain a matrix form, and inverting the resulting tridiagonal matrix yields the evolution of the size distribution of Al2Cu particles at t + Δt. The fluxes across the interior boundaries, as well as the fluxes at the domain boundaries, are computed so as to conserve the mass of the system. The essential results of the model are compared to TEM measurements. Simulations provide quantitative insight into the impact of the cooling rate on the particle size distribution, and their results agree with the TEM measurements. This kind of multiscale approach opens new perspectives in the design of highly loaded components such as cylinder heads, enabling a more precise prediction of the microstructure and its evolution as a function of continuous cooling rate.
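
    The tridiagonal system produced by the implicit finite-difference discretization described above can be solved directly with the Thomas algorithm. The generic sketch below (not the authors' code) is checked against a dense solve on an arbitrary small system.

    ```python
    # Thomas algorithm for a tridiagonal system A x = d.
    import numpy as np

    def thomas(a, b, c, d):
        """Solve with sub-diagonal a, diagonal b, super-diagonal c (a[0], c[-1] unused)."""
        n = len(b)
        cp, dp = np.zeros(n), np.zeros(n)
        cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
        for i in range(1, n):
            m = b[i] - a[i] * cp[i - 1]
            cp[i] = c[i] / m if i < n - 1 else 0.0
            dp[i] = (d[i] - a[i] * dp[i - 1]) / m
        x = np.zeros(n)
        x[-1] = dp[-1]
        for i in range(n - 2, -1, -1):
            x[i] = dp[i] - cp[i] * x[i + 1]
        return x

    # Arbitrary small test system, compared against a dense solver.
    n = 6
    a = np.full(n, -1.0); a[0] = 0.0
    c = np.full(n, -1.0); c[-1] = 0.0
    b = np.full(n, 2.5)
    d = np.arange(1.0, n + 1)
    A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
    print(np.allclose(thomas(a, b, c, d), np.linalg.solve(A, d)))
    ```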

  11. Development of capability for microtopography-resolving simulations of hydrologic processes in permafrost affected regions

    NASA Astrophysics Data System (ADS)

    Painter, S.; Moulton, J. D.; Berndt, M.; Coon, E.; Garimella, R.; Lewis, K. C.; Manzini, G.; Mishra, P.; Travis, B. J.; Wilson, C. J.

    2012-12-01

    The frozen soils of the Arctic and subarctic regions contain vast amounts of stored organic carbon. This carbon is vulnerable to release to the atmosphere as temperatures warm and permafrost degrades. Understanding the response of the subsurface and surface hydrologic system to degrading permafrost is key to understanding the rate, timing, and chemical form of potential carbon releases to the atmosphere. Simulating the hydrologic system in degrading permafrost regions is challenging because of the potential for topographic evolution and associated drainage network reorganization as permafrost thaws and massive ground ice melts. The critical process models required for simulating hydrology include subsurface thermal hydrology of freezing/thawing soils, thermal processes within ice wedges, mechanical deformation processes, overland flow, and surface energy balances including snow dynamics. A new simulation tool, the Arctic Terrestrial Simulator (ATS), is being developed to simulate these coupled processes. The computational infrastructure must accommodate fully unstructured grids that track evolving topography, allow accurate solutions on distorted grids, provide robust and efficient solutions on highly parallel computer architectures, and enable flexibility in the strategies for coupling among the various processes. The ATS is based on Amanzi (Moulton et al. 2012), an object-oriented multi-process simulator written in C++ that provides much of the necessary computational infrastructure. Status and plans for the ATS including major hydrologic process models and validation strategies will be presented. Highly parallel simulations of overland flow using high-resolution digital elevation maps of polygonal patterned ground landscapes demonstrate the feasibility of the approach. Simulations coupling three-phase subsurface thermal hydrology with a simple thaw-induced subsidence model illustrate the strong feedbacks among the processes. D. Moulton, M. Berndt, M. Day, J. Meza, et al., High-Level Design of Amanzi, the Multi-Process High Performance Computing Simulator, Technical Report ASCEM-HPC-2011-03-1, DOE Environmental Management, 2012.

  12. Computational Flow Modeling of Human Upper Airway Breathing

    NASA Astrophysics Data System (ADS)

    Mylavarapu, Goutham

    Computational modeling of biological systems has gained much interest in biomedical research in the recent past. This thesis focuses on the application of computational simulations to study airflow dynamics in the human upper respiratory tract. With advancements in medical imaging, patient-specific geometries of anatomically accurate respiratory tracts can now be reconstructed from Magnetic Resonance Imaging (MRI) or Computed Tomography (CT) scans, with better and more accurate detail than traditional cadaver cast models. Computational studies using these individualized geometrical models have the advantages of non-invasiveness, ease of use, minimal patient interaction, and improved accuracy over experimental and clinical studies. Numerical simulations can provide detailed flow fields in an airway, including velocities, flow rates, airway wall pressure, shear stresses, and turbulence. Interpretation of these physical quantities will enable the development of efficient treatment procedures, medical devices, targeted drug delivery, etc. The hypothesis for this research is that computational modeling can predict the outcomes of a surgical intervention or a treatment plan prior to its application and will guide the physician in providing better treatment to patients. In the current work, three different computational approaches, Computational Fluid Dynamics (CFD), Flow-Structure Interaction (FSI), and particle flow simulations, were used to investigate flow in airway geometries. The CFD approach assumes the airway wall to be rigid and is relatively easy to simulate, compared to the more challenging FSI approach, in which the interaction of airway wall deformation with the flow is also accounted for. The CFD methodology using different turbulence models is validated against experimental measurements in an airway phantom. Two case studies using CFD are demonstrated: one to quantify a pre- and post-operative airway, and another to perform virtual surgery to determine the best possible surgery for a constricted airway. The unsteady Large Eddy Simulation (LES) and steady Reynolds-Averaged Navier-Stokes (RANS) approaches to CFD modeling are discussed. The more challenging FSI approach is modeled first in a simple two-dimensional anatomical geometry, then extended to a simplified three-dimensional geometry, and finally to anatomically accurate three-dimensional geometries. The concepts of virtual surgery and the differences from CFD are discussed. Finally, the influence of various drug delivery parameters on particle deposition efficiency in the airway anatomy is investigated through particle-flow simulations in a nasal airway model.
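
    The particle-flow part of such a study can be sketched in a few lines. The example below is not the thesis code: it tracks inertial particles with Stokes drag and gravity through a prescribed parabolic channel flow (standing in for a CFD velocity field, with assumed channel dimensions and particle properties) and reports the fraction that deposits on the walls as a crude proxy for deposition efficiency.

```python
# Minimal sketch (illustrative, not the thesis code): inertial particles with
# Stokes drag in a prescribed 2-D channel flow; the deposited fraction is a
# crude proxy for deposition efficiency.
import numpy as np

rng = np.random.default_rng(0)
H, L = 0.01, 0.1                        # channel half-height and length [m] (assumed)
U0 = 2.0                                # centreline air speed [m/s] (assumed)
rho_p, d_p, mu = 1000.0, 10e-6, 1.8e-5  # particle density, diameter, air viscosity
tau_p = rho_p * d_p ** 2 / (18.0 * mu)  # Stokes relaxation time [s]
g = -9.81

def air_velocity(y):
    """Prescribed parabolic streamwise velocity profile (stand-in for a CFD field)."""
    return U0 * (1.0 - (y / H) ** 2)

n_p, dt, max_steps = 2000, 1e-4, 200000
x = np.zeros(n_p)
y = rng.uniform(-H, H, n_p)             # seed particles uniformly across the inlet
u = air_velocity(y)                     # particles start with the local gas velocity
v = np.zeros(n_p)
deposited = np.zeros(n_p, dtype=bool)

for _ in range(max_steps):
    active = (x < L) & ~deposited
    if not active.any():
        break
    # Stokes drag relaxes the particle velocity toward the gas; gravity adds settling.
    u[active] += dt * (air_velocity(y[active]) - u[active]) / tau_p
    v[active] += dt * (-v[active] / tau_p + g)
    x[active] += dt * u[active]
    y[active] += dt * v[active]
    deposited |= active & (np.abs(y) >= H)

print("deposition efficiency:", deposited.sum() / n_p)
```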

  13. Adiabatic topological quantum computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cesare, Chris; Landahl, Andrew J.; Bacon, Dave

    Topological quantum computing promises error-resistant quantum computation without active error correction. However, there is a worry that during the process of executing quantum gates by braiding anyons around each other, extra anyonic excitations will be created that will disorder the encoded quantum information. Here, we explore this question in detail by studying adiabatic code deformations on Hamiltonians based on topological codes, notably Kitaev’s surface codes and the more recently discovered color codes. We develop protocols that enable universal quantum computing by adiabatic evolution in a way that keeps the energy gap of the system constant with respect to the computation size and introduces only simple local Hamiltonian interactions. This allows one to perform holonomic quantum computing with these topological quantum computing systems. The tools we develop allow one to go beyond numerical simulations and understand these processes analytically.
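
    The central quantity in such adiabatic protocols is the spectral gap along the interpolation path. The sketch below is not the paper's construction: it uses toy single-qubit Hamiltonians (real surface and color codes use multi-qubit stabilizer terms) purely to show how one would interpolate H(s) = (1 - s)H0 + sH1 and track the gap numerically.

```python
# Toy sketch (not the paper's protocol): interpolate between two Hamiltonians and
# track the spectral gap along the adiabatic path s -> (1 - s)*H0 + s*H1.
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def kron_all(ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

# Single-body 3-qubit Hamiltonians chosen only so the spectra are easy to compute;
# real surface/color codes would use multi-qubit stabilizer terms instead.
H0 = -(kron_all([Z, I2, I2]) + kron_all([I2, Z, I2]) + kron_all([I2, I2, Z]))
H1 = -(kron_all([X, I2, I2]) + kron_all([I2, X, I2]) + kron_all([I2, I2, X]))

def gap(H):
    evals = np.linalg.eigvalsh(H)
    return evals[1] - evals[0]

for s in np.linspace(0.0, 1.0, 11):
    H_s = (1.0 - s) * H0 + s * H1
    print(f"s = {s:.1f}   gap = {gap(H_s):.3f}")
```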

  14. Adiabatic topological quantum computing

    DOE PAGES

    Cesare, Chris; Landahl, Andrew J.; Bacon, Dave; ...

    2015-07-31

    Topological quantum computing promises error-resistant quantum computation without active error correction. However, there is a worry that during the process of executing quantum gates by braiding anyons around each other, extra anyonic excitations will be created that will disorder the encoded quantum information. Here, we explore this question in detail by studying adiabatic code deformations on Hamiltonians based on topological codes, notably Kitaev’s surface codes and the more recently discovered color codes. We develop protocols that enable universal quantum computing by adiabatic evolution in a way that keeps the energy gap of the system constant with respect to the computation size and introduces only simple local Hamiltonian interactions. This allows one to perform holonomic quantum computing with these topological quantum computing systems. The tools we develop allow one to go beyond numerical simulations and understand these processes analytically.

  15. Open-Source Software in Computational Research: A Case Study

    DOE PAGES

    Syamlal, Madhava; O'Brien, Thomas J.; Benyahia, Sofiane; ...

    2008-01-01

    A case study of open-source (OS) development of the computational research software MFIX, used for multiphase computational fluid dynamics simulations, is presented here. The verification and validation steps required for constructing modern computational software and the advantages of OS development in those steps are discussed. The infrastructure used for enabling the OS development of MFIX is described. The impact of OS development on computational research and education in gas-solids flow, as well as the dissemination of information to other areas such as geophysical and volcanology research, is demonstrated. This study shows that the advantages of OS development were realized in the case of MFIX: verification by many users, which enhances software quality; the use of software as a means for accumulating and exchanging information; the facilitation of peer review of the results of computational research.
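
    The kind of automated verification check such projects rely on can be illustrated with a short sketch. This is not an MFIX test; it compares a simple explicit 1-D diffusion solver against its exact decaying-sine solution and requires the error to shrink under grid refinement.

```python
# Sketch of an automated verification check (not an MFIX test): compare an
# explicit 1-D diffusion solver against the exact decaying-sine solution and
# require the error to fall as the grid is refined.
import numpy as np

def solve_diffusion(nx, t_end, alpha=1.0):
    x = np.linspace(0.0, 1.0, nx)
    dx = x[1] - x[0]
    dt = 0.25 * dx ** 2 / alpha          # stable explicit step
    u = np.sin(np.pi * x)                # initial condition, u = 0 at both ends
    t = 0.0
    while t < t_end:
        step = min(dt, t_end - t)
        u[1:-1] += alpha * step / dx ** 2 * (u[2:] - 2.0 * u[1:-1] + u[:-2])
        t += step
    return x, u

def exact(x, t, alpha=1.0):
    return np.exp(-np.pi ** 2 * alpha * t) * np.sin(np.pi * x)

errors = []
for nx in (33, 65, 129):
    x, u = solve_diffusion(nx, t_end=0.05)
    errors.append(np.max(np.abs(u - exact(x, 0.05))))

assert errors[0] > errors[1] > errors[2], "error must shrink under refinement"
print("max errors:", errors)
```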

  16. Fluid Structure Interaction in a Turbine Blade

    NASA Technical Reports Server (NTRS)

    Gorla, Rama S. R.

    2004-01-01

    An unsteady, three-dimensional Navier-Stokes solution in a rotating-frame formulation for turbomachinery applications is presented. Casting the governing equations in a rotating frame enabled the grid motion to be frozen and resulted in substantial savings in computer time. The turbine blade was computationally simulated and probabilistically evaluated in view of several uncertainties in the aerodynamic, structural, material, and thermal variables that govern the turbine blade. The interconnection between the computational fluid dynamics code and the finite element structural analysis code was necessary to couple the thermal profiles with the structural design. The stresses and their variations were evaluated at critical points on the turbine blade. Cumulative distribution functions and sensitivity factors were computed for the stress responses due to aerodynamic, geometric, mechanical, and thermal random variables.
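
    The probabilistic post-processing described here can be sketched with a simple Monte Carlo loop. The example below is not the NASA analysis chain: the stress response is a stand-in for the coupled CFD/finite-element evaluation, and the input distributions are assumed for illustration only.

```python
# Hedged sketch (not the NASA analysis chain): Monte Carlo propagation of random
# inputs through a stand-in stress response, yielding an empirical CDF and crude
# sensitivity factors for the stress.
import numpy as np

rng = np.random.default_rng(1)
n = 20000

# Hypothetical random variables (means and spreads assumed for illustration only).
p_aero  = rng.normal(1.0e6, 5.0e4, n)     # aerodynamic load [Pa]
t_blade = rng.normal(900.0, 25.0, n)      # metal temperature [K]
thick   = rng.normal(2.0e-3, 1.0e-4, n)   # wall thickness [m]

def stress(p, T, h):
    """Toy stress response standing in for the coupled CFD/FE evaluation."""
    return p * 0.05 / h + 2.0e5 * (T - 800.0) / 100.0

s = stress(p_aero, t_blade, thick)

# Empirical CDF at a few stress levels.
levels = np.percentile(s, [5, 50, 95])
cdf = [(s <= lv).mean() for lv in levels]

# Sensitivity factors as normalized correlations between inputs and the response.
inputs = {"p_aero": p_aero, "t_blade": t_blade, "thickness": thick}
sens = {k: np.corrcoef(v, s)[0, 1] for k, v in inputs.items()}
print("CDF points:", dict(zip(levels.round(0), cdf)))
print("sensitivity factors:", {k: round(v, 2) for k, v in sens.items()})
```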

  17. Enabling Earth Science: The Facilities and People of the NCCS

    NASA Technical Reports Server (NTRS)

    2002-01-01

    The NCCS's mass data storage system allows scientists to store and manage the vast amounts of data generated by these computations, and its high-speed network connections allow the data to be accessed quickly from the NCCS archives. Some NCCS users perform studies that are directly limited by their ability to run computationally expensive and data-intensive simulations. Because the number and type of questions scientists research are often limited by computing power, the NCCS continually pursues the latest technologies in computing, mass storage, and networking. Just as important as the processors, tapes, and routers of the NCCS are the personnel who administer this hardware, create and manage accounts, maintain security, and assist the scientists, often working one on one with them.

  18. Expanding the catalog of binary black-hole simulations: aligned-spin configurations

    NASA Astrophysics Data System (ADS)

    Chu, Tony; Pfeiffer, Harald; Scheel, Mark; Szilagyi, Bela; SXS Collaboration

    2015-04-01

    A major goal of numerical relativity is to model the inspiral and merger of binary black holes through sufficiently accurate and long simulations, to enable the successful detection of gravitational waves. However, covering the full parameter space of binary configurations is a computationally daunting task. The SXS Collaboration has made important progress in this direction recently, with a catalog of 174 publicly available binary black-hole simulations [black-holes.org/waveforms]. Nevertheless, the parameter-space coverage remains sparse, even for non-precessing binaries. In this talk, I will describe an addition to the SXS catalog to improve its coverage, consisting of 95 new simulations of aligned-spin binaries with moderate mass ratios and dimensionless spins as high as 0.9. Some applications of these new simulations will also be mentioned.

  19. Haptic interface of the KAIST-Ewha colonoscopy simulator II.

    PubMed

    Woo, Hyun Soo; Kim, Woo Seok; Ahn, Woojin; Lee, Doo Yong; Yi, Sun Young

    2008-11-01

    This paper presents an improved haptic interface for the Korea Advanced Institute of Science and Technology-Ewha Colonoscopy Simulator II. The haptic interface enables the distal portion of the colonoscope to be bent freely while guaranteeing sufficient workspace and reflective forces for colonoscopy simulation. Its force-torque sensor measures the force profiles applied by the user. Manipulation of the colonoscope tip is monitored by four deflection sensors and triggers computations to render accurate graphic images corresponding to the rotation of the angle knob. Tack sensors are attached to the valve-actuation buttons of the colonoscope to simulate air injection or suction as well as the corresponding deformation of the colon. A survey study for face validation was conducted, and the results show that the developed haptic interface provides realistic haptic feedback for colonoscopy simulations.

  20. A Java-Enabled Interactive Graphical Gas Turbine Propulsion System Simulator

    NASA Technical Reports Server (NTRS)

    Reed, John A.; Afjeh, Abdollah A.

    1997-01-01

    This paper describes a gas turbine simulation system that utilizes the newly developed Java language environment. The system provides an interactive graphical environment which allows the quick and efficient construction and analysis of arbitrary gas turbine propulsion systems. The simulation system couples a graphical user interface, developed using the Java Abstract Window Toolkit, and a transient, space-averaged, aero-thermodynamic gas turbine analysis method, both coded entirely in the Java language. The combined package provides analytical, graphical, and data management tools which allow the user to construct and control engine simulations by manipulating graphical objects on the computer display screen. Distributed simulations, including parallel processing and distributed database access across the Internet and World-Wide Web (WWW), are made possible through services provided by the Java environment.
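
    The space-averaged component analysis underlying such simulators can be illustrated with a minimal station-by-station cycle calculation. The sketch below is written in Python rather than the paper's Java and is not the simulator's actual method; the pressure ratio, efficiencies, and turbine inlet temperature are assumed values.

```python
# Minimal sketch (not the simulator's method): a steady, space-averaged
# station-by-station Brayton-cycle calculation of the kind component-based
# engine simulators evaluate at each time step.
cp, gamma = 1005.0, 1.4          # air properties (assumed constant)
T0, P0 = 288.15, 101325.0        # ambient conditions [K], [Pa]
pi_c, eta_c = 10.0, 0.85         # compressor pressure ratio and efficiency (assumed)
T_turbine_inlet = 1400.0         # turbine inlet temperature [K] (assumed)
eta_t = 0.90                     # turbine efficiency (assumed)

# Compressor: isentropic temperature rise corrected by efficiency.
T3_ideal = T0 * pi_c ** ((gamma - 1.0) / gamma)
T3 = T0 + (T3_ideal - T0) / eta_c
w_comp = cp * (T3 - T0)          # specific compressor work [J/kg]

# Turbine expands back to ambient pressure (single spool, nozzle omitted).
T5_ideal = T_turbine_inlet * (1.0 / pi_c) ** ((gamma - 1.0) / gamma)
T5 = T_turbine_inlet - eta_t * (T_turbine_inlet - T5_ideal)
w_turb = cp * (T_turbine_inlet - T5)   # specific turbine work [J/kg]

print(f"compressor work {w_comp/1e3:.1f} kJ/kg, turbine work {w_turb/1e3:.1f} kJ/kg,"
      f" net {(w_turb - w_comp)/1e3:.1f} kJ/kg")
```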
