DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Renke; Jin, Shuangshuang; Chen, Yousu
This paper presents a faster-than-real-time dynamic simulation software package designed for large-scale power system dynamic simulation. It was developed on the GridPACK™ high-performance computing (HPC) framework. The key features of the developed software package include (1) faster-than-real-time dynamic simulation for a WECC system (17,000 buses) with different types of detailed generator, controller, and relay dynamic models, (2) a decoupled parallel dynamic simulation algorithm with an optimized computation architecture to better leverage HPC resources and technologies, (3) options for HPC-based linear and iterative solvers, (4) hidden HPC details, such as data communication and distribution, to enable development centered on mathematical models and algorithms rather than on computational details for power system researchers, and (5) easy integration of new dynamic models and related algorithms into the software package.
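To make the underlying computation concrete, the sketch below integrates the classical swing equation for one generator against an infinite bus through a brief fault; a package like the one described integrates thousands of coupled, detailed machine models in parallel. All parameters and names here are illustrative assumptions, not taken from GridPACK.

```python
import numpy as np

# Minimal sketch (illustrative parameters): classical swing-equation dynamics
# for a single generator tied to an infinite bus, with a brief transmission
# fault emulated by a temporary drop in transfer capability.
H, D = 3.5, 1.0              # inertia constant (s), damping (pu)
f0 = 60.0                    # nominal frequency (Hz)
Pm, Pmax = 0.8, 1.8          # mechanical input, transfer limit (pu)
omega_s = 2 * np.pi * f0

def swing_step(delta, domega, dt, pmax):
    """One explicit Euler step of the swing equation (per-unit machine base)."""
    pe = pmax * np.sin(delta)                                  # electrical output
    ddomega = (omega_s / (2 * H)) * (Pm - pe - D * domega / omega_s)
    return delta + dt * domega, domega + dt * ddomega

delta, domega, dt = np.arcsin(Pm / Pmax), 0.0, 1e-3
for step in range(int(1.0 / dt)):
    t = step * dt
    pmax = 0.9 if 0.1 <= t < 0.2 else Pmax                     # 100 ms fault
    delta, domega = swing_step(delta, domega, dt, pmax)
print(f"rotor angle after 1 s: {np.degrees(delta):.1f} deg")
```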
NASA Technical Reports Server (NTRS)
Walker, Carrie K.
1991-01-01
A technique has been developed for combining features of a systems architecture design and assessment tool and a software development tool. This technique reduces simulation development time and expands simulation detail. The Architecture Design and Assessment System (ADAS), developed at the Research Triangle Institute, is a set of computer-assisted engineering tools for the design and analysis of computer systems. The ADAS system is based on directed graph concepts and supports the synthesis and analysis of software algorithms mapped to candidate hardware implementations. Greater simulation detail is provided by the ADAS functional simulator. With the functional simulator, programs written in either Ada or C can be used to provide a detailed description of graph nodes. A Computer-Aided Software Engineering tool developed at the Charles Stark Draper Laboratory (CSDL CASE) automatically generates Ada or C code from engineering block diagram specifications designed with an interactive graphical interface. A technique to use the tools together has been developed, which further automates the design process.
Computer-aided design of the RF-cavity for a high-power S-band klystron
NASA Astrophysics Data System (ADS)
Kant, D.; Bandyopadhyay, A. K.; Pal, D.; Meena, R.; Nangru, S. C.; Joshi, L. M.
2012-08-01
This article describes the computer-aided design of the RF-cavity for an S-band klystron operating at 2856 MHz. State-of-the-art electromagnetic simulation tools SUPERFISH, CST Microwave Studio, HFSS and MAGIC have been used for the cavity design. After finalising the geometrical details of the cavity through simulation, it has been fabricated and characterised through cold testing. Detailed results of the computer-aided simulation and cold measurements are presented in this article.
Dynamic Average-Value Modeling of Doubly-Fed Induction Generator Wind Energy Conversion Systems
NASA Astrophysics Data System (ADS)
Shahab, Azin
In a Doubly-fed Induction Generator (DFIG) wind energy conversion system, the rotor of a wound-rotor induction generator is connected to the grid via a partial-scale ac/ac power electronic converter which controls the rotor frequency and speed. In this research, detailed models of the DFIG wind energy conversion system with Sinusoidal Pulse-Width Modulation (SPWM) and Optimal Pulse-Width Modulation (OPWM) schemes for the power electronic converter are developed in PSCAD/EMTDC. As computer simulation using the detailed models tends to be computationally intensive, time consuming, and sometimes impractically slow, two modified approaches (switching-function modeling and average-value modeling) are proposed to reduce the simulation execution time. The results demonstrate that the two proposed approaches reduce the simulation execution time while the simulation results remain close to those obtained using the detailed model simulation.
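The core idea of average-value modeling can be shown independently of the DFIG detail: replace the switched voltage of a PWM leg with its duty-cycle average and compare the two on a simple load. The sketch below is an illustration of that principle only; the RL circuit and all parameters are assumptions, not the thesis's converter models.

```python
# Sketch of the averaging idea (assumed RL test circuit, not the thesis's DFIG
# models): simulate one PWM leg switching at fsw against its average-value
# replacement v = d * Vdc, and compare the resulting load currents.
R, L, Vdc, fsw, d = 1.0, 5e-3, 400.0, 5e3, 0.6
dt, T = 1e-6, 0.05
i_sw = i_av = 0.0
for k in range(int(T / dt)):
    carrier = (k * dt * fsw) % 1.0            # sawtooth carrier in [0, 1)
    v_sw = Vdc if d > carrier else 0.0        # switched leg voltage
    v_av = d * Vdc                            # average-value replacement
    i_sw += dt * (v_sw - R * i_sw) / L
    i_av += dt * (v_av - R * i_av) / L
print(f"switched model: {i_sw:.1f} A, average-value model: {i_av:.1f} A")
```

Both currents settle near d·Vdc/R; the averaged model reaches that answer without resolving every switching edge, which is precisely why it simulates faster.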
Use of Computer Simulation for the Analysis of Railroad Operations in the St. Louis Terminal Area
DOT National Transportation Integrated Search
1977-11-01
This report discusses the computer simulation methodology, its uses and limitations, and its applicability to the analysis of alternative railroad terminal restructuring plans. Included is a detailed discussion of the AAR Simulation System, an overvi...
Computational composite mechanics for aerospace propulsion structures
NASA Technical Reports Server (NTRS)
Chamis, C. C.
1986-01-01
Specialty methods are presented for the computational simulation of specific composite behavior. These methods encompass all aspects of composite mechanics, impact, progressive fracture and component specific simulation. Some of these methods are structured to computationally simulate, in parallel, the composite behavior and history from the initial fabrication through several missions and even to fracture. Select methods and typical results obtained from such simulations are described in detail in order to demonstrate the effectiveness of computationally simulating (1) complex composite structural behavior in general and (2) specific aerospace propulsion structural components in particular.
Computational composite mechanics for aerospace propulsion structures
NASA Technical Reports Server (NTRS)
Chamis, Christos C.
1987-01-01
Specialty methods are presented for the computational simulation of specific composite behavior. These methods encompass all aspects of composite mechanics, impact, progressive fracture and component specific simulation. Some of these methods are structured to computationally simulate, in parallel, the composite behavior and history from the initial fabrication through several missions and even to fracture. Select methods and typical results obtained from such simulations are described in detail in order to demonstrate the effectiveness of computationally simulating: (1) complex composite structural behavior in general, and (2) specific aerospace propulsion structural components in particular.
Effects of Geometric Details on Slat Noise Generation and Propagation
NASA Technical Reports Server (NTRS)
Khorrami, Mehdi R.; Lockard, David P.
2009-01-01
The relevance of geometric details to the generation and propagation of noise from leading-edge slats is considered. Typically, such details are omitted in computational simulations and model-scale experiments, thereby creating ambiguities in comparisons with acoustic results from flight tests. The current study uses two-dimensional computational simulations in conjunction with a Ffowcs Williams-Hawkings (FW-H) solver to investigate the effects of previously neglected slat "bulb" and "blade" seals on the local flow field and the associated acoustic radiation. The computations show that the presence of the "blade" seal at the cusp in the simulated geometry significantly changes the slat cove flow dynamics, reduces the amplitudes of the radiated sound, and, to a lesser extent, alters the directivity beneath the airfoil. Furthermore, the computations suggest that a modest extension of the baseline "blade" seal further enhances the suppression of slat noise. As a side issue, the utility and equivalence of the FW-H methodology for calculating far-field noise, as opposed to a more direct approach, are examined and demonstrated.
"Intelligent" Computer Assisted Instruction (CAI) Applications. Interim Report.
ERIC Educational Resources Information Center
Brown, John Seely; And Others
Interim work is documented describing efforts to modify computer techniques used to recognize and process English language requests to an instructional simulator. The conversion from a hand-coded to a table-driven technique is described in detail. Other modifications to a simulation-based computer-assisted instruction program to allow a gaming…
2000 Numerical Propulsion System Simulation Review
NASA Technical Reports Server (NTRS)
Lytle, John; Follen, Greg; Naiman, Cynthia; Veres, Joseph; Owen, Karl; Lopez, Isaac
2001-01-01
The technologies necessary to enable detailed numerical simulations of complete propulsion systems are being developed at the NASA Glenn Research Center in cooperation with industry, academia, and other government agencies. Large scale, detailed simulations will be of great value to the nation because they eliminate some of the costly testing required to develop and certify advanced propulsion systems. In addition, time and cost savings will be achieved by enabling design details to be evaluated early in the development process before a commitment is made to a specific design. This concept is called the Numerical Propulsion System Simulation (NPSS). NPSS consists of three main elements: (1) engineering models that enable multidisciplinary analysis of large subsystems and systems at various levels of detail, (2) a simulation environment that maximizes designer productivity, and (3) a cost-effective, high-performance computing platform. A fundamental requirement of the concept is that the simulations must be capable of overnight execution on easily accessible computing platforms. This will greatly facilitate the use of large-scale simulations in a design environment. This paper describes the current status of the NPSS with specific emphasis on the progress made over the past year on air breathing propulsion applications. Major accomplishments include the first formal release of the NPSS object-oriented architecture (NPSS Version 1) and the demonstration of a one order of magnitude reduction in computing cost-to-performance ratio using a cluster of personal computers. The paper also describes the future NPSS milestones, which include the simulation of space transportation propulsion systems in response to increased emphasis on safe, low cost access to space within NASA's Aerospace Technology Enterprise. In addition, the paper contains a summary of the feedback received from industry partners on the fiscal year 1999 effort and the actions taken over the past year to respond to that feedback. NPSS was supported in fiscal year 2000 by the High Performance Computing and Communications Program.
2001 Numerical Propulsion System Simulation Review
NASA Technical Reports Server (NTRS)
Lytle, John; Follen, Gregory; Naiman, Cynthia; Veres, Joseph; Owen, Karl; Lopez, Isaac
2002-01-01
The technologies necessary to enable detailed numerical simulations of complete propulsion systems are being developed at the NASA Glenn Research Center in cooperation with industry, academia and other government agencies. Large scale, detailed simulations will be of great value to the nation because they eliminate some of the costly testing required to develop and certify advanced propulsion systems. In addition, time and cost savings will be achieved by enabling design details to be evaluated early in the development process before a commitment is made to a specific design. This concept is called the Numerical Propulsion System Simulation (NPSS). NPSS consists of three main elements: (1) engineering models that enable multidisciplinary analysis of large subsystems and systems at various levels of detail, (2) a simulation environment that maximizes designer productivity, and (3) a cost-effective, high-performance computing platform. A fundamental requirement of the concept is that the simulations must be capable of overnight execution on easily accessible computing platforms. This will greatly facilitate the use of large-scale simulations in a design environment. This paper describes the current status of the NPSS with specific emphasis on the progress made over the past year on air breathing propulsion applications. Major accomplishments include the first formal release of the NPSS object-oriented architecture (NPSS Version 1) and the demonstration of a one order of magnitude reduction in computing cost-to-performance ratio using a cluster of personal computers. The paper also describes the future NPSS milestones, which include the simulation of space transportation propulsion systems in response to increased emphasis on safe, low cost access to space within NASA's Aerospace Technology Enterprise. In addition, the paper contains a summary of the feedback received from industry partners on the fiscal year 2000 effort and the actions taken over the past year to respond to that feedback. NPSS was supported in fiscal year 2001 by the High Performance Computing and Communications Program.
Numerical Simulations of Single Flow Element in a Nuclear Thermal Thrust Chamber
NASA Technical Reports Server (NTRS)
Cheng, Gary; Ito, Yasushi; Ross, Doug; Chen, Yen-Sen; Wang, Ten-See
2007-01-01
The objective of this effort is to develop an efficient and accurate computational methodology to predict both detailed and global thermo-fluid environments of a single flow element in a hypothetical solid-core nuclear thermal thrust chamber assembly. Several numerical and multi-physics thermo-fluid models, such as chemical reactions, turbulence, conjugate heat transfer, porosity, and power generation, were incorporated into an unstructured-grid, pressure-based computational fluid dynamics solver. The numerical simulations of a single flow element provide a detailed thermo-fluid environment for thermal stress estimation and insight into the possible occurrence of mid-section corrosion. In addition, detailed conjugate heat transfer simulations were employed to develop the porosity models for efficient pressure drop and thermal load calculations.
A new paradigm for atomically detailed simulations of kinetics in biophysical systems.
Elber, Ron
2017-01-01
The kinetics of biochemical and biophysical events determine the course of life processes and have attracted considerable interest and research. For example, modeling of biological networks and cellular responses relies on the availability of information on rate coefficients. Atomically detailed simulations hold the promise of supplementing experimental data to obtain a more complete kinetic picture. However, simulations at biological time scales are challenging. Typical computer resources are insufficient to provide the ensemble of trajectories at the length required for straightforward calculations of time scales. In recent years, new technologies have emerged that make atomically detailed simulations of rate coefficients possible. Instead of computing complete trajectories from reactants to products, these approaches launch a large number of short trajectories at different positions. Since the trajectories are short, they are computed trivially in parallel on modern computer architectures. The starting and termination positions of the short trajectories are chosen, following statistical mechanics theory, to enhance efficiency. These trajectories are then analyzed. The analysis produces accurate estimates of time scales as long as hours. The theory of Milestoning, which exploits the use of short trajectories, is discussed, and several applications are described.
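A toy version of the short-trajectory idea, assuming a 1D double-well potential and overdamped Langevin dynamics: launch many short trajectories between neighboring milestones (embarrassingly parallel), collect jump probabilities and mean lifetimes, and assemble a mean first passage time from the resulting jump chain. This is a didactic sketch, not the full Milestoning formalism.

```python
import numpy as np

# Toy short-trajectory rate estimate (illustrative potential and parameters):
# overdamped Langevin dynamics in the double well V(x) = (x^2 - 1)^2, with
# milestones spanning the barrier.
rng = np.random.default_rng(0)
beta, dt = 4.0, 1e-3
force = lambda x: -4 * x * (x**2 - 1)
milestones = np.linspace(-1.0, 1.0, 9)
n_m = len(milestones)

def short_trajectory(i):
    """Run from milestone i until a neighboring milestone is hit."""
    x, t = milestones[i], 0.0
    lo, hi = milestones[i - 1], milestones[i + 1]
    while lo < x < hi:
        x += force(x) * dt + np.sqrt(2 * dt / beta) * rng.normal()
        t += dt
    return (1 if x >= hi else -1), t

p_up, tau = np.zeros(n_m), np.zeros(n_m)
for i in range(1, n_m - 1):
    jumps, times = zip(*(short_trajectory(i) for _ in range(400)))
    p_up[i], tau[i] = np.mean(np.array(jumps) == 1), np.mean(times)

# Mean first passage time via the first-step relations
#   T_i = tau_i + p_i T_{i+1} + (1 - p_i) T_{i-1},  T_last = 0 (absorbing).
A, b = np.eye(n_m), np.zeros(n_m)
for i in range(1, n_m - 1):
    A[i, i - 1], A[i, i + 1], b[i] = -(1 - p_up[i]), -p_up[i], tau[i]
A[0, 1], b[0] = -1.0, tau[1]      # crude reflecting boundary at the reactant end
T = np.linalg.solve(A, b)
print(f"estimated MFPT over the barrier: {T[1]:.1f} time units")
```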
Numerical Propulsion System Simulation (NPSS) 1999 Industry Review
NASA Technical Reports Server (NTRS)
Lytle, John; Follen, Greg; Naiman, Cynthia; Evans, Austin
2000-01-01
The technologies necessary to enable detailed numerical simulations of complete propulsion systems are being developed at the NASA Glenn Research Center in cooperation with industry, academia, and other government agencies. Large scale, detailed simulations will be of great value to the nation because they eliminate some of the costly testing required to develop and certify advanced propulsion systems. In addition, time and cost savings will be achieved by enabling design details to be evaluated early in the development process before a commitment is made to a specific design. This concept is called the Numerical Propulsion System Simulation (NPSS). NPSS consists of three main elements: (1) engineering models that enable multidisciplinary analysis of large subsystems and systems at various levels of detail, (2) a simulation environment that maximizes designer productivity, and (3) a cost-effective, high-performance computing platform. A fundamental requirement of the concept is that the simulations must be capable of overnight execution on easily accessible computing platforms. This will greatly facilitate the use of large-scale simulations in a design environment. This paper describes the current status of the NPSS with specific emphasis on the progress made over the past year on air breathing propulsion applications. In addition, the paper contains a summary of the feedback received from industry partners in the development effort and the actions taken over the past year to respond to that feedback. The NPSS development was supported in FY99 by the High Performance Computing and Communications Program.
Power combining in an array of microwave power rectifiers
NASA Technical Reports Server (NTRS)
Gutmann, R. J.; Borrego, J. M.
1979-01-01
This work analyzes the resultant efficiency degradation when identical rectifiers operate at different RF power levels as caused by the power beam taper. Both a closed-form analytical circuit model and a detailed computer-simulation model are used to obtain the output dc load line of the rectifier. The efficiency degradation is nearly identical with series and parallel combining, and the closed-form analytical model provides results which are similar to the detailed computer-simulation model.
1D-3D hybrid modeling-from multi-compartment models to full resolution models in space and time.
Grein, Stephan; Stepniewski, Martin; Reiter, Sebastian; Knodel, Markus M; Queisser, Gillian
2014-01-01
Investigation of cellular and network dynamics in the brain by means of modeling and simulation has evolved into a highly interdisciplinary field that uses sophisticated modeling and simulation approaches to understand distinct areas of brain function. Depending on the underlying complexity, these models vary in their level of detail in order to cope with the attached computational cost. Hence, for large network simulations, single neurons are typically reduced to time-dependent signal processors, dismissing the spatial aspect of each cell. For single cells or networks with relatively small numbers of neurons, general-purpose simulators allow for space- and time-dependent simulations of electrical signal processing, based on cable equation theory. An emerging field in Computational Neuroscience encompasses a new level of detail by incorporating the full three-dimensional morphology of cells and organelles into three-dimensional, space- and time-dependent simulations. While every approach has its advantages and limitations, such as computational cost, integrated and methods-spanning simulation approaches, depending on the network size, could establish new ways to investigate the brain. In this paper we present a hybrid simulation approach that makes use of reduced 1D models (using, e.g., the NEURON simulator), which couple to fully resolved models for simulating cellular and sub-cellular dynamics, including the detailed three-dimensional morphology of neurons and organelles. In order to couple 1D- and 3D-simulations, we present a geometry-, membrane potential- and intracellular concentration mapping framework, with which graph-based morphologies, e.g., in the swc- or hoc-format, are mapped to full surface and volume representations of the neuron, and computational data from 1D-simulations can be used as boundary conditions for full 3D-simulations and vice versa. Thus, established models and data, based on general-purpose 1D-simulators, can be directly coupled to the emerging field of fully resolved, highly detailed 3D-modeling approaches. We present the developed general framework for 1D/3D hybrid modeling and apply it to investigate electrically active neurons and their intracellular spatio-temporal calcium dynamics.
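For flavor, here is a minimal sketch of the 1D side of such a coupling: an explicit finite-difference solve of the passive cable equation, tau V_t = lambda^2 V_xx - V. In a 1D/3D hybrid, the per-compartment potentials computed this way would be mapped onto matching patches of the 3D surface mesh as boundary conditions (and concentrations mapped back); the parameters here are assumed for illustration.

```python
import numpy as np

# Minimal sketch (assumed parameters): explicit finite-difference step of the
# passive cable equation  tau * V_t = lambda^2 * V_xx - V.
tau, lam = 10.0, 0.5            # membrane time constant (ms), space constant (mm)
dx, dt, n = 0.05, 0.01, 200     # compartment length (mm), time step (ms), count
v = np.zeros(n)                 # membrane potential deviation (mV)
for _ in range(int(5.0 / dt)):  # simulate 5 ms
    v[0] = 20.0                 # proximal end clamped (e.g., somatic input)
    d2v = np.zeros(n)
    d2v[1:-1] = (v[2:] - 2 * v[1:-1] + v[:-2]) / dx**2
    v += (dt / tau) * (lam**2 * d2v - v)
print(f"V at 1 mm from the clamp after 5 ms: {v[int(1.0 / dx)]:.2f} mV")
```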
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, David; Agarwal, Deborah A.; Sun, Xin
2011-09-01
The Carbon Capture Simulation Initiative is developing state-of-the-art computational modeling and simulation tools to accelerate the commercialization of carbon capture technology. The CCSI Toolset consists of an integrated multi-scale modeling and simulation framework, which includes extensive use of reduced order models (ROMs) and a comprehensive uncertainty quantification (UQ) methodology. This paper focuses on the interrelation among high performance computing, detailed device simulations, ROMs for scale-bridging, UQ and the integration framework.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, D.; Agarwal, D.; Sun, X.
2011-01-01
The Carbon Capture Simulation Initiative is developing state-of-the-art computational modeling and simulation tools to accelerate the commercialization of carbon capture technology. The CCSI Toolset consists of an integrated multi-scale modeling and simulation framework, which includes extensive use of reduced order models (ROMs) and a comprehensive uncertainty quantification (UQ) methodology. This paper focuses on the interrelation among high performance computing, detailed device simulations, ROMs for scale-bridging, UQ and the integration framework.
GEO3D - Three-Dimensional Computer Model of a Ground Source Heat Pump System
James Menart
2013-06-07
This file is the setup file for the computer program GEO3D. GEO3D is a computer program written by Jim Menart to simulate vertical wells in conjunction with a heat pump for ground source heat pump (GSHP) systems. This is a very detailed three-dimensional computer model. This program produces detailed heat transfer and temperature field information for a vertical GSHP system.
A computational workflow for designing silicon donor qubits
Humble, Travis S.; Ericson, M. Nance; Jakowski, Jacek; ...
2016-09-19
Developing devices that can reliably and accurately demonstrate the principles of superposition and entanglement is an on-going challenge for the quantum computing community. Modeling and simulation offer attractive means of testing early device designs and establishing expectations for operational performance. However, the complex integrated material systems required by quantum device designs are not captured by any single existing computational modeling method. We examine the development and analysis of a multi-staged computational workflow that can be used to design and characterize silicon donor qubit systems with modeling and simulation. Our approach integrates quantum chemistry calculations with electrostatic field solvers to perform detailed simulations of a phosphorus dopant in silicon. We show how atomistic details can be synthesized into an operational model for the logical gates that define quantum computation in this particular technology. In conclusion, the resulting computational workflow realizes a design tool for silicon donor qubits that can help verify and validate current and near-term experimental devices.
Simulation and control of a 20 kHz spacecraft power system
NASA Technical Reports Server (NTRS)
Wasynczuk, O.; Krause, P. C.
1988-01-01
A detailed computer representation of four Mapham inverters connected in a series-parallel arrangement has been implemented. System performance is illustrated by computer traces for the four Mapham inverters connected to a Litz cable with parallel resistance and dc receiver loads at the receiving end of the transmission cable. Methods of voltage control and load sharing between the inverters are demonstrated. Also, the detailed computer representation is used to design and to demonstrate the advantages of a feed-forward voltage control strategy. It is illustrated that with a computer simulation of this type, the performance and control of spacecraft power systems may be investigated with relative ease.
Computers for real time flight simulation: A market survey
NASA Technical Reports Server (NTRS)
Bekey, G. A.; Karplus, W. J.
1977-01-01
An extensive computer market survey was made to determine those available systems suitable for current and future flight simulation studies at Ames Research Center. The primary requirement is for the computation of relatively high frequency content (5 Hz) math models representing powered lift flight vehicles. The Rotor Systems Research Aircraft (RSRA) was used as a benchmark vehicle for computation comparison studies. The general nature of helicopter simulations and a description of the benchmark model are presented, and some of the sources of simulation difficulties are examined. A description of various applicable computer architectures is presented, along with detailed discussions of leading candidate systems and comparisons between them.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sprague, Michael A.; Boldyrev, Stanislav; Fischer, Paul
This report details the impact exascale will bring to turbulent-flow simulations in applied science and technology. The need for accurate simulation of turbulent flows is evident across the DOE applied-science and engineering portfolios, including combustion, plasma physics, nuclear-reactor physics, wind energy, and atmospheric science. The workshop brought together experts in turbulent-flow simulation, computational mathematics, and high-performance computing. Building upon previous ASCR workshops on exascale computing, participants defined a research agenda and path forward that will enable scientists and engineers to continually leverage, engage, and direct advances in computational systems on the path to exascale computing.
NASA Technical Reports Server (NTRS)
Marvin, J. G.; Horstman, C. C.; Rubesin, M. W.; Coakley, T. J.; Kussoy, M. I.
1975-01-01
An experiment designed to test and guide computations of the interaction of an impinging shock wave with a turbulent boundary layer is described. Detailed mean flow-field and surface data are presented for two shock strengths which resulted in attached and separated flows, respectively. Numerical computations, employing the complete time-averaged Navier-Stokes equations along with algebraic eddy-viscosity and turbulent Prandtl number models to describe shear stress and heat flux, are used to illustrate the dependence of the computations on the particulars of the turbulence models. Models appropriate for zero-pressure-gradient flows predicted the overall features of the flow fields, but were deficient in predicting many of the details of the interaction regions. Improvements to the turbulence model parameters were sought through a combination of detailed data analysis and computer simulations which tested the sensitivity of the solutions to model parameter changes. Computer simulations using these improvements are presented and discussed.
Development of an Efficient CFD Model for Nuclear Thermal Thrust Chamber Assembly Design
NASA Technical Reports Server (NTRS)
Cheng, Gary; Ito, Yasushi; Ross, Doug; Chen, Yen-Sen; Wang, Ten-See
2007-01-01
The objective of this effort is to develop an efficient and accurate computational methodology to predict both detailed thermo-fluid environments and global characteristics of the internal ballistics for a hypothetical solid-core nuclear thermal thrust chamber assembly (NTTCA). Several numerical and multi-physics thermo-fluid models, such as real fluid, chemically reacting, turbulence, conjugate heat transfer, porosity, and power generation, were incorporated into an unstructured-grid, pressure-based computational fluid dynamics solver as the underlying computational methodology. The numerical simulations of the detailed thermo-fluid environment of a single flow element provide a mechanism to estimate the thermal stress and possible occurrence of mid-section corrosion of the solid core. In addition, the numerical results of the detailed simulation were employed to fine-tune the porosity model to mimic the pressure drop and thermal load of the coolant flow through a single flow element. The use of the tuned porosity model enables an efficient simulation of the entire NTTCA system and an evaluation of its performance during the design cycle.
Validation of the solar heating and cooling high speed performance (HISPER) computer code
NASA Technical Reports Server (NTRS)
Wallace, D. B.
1980-01-01
Developed to give quick and accurate predictions, HISPER, a simplification of the TRNSYS program, achieves its computational speed by not simulating detailed system operations or performing detailed load computations. In order to validate the HISPER computer code for air systems, the simulation was compared to the actual performance of an operational test site. Solar insolation, ambient temperature, water usage rate, and water main temperatures from the data tapes for an office building in Huntsville, Alabama, were used as input. The HISPER program was found to predict the heating loads and the solar fraction of the loads with errors of less than ten percent. Good correlation was found on both a seasonal and a monthly basis. Several parameters (such as the infiltration rate and the outside ambient temperature above which heating is not required) were found to require careful selection for accurate simulation.
NASA Astrophysics Data System (ADS)
Cary, John R.; Abell, D.; Amundson, J.; Bruhwiler, D. L.; Busby, R.; Carlsson, J. A.; Dimitrov, D. A.; Kashdan, E.; Messmer, P.; Nieter, C.; Smithe, D. N.; Spentzouris, P.; Stoltz, P.; Trines, R. M.; Wang, H.; Werner, G. R.
2006-09-01
As the size and cost of particle accelerators escalate, high-performance computing plays an increasingly important role; optimization through accurate, detailed computer modeling increases performance and reduces costs. Consequently, computer simulations face enormous challenges. Early approximation methods, such as expansions in distance from the design orbit, were unable to supply detailed accurate results, such as in the computation of wake fields in complex cavities. Since the advent of message-passing supercomputers with thousands of processors, earlier approximations are no longer necessary, and it is now possible to compute wake fields, the effects of dampers, and self-consistent dynamics in cavities accurately. In this environment, the focus has shifted towards the development and implementation of algorithms that scale to large numbers of processors. So-called charge-conserving algorithms evolve the electromagnetic fields without the need for any global solves (which are difficult to scale up to many processors). Using cut-cell (or embedded) boundaries, these algorithms can simulate the fields in complex accelerator cavities with curved walls. New implicit algorithms, which are stable for any time step, conserve charge as well, allowing faster simulation of structures with details small compared to the characteristic wavelength. These algorithmic and computational advances have been implemented in the VORPAL framework, a flexible, object-oriented, massively parallel computational application that allows run-time assembly of algorithms and objects, thus composing an application on the fly.
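The local, staggered-grid update at the heart of such field solvers is easy to show in miniature. Below is a standard 1D Yee/FDTD leapfrog in normalized units (c = 1) with a soft Gaussian source; it is a generic textbook sketch, not VORPAL code.

```python
import numpy as np

# Generic 1D Yee/FDTD leapfrog in normalized units (c = 1): E is updated from
# the spatial difference of H and vice versa, entirely locally -- the property
# that lets such schemes scale to many processors without global solves.
nx, nt, dx = 400, 800, 1.0
dt = 0.5 * dx                         # satisfies the 1D Courant condition
ez, hy = np.zeros(nx), np.zeros(nx)
for n in range(nt):
    hy[:-1] += (dt / dx) * (ez[1:] - ez[:-1])        # half-step H update
    ez[1:] += (dt / dx) * (hy[1:] - hy[:-1])         # E update from curl H
    ez[nx // 4] += np.exp(-((n - 60) / 20.0) ** 2)   # soft Gaussian source
print(f"peak |Ez| after {nt} steps: {np.abs(ez).max():.3f}")
```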
Introducing Computer Simulation into the High School: An Applied Mathematics Curriculum.
ERIC Educational Resources Information Center
Roberts, Nancy
1981-01-01
A programming language called DYNAMO, developed especially for writing simulation models, is promoted. Details of six self-teaching curriculum packages recently developed for simulation-oriented instruction are provided. (MP)
NASA Astrophysics Data System (ADS)
Cai, Han-Jie; Zhang, Zhi-Lei; Fu, Fen; Li, Jian-Yang; Zhang, Xun-Chao; Zhang, Ya-Ling; Yan, Xue-Song; Lin, Ping; Xv, Jian-Ya; Yang, Lei
2018-02-01
The dense granular flow spallation target is a new target concept chosen for the Accelerator-Driven Subcritical (ADS) project in China. For the R&D of this kind of target concept, a dedicated Monte Carlo (MC) program named GMT was developed to perform the simulation study of the beam-target interaction. Owing to the complexities of the target geometry, the MC simulation of particle tracks is computationally expensive. Thus, improvement of computational efficiency will be essential for detailed MC simulation studies of the dense granular target. Here we present the special design of the GMT program and its high-efficiency performance. In addition, the speedup potential of the GPU-accelerated spallation models is discussed.
Large scale cardiac modeling on the Blue Gene supercomputer.
Reumann, Matthias; Fitch, Blake G; Rayshubskiy, Aleksandr; Keller, David U; Weiss, Daniel L; Seemann, Gunnar; Dössel, Olaf; Pitman, Michael C; Rice, John J
2008-01-01
Multi-scale, multi-physical heart models have not yet been able to include a high degree of accuracy and resolution with respect to model detail and spatial resolution due to computational limitations of current systems. We propose a framework to compute large scale cardiac models. Decomposition of anatomical data in segments to be distributed on a parallel computer is carried out by optimal recursive bisection (ORB). The algorithm takes into account a computational load parameter which has to be adjusted according to the cell models used. The diffusion term is realized by the monodomain equations. The anatomical data-set was given by both ventricles of the Visible Female data-set in a 0.2 mm resolution. Heterogeneous anisotropy was included in the computation. Model weights as input for the decomposition and load balancing were set to (a) 1 for tissue and 0 for non-tissue elements; (b) 10 for tissue and 1 for non-tissue elements. Scaling results for 512, 1024, 2048, 4096 and 8192 computational nodes were obtained for 10 ms simulation time. The simulations were carried out on an IBM Blue Gene/L parallel computer. A 1 s simulation was then carried out on 2048 nodes for the optimal model load. Load balances did not differ significantly across computational nodes even if the number of data elements distributed to each node differed greatly. Since the ORB algorithm did not take into account computational load due to communication cycles, the speedup is close to optimal for the computation time but not optimal overall due to the communication overhead. However, the simulation times were reduced from 87 minutes on 512 to 11 minutes on 8192 nodes. This work demonstrates that it is possible to run simulations of the presented detailed cardiac model of a single heart beat within hours.
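A toy version of the weighted decomposition step: orthogonal recursive bisection that splits the voxel set along its longest axis so each half carries roughly half the computational weight (10 for tissue, 1 for non-tissue, as in the paper). The geometry below is random rather than anatomical.

```python
import numpy as np

# Toy weighted orthogonal recursive bisection (ORB) over a random voxel cloud.
rng = np.random.default_rng(1)
vox = rng.integers(0, 64, size=(20000, 3)).astype(float)   # voxel coordinates
w = np.where(rng.random(len(vox)) < 0.3, 10.0, 1.0)        # 30% "tissue" voxels

def orb(idx, n_parts):
    """Recursively split index set `idx` into n_parts weight-balanced parts."""
    if n_parts == 1:
        return [idx]
    axis = np.argmax(vox[idx].max(axis=0) - vox[idx].min(axis=0))  # longest axis
    order = idx[np.argsort(vox[idx, axis])]
    csum = np.cumsum(w[order])
    split = int(np.searchsorted(csum, csum[-1] / 2.0))             # balance point
    return orb(order[:split], n_parts // 2) + orb(order[split:], n_parts // 2)

parts = orb(np.arange(len(vox)), 8)
loads = [w[p].sum() for p in parts]
print(f"parts: {len(parts)}, load imbalance (max/mean): {max(loads) / np.mean(loads):.3f}")
```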
VCSEL Applications and Simulation
NASA Technical Reports Server (NTRS)
Cheung, Samson; Goorjian, Peter; Ning, Cun-Zheng; Li, Jian-Zhong
2000-01-01
This viewgraph presentation gives an overview of Vertical Cavity Surface Emitting Laser (VCSEL) simulation and its applications. Details are given on the optical interconnection in information technology of VCSEL, the formulation of the simulation, its numeric algorithm, and the computational results.
Vortex Filaments in Grids for Scalable, Fine Smoke Simulation.
Meng, Zhang; Weixin, Si; Yinling, Qian; Hanqiu, Sun; Jing, Qin; Heng, Pheng-Ann
2015-01-01
Vortex modeling can produce attractive visual effects of dynamic fluids, which are widely applicable to dynamic media, computer games, special effects, and virtual reality systems. However, it is challenging to efficiently simulate intensive, finely detailed fluids such as smoke with rapidly increasing numbers of vortex filaments and smoke particles. The authors propose a novel vortex-filaments-in-grids scheme in which uniform grids dynamically bridge the vortex filaments and smoke particles for scalable, fine smoke simulation with macroscopic vortex structures. Using the vortex model, their approach supports a trade-off between simulation speed and scale of details. After computing the whole velocity field, external control can easily be exerted on the embedded grid to guide the vortex-based smoke motion. The experimental results demonstrate the efficiency of the proposed scheme for visually plausible smoke simulation with macroscopic vortex structures.
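The kernel operation behind any vortex-filament method is evaluating the induced velocity via a (regularized) Biot-Savart law. The sketch below uses a simple midpoint rule per segment on a square vortex ring; the kernel, smoothing parameter, and geometry are assumptions for illustration, not the authors' scheme.

```python
import numpy as np

# Velocity induced at sample points by filament segments via a regularized
# Biot-Savart law (midpoint rule per segment; eps smooths the singularity).
def induced_velocity(points, seg_a, seg_b, gamma=1.0, eps=0.05):
    v = np.zeros_like(points)
    for a, b in zip(seg_a, seg_b):
        mid, t = 0.5 * (a + b), b - a                 # segment midpoint, tangent
        r = points - mid
        r2 = np.sum(r * r, axis=1) + eps**2           # regularized distance^2
        v += gamma / (4 * np.pi) * np.cross(t, r) / r2[:, None] ** 1.5
    return v

# Square vortex ring discretized into four segments, sampled above its center.
corners = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], float)
samples = np.array([[0.5, 0.5, z] for z in (0.1, 0.5, 1.0)])
print(induced_velocity(samples, corners, np.roll(corners, -1, axis=0)))
```

In a filaments-in-grids scheme, velocities like these are accumulated on the uniform grid that also carries the smoke particles, which is where the speed/detail trade-off is made.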
Fiber Composite Sandwich Thermostructural Behavior: Computational Simulation
NASA Technical Reports Server (NTRS)
Chamis, C. C.; Aiello, R. A.; Murthy, P. L. N.
1986-01-01
Several computational levels of progressive sophistication/simplification are described to computationally simulate composite sandwich hygral, thermal, and structural behavior. The computational levels of sophistication include: (1) three-dimensional detailed finite element modeling of the honeycomb, the adhesive and the composite faces; (2) three-dimensional finite element modeling of the honeycomb assumed to be an equivalent continuous, homogeneous medium, the adhesive and the composite faces; (3) laminate theory simulation where the honeycomb (metal or composite) is assumed to consist of plies with equivalent properties; and (4) derivations of approximate, simplified equations for thermal and mechanical properties by simulating the honeycomb as an equivalent homogeneous medium. The approximate equations are combined with composite hygrothermomechanical and laminate theories to provide a simple and effective computational procedure for simulating the thermomechanical/thermostructural behavior of fiber composite sandwich structures.
Development of an Aerothermoelastic-Acoustics Simulation Capability of Flight Vehicles
NASA Technical Reports Server (NTRS)
Gupta, K. K.; Choi, S. B.; Ibrahim, A.
2010-01-01
A novel numerical, finite-element-based analysis methodology suitable for accurate and efficient simulation of practical, complex flight vehicles is presented in this paper. An associated computer code, developed in this connection, is also described in some detail. Thermal effects of high-speed flow obtained from a heat conduction analysis are incorporated in the modal analysis, which in turn affects the unsteady flow arising out of the interaction of elastic structures with the air. Numerical examples pertaining to representative problems are given in much detail, testifying to the efficacy of the advocated techniques. This is a unique implementation of temperature effects in a finite element CFD based multidisciplinary simulation analysis capability involving large-scale computations.
Numerical experiments in homogeneous turbulence
NASA Technical Reports Server (NTRS)
Rogallo, R. S.
1981-01-01
The direct simulation methods developed by Orszag and Patterson (1972) for isotropic turbulence were extended to homogeneous turbulence in an incompressible fluid subjected to uniform deformation or rotation. The results of simulations for irrotational strain (plane and axisymmetric), shear, rotation, and relaxation toward isotropy following axisymmetric strain are compared with linear theory and experimental data. Emphasis is placed on the shear flow because of its importance and because of the availability of accurate and detailed experimental data. The computed results are used to assess the accuracy of two popular models used in the closure of the Reynolds-stress equations. Data from a variety of the computed fields and the details of the numerical methods used in the simulation are also presented.
NASA Technical Reports Server (NTRS)
Yanosy, James L.
1988-01-01
Over the years, computer modeling has been used extensively in many disciplines to solve engineering problems. A set of computer program tools is proposed to assist the engineer in the various phases of the Space Station program from technology selection through flight operations. The development and application of emulation and simulation transient performance modeling tools for life support systems are examined. The results of the development and the demonstration of the utility of three computer models are presented. The first model is a detailed computer model (emulation) of a solid amine water desorbed (SAWD) CO2 removal subsystem combined with much less detailed models (simulations) of a cabin, crew, and heat exchangers. This model was used in parallel with the hardware design and test of this CO2 removal subsystem. The second model is a simulation of an air revitalization system combined with a wastewater processing system to demonstrate the capabilities to study subsystem integration. The third model is that of a Space Station total air revitalization system. The station configuration consists of a habitat module, a lab module, two crews, and four connecting nodes.
Software for Brain Network Simulations: A Comparative Study
Tikidji-Hamburyan, Ruben A.; Narayana, Vikram; Bozkus, Zeki; El-Ghazawi, Tarek A.
2017-01-01
Numerical simulations of brain networks are a critical part of our efforts in understanding brain functions under pathological and normal conditions. For several decades, the community has developed many software packages and simulators to accelerate research in computational neuroscience. In this article, we select the three most popular simulators as determined by the number of models in the ModelDB database, namely NEURON, GENESIS, and BRIAN, and perform an independent evaluation of these simulators. In addition, we study NEST, one of the lead simulators of the Human Brain Project. First, we study them based on one of the most important characteristics, the range of supported models. Our investigation reveals that brain network simulators may be biased toward supporting a specific set of models. However, all simulators tend to expand the supported range of models by providing a universal environment for the computational study of individual neurons and brain networks. Next, our investigations of the characteristics of computational architecture and efficiency indicate that all simulators compile the most computationally intensive procedures into binary code, with the aim of maximizing their computational performance. However, not all simulators provide the simplest method for module development and/or guarantee efficient binary code. Third, a study of their amenability to high-performance computing reveals that NEST can almost transparently map an existing model onto a cluster or multicore computer, while NEURON requires code modification if a model developed for a single computer has to be mapped onto a computational cluster. Interestingly, parallelization is the weakest characteristic of BRIAN, which provides no support for cluster computations and limited support for multicore computers. Fourth, we identify the level of user support and frequency of usage for all simulators. Finally, we carry out an evaluation using two case studies: a large network with simplified neural and synaptic models and a small network with detailed models. These two case studies allow us to avoid any bias toward a particular software package. The results indicate that BRIAN provides the most concise language for both cases considered. Furthermore, as expected, NEST mostly favors large network models, while NEURON is better suited for detailed models. Overall, the case studies reinforce our general observation that simulators have a bias in computational performance toward specific types of brain network models.
QCE: A Simulator for Quantum Computer Hardware
NASA Astrophysics Data System (ADS)
Michielsen, Kristel; de Raedt, Hans
2003-09-01
The Quantum Computer Emulator (QCE) described in this paper consists of a simulator of a generic, general purpose quantum computer and a graphical user interface. The latter is used to control the simulator, to define the hardware of the quantum computer and to debug and execute quantum algorithms. QCE runs in a Windows 98/NT/2000/ME/XP environment. It can be used to validate designs of physically realizable quantum processors and as an interactive educational tool to learn about quantum computers and quantum algorithms. A detailed exposition is given of the implementation of the CNOT and the Toffoli gate, the quantum Fourier transform, Grover's database search algorithm, an order finding algorithm, Shor's algorithm, a three-input adder and a number partitioning algorithm. We also review the results of simulations of an NMR-like quantum computer.
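Two of the operations the paper details, the CNOT gate and a Grover iteration, reduce to small linear-algebra steps on the state vector, as the sketch below shows for two qubits. This is the generic textbook construction, not QCE's implementation.

```python
import numpy as np

# Textbook state-vector sketch: CNOT and one Grover iteration on 2 qubits.
H1 = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]])

psi = np.zeros(4); psi[0] = 1.0                      # |00>
bell = CNOT @ (np.kron(H1, np.eye(2)) @ psi)         # entangle: (|00>+|11>)/sqrt(2)
print("after H (x) I and CNOT:", np.round(bell, 3))

oracle = np.diag([1, 1, 1, -1])                      # phase-flips the marked |11>
diffuser = 2 * np.full((4, 4), 0.25) - np.eye(4)     # inversion about the mean
psi = np.kron(H1, H1) @ psi                          # uniform superposition
psi = diffuser @ (oracle @ psi)                      # one Grover iteration
print("probabilities after one Grover step:", np.round(psi**2, 3))
```

For two qubits a single Grover iteration already concentrates all probability on the marked state, which makes this a handy sanity check for any emulator.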
NASA Technical Reports Server (NTRS)
Tweedt, Daniel L.
2014-01-01
Computational aerodynamic simulations of an 840 ft/sec tip speed Advanced Ducted Propulsor fan system were performed at five different operating points on the fan operating line, in order to provide detailed internal flow field information for use with fan acoustic prediction methods presently being developed, assessed and validated. The fan system is a sub-scale, low-noise research fan/nacelle model that has undergone extensive experimental testing in the 9- by 15-Foot Low Speed Wind Tunnel at the NASA Glenn Research Center, resulting in quality, detailed aerodynamic and acoustic measurement data. Details of the fan geometry, the computational fluid dynamics methods, the computational grids, and various computational parameters relevant to the numerical simulations are discussed. Flow field results for three of the five operating conditions simulated are presented in order to provide a representative look at the computed solutions. Each of the five fan aerodynamic simulations involved the entire fan system, excluding a long core duct section downstream of the core inlet guide vane. As a result, only fan rotational speed and system bypass ratio, set by specifying static pressure downstream of the core inlet guide vane row, were adjusted in order to set the fan operating point, leading to operating points that lie on a fan operating line and making mass flow rate a fully dependent parameter. The resulting mass flow rates are in good agreement with measurement values. The computed blade row flow fields for all five fan operating points are, in general, aerodynamically healthy. Rotor blade and fan exit guide vane flow characteristics are good, including incidence and deviation angles, chordwise static pressure distributions, blade surface boundary layers, secondary flow structures, and blade wakes. Examination of the computed flow fields reveals no excessive boundary layer separations or related secondary-flow problems. A few spanwise comparisons between computational and measurement data in the bypass duct show that they are in good agreement, thus providing a partial validation of the computational results.
Sacco, Federica; Paun, Bruno; Lehmkuhl, Oriol; Iles, Tinen L; Iaizzo, Paul A; Houzeaux, Guillaume; Vázquez, Mariano; Butakoff, Constantine; Aguado-Sierra, Jazmin
2018-06-11
Computational modelling plays an important role in right ventricular (RV) haemodynamic analysis. However, current approaches employ smoothed ventricular anatomies. The aim of this study is to characterise RV haemodynamics including detailed endocardial structures like trabeculae, the moderator band and papillary muscles (PMs). Four paired detailed and smoothed RV endocardium models (two male and two female) were reconstructed from high-resolution magnetic resonance images (MRI) of ex-vivo human hearts. Detailed models include structures with ≥1 mm² cross-sectional area. Haemodynamic characterisation was done by computational fluid dynamics (CFD) simulations with steady and transient inflows, using high performance computing (HPC). The differences between the flows in smoothed and detailed models were assessed using the Q-criterion for vorticity quantification, the pressure drop between inlet and outlet, and the wall shear stress (WSS). Results demonstrated that detailed endocardial structures increase the degree of intra-ventricular pressure drop, decrease the WSS and disrupt the dominant vortex, creating secondary small vortices. Increasingly turbulent blood flow was observed in the detailed RVs. Female RVs were less trabeculated and presented lower pressure drops than the male ones. In conclusion, neglecting endocardial structures in RV haemodynamic models may lead to inaccurate conclusions about the pressures, stresses, and blood flow behaviour in the cavity.
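Of the quantities used in the comparison, the Q-criterion is the easiest to show in code: Q = ½(‖Ω‖² − ‖S‖²) from the velocity-gradient tensor, with Q > 0 marking rotation-dominated regions. The velocity field below is synthetic; grid spacing and sampling are assumptions, not the study's data.

```python
import numpy as np

# Q-criterion from a velocity field sampled on a uniform grid:
# Q = 0.5 * (||Omega||^2 - ||S||^2), summed over tensor components.
def q_criterion(u, v, w, dx):
    grads = [np.gradient(c, dx) for c in (u, v, w)]   # grads[i][j] = d u_i / d x_j
    q = np.zeros_like(u)
    for i in range(3):
        for j in range(3):
            s = 0.5 * (grads[i][j] + grads[j][i])     # strain-rate tensor
            o = 0.5 * (grads[i][j] - grads[j][i])     # rotation-rate tensor
            q += 0.5 * (o**2 - s**2)
    return q

x = np.linspace(-1, 1, 32)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
u, v, w = -Y, X, np.zeros_like(X)                     # solid-body-like vortex
print(f"max Q in a synthetic vortex: {q_criterion(u, v, w, x[1] - x[0]).max():.2f}")
```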
The Numerical Propulsion System Simulation: An Overview
NASA Technical Reports Server (NTRS)
Lytle, John K.
2000-01-01
Advances in computational technology and in physics-based modeling are making large-scale, detailed simulations of complex systems possible within the design environment. For example, the integration of computing, communications, and aerodynamics has reduced the time required to analyze major propulsion system components from days and weeks to minutes and hours. This breakthrough has enabled the detailed simulation of major propulsion system components to become a routine part of designing systems, providing the designer with critical information about the components early in the design process. This paper describes the development of the numerical propulsion system simulation (NPSS), a modular and extensible framework for the integration of multicomponent and multidisciplinary analysis tools using geographically distributed resources such as computing platforms, data bases, and people. The analysis is currently focused on large-scale modeling of complete aircraft engines. This will provide the product developer with a "virtual wind tunnel" that will reduce the number of hardware builds and tests required during the development of advanced aerospace propulsion systems.
Signal design study for shuttle/TDRSS Ku-band uplink
NASA Technical Reports Server (NTRS)
1976-01-01
The adequacy of the signal design approach chosen for the TDRSS/orbiter uplink was evaluated. Critical functions and/or components associated with the baseline design were identified, and design alternatives were developed for those areas considered high risk. A detailed set of RF and signal processing performance specifications for the orbiter hardware associated with the TDRSS/orbiter Ku-band uplink was analyzed. The performance of detailed designs of the PN despreader, the PSK carrier synchronization loop, and the symbol synchronizer is characterized. The performance of the downlink signal was studied by means of computer simulation to obtain a realistic determination of bit-error-rate degradations. The three-channel PM downlink signal was characterized in detail by means of analysis and computer simulation.
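The kind of Monte Carlo bit-error-rate estimation referred to can be sketched for an idealized BPSK link over AWGN and checked against the closed-form error rate; the report's actual simulations layer despreader and synchronization-loop degradations on top of such a baseline.

```python
import numpy as np
from math import erfc, sqrt

# Monte Carlo BER for ideal BPSK over AWGN, compared with the closed-form
# rate Q(sqrt(2 Eb/N0)) = 0.5 * erfc(sqrt(Eb/N0)).
rng = np.random.default_rng(2)
n = 200_000
for ebn0_db in (2, 4, 6, 8):
    ebn0 = 10 ** (ebn0_db / 10)
    bits = rng.integers(0, 2, n)
    tx = 2.0 * bits - 1.0                                  # BPSK mapping
    rx = tx + rng.normal(0.0, np.sqrt(1 / (2 * ebn0)), n)  # unit-energy symbols
    ber = np.mean((rx > 0).astype(int) != bits)
    print(f"Eb/N0 = {ebn0_db} dB: simulated {ber:.2e}, ideal {0.5 * erfc(sqrt(ebn0)):.2e}")
```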
Improving a Computer Networks Course Using the Partov Simulation Engine
ERIC Educational Resources Information Center
Momeni, B.; Kharrazi, M.
2012-01-01
Computer networks courses are hard to teach as there are many details in the protocols and techniques involved that are difficult to grasp. Employing programming assignments as part of the course helps students to obtain a better understanding and gain further insight into the theoretical lectures. In this paper, the Partov simulation engine and…
The Numerical Propulsion System Simulation: A Multidisciplinary Design System for Aerospace Vehicles
NASA Technical Reports Server (NTRS)
Lytle, John K.
1999-01-01
Advances in computational technology and in physics-based modeling are making large scale, detailed simulations of complex systems possible within the design environment. For example, the integration of computing, communications, and aerodynamics has reduced the time required to analyze major propulsion system components from days and weeks to minutes and hours. This breakthrough has enabled the detailed simulation of major propulsion system components to become a routine part of the design process and to provide the designer with critical information about the components early in the design process. This paper describes the development of the Numerical Propulsion System Simulation (NPSS), a multidisciplinary system of analysis tools that is focused on extending the simulation capability from components to the full system. This will provide the product developer with a "virtual wind tunnel" that will reduce the number of hardware builds and tests required during the development of advanced aerospace propulsion systems.
Exact and efficient simulation of concordant computation
NASA Astrophysics Data System (ADS)
Cable, Hugo; Browne, Daniel E.
2015-11-01
Concordant computation is a circuit-based model of quantum computation for mixed states that assumes that all correlations within the register are discord-free (i.e., the correlations are essentially classical) at every step of the computation. The question of whether concordant computation always admits efficient simulation by a classical computer was first considered by Eastin in arXiv:quant-ph/1006.4402v1, where an answer in the affirmative was given for circuits consisting only of one- and two-qubit gates. Building on this work, we develop the theory of classical simulation of concordant computation. We present a new framework for understanding such computations, argue that a larger class of concordant computations admit efficient simulation, and provide alternative proofs for the main results of arXiv:quant-ph/1006.4402v1 with an emphasis on the exactness of simulation, which is crucial for this model. We include detailed analysis of the arithmetic complexity for solving equations in the simulation, as well as extensions to larger gates and qudits. We explore the limitations of our approach, and discuss the challenges faced in developing efficient classical simulation algorithms for all concordant computations.
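A toy illustration of why discord-free registers are classically trackable, assuming the simple special case of a state diagonal in a fixed product basis and permutation gates only: the register is just a probability distribution over bitstrings, and gates act as cheap classical maps. This is far narrower than the general concordant model, but it shows the bookkeeping.

```python
# Discord-free special case: the state is a distribution over bitstrings,
# and permutation gates (X, CNOT) act on it without any exponential vector.
def apply_x(dist, q):
    """Bit-flip gate on qubit q."""
    return {b[:q] + (1 - b[q],) + b[q + 1:]: p for b, p in dist.items()}

def apply_cnot(dist, ctrl, tgt):
    """CNOT as a permutation of bitstrings."""
    out = {}
    for b, p in dist.items():
        c = list(b)
        if c[ctrl]:
            c[tgt] ^= 1
        out[tuple(c)] = out.get(tuple(c), 0.0) + p
    return out

# Classically correlated 3-qubit register: 50/50 mixture of |000> and |110>.
dist = {(0, 0, 0): 0.5, (1, 1, 0): 0.5}
dist = apply_cnot(apply_x(dist, 2), 1, 2)
print(dist)   # {(0, 0, 1): 0.5, (1, 1, 0): 0.5} -- tracked at classical cost
```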
Lean flammability limit of downward propagating hydrogen-air flames
NASA Technical Reports Server (NTRS)
Patnaik, G.; Kailasanath, K.
1992-01-01
Detailed multidimensional numerical simulations that include the effects of wall heat losses have been performed to study the dynamics of downward flame propagation and extinguishment in lean hydrogen-air mixtures. The computational results show that a downward propagating flame in an isothermal channel has a flammability limit of around 9.75 percent. This is in excellent agreement with experimental results. Also in excellent agreement are the detailed observations of the flame behavior at the point of extinguishment. The primary conclusion of this work is that detailed numerical simulations that include wall heat losses and the effect of gravity can adequately simulate the dynamics of the extinguishment process in downward-propagating hydrogen-air flames. These simulations can be examined in detail to gain understanding of the actual extinction process.
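The heat-loss extinction mechanism can be caricatured in one dimension: a reaction-diffusion temperature front with ignition-type kinetics and a volumetric loss term stalls once the heat-release parameter (a crude stand-in for leaning the mixture) drops too low. Everything below is an illustrative toy, not the paper's detailed multidimensional H2-air chemistry.

```python
import numpy as np

# Toy 1D flame with heat loss: theta_t = theta_xx + q*w - h*theta, where the
# reaction rate w switches on above an ignition temperature and consumes fuel Y.
def front_position(q, h=0.05, n=600, L=120.0, steps=40000):
    dx = L / n
    dt = 0.2 * dx**2                                 # explicit-diffusion stability
    x = np.linspace(0, L, n)
    theta = np.exp(-((x - 5.0) / 2.0) ** 2)          # hot ignition kernel
    Y = np.ones(n)                                   # fuel mass fraction
    for _ in range(steps):
        w = np.where(theta > 0.3, 5.0 * Y, 0.0)      # ignition-temperature kinetics
        lap = np.zeros(n)
        lap[1:-1] = (theta[2:] - 2 * theta[1:-1] + theta[:-2]) / dx**2
        theta = np.clip(theta + dt * (lap + q * w - h * theta), 0.0, None)
        Y = np.clip(Y - dt * w, 0.0, None)
    burned = np.where(Y < 0.5)[0]
    return x[burned[-1]] if burned.size else 0.0

for q in (1.0, 0.6, 0.3):    # decreasing heat release ~ leaner mixture
    print(f"q = {q}: burned region extends to x = {front_position(q):.1f}")
```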
NASA Technical Reports Server (NTRS)
Burgin, G. H.; Fogel, L. J.; Phelps, J. P.
1975-01-01
A technique for computer simulation of air combat is described. Volume 1 describes the computer program and its development in general terms. Two versions of the program exist. Both incorporate a logic for selecting and executing air combat maneuvers with performance models of specific fighter aircraft. In the batch processing version, the flight paths of two aircraft engaged in interactive aerial combat and controlled by the same logic are computed. The real-time version permits human pilots to fly air-to-air combat against the adaptive maneuvering logic (AML) in the Langley Differential Maneuvering Simulator (DMS). Volume 2 consists of a detailed description of the computer programs.
PCEMCAN - Probabilistic Ceramic Matrix Composites Analyzer: User's Guide, Version 1.0
NASA Technical Reports Server (NTRS)
Shah, Ashwin R.; Mital, Subodh K.; Murthy, Pappu L. N.
1998-01-01
PCEMCAN (Probabilistic CEramic Matrix Composites ANalyzer) is an integrated computer code developed at NASA Lewis Research Center that simulates uncertainties associated with the constituent properties, manufacturing process, and geometric parameters of fiber-reinforced ceramic matrix composites and quantifies their random thermomechanical behavior. The PCEMCAN code can perform deterministic as well as probabilistic analyses to predict thermomechanical properties. This user's guide details the step-by-step procedure to create an input file and to update or modify the material properties database required to run the PCEMCAN computer code. An overview of the geometric conventions, micromechanical unit cell, nonlinear constitutive relationship, and probabilistic simulation methodology is also provided in the manual. Fast probability integration as well as Monte Carlo simulation methods are available for the uncertainty simulation. The various options available in the code to simulate probabilistic material properties and to quantify the sensitivity of the primitive random variables are described. Deterministic as well as probabilistic results are illustrated using demonstration problems. For a detailed theoretical description of the deterministic and probabilistic analyses, the user is referred to the companion documents "Computational Simulation of Continuous Fiber-Reinforced Ceramic Matrix Composite Behavior," NASA TP-3602, 1996, and "Probabilistic Micromechanics and Macromechanics for Ceramic Matrix Composites," NASA TM-4766, June 1997.
Computer Series, 98. Electronics for Scientists: A Computer-Intensive Approach.
ERIC Educational Resources Information Center
Scheeline, Alexander; Mork, Brian J.
1988-01-01
Reports the design for a principles-before-details presentation of electronics for an instrumental analysis class. Uses computers for data collection and simulations. Requires one semester with two 2.5-hour periods and two lectures per week. Includes lab and lecture syllabi. (MVL)
User's manual for a computer program for simulating intensively managed allowable cut.
Robert W. Sassaman; Ed Holt; Karl Bergsvik
1972-01-01
Detailed operating instructions are described for SIMAC, a computerized forest simulation model which calculates the allowable cut assuming volume regulation for forests with intensively managed stands. A sample problem illustrates the required inputs and expected output. SIMAC is written in FORTRAN IV and runs on a CDC 6400 computer with a SCOPE 3.3 operating system....
Electric/Hybrid Vehicle Simulation
NASA Technical Reports Server (NTRS)
Slusser, R. A.; Chapman, C. P.; Brennand, J. P.
1985-01-01
ELVEC computer program provides vehicle designer with simulation tool for detailed studies of electric and hybrid vehicle performance and cost. ELVEC simulates performance of user-specified electric or hybrid vehicle under user-specified driving schedule profile or operating schedule. ELVEC performs vehicle design and life-cycle cost analysis.
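ELVEC itself is not reproduced here; purely as a hedged sketch of the kind of drive-cycle bookkeeping such a vehicle simulator performs, the following integrates tractive power over a synthetic driving schedule (all vehicle parameters and the schedule are hypothetical, not ELVEC's):

```python
import numpy as np

# Hypothetical vehicle parameters, not ELVEC's actual models.
MASS = 1400.0   # vehicle mass, kg
CDA = 0.65      # drag area Cd*A, m^2
CRR = 0.009     # rolling-resistance coefficient
RHO = 1.2       # air density, kg/m^3
G = 9.81        # gravity, m/s^2
ETA = 0.85      # battery-to-wheel efficiency

def cycle_energy(t, v):
    """Integrate tractive energy over a speed-vs-time driving schedule."""
    a = np.gradient(v, t)                         # acceleration, m/s^2
    force = MASS * a + 0.5 * RHO * CDA * v**2 + CRR * MASS * G * (v > 0)
    power = np.maximum(force * v, 0.0) / ETA      # regeneration ignored here
    return np.trapz(power, t) / 3.6e6             # J -> kWh

t = np.linspace(0, 600, 601)                      # 10-minute synthetic cycle
v = 15 * np.abs(np.sin(2 * np.pi * t / 300))      # speed profile, m/s
print(f"cycle energy: {cycle_energy(t, v):.2f} kWh")
```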
ADDRESSING ENVIRONMENTAL ENGINEERING CHALLENGES WITH COMPUTATIONAL FLUID DYNAMICS
This paper discusses the status and application of Computational Fluid Dynamics (CFD) models to address environmental engineering challenges for more detailed understanding of air pollutant source emissions, atmospheric dispersion, and resulting human exposure. CFD simulations ...
Modeling Advance Life Support Systems
NASA Technical Reports Server (NTRS)
Pitts, Marvin; Sager, John; Loader, Coleen; Drysdale, Alan
1996-01-01
Activities this summer consisted of two projects that involved computer simulation of bioregenerative life support systems for space habitats. Students in the Space Life Science Training Program (SLSTP) used the simulation, Space Station, to learn about relationships between humans, fish, plants, and microorganisms in a closed environment. One student completed a six-week project to modify the simulation by converting the microbes from anaerobic to aerobic and then balancing the simulation's life support system. A detailed computer simulation of a closed lunar station using bioregenerative life support was attempted, but not enough was known about system restraints and constants in plant growth, bioreactor design for space habitats, and food preparation to develop an integrated model with any confidence. Instead of a completed detailed model with broad assumptions concerning the unknown system parameters, a framework for an integrated model was outlined and work begun on plant and bioreactor simulations. The NASA sponsors and the summer Fellow were satisfied with the progress made during the 10 weeks, and we have planned future cooperative work.
Bistatic passive radar simulator with spatial filtering subsystem
NASA Astrophysics Data System (ADS)
Hossa, Robert; Szlachetko, Boguslaw; Lewandowski, Andrzej; Górski, Maksymilian
2009-06-01
The purpose of this paper is to briefly introduce the structure and features of a virtual passive FM radar developed in the Matlab numerical computing environment, and to present alternative ways of evaluating its performance. The proposed solution is based on an analytic representation of the transmitted direct signals and the reflected echo signals. As a spatial filtering subsystem, a beamforming network of ULA and UCA dipole configurations dedicated to the bistatic radar concept is considered, and computationally efficient procedures are presented in detail. Finally, exemplary results of computer simulations with the virtual simulator are provided and discussed.
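As a rough illustration of the ULA spatial filtering mentioned above, a minimal narrowband delay-and-sum beamformer; the element count, spacing, and steering angle are arbitrary assumptions, not values from the paper:

```python
import numpy as np

def ula_steering(n_elements, d_over_lambda, theta_rad):
    """Narrowband steering vector of a uniform linear array (ULA)."""
    k = np.arange(n_elements)
    return np.exp(-2j * np.pi * d_over_lambda * k * np.sin(theta_rad))

def beampattern(weights, d_over_lambda, angles):
    """Array response |w^H a(theta)| over a grid of arrival angles."""
    n = len(weights)
    return np.array([np.abs(weights.conj() @ ula_steering(n, d_over_lambda, a))
                     for a in angles])

# Steer an 8-element, half-wavelength-spaced ULA toward 20 degrees.
w = ula_steering(8, 0.5, np.deg2rad(20.0)) / 8
angles = np.deg2rad(np.linspace(-90, 90, 361))
p = beampattern(w, 0.5, angles)
print("peak response at", np.rad2deg(angles[np.argmax(p)]), "deg")
```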
Effects of Geometric Details on Slat Noise Generation and Propagation
NASA Technical Reports Server (NTRS)
Khorrami, Mehdi R.; Lockard, David P.
2006-01-01
The relevance of geometric details to the generation and propagation of noise from leading-edge slats is considered. Typically, such details are omitted in computational simulations and model-scale experiments thereby creating ambiguities in comparisons with acoustic results from flight tests. The current study uses two-dimensional, computational simulations in conjunction with a Ffowcs Williams-Hawkings (FW-H) solver to investigate the effects of previously neglected slat "bulb" and "blade" seals on the local flow field and the associated acoustic radiation. The computations clearly show that the presence of the "blade" seal at the cusp significantly changes the slat cove flow dynamics, reduces the amplitudes of the radiated sound, and to a lesser extent, alters the directivity beneath the airfoil. Furthermore, it is demonstrated that a modest extension of the baseline "blade" seal further enhances the suppression of slat noise. As a side issue, the utility and equivalence of FW-H methodology for calculating far-field noise as opposed to a more direct approach is examined and demonstrated.
NASA Astrophysics Data System (ADS)
Shabliy, L. S.; Malov, D. V.; Bratchinin, D. S.
2018-01-01
This article describes a technique for simulating valves of the pneumatic-hydraulic system of a liquid-propellant rocket engine (LPRE). The technique is based on computational fluid dynamics (CFD). To demonstrate its capabilities, a simulation is performed of a differential valve used in the supply pipes of fuel components in a closed-circuit LPRE. A schematic and the operation algorithm of this valve type are described in detail, as are the assumptions made in constructing the geometric model of the valve's hydraulic path. The calculation procedure for determining valve hydraulic characteristics is given, and hydraulic characteristics of the valve computed with it are presented. Some ways of using the described simulation technique to research the static and dynamic characteristics of elements of the pneumatic-hydraulic system of an LPRE are proposed.
Host computer software specifications for a zero-g payload manhandling simulator
NASA Technical Reports Server (NTRS)
Wilson, S. W.
1986-01-01
The HP PASCAL source code was developed for the Mission Planning and Analysis Division (MPAD) of NASA/JSC, and takes the place of detailed flow charts defining the host computer software specifications for MANHANDLE, a digital/graphical simulator that can be used to analyze the dynamics of on-orbit (zero-g) payload manhandling operations. Input and output data for representative test cases are included.
Numerical Simulation Of Flow Through An Artificial Heart
NASA Technical Reports Server (NTRS)
Rogers, Stuart; Kutler, Paul; Kwak, Dochan; Kiris, Centin
1991-01-01
Research in both artificial hearts and fluid dynamics benefits from computational studies. Algorithm that implements Navier-Stokes equations of flow extended to simulate flow of viscous, incompressible blood through artificial heart. Ability to compute details of such flow important for two reasons: internal flows with moving boundaries of academic interest in their own right, and many of deficiencies of artificial hearts attributable to dynamics of flow.
Computer animation challenges for computational fluid dynamics
NASA Astrophysics Data System (ADS)
Vines, Mauricio; Lee, Won-Sook; Mavriplis, Catherine
2012-07-01
Computer animation requirements differ from those of traditional computational fluid dynamics (CFD) investigations in that visual plausibility and rapid frame update rates trump physical accuracy. We present an overview of the main techniques for fluid simulation in computer animation, starting with Eulerian grid approaches, the Lattice Boltzmann method, Fourier transform techniques and Lagrangian particle introduction. Adaptive grid methods, precomputation of results for model reduction, parallelisation and computation on graphical processing units (GPUs) are reviewed in the context of accelerating simulation computations for animation. A survey of current specific approaches for the application of these techniques to the simulation of smoke, fire, water, bubbles, mixing, phase change and solid-fluid coupling is also included. Adding plausibility to results through particle introduction, turbulence detail and concentration on regions of interest by level set techniques has elevated the degree of accuracy and realism of recent animations. Basic approaches are described here. Techniques to control the simulation to produce a desired visual effect are also discussed. Finally, some references to rendering techniques and haptic applications are mentioned to provide the reader with a complete picture of the challenges of simulating fluids in computer animation.
[Computer simulation by passenger wound analysis of vehicle collision].
Zou, Dong-Hua; Liu, Ning-Guo; Shen, Jie; Zhang, Xiao-Yun; Jin, Xian-Long; Chen, Yi-Jiu
2006-08-15
To reconstruct the course of a vehicle collision and thereby provide a reference for forensic identification and the disposal of traffic accidents. By analyzing evidence left both on passengers and on vehicles, a momentum-impulse technique combined with multi-body dynamics was applied to simulate the motion and injury of the passengers as well as the track of the vehicles. The computer simulation model accurately reconstructed the phases of the traffic collision, which coincided with details found by the forensic investigation. Computer simulation is helpful and feasible for forensic identification in traffic accidents.
Trends in Programming Languages for Neuroscience Simulations
Davison, Andrew P.; Hines, Michael L.; Muller, Eilif
2009-01-01
Neuroscience simulators allow scientists to express models in terms of biological concepts, without having to concern themselves with low-level computational details of their implementation. The expressiveness, power and ease-of-use of the simulator interface is critical in efficiently and accurately translating ideas into a working simulation. We review long-term trends in the development of programmable simulator interfaces, and examine the benefits of moving from proprietary, domain-specific languages to modern dynamic general-purpose languages, in particular Python, which provide neuroscientists with an interactive and expressive simulation development environment and easy access to state-of-the-art general-purpose tools for scientific computing. PMID:20198154
Trends in programming languages for neuroscience simulations.
Davison, Andrew P; Hines, Michael L; Muller, Eilif
2009-01-01
Neuroscience simulators allow scientists to express models in terms of biological concepts, without having to concern themselves with low-level computational details of their implementation. The expressiveness, power and ease-of-use of the simulator interface is critical in efficiently and accurately translating ideas into a working simulation. We review long-term trends in the development of programmable simulator interfaces, and examine the benefits of moving from proprietary, domain-specific languages to modern dynamic general-purpose languages, in particular Python, which provide neuroscientists with an interactive and expressive simulation development environment and easy access to state-of-the-art general-purpose tools for scientific computing.
User's guide to resin infusion simulation program in the FORTRAN language
NASA Technical Reports Server (NTRS)
Weideman, Mark H.; Hammond, Vince H.; Loos, Alfred C.
1992-01-01
RTMCL is a user-friendly computer code which simulates the manufacture of fabric composites by the resin infusion process. The computer code is based on the process simulation model described in reference 1. Included in the user's guide is a detailed step-by-step description of how to run the program and how to enter and modify the input data set. Sample input and output files are included along with an explanation of the results. Finally, a complete listing of the program is provided.
Computing in Secondary Physics at Armdale, W.A.
ERIC Educational Resources Information Center
Smith, Clifton L.
1976-01-01
An Australian secondary school physics course utilizing an electronic programmable calculator and computer is described. Calculation techniques and functions, programming techniques, and simulation of physical systems are detailed. A summary of student responses to the program is included. (BT)
NASA Astrophysics Data System (ADS)
Sanchez, Beatriz; Santiago, Jose Luis; Martilli, Alberto; Martin, Fernando; Borge, Rafael; Quaassdorff, Christina; de la Paz, David
2017-08-01
Air quality management requires more detailed studies of air pollution at urban and local scales over long periods of time. This work focuses on obtaining the spatial distribution of NOx concentration averaged over several days in a heavily trafficked urban area in Madrid (Spain) using a computational fluid dynamics (CFD) model. A methodology based on a weighted average of CFD simulations is applied, computing the time evolution of NOx dispersion as a sequence of steady-state scenarios that take into account the actual atmospheric conditions. The emission inputs are estimated from a traffic emission model, and the meteorological information used is derived from a mesoscale model. Finally, the computed concentration map correlates well with 72 passive samplers deployed in the research area. This work reveals the potential of using urban mesoscale simulations together with detailed traffic emissions to provide accurate maps of pollutant concentration at the microscale using CFD simulations.
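A minimal sketch of the weighted-average step described above, with hypothetical hour counts standing in for the frequency of each steady-state meteorological scenario:

```python
import numpy as np

def weighted_average_map(conc_maps, hours_per_scenario):
    """Average steady-state CFD concentration fields, weighting each
    scenario by the fraction of the study period it represents."""
    w = np.asarray(hours_per_scenario, dtype=float)
    w /= w.sum()
    return np.tensordot(w, np.asarray(conc_maps), axes=1)

# Placeholder fields: one 50x50 NOx map per wind sector/speed class.
maps = [np.random.rand(50, 50) for _ in range(8)]
hours = [120, 80, 60, 200, 90, 50, 40, 100]   # hypothetical occurrence hours
avg = weighted_average_map(maps, hours)
print(avg.shape)
```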
A computational approach for coupled 1D and 2D/3D CFD modelling of pulse Tube cryocoolers
NASA Astrophysics Data System (ADS)
Fang, T.; Spoor, P. S.; Ghiaasiaan, S. M.
2017-12-01
The physics behind Stirling-type cryocoolers is complicated. One-dimensional (1D) simulation tools offer limited detail and accuracy, in particular for cryocoolers that have non-linear configurations. Multi-dimensional computational fluid dynamics (CFD) methods are useful but are computationally expensive when simulating cryocooler systems in their entirety. In view of the fact that some components of a cryocooler, e.g., inertance tubes and compliance tanks, can be modelled as 1D components with little loss of critical information, a 1D-2D/3D coupled model was developed. Accordingly, one-dimensional-like components are represented by specifically developed routines. These routines can be coupled to CFD codes and provide boundary conditions for 2D/3D CFD simulations. The developed coupled model, while preserving sufficient flow field details, is two orders of magnitude faster than equivalent 2D/3D CFD models. The predictions show good agreement with experimental data and the 2D/3D CFD model.
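A minimal lumped stand-in for the kind of 1D inertance-tube/compliance-tank routine that hands boundary conditions back to the 2D/3D CFD domain; the parameters, time step, and forcing are illustrative assumptions only:

```python
import numpy as np

def inertance_tank_update(p_tank, mdot, p_face, dt,
                          L=5.0e6, R=2.0e8, C=1.0e-9):
    """Advance tube mass flow and tank pressure one explicit step and
    return the state handed back to the CFD face (hypothetical L, R, C)."""
    dmdot = (p_face - p_tank - R * mdot) / L    # tube momentum balance
    mdot += dmdot * dt
    p_tank += (mdot / C) * dt                   # tank mass balance
    return p_tank, mdot

p_tank, mdot = 1.0e6, 0.0
for n in range(1000):                           # CFD would supply p_face
    p_face = 1.0e6 + 5.0e4 * np.sin(2 * np.pi * 60 * n * 1e-5)
    p_tank, mdot = inertance_tank_update(p_tank, mdot, p_face, 1e-5)
print(p_tank, mdot)
```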
Elucidating Reaction Mechanisms on Quantum Computers
NASA Astrophysics Data System (ADS)
Wiebe, Nathan; Reiher, Markus; Svore, Krysta; Wecker, Dave; Troyer, Matthias
We show how a quantum computer can be employed to elucidate reaction mechanisms in complex chemical systems, using the open problem of biological nitrogen fixation in nitrogenase as an example. We discuss how quantum computers can augment classical-computer simulations for such problems, to significantly increase their accuracy and enable hitherto intractable simulations. Detailed resource estimates show that, even when taking into account the substantial overhead of quantum error correction, and the need to compile into discrete gate sets, the necessary computations can be performed in reasonable time on small quantum computers. This demonstrates that quantum computers will realistically be able to tackle important problems in chemistry that are both scientifically and economically significant.
NASA Technical Reports Server (NTRS)
Tweedt, Daniel L.
2014-01-01
Computational Aerodynamic simulations of a 1215 ft/sec tip speed transonic fan system were performed at five different operating points on the fan operating line, in order to provide detailed internal flow field information for use with fan acoustic prediction methods presently being developed, assessed and validated. The fan system is a sub-scale, low-noise research fan/nacelle model that has undergone extensive experimental testing in the 9- by 15-foot Low Speed Wind Tunnel at the NASA Glenn Research Center. Details of the fan geometry, the computational fluid dynamics methods, the computational grids, and various computational parameters relevant to the numerical simulations are discussed. Flow field results for three of the five operating points simulated are presented in order to provide a representative look at the computed solutions. Each of the five fan aerodynamic simulations involved the entire fan system, which for this model did not include a split flow path with core and bypass ducts. As a result, it was only necessary to adjust fan rotational speed in order to set the fan operating point, leading to operating points that lie on a fan operating line and making mass flow rate a fully dependent parameter. The resulting mass flow rates are in good agreement with measurement values. Computed blade row flow fields at all fan operating points are, in general, aerodynamically healthy. Rotor blade and fan exit guide vane flow characteristics are good, including incidence and deviation angles, chordwise static pressure distributions, blade surface boundary layers, secondary flow structures, and blade wakes. Examination of the flow fields at all operating conditions reveals no excessive boundary layer separations or related secondary-flow problems.
Toma, Milan; Bloodworth, Charles H; Einstein, Daniel R; Pierce, Eric L; Cochran, Richard P; Yoganathan, Ajit P; Kunzelman, Karyn S
2016-12-01
The diversity of mitral valve (MV) geometries and multitude of surgical options for correction of MV diseases necessitates the use of computational modeling. Numerical simulations of the MV would allow surgeons and engineers to evaluate repairs, devices, procedures, and concepts before performing them and before moving on to more costly testing modalities. Constructing, tuning, and validating these models rely upon extensive in vitro characterization of valve structure, function, and response to change due to diseases. Micro-computed tomography (μCT) allows for unmatched spatial resolution for soft tissue imaging. However, it is still technically challenging to obtain an accurate geometry of the diastolic MV. We discuss here the development of a novel technique for treating MV specimens with glutaraldehyde fixative in order to minimize geometric distortions in preparation for μCT scanning. The technique provides a resulting MV geometry which is significantly more detailed in chordal structure, accurate in leaflet shape, and closer to its physiological diastolic geometry. In this paper, computational fluid-structure interaction (FSI) simulations are used to show the importance of more detailed subject-specific MV geometry with 3D chordal structure to simulate a proper closure validated against μCT images of the closed valve. Two computational models, before and after use of the aforementioned technique, are used to simulate closure of the MV.
Experimental and Computational Study of Sonic and Supersonic Jet Plumes
NASA Technical Reports Server (NTRS)
Venkatapathy, E.; Naughton, J. W.; Fletcher, D. G.; Edwards, Thomas A. (Technical Monitor)
1994-01-01
The study of sonic and supersonic jet plumes is relevant to understanding such phenomena as jet noise, plume signatures, and rocket base-heating and radiation. Jet plumes are simple to simulate and yet have complex flow structures such as Mach disks, triple points, shear layers, barrel shocks, shock-shear-layer interaction, etc. Experimental and computational simulations of sonic and supersonic jet plumes have been performed for under- and over-expanded, axisymmetric plume conditions. The computational simulations compare very well with the experimental schlieren observations. Experimental data such as temperature measurements with hot-wire probes are yet to be measured and will be compared with computed values. Extensive analysis of the computational simulations presents a clear picture of how the complex flow structure develops and the conditions under which self-similar flow structures evolve. From the computations, the plume structure can be further classified into many sub-groups. In the proposed paper, detailed results from the experimental and computational simulations for single, axisymmetric, under- and over-expanded, sonic and supersonic plumes will be compared and the fluid dynamic aspects of the flow structures will be discussed.
How to Create, Modify, and Interface Aspen In-House and User Databanks for System Configuration 1:
DOE Office of Scientific and Technical Information (OSTI.GOV)
Camp, D W
2000-10-27
The goal of this document is to provide detailed instructions to create, modify, interface, and test Aspen User and In-House databanks with minimal frustration. The level of the instructions is aimed at a novice Aspen Plus simulation user who is neither a programming nor a computer-system expert. The instructions are tailored to Version 10.1 of Aspen Plus and the specific computing configuration summarized in the title of this document and detailed in Section 2. Many details of setting up databanks depend on the specifics of the computing environment, such as the machines, operating systems, command languages, directory structures, inter-computer communications software, the version of the Aspen Engine and Graphical User Interface (GUI), and the directory structure of how these were installed.
Fluid-Structure Interaction Analysis of Ruptured Mitral Chordae Tendineae.
Toma, Milan; Bloodworth, Charles H; Pierce, Eric L; Einstein, Daniel R; Cochran, Richard P; Yoganathan, Ajit P; Kunzelman, Karyn S
2017-03-01
The chordal structure is a part of mitral valve geometry that has been commonly neglected or simplified in computational modeling due to its complexity. However, these simplifications cannot be used when investigating the roles of individual chordae tendineae in mitral valve closure. For the first time, advancements in imaging, computational techniques, and hardware technology make it possible to create models of the mitral valve without simplifications to its complex geometry, and to quickly run validated computer simulations that more realistically capture its function. Such simulations can then be used for a detailed analysis of chordae-related diseases. In this work, a comprehensive model of a subject-specific mitral valve with detailed chordal structure is used to analyze the distinct role played by individual chordae in closure of the mitral valve leaflets. Mitral closure was simulated for 51 possible chordal rupture points. Resultant regurgitant orifice area and strain change in the chordae at the papillary muscle tips were then calculated to examine the role of each ruptured chorda in the mitral valve closure. For certain subclassifications of chordae, regurgitant orifice area was found to trend positively with ruptured chordal diameter, and strain changes correlated negatively with regurgitant orifice area. Further advancements in clinical imaging modalities, coupled with the next generation of computational techniques will enable more physiologically realistic simulations.
Fluid-Structure Interaction Analysis of Ruptured Mitral Chordae Tendineae
Toma, Milan; Bloodworth, Charles H.; Pierce, Eric L.; Einstein, Daniel R.; Cochran, Richard P.; Yoganathan, Ajit P.; Kunzelman, Karyn S.
2016-01-01
The chordal structure is a part of mitral valve geometry that has been commonly neglected or simplified in computational modeling due to its complexity. However, these simplifications cannot be used when investigating the roles of individual chordae tendineae in mitral valve closure. For the first time, advancements in imaging, computational techniques, and hardware technology make it possible to create models of the mitral valve without simplifications to its complex geometry, and to quickly run validated computer simulations that more realistically capture its function. Such simulations can then be used for a detailed analysis of chordae-related diseases. In this work, a comprehensive model of a subject-specific mitral valve with detailed chordal structure is used to analyze the distinct role played by individual chordae in closure of the mitral valve leaflets. Mitral closure was simulated for 51 possible chordal rupture points. Resultant regurgitant orifice area and strain change in the chordae at the papillary muscle tips were then calculated to examine the role of each ruptured chorda in the mitral valve closure. For certain subclassifications of chordae, regurgitant orifice area was found to trend positively with ruptured chordal diameter, and strain changes correlated negatively with regurgitant orifice area. Further advancements in clinical imaging modalities, coupled with the next generation of computational techniques will enable more physiologically realistic simulations. PMID:27624659
GPS synchronized power system phase angle measurements
NASA Astrophysics Data System (ADS)
Wilson, Robert E.; Sterlina, Patrick S.
1994-09-01
This paper discusses the use of Global Positioning System (GPS) synchronized equipment for the measurement and analysis of key power system quantities. Two GPS-synchronized phasor measurement units (PMUs) were installed before testing. The PMUs recorded the dynamic response of the power system phase angles when the northern California power grid was excited by artificial short circuits. Power system planning engineers perform detailed computer-generated simulations of the dynamic response of the power system to naturally occurring short circuits. The computer simulations use models of transmission lines, transformers, circuit breakers, and other high-voltage components. This work compares computer simulations of the same event with the field measurements.
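For a concrete picture of what a PMU computes, a minimal one-cycle DFT phasor estimator; the sampling rate and test waveform are assumptions for illustration:

```python
import numpy as np

def estimate_phasor(x, n_per_cycle):
    """One-cycle DFT phasor: returns (RMS magnitude, angle in degrees).
    Assumes the window start is aligned to the GPS time reference."""
    n = np.arange(n_per_cycle)
    X = np.sqrt(2.0) / n_per_cycle * np.sum(
        x[:n_per_cycle] * np.exp(-2j * np.pi * n / n_per_cycle))
    return np.abs(X), np.degrees(np.angle(X))

fs, f0 = 3840.0, 60.0                    # 64 samples per 60 Hz cycle
t = np.arange(64) / fs
v = 100.0 * np.cos(2 * np.pi * f0 * t + np.deg2rad(-30.0))
print(estimate_phasor(v, 64))            # ~ (70.7 V rms, -30.0 deg)
```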
Computer simulation of the metastatic progression.
Wedemann, Gero; Bethge, Anja; Haustein, Volker; Schumacher, Udo
2014-01-01
A novel computer model based on a discrete event simulation procedure describes quantitatively the processes underlying the metastatic cascade. Analytical functions describe the size of the primary tumor and the metastases, while a rate function models the intravasation events of the primary tumor and metastases. Events describe the behavior of the malignant cells until the formation of new metastases. The results of the computer simulations are in quantitative agreement with clinical data determined from a patient with hepatocellular carcinoma in the liver. The model provides a more detailed view on the process than a conventional mathematical model. In particular, the implications of interventions on metastasis formation can be calculated.
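In the spirit of such a discrete-event model, a minimal sketch: tumours grow by a Gompertz law and found new metastases at a constant rate once large enough (all parameters are illustrative, not the paper's calibrated values):

```python
import heapq, math, random

GROWTH_B, N_MAX = 0.006, 1e11   # Gompertz rate (1/day) and plateau (cells)
SEED_RATE = 0.002               # metastases founded per tumour per day
MIN_SIZE = 1e6                  # cells a tumour needs before it can seed

def seeding_latency():
    """Age (days) at which a tumour growing as N_MAX**(1 - exp(-b*t))
    from one cell reaches MIN_SIZE and starts to seed."""
    return -math.log(1.0 - math.log(MIN_SIZE) / math.log(N_MAX)) / GROWTH_B

def simulate(t_end=3650.0, rng=random.Random(1)):
    lat = seeding_latency()
    queue = [(lat, 0)]              # (time a tumour starts seeding, id)
    birth_times, next_id = [0.0], 1
    while queue:
        t, _ = heapq.heappop(queue)
        while True:
            t += rng.expovariate(SEED_RATE)   # next intravasation event
            if t > t_end:
                break
            birth_times.append(t)             # a new metastasis is founded
            heapq.heappush(queue, (t + lat, next_id))
            next_id += 1
    return sorted(birth_times)

mets = simulate()
print(f"{len(mets) - 1} metastases founded within 10 years")
```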
R.B. Ferguson; V. Clark Baldwin
1987-01-01
Complete instructions for user operation of COMPUTE_P-LOB, including detailed examples of computer input and output for a growth and yield prediction system providing volume and weight yields in stand and stock table format. A complete program listing is provided.
Smoldyn on graphics processing units: massively parallel Brownian dynamics simulations.
Dematté, Lorenzo
2012-01-01
Space is a very important aspect in the simulation of biochemical systems; recently, the need for simulation algorithms able to cope with space is becoming more and more compelling. Complex and detailed models of biochemical systems need to deal with the movement of single molecules and particles, taking into consideration localized fluctuations, transportation phenomena, and diffusion. A common drawback of spatial models lies in their complexity: models can become very large, and their simulation can be time consuming, especially if we want to capture the system's behavior in a reliable way using stochastic methods in conjunction with a high spatial resolution. In order to fulfill the promise made by systems biology of being able to understand a system as a whole, we need to scale up the size of the models we are able to simulate, moving from sequential to parallel simulation algorithms. In this paper, we analyze Smoldyn, a widely diffused algorithm for stochastic simulation of chemical reactions with spatial resolution and single-molecule detail, and we propose an alternative, innovative implementation that exploits the parallelism of Graphics Processing Units (GPUs). The implementation executes the most computationally demanding steps (computation of diffusion, unimolecular, and bimolecular reactions, as well as the most common cases of molecule-surface interaction) on the GPU, computing them in parallel for each molecule of the system. The implementation offers good speed-ups and real-time, high-quality graphics output.
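The per-molecule diffusion update that dominates such simulations vectorizes naturally; a minimal NumPy sketch of the Brownian step (swapping NumPy for CuPy would run the same update on a GPU):

```python
import numpy as np

def brownian_step(pos, D, dt, rng):
    """One Smoldyn-style Brownian step for all molecules at once:
    each coordinate receives a Normal(0, sqrt(2*D*dt)) displacement."""
    sigma = np.sqrt(2.0 * D * dt)
    return pos + rng.normal(0.0, sigma, size=pos.shape)

rng = np.random.default_rng(0)
pos = rng.uniform(0, 1e-6, size=(100_000, 3))   # 1e5 molecules in a 1 um box
D, dt = 1e-12, 1e-6                             # diffusivity m^2/s, step s
for _ in range(100):
    pos = brownian_step(pos, D, dt, rng)
print(pos.mean(axis=0))
```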
First Steps towards an Interactive Real-Time Hazard Management Simulation
ERIC Educational Resources Information Center
Gemmell, Alastair M. D.; Finlayson, Ian G.; Marston, Philip G.
2010-01-01
This paper reports on the construction and initial testing of a computer-based interactive flood hazard management simulation, designed for undergraduates taking an applied geomorphology course. Details of the authoring interface utilized to create the simulation are presented. Students act as the managers of civil defence utilities in a fictional…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lai, Canhai; Xu, Zhijie; Li, Tingwen
In virtual design and scale-up of pilot-scale carbon capture systems, the coupled reactive multiphase flow problem must be solved to predict the adsorber's performance and capture efficiency under various operating conditions. This paper focuses on the detailed computational fluid dynamics (CFD) modeling of a pilot-scale fluidized bed adsorber equipped with vertical cooling tubes. Multiphase Flow with Interphase eXchanges (MFiX), an open-source multiphase flow CFD solver, is used for the simulations, with custom code to simulate the chemical reactions and filtered models to capture the effect of the unresolved details in the coarser mesh, allowing simulations with reasonable accuracy and manageable computational effort. Two previously developed filtered models for horizontal-cylinder drag, heat transfer, and reaction kinetics have been modified to derive the 2D filtered models representing vertical cylinders in the coarse-grid CFD simulations. The effects of the heat exchanger configurations (i.e., horizontal or vertical) on the adsorber's hydrodynamics and CO2 capture performance are then examined. The simulation results are subsequently compared and contrasted with those predicted by a one-dimensional three-region process model.
Design of a bounded wave EMP (Electromagnetic Pulse) simulator
NASA Astrophysics Data System (ADS)
Sevat, P. A. A.
1989-06-01
Electromagnetic Pulse (EMP) simulators are used to simulate the EMP generated by a nuclear weapon and to harden equipment against the effects of EMP. At present, DREO has a 1 m EMP simulator for testing computer-terminal-size equipment. To develop the R and D capability for testing larger objects, such as a helicopter, a much bigger threat-level facility is required. This report concerns the design of a bounded wave EMP simulator suitable for testing large equipment. Different types of simulators are described and their pros and cons are discussed. A bounded wave parallel-plate type simulator is chosen for its efficiency and least environmental impact. Detailed designs are given for 6 m and 10 m parallel-plate wire-grid simulators. Electromagnetic fields inside and outside the simulators are computed. Preliminary specifications for the pulse generator required for the simulator are also given. Finally, the electromagnetic fields radiated from the simulator are computed and discussed.
NASA Technical Reports Server (NTRS)
Lockard, David P.
2011-01-01
Fifteen submissions in the tandem cylinders category of the First Workshop on Benchmark Problems for Airframe Noise Computations are summarized. Although the geometry is relatively simple, the problem involves complex physics. Researchers employed various block-structured, overset, unstructured, and embedded Cartesian grid techniques and considerable computational resources to simulate the flow. The solutions are compared against each other and against experimental data from two facilities. Overall, the simulations captured the gross features of the flow, but resolving all the details that would be necessary to compute the noise remains challenging. In particular, how best to simulate the effects of the experimental transition strip, and the associated high Reynolds number effects, was unclear. Furthermore, capturing the spanwise variation proved difficult.
[The characteristics of computer simulation of traffic accidents].
Zou, Dong-Hua; Liu, Ning-Guo; Chen, Jian-Guo; Jin, Xian-Long; Zhang, Xiao-Yun; Zhang, Jian-Hua; Chen, Yi-Jiu
2008-12-01
To reconstruct the collision process of traffic accidents and the injury mode of the victims by computer simulation technology in the forensic assessment of traffic accidents. Forty actual accidents were reconstructed with simulation software on a high performance computer, based on analysis of the trace evidence at the scene, the damage to the vehicles, and the injury of the victims, with 2 cases discussed in detail. The reconstruction correlated with the above parameters very well in 28 cases, well in 9 cases, and suboptimally in 3 cases. Accurate reconstruction of the accident is helpful for assessment of the injury mechanism of the victims. Reconstruction of the collision process of a traffic accident and the injury mechanism of the victim by computer simulation is useful in traffic accident assessment.
Extending rule-based methods to model molecular geometry and 3D model resolution.
Hoard, Brittany; Jacobson, Bruna; Manavi, Kasra; Tapia, Lydia
2016-08-01
Computational modeling is an important tool for the study of complex biochemical processes associated with cell signaling networks. However, it is challenging to simulate processes that involve hundreds of large molecules due to the high computational cost of such simulations. Rule-based modeling is a method that can be used to simulate these processes with reasonably low computational cost, but traditional rule-based modeling approaches do not include details of molecular geometry. The incorporation of geometry into biochemical models can more accurately capture details of these processes, and may lead to insights into how geometry affects the products that form. Furthermore, geometric rule-based modeling can be used to complement other computational methods that explicitly represent molecular geometry in order to quantify binding site accessibility and steric effects. We propose a novel implementation of rule-based modeling that encodes details of molecular geometry into the rules and binding rates. We demonstrate how rules are constructed according to the molecular curvature. We then perform a study of antigen-antibody aggregation using our proposed method. We simulate the binding of antibody complexes to binding regions of the shrimp allergen Pen a 1 using a previously developed 3D rigid-body Monte Carlo simulation, and we analyze the aggregate sizes. Then, using our novel approach, we optimize a rule-based model according to the geometry of the Pen a 1 molecule and the data from the Monte Carlo simulation. We use the distances between the binding regions of Pen a 1 to optimize the rules and binding rates. We perform this procedure for multiple conformations of Pen a 1 and analyze the impact of conformation and resolution on the optimal rule-based model. We find that the optimized rule-based models provide information about the average steric hindrance between binding regions and the probability that antibodies will bind to these regions. These optimized models quantify the variation in aggregate size that results from differences in molecular geometry and from model resolution.
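A hypothetical sketch of encoding geometry into rule rates: a base on-rate scaled by a distance-dependent steric factor. The exponential form and every parameter are our illustration, not the authors' optimized model:

```python
import numpy as np

def rule_rates(region_coords, k_on=1e6, hindrance_length=2.0):
    """Effective pairwise on-rates: the base rate k_on attenuated by a
    steric factor that vanishes as two binding regions approach."""
    coords = np.asarray(region_coords)            # (n_regions, 3), in nm
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    steric = 1.0 - np.exp(-d / hindrance_length)  # 0 when overlapping
    np.fill_diagonal(steric, 0.0)                 # no self-binding
    return k_on * steric

# Made-up coordinates for three binding regions on one molecule.
regions = [[0.0, 0.0, 0.0], [1.5, 0.0, 0.0], [6.0, 0.0, 0.0]]
print(rule_rates(regions).round(0))
```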
Algorithms and architecture for multiprocessor based circuit simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deutsch, J.T.
Accurate electrical simulation is critical to the design of high performance integrated circuits. Logic simulators can verify function and give first-order timing information. Switch level simulators are more effective at dealing with charge sharing than standard logic simulators, but cannot provide accurate timing information or discover DC problems. Delay estimation techniques and cell level simulation can be used in constrained design methods, but must be tuned for each application, and circuit simulation must still be used to generate the cell models. None of these methods has the guaranteed accuracy that many circuit designers desire, and none can provide detailed waveform information. Detailed electrical-level simulation can predict circuit performance if devices and parasitics are modeled accurately. However, the computational requirements of conventional circuit simulators make it impractical to simulate current large circuits. In this dissertation, the implementation of Iterated Timing Analysis (ITA), a relaxation-based technique for accurate circuit simulation, on a special-purpose multiprocessor is presented. The ITA method is an SOR-Newton, relaxation-based method which uses event-driven analysis and selective trace to exploit the temporal sparsity of the electrical network. Because event-driven selective trace techniques are employed, this algorithm lends itself to implementation on a data-driven computer.
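To make the relaxation idea concrete, a minimal Gauss-Seidel/SOR sweep on a linear nodal network; ITA's per-device Newton linearization and event-driven selective trace are omitted, and the three-node conductance matrix is a made-up example:

```python
import numpy as np

def sor_solve(G, i, omega=1.3, tol=1e-10, max_sweeps=10_000):
    """Solve the nodal equations G v = i by successive over-relaxation,
    updating one node voltage at a time."""
    n = len(i)
    v = np.zeros(n)
    for _ in range(max_sweeps):
        max_dv = 0.0
        for k in range(n):
            residual = i[k] - G[k] @ v
            dv = omega * residual / G[k, k]
            v[k] += dv
            max_dv = max(max_dv, abs(dv))
        if max_dv < tol:          # converged: no node "events" remain
            break
    return v

G = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  3.0, -1.0],
              [ 0.0, -1.0,  2.0]])   # nodal conductance matrix (S)
i = np.array([1.0, 0.0, -1.0])      # injected node currents (A)
print(sor_solve(G, i))
```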
Zimmerman, M I; Bowman, G R
2016-01-01
Molecular dynamics (MD) simulations are a powerful tool for understanding enzymes' structures and functions with full atomistic detail. These physics-based simulations model the dynamics of a protein in solution and store snapshots of its atomic coordinates at discrete time intervals. Analysis of the snapshots from these trajectories provides thermodynamic and kinetic properties such as conformational free energies, binding free energies, and transition times. Unfortunately, simulating biologically relevant timescales with brute force MD simulations requires enormous computing resources. In this chapter we detail a goal-oriented sampling algorithm, called fluctuation amplification of specific traits, that quickly generates pertinent thermodynamic and kinetic information by using an iterative series of short MD simulations to explore the vast depths of conformational space. © 2016 Elsevier Inc. All rights reserved.
JETSPIN: A specific-purpose open-source software for simulations of nanofiber electrospinning
NASA Astrophysics Data System (ADS)
Lauricella, Marco; Pontrelli, Giuseppe; Coluzza, Ivan; Pisignano, Dario; Succi, Sauro
2015-12-01
We present the open-source computer program JETSPIN, specifically designed to simulate the electrospinning process of nanofibers. Its capabilities are shown with proper reference to the underlying model, as well as a description of the relevant input variables and associated test-case simulations. The various interactions included in the electrospinning model implemented in JETSPIN are discussed in detail. The code is designed to exploit different computational architectures, from single to parallel processor workstations. This paper provides an overview of JETSPIN, focusing primarily on its structure, parallel implementations, functionality, performance, and availability.
Sonic and Supersonic Jet Plumes
NASA Technical Reports Server (NTRS)
Venkatapathy, E.; Naughton, J. W.; Fletcher, D. G.; Edwards, Thomas A. (Technical Monitor)
1994-01-01
The study of sonic and supersonic jet plumes is relevant to understanding such phenomena as jet noise, plume signatures, and rocket base-heating and radiation. Jet plumes are simple to simulate and yet have complex flow structures such as Mach disks, triple points, shear layers, barrel shocks, shock-shear-layer interaction, etc. Experimental and computational simulations of sonic and supersonic jet plumes have been performed for under- and over-expanded, axisymmetric plume conditions. The computational simulations compare very well with the experimental schlieren observations. Experimental data such as temperature measurements with hot-wire probes are yet to be measured and will be compared with computed values. Extensive analysis of the computational simulations presents a clear picture of how the complex flow structure develops and the conditions under which self-similar flow structures evolve. From the computations, the plume structure can be further classified into many sub-groups. In the proposed paper, detailed results from the experimental and computational simulations for single, axisymmetric, under- and over-expanded, sonic and supersonic plumes will be compared and the fluid dynamic aspects of the flow structures will be discussed.
NASA Technical Reports Server (NTRS)
Davidson, Frederic M.; Sun, Xiaoli; Field, Christopher T.
1994-01-01
This interim report consists of two reports: 'Space Radiation Effects on Si APDs for GLAS' and 'Computer Simulation of Avalanche Photodiode and Preamplifier Output for Laser Altimeters.' The former contains a detailed description of our proton radiation test of Si APD's performed at the Brookhaven National Laboratory. The latter documents the computer program subroutines which were written for the upgrade of NASA's GLAS simulator.
Computer Simulation For Design Of TWT's
NASA Technical Reports Server (NTRS)
Bartos, Karen F.; Fite, E. Brian; Shalkhauser, Kurt A.; Sharp, G. Richard
1992-01-01
A three-dimensional finite-element analytical technique facilitates design and fabrication of traveling-wave-tube (TWT) slow-wave structures. Used to perform thermal and mechanical analyses of TWT designed with variety of configurations, geometries, and materials. Using three-dimensional computer analysis, designer able to simulate building and testing of TWT, with consequent substantial saving of time and money. Technique enables detailed look into operation of traveling-wave tubes to help improve performance for future communications systems.
NASA Technical Reports Server (NTRS)
Veres, Joseph P.
2002-01-01
A high-fidelity simulation of a commercial turbofan engine has been created as part of the Numerical Propulsion System Simulation Project. The high-fidelity computer simulation utilizes computer models that were developed at NASA Glenn Research Center in cooperation with turbofan engine manufacturers. The average-passage (APNASA) Navier-Stokes based viscous flow computer code is used to simulate the 3D flow in the compressors and turbines of the advanced commercial turbofan engine. The 3D National Combustion Code (NCC) is used to simulate the flow and chemistry in the advanced aircraft combustor. The APNASA turbomachinery code and the NCC combustor code exchange boundary conditions at the interface planes at the combustor inlet and exit. This computer simulation technique can evaluate engine performance at steady operating conditions. The 3D flow models provide detailed knowledge of the airflow within the fan and compressor, the high and low pressure turbines, and the flow and chemistry within the combustor. The models simulate the performance of the engine at operating conditions that include sea level takeoff and the altitude cruise condition.
High performance computing in biology: multimillion atom simulations of nanoscale systems
Sanbonmatsu, K. Y.; Tung, C.-S.
2007-01-01
Computational methods have been used in biology for sequence analysis (bioinformatics), all-atom simulation (molecular dynamics and quantum calculations), and more recently for modeling biological networks (systems biology). Of these three techniques, all-atom simulation is currently the most computationally demanding, in terms of compute load, communication speed, and memory load. Breakthroughs in electrostatic force calculation and dynamic load balancing have enabled molecular dynamics simulations of large biomolecular complexes. Here, we report simulation results for the ribosome, using approximately 2.64 million atoms, the largest all-atom biomolecular simulation published to date. Several other nanoscale systems with different numbers of atoms were studied to measure the performance of the NAMD molecular dynamics simulation program on the Los Alamos National Laboratory Q Machine. We demonstrate that multimillion atom systems represent a 'sweet spot' for the NAMD code on large supercomputers. NAMD displays an unprecedented 85% parallel scaling efficiency for the ribosome system on 1024 CPUs. We also review recent targeted molecular dynamics simulations of the ribosome that prove useful for studying conformational changes of this large biomolecular complex in atomic detail. PMID:17187988
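The quoted 85% figure is simply speedup divided by processor count; for instance, with made-up timings relative to a smaller reference run:

```python
def parallel_efficiency(t_ref, p_ref, t_n, p_n):
    """Efficiency = speedup/processors, using a p_ref-CPU reference run."""
    return (t_ref * p_ref) / (t_n * p_n)

# Hypothetical seconds-per-step timings, not the paper's measurements:
print(parallel_efficiency(12.0, 128, 1.76, 1024))   # ~0.85
```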
FACE computer simulation. [Flexible Arm Controls Experiment
NASA Technical Reports Server (NTRS)
Sadeh, Willy Z.; Szmyd, Jeffrey A.
1990-01-01
A computer simulation of the FACE (Flexible Arm Controls Experiment) was conducted to assess its design for use in the Space Shuttle. The FACE is to be a 14-ft-long articulated structure with 4 degrees of freedom, consisting of shoulder pitch and yaw, elbow pitch, and wrist pitch. Kinematics of the FACE was simulated to obtain data on arm operation, function, workspace, and interaction. Payload capture ability was modeled. The simulation indicates the capability for detailed kinematic simulation and payload-capture analysis, and the feasibility of real-time simulation was determined. In addition, the potential for interactive real-time training through integration of the simulation with various interface controllers was revealed. At this stage, the flexibility of the arm was not yet considered.
Post-processing interstitialcy diffusion from molecular dynamics simulations
NASA Astrophysics Data System (ADS)
Bhardwaj, U.; Bukkuru, S.; Warrier, M.
2016-01-01
An algorithm to rigorously trace the interstitialcy diffusion trajectory in crystals is developed. The algorithm incorporates unsupervised learning and graph optimization which obviate the need to input extra domain specific information depending on crystal or temperature of the simulation. The algorithm is implemented in a flexible framework as a post-processor to molecular dynamics (MD) simulations. We describe in detail the reduction of interstitialcy diffusion into known computational problems of unsupervised clustering and graph optimization. We also discuss the steps, computational efficiency and key components of the algorithm. Using the algorithm, thermal interstitialcy diffusion from low to near-melting point temperatures is studied. We encapsulate the algorithms in a modular framework with functionality to calculate diffusion coefficients, migration energies and other trajectory properties. The study validates the algorithm by establishing the conformity of output parameters with experimental values and provides detailed insights for the interstitialcy diffusion mechanism. The algorithm along with the help of supporting visualizations and analysis gives convincing details and a new approach to quantifying diffusion jumps, jump-lengths, time between jumps and to identify interstitials from lattice atoms.
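As one example of the trajectory properties such a post-processor computes, a minimal Einstein-relation estimate of the diffusion coefficient from mean-squared displacement; the synthetic random walk below stands in for a traced interstitial trajectory:

```python
import numpy as np

def diffusion_coefficient(traj, dt, dim=3):
    """Fit MSD(t) = 2*dim*D*t over short time lags of an (n_steps, dim)
    trajectory; short lags carry the best-averaged statistics."""
    lags = np.arange(1, 200)
    msd = np.array([np.mean(np.sum((traj[lag:] - traj[:-lag])**2, axis=1))
                    for lag in lags])
    slope = np.polyfit(lags * dt, msd, 1)[0]
    return slope / (2 * dim)

# Synthetic Brownian path with known D = 1e-9 m^2/s and dt = 1 ps.
rng = np.random.default_rng(3)
steps = rng.normal(0, np.sqrt(2 * 1e-9 * 1e-12), size=(20_000, 3))
traj = np.cumsum(steps, axis=0)
print(f"D ~ {diffusion_coefficient(traj, 1e-12):.2e} m^2/s")
```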
Post-processing interstitialcy diffusion from molecular dynamics simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bhardwaj, U., E-mail: haptork@gmail.com; Bukkuru, S.; Warrier, M.
2016-01-15
An algorithm to rigorously trace the interstitialcy diffusion trajectory in crystals is developed. The algorithm incorporates unsupervised learning and graph optimization which obviate the need to input extra domain specific information depending on crystal or temperature of the simulation. The algorithm is implemented in a flexible framework as a post-processor to molecular dynamics (MD) simulations. We describe in detail the reduction of interstitialcy diffusion into known computational problems of unsupervised clustering and graph optimization. We also discuss the steps, computational efficiency and key components of the algorithm. Using the algorithm, thermal interstitialcy diffusion from low to near-melting point temperatures is studied. We encapsulate the algorithms in a modular framework with functionality to calculate diffusion coefficients, migration energies and other trajectory properties. The study validates the algorithm by establishing the conformity of output parameters with experimental values and provides detailed insights for the interstitialcy diffusion mechanism. The algorithm along with the help of supporting visualizations and analysis gives convincing details and a new approach to quantifying diffusion jumps, jump-lengths, time between jumps and to identify interstitials from lattice atoms.
Aeroservoelastic and Flight Dynamics Analysis Using Computational Fluid Dynamics
NASA Technical Reports Server (NTRS)
Arena, Andrew S., Jr.
1999-01-01
This document is based in large part on the Master's thesis of Cole Stephens. The document encompasses a variety of technical and practical issues involved in using the STARS codes for aeroservoelastic analysis of vehicles. It covers in great detail a number of technical issues and the step-by-step details involved in the simulation of a system in which aerodynamics, structures, and controls are tightly coupled. Comparisons are made to a benchmark experimental program conducted at NASA Langley. One of the significant advantages of the methodology detailed is that, as a result of the technique used to accelerate the CFD-based simulation, a systems model is produced which is very useful for developing the control law strategy and subsequent high-speed simulations.
CFD Simulations in Support of Shuttle Orbiter Contingency Abort Aerodynamic Database Enhancement
NASA Technical Reports Server (NTRS)
Papadopoulos, Periklis E.; Prabhu, Dinesh; Wright, Michael; Davies, Carol; McDaniel, Ryan; Venkatapathy, E.; Wercinski, Paul; Gomez, R. J.
2001-01-01
Modern Computational Fluid Dynamics (CFD) techniques were used to compute aerodynamic forces and moments of the Space Shuttle Orbiter in specific portions of the contingency abort trajectory space. The trajectory space covers a Mach number range of 3.5-15, an angle-of-attack range of 20-60 deg, an altitude range of 100-190 kft, and several different settings of the control surfaces (elevons, body flap, and speed brake). Presented here are details of the methodology and comparisons of computed aerodynamic coefficients against the values in the current Orbiter Operational Aerodynamic Data Book (OADB). While approximately 40 cases have been computed, only a sampling of the results is provided here. The computed results, in general, are in good agreement with the OADB data (i.e., within the uncertainty bands) for almost all the cases. However, in a limited number of high angle-of-attack cases (at Mach 15), there are significant differences between the computed results, especially the vehicle pitching moment, and the OADB data. A preliminary analysis of the data from the CFD simulations at Mach 15 shows that these differences can be attributed to real-gas/Mach number effects. The aerodynamic coefficients and detailed surface pressure distributions of the present simulations are being used by the Shuttle Program in the evaluation of the capabilities of the Orbiter in contingency abort scenarios.
T-H-A-T-S: timber-harvesting-and-transport-simulator: with subroutines for Appalachian logging
A. Jeff Martin
1975-01-01
A computer program for simulating harvesting operations is presented. Written in FORTRAN IV, the program contains subroutines that were developed for Appalachian logging conditions. However, with appropriate modifications, the simulator would be applicable for most logging operations and locations. The details of model development and its methodology are presented,...
Parallel Simulation of Subsonic Fluid Dynamics on a Cluster of Workstations.
1994-11-01
Parallel simulation of subsonic fluid dynamics on a cluster of workstations, for example the flow of air inside wind musical instruments. Typical simulations achieve 80% parallel efficiency (speedup/processors) using 20 HP-Apollo workstations.
Torak, L.J.
1993-01-01
A MODular Finite-Element digital-computer program (MODFE) was developed to simulate steady or unsteady-state, two-dimensional or axisymmetric ground-water flow. The modular structure of MODFE places the computationally independent tasks that are performed routinely by digital-computer programs simulating ground-water flow into separate subroutines, which are executed from the main program by control statements. Each subroutine consists of complete sets of computations, or modules, which are identified by comment statements, and can be modified by the user without affecting unrelated computations elsewhere in the program. Simulation capabilities can be added or modified by either adding or modifying subroutines that perform specific computational tasks, and the modular-program structure allows the user to create versions of MODFE that contain only the simulation capabilities that pertain to the ground-water problem of interest. MODFE is written in a Fortran programming language that makes it virtually device independent and compatible with desk-top personal computers and large mainframes. MODFE uses computer storage and execution time efficiently by taking advantage of symmetry and sparseness within the coefficient matrices of the finite-element equations. Parts of the matrix coefficients are computed and stored as single-subscripted variables, which are assembled into a complete coefficient just prior to solution. Computer storage is reused during simulation to decrease storage requirements. Descriptions of subroutines that execute the computational steps of the modular-program structure are given in tables that cross-reference the subroutines with particular versions of MODFE. Programming details of linear and nonlinear hydrologic terms are provided. Structure diagrams for the main programs show the order in which subroutines are executed for each version and illustrate some of the linear and nonlinear versions of MODFE that are possible. Computational aspects of changing stresses and boundary conditions with time and of mass-balance and error terms are given for each hydrologic feature. Program variables are listed and defined according to their occurrence in the main programs and in subroutines. Listings of the main programs and subroutines are given.
Flow in curved ducts of varying cross-section
NASA Astrophysics Data System (ADS)
Sotiropoulos, F.; Patel, V. C.
1992-07-01
Two numerical methods for solving the incompressible Navier-Stokes equations are compared with each other by applying them to calculate laminar and turbulent flows through curved ducts of regular cross-section. Detailed comparisons, between the computed solutions and experimental data, are carried out in order to validate the two methods and to identify their relative merits and disadvantages. Based on the conclusions of this comparative study a numerical method is developed for simulating viscous flows through curved ducts of varying cross-sections. The proposed method is capable of simulating the near-wall turbulence using fine computational meshes across the sublayer in conjunction with a two-layer k-epsilon model. Numerical solutions are obtained for: (1) a straight transition duct geometry, and (2) a hydroturbine draft-tube configuration at model scale Reynolds number for various inlet swirl intensities. The report also provides a detailed literature survey that summarizes all the experimental and computational work in the area of duct flows.
NASA Technical Reports Server (NTRS)
Tweedt, Daniel L.; Chima, Rodrick V.; Turkel, Eli
1997-01-01
A preconditioning scheme has been implemented into a three-dimensional viscous computational fluid dynamics code for turbomachine blade rows. The preconditioning allows the code, originally developed for simulating compressible flow fields, to be applied to nearly-incompressible, low Mach number flows. A brief description is given of the compressible Navier-Stokes equations for a rotating coordinate system, along with the preconditioning method employed. Details about the conservative formulation of artificial dissipation are provided, and different artificial dissipation schemes are discussed and compared. The preconditioned code was applied to a well-documented case involving the NASA large low-speed centrifugal compressor for which detailed experimental data are available for comparison. Performance and flow field data are compared for the near-design operating point of the compressor, with generally good agreement between computation and experiment. Further, significant differences between computational results for the different numerical implementations, revealing different levels of solution accuracy, are discussed.
ReaDDy - A Software for Particle-Based Reaction-Diffusion Dynamics in Crowded Cellular Environments
Schöneberg, Johannes; Noé, Frank
2013-01-01
We introduce the software package ReaDDy for simulation of detailed spatiotemporal mechanisms of dynamical processes in the cell, based on reaction-diffusion dynamics with particle resolution. In contrast to other particle-based reaction kinetics programs, ReaDDy supports particle interaction potentials. This permits effects such as space exclusion, molecular crowding and aggregation to be modeled. The biomolecules simulated can be represented as a sphere, or as a more complex geometry such as a domain structure or polymer chain. ReaDDy bridges the gap between small-scale but highly detailed molecular dynamics or Brownian dynamics simulations and large-scale but little-detailed reaction kinetics simulations. ReaDDy has a modular design that enables the exchange of the computing core by efficient platform-specific implementations or dynamical models that are different from Brownian dynamics. PMID:24040218
Computational Analysis of a Prototype Martian Rotorcraft Experiment
NASA Technical Reports Server (NTRS)
Corfeld, Kelly J.; Strawn, Roger C.; Long, Lyle N.
2002-01-01
This paper presents Reynolds-averaged Navier-Stokes calculations for a prototype Martian rotorcraft. The computations are intended for comparison with an ongoing Mars rotor hover test at NASA Ames Research Center. These computational simulations present a new and challenging problem, since rotors that operate on Mars will experience a unique low Reynolds number and high Mach number environment. Computed results for the 3-D rotor differ substantially from 2-D sectional computations in that the 3-D results exhibit a stall delay phenomenon caused by rotational forces along the blade span. Computational results have yet to be compared to experimental data, but computed performance predictions match the experimental design goals fairly well. In addition, the computed results provide a high level of detail in the rotor wake and blade surface aerodynamics. These details provide an important supplement to the expected experimental performance data.
NASA Technical Reports Server (NTRS)
Oluwole, Oluwayemisi O.; Wong, Hsi-Wu; Green, William
2012-01-01
AdapChem software enables high efficiency, low computational cost, and enhanced accuracy on computational fluid dynamics (CFD) numerical simulations used for combustion studies. The software dynamically allocates smaller, reduced chemical models instead of the larger, full chemistry models to evolve the calculation while ensuring the same accuracy to be obtained for steady-state CFD reacting flow simulations. The software enables detailed chemical kinetic modeling in combustion CFD simulations. AdapChem adapts the reaction mechanism used in the CFD to the local reaction conditions. Instead of a single, comprehensive reaction mechanism throughout the computation, a dynamic distribution of smaller, reduced models is used to capture accurately the chemical kinetics at a fraction of the cost of the traditional single-mechanism approach.
NASA Technical Reports Server (NTRS)
Yanosy, J. L.; Rowell, L. F.
1985-01-01
Efforts to make increasing use of suitable computer programs in the design of hardware have the potential to reduce expenditures. In this context, NASA has evaluated the benefits provided by software tools through an application to the Environmental Control and Life Support (ECLS) system. The present paper is concerned with the benefits obtained by employing simulation tools in the case of the Air Revitalization System (ARS) of a Space Station life support system. Attention is given to the ARS functions and components, a computer program overview, a SAND (solid amine water desorbed) bed model description, a model validation, and details regarding the simulation benefits.
Thermodynamic forces in coarse-grained simulations
NASA Astrophysics Data System (ADS)
Noid, William
Atomically detailed molecular dynamics simulations have profoundly advanced our understanding of the structure and interactions in soft condensed phases. Nevertheless, despite dramatic advances in the methodology and resources for simulating atomically detailed models, low-resolution coarse-grained (CG) models play a central and rapidly growing role in science. CG models not only empower researchers to investigate phenomena beyond the scope of atomically detailed simulations, but also to precisely tailor models for specific phenomena. However, in contrast to atomically detailed simulations, which evolve on a potential energy surface, CG simulations should evolve on a free energy surface. Therefore, the forces in CG models should reflect the thermodynamic information that has been eliminated from the CG configuration space. As a consequence of these thermodynamic forces, CG models often demonstrate limited transferability and, moreover, rarely provide an accurate description of both structural and thermodynamic properties. In this talk, I will present a framework that clarifies the origin and impact of these thermodynamic forces. Additionally, I will present computational methods for quantifying these forces and incorporating their effects into CG MD simulations. As time allows, I will demonstrate applications of this framework for liquids, polymers, and interfaces. We gratefully acknowledge the support of the National Science Foundation via CHE 1565631.
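The statement that CG simulations should evolve on a free energy surface has a standard formal counterpart, sketched below; this is the textbook many-body potential of mean force, consistent with the abstract but not quoted from the talk.

```latex
% For a mapping M(r) = R from atomistic configurations r to CG coordinates R,
% the ideal CG potential is the many-body potential of mean force:
\[
U_{\mathrm{CG}}(R) = -\,k_B T \,\ln \int \mathrm{d}r\,
    \delta\bigl(M(r) - R\bigr)\, e^{-u(r)/k_B T} + \mathrm{const},
\qquad
\mathbf{F}_I(R) = -\,\frac{\partial U_{\mathrm{CG}}(R)}{\partial R_I}.
\]
% The forces F_I thus carry entropic contributions from the eliminated degrees
% of freedom, which makes them state-point dependent and underlies the limited
% transferability discussed above.
```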
Goodman, Dan F M; Brette, Romain
2009-09-01
"Brian" is a simulator for spiking neural networks (http://www.briansimulator.org). The focus is on making the writing of simulation code as quick and easy as possible for the user, and on flexibility: new and non-standard models are no more difficult to define than standard ones. This allows scientists to spend more time on the details of their models, and less on their implementation. Neuron models are defined by writing differential equations in standard mathematical notation, facilitating scientific communication. Brian is written in the Python programming language, and uses vector-based computation to allow for efficient simulations. It is particularly useful for neuroscientific modelling at the systems level, and for teaching computational neuroscience.
Monte Carlo Simulations and Generation of the SPI Response
NASA Technical Reports Server (NTRS)
Sturner, S. J.; Shrader, C. R.; Weidenspointner, G.; Teegarden, B. J.; Attie, D.; Diehl, R.; Ferguson, C.; Jean, P.; vonKienlin, A.
2003-01-01
In this paper we discuss the methods developed for the production of the INTEGRAL/SPI instrument response. The response files were produced using a suite of Monte Carlo simulation software developed at NASA/GSFC based on the GEANT-3 package available from CERN. The production of the INTEGRAL/SPI instrument response also required the development of a detailed computer mass model for SPI. We discuss our extensive investigations into methods to reduce both the computation time and storage requirements for the SPI response. We also discuss corrections to the simulated response based on our comparison of ground and inflight calibration data with MGEANT simulation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mereghetti, Paolo; Martinez, M.; Wade, Rebecca C.
Brownian dynamics (BD) simulations can be used to study very large molecular systems, such as models of the intracellular environment, using atomic-detail structures. Such simulations require strategies to contain the computational costs, especially for the computation of interaction forces and energies. A common approach is to compute interaction forces between macromolecules by precomputing their interaction potentials on three-dimensional discretized grids. For long-range interactions, such as electrostatics, grid-based methods are subject to finite size errors. We describe here the implementation of a Debye-Hückel correction to the grid-based electrostatic potential used in the SDA BD simulation software that was applied to simulate solutions of bovine serum albumin and of hen egg white lysozyme.
2008-03-01
[Front-matter residue; recoverable details: an appendix provides a MatLab Cd calculator routine and a FORTRAN subroutine of the variable Cd model, and the figure list includes overview flowcharts of the Benét Labs recoil analysis code and the recoil brake subroutine, plus detail flowcharts of the recoil pressure/force calculations and the variable Cd subroutine.]
Cloud-based simulations on Google Exacycle reveal ligand modulation of GPCR activation pathways
NASA Astrophysics Data System (ADS)
Kohlhoff, Kai J.; Shukla, Diwakar; Lawrenz, Morgan; Bowman, Gregory R.; Konerding, David E.; Belov, Dan; Altman, Russ B.; Pande, Vijay S.
2014-01-01
Simulations can provide tremendous insight into the atomistic details of biological mechanisms, but micro- to millisecond timescales are historically only accessible on dedicated supercomputers. We demonstrate that cloud computing is a viable alternative that brings long-timescale processes within reach of a broader community. We used Google's Exacycle cloud-computing platform to simulate two milliseconds of dynamics of a major drug target, the G-protein-coupled receptor β2AR. Markov state models aggregate independent simulations into a single statistical model that is validated by previous computational and experimental results. Moreover, our models provide an atomistic description of the activation of a G-protein-coupled receptor and reveal multiple activation pathways. Agonists and inverse agonists interact differentially with these pathways, with profound implications for drug design.
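The aggregation step (many independent cloud trajectories combined into one statistical model) is simple to sketch. A minimal, hypothetical Python illustration of estimating a Markov state model transition matrix from discretized trajectories follows; the state count, lag time, symmetrization shortcut, and synthetic data are assumptions, and production analyses use dedicated MSM packages.

```python
# Toy Markov state model construction from many short discrete trajectories.
import numpy as np

def msm_transition_matrix(dtrajs, nstates, lag):
    C = np.zeros((nstates, nstates))
    for d in dtrajs:                          # each independent simulation
        for i, j in zip(d[:-lag], d[lag:]):   # transitions at the lag time
            C[i, j] += 1.0
    C = C + C.T                               # crude detailed-balance symmetrization
    return C / C.sum(axis=1, keepdims=True)   # row-normalize into probabilities

rng = np.random.default_rng(5)
dtrajs = [rng.choice(4, size=500) for _ in range(100)]  # synthetic state labels
T = msm_transition_matrix(dtrajs, nstates=4, lag=10)

vals, vecs = np.linalg.eig(T.T)               # stationary distribution: eigenvalue 1
pi = vecs[:, np.argmax(vals.real)].real
pi /= pi.sum()
print(T.round(3)); print(pi.round(3))
```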
Molecular dynamics simulations in hybrid particle-continuum schemes: Pitfalls and caveats
NASA Astrophysics Data System (ADS)
Stalter, S.; Yelash, L.; Emamy, N.; Statt, A.; Hanke, M.; Lukáčová-Medvid'ová, M.; Virnau, P.
2018-03-01
Heterogeneous multiscale methods (HMM) combine molecular accuracy of particle-based simulations with the computational efficiency of continuum descriptions to model flow in soft matter liquids. In these schemes, molecular simulations typically pose a computational bottleneck, which we investigate in detail in this study. We find that it is preferable to simulate many small systems as opposed to a few large systems, and that a choice of a simple isokinetic thermostat is typically sufficient while thermostats such as Lowe-Andersen allow for simulations at elevated viscosity. We discuss suitable choices for time steps and finite-size effects which arise in the limit of very small simulation boxes. We also argue that if colloidal systems are considered as opposed to atomistic systems, the gap between microscopic and macroscopic simulations regarding time and length scales is significantly smaller. We propose a novel reduced-order technique for the coupling to the macroscopic solver, which allows us to approximate a non-linear stress-strain relation efficiently and thus further reduce computational effort of microscopic simulations.
A computer simulation of an adaptive noise canceler with a single input
NASA Astrophysics Data System (ADS)
Albert, Stuart D.
1991-06-01
A description of an adaptive noise canceler using Widrow's LMS algorithm is presented. A computer simulation of canceler performance (adaptive convergence time and frequency transfer function) was written for use as a design tool. The simulations, assumptions, and input parameters are described in detail. The simulation is used in a design example to predict the performance of an adaptive noise canceler in the simultaneous presence of both strong and weak narrow-band signals (a cosited frequency-hopping radio scenario). On the basis of the simulation results, it is concluded that the simulation is suitable for use as an adaptive noise canceler design tool; i.e., it can be used to evaluate the effect of design parameter changes on canceler performance.
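The core of such a canceler is Widrow's LMS update. Below is a hedged, minimal Python sketch (not the report's simulation; filter length, step size, delay, and signal parameters are invented) of a single-input canceler that uses a delayed copy of the input as its reference, so the correlated narrow-band component is predicted while the broadband noise remains in the error output.

```python
# Single-input adaptive noise canceler (adaptive line enhancer) via LMS.
import numpy as np

fs = 8000.0                                   # sample rate (Hz), assumed
n = np.arange(20000)
x = np.sin(2 * np.pi * 1000.0 * n / fs) + 0.5 * np.random.randn(n.size)

delay, taps, mu = 16, 64, 1e-3                # decorrelation delay, FIR length, step
ref = np.concatenate([np.zeros(delay), x[:-delay]])   # delayed copy as reference
w = np.zeros(taps)
buf = np.zeros(taps)
y = np.zeros(n.size)                          # estimate of the narrow-band part
e = np.zeros(n.size)                          # canceled (error) output

for k in range(n.size):
    buf = np.roll(buf, 1); buf[0] = ref[k]    # shift reference into the tap line
    y[k] = w @ buf
    e[k] = x[k] - y[k]
    w += 2 * mu * e[k] * buf                  # Widrow-Hoff LMS update

# After convergence y tracks the 1 kHz tone; e is mostly the broadband noise.
```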
Quantum simulation from the bottom up: the case of rebits
NASA Astrophysics Data System (ADS)
Enshan Koh, Dax; Yuezhen Niu, Murphy; Yoder, Theodore J.
2018-05-01
Typically, quantum mechanics is thought of as a linear theory with unitary evolution governed by the Schrödinger equation. While this is technically true and useful for a physicist, with regards to computation it is an unfortunately narrow point of view. Just as a classical computer can simulate highly nonlinear functions of classical states, so too can the more general quantum computer simulate nonlinear evolutions of quantum states. We detail one particular simulation of nonlinearity on a quantum computer, showing how the entire class of ℝ-unitary evolutions (on n qubits) can be simulated using a unitary, real-amplitude quantum computer (consisting of n + 1 qubits in total). These operators can be represented as the sum of a linear and an antilinear operator, and add an intriguing new set of nonlinear quantum gates to the toolbox of the quantum algorithm designer. Furthermore, a subgroup of these nonlinear evolutions, called the ℝ-Cliffords, can be efficiently classically simulated, by making use of the fact that Clifford operators can simulate non-Clifford (in fact, non-linear) operators. This perspective of using the physical operators that we have to simulate non-physical ones that we do not is what we call bottom-up simulation, and we give some examples of its broader implications.
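The extra-qubit bookkeeping can be made concrete with the standard real-amplitude (rebit) encoding; the sketch below covers only the ordinary-unitary special case, while the paper's ℝ-unitary class is broader.

```python
# Simulate a complex unitary on n qubits with a real orthogonal matrix on
# n+1 qubits: psi -> Re(psi)|0> + Im(psi)|1> (ancilla as the leading factor),
# U -> kron(I, Re U) + kron(J, Im U) with J = [[0,-1],[1,0]] = -iY.
import numpy as np

def to_real(U):
    J = np.array([[0.0, -1.0], [1.0, 0.0]])
    return np.kron(np.eye(2), U.real) + np.kron(J, U.imag)

def encode(psi):
    return np.concatenate([psi.real, psi.imag])

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
U, _ = np.linalg.qr(A)                        # random 1-qubit unitary
psi = rng.standard_normal(2) + 1j * rng.standard_normal(2)
psi /= np.linalg.norm(psi)

assert np.allclose(encode(U @ psi), to_real(U) @ encode(psi))  # evolution commutes
assert np.allclose(to_real(U) @ to_real(U).T, np.eye(4))       # real and orthogonal
```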
WEST-3 wind turbine simulator development
NASA Technical Reports Server (NTRS)
Hoffman, J. A.; Sridhar, S.
1985-01-01
The software developed for WEST-3, a new, all-digital, fully programmable wind turbine simulator, is described. The process of wind turbine simulation on WEST-3 is described in detail. The major steps are the processing of the mathematical models, the preparation of the constant data, and the use of system-software-generated executable code for running on WEST-3. The mechanics of reformulation, normalization, and scaling of the mathematical models are discussed in detail, in particular the significance of reformulation, which leads to accurate simulations. Descriptions are given of the preprocessor computer programs that are used to prepare the constant data needed in the simulation. These programs, in addition to scaling and normalizing all the constants, relieve the user from having to generate a large number of constants used in the simulation. Also given are brief descriptions of the components of the WEST-3 system software: Translator, Assembler, Linker, and Loader. Also included are: details of the aeroelastic rotor analysis, which is the center of a wind turbine simulation model; analysis of the gimbal subsystem; and listings of the variables, constants, and equations used in the simulation.
Computer Graphics and Physics Teaching.
ERIC Educational Resources Information Center
Bork, Alfred M.; Ballard, Richard
New, more versatile and inexpensive terminals will make computer graphics more feasible in science instruction than before. This paper describes the use of graphics in physics teaching at the University of California at Irvine. Commands and software are detailed in established programs, which include a lunar landing simulation and a program which…
NASA Technical Reports Server (NTRS)
Yanosy, James L.
1988-01-01
A Model Description Document for the Emulation Simulation Computer Model was already published. The model consisted of a detailed model (emulation) of a SAWD CO2 removal subsystem which operated with much less detailed (simulation) models of a cabin, crew, and condensing and sensible heat exchangers. The purpose was to explore the utility of such an emulation/simulation combination in the design, development, and test of a piece of ARS hardware, SAWD. Extensions to this original effort are presented. The first extension is an update of the model to reflect changes in the SAWD control logic which resulted from test. Slight changes were also made to the SAWD model to permit restarting and to improve the iteration technique. The second extension is the development of simulation models for more pieces of air and water processing equipment. Models are presented for: EDC, Molecular Sieve, Bosch, Sabatier, a new condensing heat exchanger, SPE, SFWES, Catalytic Oxidizer, and multifiltration. The third extension is to create two system simulations using these models. The first system presented consists of one air and one water processing system. The second consists of a potential air revitalization system.
NASA Technical Reports Server (NTRS)
Yanosy, James L.
1988-01-01
A user's manual for the Emulation Simulation Computer Model was published previously. The model consisted of a detailed model (emulation) of a SAWD CO2 removal subsystem which operated with much less detailed (simulation) models of a cabin, crew, and condensing and sensible heat exchangers. The purpose was to explore the utility of such an emulation/simulation combination in the design, development, and test of a piece of ARS hardware - SAWD. Extensions to this original effort are presented. The first extension is an update of the model to reflect changes in the SAWD control logic which resulted from the test. In addition, slight changes were made to the SAWD model to permit restarting and to improve the iteration technique. The second extension is the development of simulation models for more pieces of air and water processing equipment. Models are presented for: EDC, Molecular Sieve, Bosch, Sabatier, a new condensing heat exchanger, SPE, SFWES, Catalytic Oxidizer, and multifiltration. The third extension is to create two system simulations using these models. The first system presented consists of one air and one water processing system, the second a potential Space Station air revitalization system.
Computational Flow Modeling of Human Upper Airway Breathing
NASA Astrophysics Data System (ADS)
Mylavarapu, Goutham
Computational modeling of biological systems has gained a lot of interest in biomedical research in the recent past. This thesis focuses on the application of computational simulations to study airflow dynamics in the human upper respiratory tract. With advancements in medical imaging, patient-specific geometries of anatomically accurate respiratory tracts can now be reconstructed from Magnetic Resonance Images (MRI) or Computed Tomography (CT) scans, with better and more accurate detail than traditional cadaver cast models. Computational studies using these individualized geometrical models have the advantages of non-invasiveness, ease, minimal patient interaction, and improved accuracy over experimental and clinical studies. Numerical simulations can provide detailed flow fields, including velocities, flow rates, airway wall pressure, shear stresses, and turbulence in an airway. Interpretation of these physical quantities will enable the development of efficient treatment procedures, medical devices, targeted drug delivery, etc. The hypothesis for this research is that computational modeling can predict the outcome of a surgical intervention or a treatment plan prior to its application and will guide the physician in providing better treatment to patients. In the current work, three different computational approaches, Computational Fluid Dynamics (CFD), Flow-Structure Interaction (FSI), and particle flow simulations, were used to investigate flow in airway geometries. The CFD approach assumes the airway wall to be rigid and is relatively easy to simulate, compared to the more challenging FSI approach, in which the interactions of airway wall deformations with the flow are also accounted for. The CFD methodology, using different turbulence models, is validated against experimental measurements in an airway phantom. Two case studies using CFD are demonstrated: one quantifying a pre- and post-operative airway, and another performing virtual surgery to determine the best possible surgery for a constricted airway. The unsteady Large Eddy Simulation (LES) and steady Reynolds-Averaged Navier-Stokes (RANS) approaches in CFD modeling are discussed. The more challenging FSI approach is modeled first in a simple two-dimensional anatomical geometry, then extended to a simplified three-dimensional geometry, and finally applied to three-dimensionally accurate geometries. The concepts of virtual surgery and the differences from CFD are discussed. Finally, the influence of various drug delivery parameters on particle deposition efficiency in the airway anatomy is investigated through particle-flow simulations in a nasal airway model.
NASA Astrophysics Data System (ADS)
Leskiw, Donald M.; Zhau, Junmei
2000-06-01
This paper reports on results from an ongoing project to develop methodologies for representing and managing multiple, concurrent levels of detail and enabling high performance computing using parallel arrays within distributed object-based simulation frameworks. At this time we present the methodology for representing and managing multiple, concurrent levels of detail and modeling accuracy by using a representation based on the Kalman approach for estimation. The Kalman System Model equations are used to represent model accuracy, Kalman Measurement Model equations provide transformations between heterogeneous levels of detail, and interoperability among disparate abstractions is provided using a form of the Kalman Update equations.
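A minimal sketch of the Kalman blocks being repurposed here may help; the matrices below are generic placeholders (a constant-velocity toy), while mapping the "system model" to model accuracy and the "measurement model" to transformations between levels of detail is the paper's idea.

```python
# One Kalman predict/transform/update cycle with illustrative matrices.
import numpy as np

F = np.array([[1.0, 1.0], [0.0, 1.0]])   # system model: propagates the state
Q = 0.01 * np.eye(2)                     # process noise: growth of model error
H = np.array([[1.0, 0.0]])               # measurement model: projection to a coarser view
R = np.array([[0.5]])                    # accuracy of the coarse representation

x, P = np.array([0.0, 1.0]), np.eye(2)   # detailed state and its covariance

x = F @ x                                # predict: advance the detailed model
P = F @ P @ F.T + Q                      # ... and its accuracy estimate

z = H @ x + 0.1                          # a coarse observation (offset assumed)

S = H @ P @ H.T + R                      # update: fuse coarse info back in
K = P @ H.T @ np.linalg.inv(S)
x = x + K @ (z - H @ x)
P = (np.eye(2) - K @ H) @ P
```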
NASA Technical Reports Server (NTRS)
Barber, Bryan; Kahn, Laura; Wong, David
1990-01-01
Offshore operations such as oil drilling and radar monitoring require semisubmersible platforms to remain stationary at specific locations in the Gulf of Mexico. Ocean currents, wind, and waves in the Gulf of Mexico tend to move platforms away from their desired locations. A computer model was created to predict the station keeping requirements of a platform. The computer simulation uses remote sensing data from satellites and buoys as input. A background of the project, alternate approaches to the project, and the details of the simulation are presented.
NASA Technical Reports Server (NTRS)
Fleischer, G. E.
1973-01-01
A new computer subroutine, which solves the attitude equations of motion for any vehicle idealized as a topological tree of hinge-connected rigid bodies, is used to simulate and analyze science instrument pointing control interaction with a flexible Mariner Venus/Mercury (MVM) spacecraft. The subroutine's user options include linearized or partially linearized hinge-connected models whose computational advantages are demonstrated for the MVM problem. Results of the pointing control/flexible vehicle interaction simulations, including imaging experiment pointing accuracy predictions and implications for MVM science sequence planning, are described in detail.
2002 Computing and Interdisciplinary Systems Office Review and Planning Meeting
NASA Technical Reports Server (NTRS)
Lytle, John; Follen, Gregory; Lopez, Isaac; Veres, Joseph; Lavelle, Thomas; Sehra, Arun; Freeh, Josh; Hah, Chunill
2003-01-01
The technologies necessary to enable detailed numerical simulations of complete propulsion systems are being developed at the NASA Glenn Research Center in cooperation with NASA Glenn's Propulsion program, NASA Ames, industry, academia and other government agencies. Large scale, detailed simulations will be of great value to the nation because they eliminate some of the costly testing required to develop and certify advanced propulsion systems. In addition, time and cost savings will be achieved by enabling design details to be evaluated early in the development process before a commitment is made to a specific design. This year's review meeting describes the current status of the NPSS and the Object Oriented Development Kit with specific emphasis on the progress made over the past year on air breathing propulsion applications for aeronautics and space transportation applications. Major accomplishments include the first 3-D simulation of the primary flow path of a large turbofan engine in less than 15 hours, and the formal release of the NPSS Version 1.5 that includes elements of rocket engine systems and a visual based syntax layer. NPSS and the Development Kit are managed by the Computing and Interdisciplinary Systems Office (CISO) at the NASA Glenn Research Center and financially supported in fiscal year 2002 by the Computing, Networking and Information Systems (CNIS) project managed at NASA Ames, the Glenn Aerospace Propulsion and Power Program and the Advanced Space Transportation Program.
Stochastic model simulation using Kronecker product analysis and Zassenhaus formula approximation.
Caglar, Mehmet Umut; Pal, Ranadip
2013-01-01
Probabilistic Models are regularly applied in Genetic Regulatory Network modeling to capture the stochastic behavior observed in the generation of biological entities such as mRNA or proteins. Several approaches including Stochastic Master Equations and Probabilistic Boolean Networks have been proposed to model the stochastic behavior in genetic regulatory networks. It is generally accepted that Stochastic Master Equation is a fundamental model that can describe the system being investigated in fine detail, but the application of this model is computationally enormously expensive. On the other hand, Probabilistic Boolean Network captures only the coarse-scale stochastic properties of the system without modeling the detailed interactions. We propose a new approximation of the stochastic master equation model that is able to capture the finer details of the modeled system including bistabilities and oscillatory behavior, and yet has a significantly lower computational complexity. In this new method, we represent the system using tensors and derive an identity to exploit the sparse connectivity of regulatory targets for complexity reduction. The algorithm involves an approximation based on Zassenhaus formula to represent the exponential of a sum of matrices as product of matrices. We derive upper bounds on the expected error of the proposed model distribution as compared to the stochastic master equation model distribution. Simulation results of the application of the model to four different biological benchmark systems illustrate performance comparable to detailed stochastic master equation models but with considerably lower computational complexity. The results also demonstrate the reduced complexity of the new approach as compared to commonly used Stochastic Simulation Algorithm for equivalent accuracy.
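The central algebraic step, trading the exponential of a sum of matrices for a product of exponentials via the Zassenhaus formula, can be verified numerically. A small sketch follows, with random matrices standing in for the structured generators of the paper:

```python
# exp(t(A+B)) = exp(tA) exp(tB) exp(-t^2 [A,B]/2) * (higher-order factors);
# truncating after the commutator factor gives a cheap approximation.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
A = 0.1 * rng.standard_normal((4, 4))
B = 0.1 * rng.standard_normal((4, 4))
t = 0.5

exact = expm(t * (A + B))
order1 = expm(t * A) @ expm(t * B)                     # Lie-Trotter factorization
order2 = order1 @ expm(-0.5 * t**2 * (A @ B - B @ A))  # + first Zassenhaus factor

print(np.linalg.norm(exact - order1))                  # larger error
print(np.linalg.norm(exact - order2))                  # smaller error
```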
NASA Technical Reports Server (NTRS)
Burgin, G. H.; Owens, A. J.
1975-01-01
A detailed description is presented of the computer programs in order to provide an understanding of the mathematical and geometrical relationships as implemented in the programs. The individual subroutines and their underlying mathematical relationships are described, and the required input data and the output provided by the program are explained. The relationship of the adaptive maneuvering logic program with the program to drive the differential maneuvering simulator is discussed.
Computational Materials: Modeling and Simulation of Nanostructured Materials and Systems
NASA Technical Reports Server (NTRS)
Gates, Thomas S.; Hinkley, Jeffrey A.
2003-01-01
The paper provides details on the structure and implementation of the Computational Materials program at the NASA Langley Research Center. Examples are given that illustrate the suggested approaches to predicting the behavior and influencing the design of nanostructured materials such as high-performance polymers, composites, and nanotube-reinforced polymers. Primary simulation and measurement methods applicable to multi-scale modeling are outlined. Key challenges including verification and validation of models are highlighted and discussed within the context of NASA's broad mission objectives.
Point-Process Models of Social Network Interactions: Parameter Estimation and Missing Data Recovery
2014-08-01
…treating them as zero will have a de minimis impact on the results, but avoiding computing them (and computing with them) saves tremendous time. … We test the methods on simulated time series on artificial social networks, including some toy networks and some meant to resemble IkeNet. We conclude the section by discussing the results in detail. In each of our tests we begin with a complete data set, whether it is real (IkeNet) or simulated. Then…
Davis, Matthew H.
2016-01-01
Successful perception depends on combining sensory input with prior knowledge. However, the underlying mechanism by which these two sources of information are combined is unknown. In speech perception, as in other domains, two functionally distinct coding schemes have been proposed for how expectations influence representation of sensory evidence. Traditional models suggest that expected features of the speech input are enhanced or sharpened via interactive activation (Sharpened Signals). Conversely, Predictive Coding suggests that expected features are suppressed so that unexpected features of the speech input (Prediction Errors) are processed further. The present work is aimed at distinguishing between these two accounts of how prior knowledge influences speech perception. By combining behavioural, univariate, and multivariate fMRI measures of how sensory detail and prior expectations influence speech perception with computational modelling, we provide evidence in favour of Prediction Error computations. Increased sensory detail and informative expectations have additive behavioural and univariate neural effects because they both improve the accuracy of word report and reduce the BOLD signal in lateral temporal lobe regions. However, sensory detail and informative expectations have interacting effects on speech representations shown by multivariate fMRI in the posterior superior temporal sulcus. When prior knowledge was absent, increased sensory detail enhanced the amount of speech information measured in superior temporal multivoxel patterns, but with informative expectations, increased sensory detail reduced the amount of measured information. Computational simulations of Sharpened Signals and Prediction Errors during speech perception could both explain these behavioural and univariate fMRI observations. However, the multivariate fMRI observations were uniquely simulated by a Prediction Error and not a Sharpened Signal model. The interaction between prior expectation and sensory detail provides evidence for a Predictive Coding account of speech perception. Our work establishes methods that can be used to distinguish representations of Prediction Error and Sharpened Signals in other perceptual domains. PMID:27846209
Efficiency and Accuracy in Thermal Simulation of Powder Bed Fusion of Bulk Metallic Glass
NASA Astrophysics Data System (ADS)
Lindwall, J.; Malmelöv, A.; Lundbäck, A.; Lindgren, L.-E.
2018-05-01
Additive manufacturing by powder bed fusion processes can be utilized to create bulk metallic glass, as the process yields considerably high cooling rates. However, there is a risk that material in previously deposited layers may be reheated and become devitrified, i.e., crystallize. Therefore, it is advantageous to simulate the process to fully comprehend it and design it to avoid the aforementioned risk. However, a detailed simulation is computationally demanding. It is necessary to increase the computational speed while maintaining accuracy of the computed temperature field in critical regions. The current study evaluates a few approaches based on temporal reduction to achieve this. It is found that the evaluated approaches save a lot of time and accurately predict the temperature history.
Multi-scale modeling in cell biology
Meier-Schellersheim, Martin; Fraser, Iain D. C.; Klauschen, Frederick
2009-01-01
Biomedical research frequently involves performing experiments and developing hypotheses that link different scales of biological systems such as, for instance, the scales of intracellular molecular interactions to the scale of cellular behavior and beyond to the behavior of cell populations. Computational modeling efforts that aim at exploring such multi-scale systems quantitatively with the help of simulations have to incorporate several different simulation techniques due to the different time and space scales involved. Here, we provide a non-technical overview of how different scales of experimental research can be combined with the appropriate computational modeling techniques. We also show that current modeling software permits building and simulating multi-scale models without having to become involved with the underlying technical details of computational modeling. PMID:20448808
Use of computational fluid dynamics in respiratory medicine.
Fernández Tena, Ana; Casan Clarà, Pere
2015-06-01
Computational Fluid Dynamics (CFD) is a computer-based tool for simulating fluid movement. The main advantages of CFD over other fluid mechanics studies include: substantial savings in time and cost, the analysis of systems or conditions that are very difficult to simulate experimentally (as is the case of the airways), and a practically unlimited level of detail. We used the Ansys-Fluent CFD program to develop a conducting airway model to simulate different inspiratory flow rates and the deposition of inhaled particles of varying diameters, obtaining results consistent with those reported in the literature using other procedures. We hope this approach will enable clinicians to further individualize the treatment of different respiratory diseases. Copyright © 2014 SEPAR. Published by Elsevier España. All rights reserved.
A 2.5D Computational Method to Simulate Cylindrical Fluidized Beds
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Tingwen; Benyahia, Sofiane; Dietiker, Jeff
2015-02-17
In this paper, the limitations of axisymmetric and Cartesian two-dimensional (2D) simulations of cylindrical gas-solid fluidized beds are discussed. A new method has been proposed to carry out pseudo-two-dimensional (2.5D) simulations of a cylindrical fluidized bed by appropriately combining computational domains of Cartesian 2D and axisymmetric simulations. The proposed method was implemented in the open-source code MFIX and applied to the simulation of a lab-scale bubbling fluidized bed with the necessary sensitivity studies. After a careful grid study to ensure the numerical results are grid independent, detailed comparisons of the flow hydrodynamics were presented against axisymmetric and Cartesian 2D simulations. Furthermore, the 2.5D simulation results have been compared to the three-dimensional (3D) simulation for evaluation. This new approach yields better agreement with the 3D simulation results than the axisymmetric and Cartesian 2D simulations do.
ROMI-RIP: Rough mill rip-first simulator. Forest Service general technical report (Final)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thomas, R.E.
1995-07-01
The ROugh Mill Rip-First Simulator (ROMI-RIP) is a computer software package that simulates the gang-ripping of lumber. ROMI-RIP was designed to closely simulate current machines and industrial practice. This simulator allows the user to perform 'what if' analyses on various gang-rip-first rough mill operations with fixed, floating-outer-blade, and all-movable-blade arbors. ROMI-RIP accepts cutting bills with up to 300 different part sizes. Plots of processed boards are easily viewed or printed. Detailed summaries of processing steps (number of rips and crosscuts) and yields (single boards or entire board files) can also be viewed or printed. ROMI-RIP requires IBM personal computers with 80286 or higher processors.
Parallelisation study of a three-dimensional environmental flow model
NASA Astrophysics Data System (ADS)
O'Donncha, Fearghal; Ragnoli, Emanuele; Suits, Frank
2014-03-01
There are many simulation codes in the geosciences that are serial and cannot take advantage of the parallel computational resources commonly available today. One model important for our work in coastal ocean current modelling is EFDC, a Fortran 77 code configured for optimal deployment on vector computers. In order to take advantage of our cache-based, blade computing system we restructured EFDC from serial to parallel, thereby allowing us to run existing models more quickly, and to simulate larger and more detailed models that were previously impractical. Since the source code for EFDC is extensive and involves detailed computation, it is important to do such a port in a manner that limits changes to the files, while achieving the desired speedup. We describe a parallelisation strategy involving surgical changes to the source files to minimise error-prone alteration of the underlying computations, while allowing load-balanced domain decomposition for efficient execution on a commodity cluster. The use of conjugate gradient posed particular challenges due to implicit non-local communication posing a hindrance to standard domain partitioning schemes; a number of techniques are discussed to address this in a feasible, computationally efficient manner. The parallel implementation demonstrates good scalability in combination with a novel domain partitioning scheme that specifically handles mixed water/land regions commonly found in coastal simulations. The approach presented here represents a practical methodology to rejuvenate legacy code on a commodity blade cluster with reasonable effort; our solution has direct application to other similar codes in the geosciences.
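The water/land-aware decomposition can be sketched independently of EFDC. The hypothetical routine below splits grid columns among ranks so that each receives roughly the same number of water cells; the names and toy domain are assumptions, not the authors' code.

```python
# Load-balanced 1-D column partitioning for a mixed water/land grid.
import numpy as np

def partition_columns(is_water, nranks):
    """is_water: 2-D bool array (rows x cols); returns per-rank column ranges."""
    weight = is_water.sum(axis=0).astype(float)   # work per grid column
    target = weight.sum() / nranks
    bounds, acc, start = [], 0.0, 0
    for j, w in enumerate(weight):
        acc += w
        if acc >= target and len(bounds) < nranks - 1:
            bounds.append((start, j + 1))
            start, acc = j + 1, 0.0
    bounds.append((start, weight.size))
    return bounds

grid = np.ones((100, 90), dtype=bool)             # toy coastal domain
grid[:, :30] = False                              # left third is land
for r, (j0, j1) in enumerate(partition_columns(grid, 4)):
    print(f"rank {r}: columns {j0}:{j1}, water cells {grid[:, j0:j1].sum()}")
```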
Sacco, Federica; Paun, Bruno; Lehmkuhl, Oriol; Iles, Tinen L.; Iaizzo, Paul A.; Houzeaux, Guillaume; Vázquez, Mariano; Butakoff, Constantine; Aguado-Sierra, Jazmin
2018-01-01
The aim of the present study is to characterize the hemodynamics of left ventricular (LV) geometries to examine the impact of trabeculae and papillary muscles (PMs) on blood flow using high performance computing (HPC). Five pairs of detailed and smoothed LV endocardium models were reconstructed from high-resolution magnetic resonance images (MRI) of ex-vivo human hearts. The detailed model of one LV pair is characterized only by the PMs and few big trabeculae, to represent state of art level of endocardial detail. The other four detailed models obtained include instead endocardial structures measuring ≥1 mm² in cross-sectional area. The geometrical characterizations were done using computational fluid dynamics (CFD) simulations with rigid walls and both constant and transient flow inputs on the detailed and smoothed models for comparison. These simulations do not represent a clinical or physiological scenario, but a characterization of the interaction of endocardial structures with blood flow. Steady flow simulations were employed to quantify the pressure drop between the inlet and the outlet of the LVs and the wall shear stress (WSS). Coherent structures were analyzed using the Q-criterion for both constant and transient flow inputs. Our results show that trabeculae and PMs increase the intra-ventricular pressure drop, reduce the WSS and disrupt the dominant single vortex, usually present in the smoothed-endocardium models, generating secondary small vortices. Given that obtaining high resolution anatomical detail is challenging in-vivo, we propose that the effect of trabeculations can be incorporated into smoothed ventricular geometries by adding a porous layer along the LV endocardial wall. Results show that a porous layer of a thickness of 1.2·10⁻² m with a porosity of 20 kg/m² on the smoothed-endocardium ventricle models approximates the pressure drops, vorticities and WSS observed in the detailed models. PMID:29760665
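As a hedged aside on implementation, one standard way to realize such a porous layer (an assumption here, not necessarily the formulation the authors used) is a Darcy-type momentum sink restricted to the near-wall layer:

```latex
\[
\rho\left(\frac{\partial \mathbf{u}}{\partial t}
        + \mathbf{u}\cdot\nabla\mathbf{u}\right)
  = -\nabla p + \mu\,\nabla^{2}\mathbf{u}
    \;-\; \chi_{\mathrm{layer}}(\mathbf{x})\,\frac{\mu}{k}\,\mathbf{u},
\]
% chi_layer is 1 inside the porous layer and 0 elsewhere; the permeability k
% tunes how strongly the layer damps the flow, standing in for the resistance
% of the unresolved trabeculae.
```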
Accelerating cardiac bidomain simulations using graphics processing units.
Neic, A; Liebmann, M; Hoetzl, E; Mitchell, L; Vigmond, E J; Haase, G; Plank, G
2012-08-01
Anatomically realistic and biophysically detailed multiscale computer models of the heart are playing an increasingly important role in advancing our understanding of integrated cardiac function in health and disease. Such detailed simulations, however, are computationally vastly demanding, which is a limiting factor for a wider adoption of in-silico modeling. While current trends in high-performance computing (HPC) hardware promise to alleviate this problem, exploiting the potential of such architectures remains challenging since strongly scalable algorithms are necessitated to reduce execution times. Alternatively, acceleration technologies such as graphics processing units (GPUs) are being considered. While the potential of GPUs has been demonstrated in various applications, benefits in the context of bidomain simulations where large sparse linear systems have to be solved in parallel with advanced numerical techniques are less clear. In this study, the feasibility of multi-GPU bidomain simulations is demonstrated by running strong scalability benchmarks using a state-of-the-art model of rabbit ventricles. The model is spatially discretized using the finite element methods (FEM) on fully unstructured grids. The GPU code is directly derived from a large pre-existing code, the Cardiac Arrhythmia Research Package (CARP), with very minor perturbation of the code base. Overall, bidomain simulations were sped up by a factor of 11.8 to 16.3 in benchmarks running on 6-20 GPUs compared to the same number of CPU cores. To match the fastest GPU simulation which engaged 20 GPUs, 476 CPU cores were required on a national supercomputing facility.
Protein free energy landscapes from long equilibrium simulations
NASA Astrophysics Data System (ADS)
Piana-Agostinetti, Stefano
Many computational techniques based on molecular dynamics (MD) simulation can be used to generate data to aid in the construction of protein free energy landscapes with atomistic detail. Unbiased, long, equilibrium MD simulations--although computationally very expensive--are particularly appealing, as they can provide direct kinetic and thermodynamic information on the transitions between the states that populate a protein free energy surface. It can be challenging to know how to analyze and interpret even results generated by this direct technique, however. I will discuss approaches we have employed, using equilibrium MD simulation data, to obtain descriptions of the free energy landscapes of proteins ranging in size from tens to thousands of amino acids.
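One reason long equilibrium trajectories are appealing is that thermodynamics falls out directly: relative free energies of states follow from their equilibrium populations. A minimal sketch with synthetic state assignments (clustering a real trajectory into states is outside this illustration):

```python
# Relative free energies from state populations: dG_i = -kT ln(p_i / p_ref).
import numpy as np

kT = 0.593                                               # kcal/mol near 298 K
rng = np.random.default_rng(2)
labels = rng.choice(3, size=100_000, p=[0.7, 0.2, 0.1])  # fake state labels

p = np.bincount(labels, minlength=3) / labels.size
dG = -kT * np.log(p / p.max())                           # relative to the top state
for i in range(3):
    print(f"state {i}: p = {p[i]:.3f}, dG = {dG[i]:.2f} kcal/mol")
```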
Summary Report of Working Group 2: Computation
NASA Astrophysics Data System (ADS)
Stoltz, P. H.; Tsung, R. S.
2009-01-01
The working group on computation addressed three physics areas: (i) plasma-based accelerators (laser-driven and beam-driven), (ii) high gradient structure-based accelerators, and (iii) electron beam sources and transport [1]. Highlights of the talks in these areas included new models of breakdown on the microscopic scale, new three-dimensional multipacting calculations with both finite difference and finite element codes, and detailed comparisons of new electron gun models with standard models such as PARMELA. The group also addressed two areas of advances in computation: (i) new algorithms, including simulation in a Lorentz-boosted frame that can reduce computation time by orders of magnitude, and (ii) new hardware architectures, like graphics processing units and Cell processors, that promise dramatic increases in computing power. Highlights of the talks in these areas included results from the first large-scale parallel finite element particle-in-cell (PIC) code, as well as a many-order-of-magnitude speedup of, and details of porting, the VPIC code to the Roadrunner supercomputer. The working group featured two plenary talks, one by Brian Albright of Los Alamos National Laboratory on the performance of the VPIC code on the Roadrunner supercomputer, and one by David Bruhwiler of Tech-X Corporation on recent advances in computation for advanced accelerators. Highlights of the talk by Albright included the first one trillion particle simulations, a sustained performance of 0.3 petaflops, and an eight times speedup of science calculations, including back-scatter in laser-plasma interaction. Highlights of the talk by Bruhwiler included simulations of 10 GeV laser wakefield accelerator stages including external injection, and new developments in electromagnetic simulations of electron guns using finite difference and finite element approaches.
A Computational Observer For Performing Contrast-Detail Analysis Of Ultrasound Images
NASA Astrophysics Data System (ADS)
Lopez, H.; Loew, M. H.
1988-06-01
Contrast-Detail (C/D) analysis allows the quantitative determination of an imaging system's ability to display a range of varying-size targets as a function of contrast. Using this technique, a contrast-detail plot is obtained which can, in theory, be used to compare image quality from one imaging system to another. The C/D plot, however, is usually obtained by using data from human observer readings. We have shown earlier (7) that the performance of human observers in the task of threshold detection of simulated lesions embedded in random ultrasound noise is highly inaccurate and non-reproducible for untrained observers. We present an objective, computational method for the determination of the C/D curve for ultrasound images. This method utilizes digital images of the C/D phantom developed at CDRH, and lesion-detection algorithms that simulate the Bayesian approach using the likelihood function for an ideal observer. We present the results of this method, and discuss the relationship to the human observer and to the comparability of image quality between systems.
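For a known lesion profile in approximately white noise, the ideal observer's likelihood-ratio test reduces to a matched filter. A hedged Python sketch of such a computational reader over a small contrast-detail grid follows; lesion sizes, contrasts, and the Gaussian noise model are illustrative assumptions, not the CDRH phantom.

```python
# Matched-filter (ideal observer) detectability over a contrast-detail grid.
import numpy as np

def disc(n, radius):
    y, x = np.mgrid[:n, :n] - n // 2
    return (x * x + y * y <= radius * radius).astype(float)

rng = np.random.default_rng(3)
n, sigma, trials = 64, 1.0, 200
for radius in (2, 4, 8):                       # "detail" axis
    t = disc(n, radius)
    t /= np.linalg.norm(t)                     # unit-norm template
    for contrast in (0.1, 0.3, 1.0):           # "contrast" axis
        s_bkg, s_sig = [], []
        for _ in range(trials):
            noise = sigma * rng.standard_normal((n, n))
            s_bkg.append(np.sum(t * noise))
            s_sig.append(np.sum(t * (noise + contrast * disc(n, radius))))
        d = (np.mean(s_sig) - np.mean(s_bkg)) / sigma   # detectability index d'
        print(f"radius {radius}, contrast {contrast}: d' = {d:.2f}")
# A C/D curve is then the contour of constant (threshold) d' in this grid.
```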
NASA Technical Reports Server (NTRS)
Tweedt, Daniel L.
2014-01-01
Computational aerodynamic simulations of a 1484 ft/sec tip speed quiet high-speed fan system were performed at five different operating points on the fan operating line, in order to provide detailed internal flow field information for use with fan acoustic prediction methods presently being developed, assessed and validated. The fan system is a sub-scale, low-noise research fan/nacelle model that has undergone experimental testing in the 9- by 15-foot Low Speed Wind Tunnel at the NASA Glenn Research Center. Details of the fan geometry, the computational fluid dynamics methods, the computational grids, and various computational parameters relevant to the numerical simulations are discussed. Flow field results for three of the five operating points simulated are presented in order to provide a representative look at the computed solutions. Each of the five fan aerodynamic simulations involved the entire fan system, which includes a core duct and a bypass duct that merge upstream of the fan system nozzle. As a result, only fan rotational speed and the system bypass ratio, set by means of a translating nozzle plug, were adjusted in order to set the fan operating point, leading to operating points that lie on a fan operating line and making mass flow rate a fully dependent parameter. The resulting mass flow rates are in good agreement with measurement values. Computed blade row flow fields at all fan operating points are, in general, aerodynamically healthy. Rotor blade and fan exit guide vane flow characteristics are good, including incidence and deviation angles, chordwise static pressure distributions, blade surface boundary layers, secondary flow structures, and blade wakes. Examination of the computed flow fields reveals no excessive or critical boundary layer separations or related secondary-flow problems, with the exception of the hub boundary layer at the core duct entrance, where a significant flow separation is present. The region of local flow recirculation extends through a mixing plane, however, and the particular mixing-plane model used is now known to exaggerate the recirculation. In any case, the flow separation has relatively little impact on the computed rotor and FEGV flow fields.
Strong scaling and speedup to 16,384 processors in cardiac electro-mechanical simulations.
Reumann, Matthias; Fitch, Blake G; Rayshubskiy, Aleksandr; Keller, David U J; Seemann, Gunnar; Dossel, Olaf; Pitman, Michael C; Rice, John J
2009-01-01
High performance computing is required to make feasible simulations of whole organ models of the heart with biophysically detailed cellular models in a clinical setting. Increasing model detail by simulating electrophysiology and mechanical models increases computation demands. We present scaling results of an electro-mechanical cardiac model of two ventricles and compare them to our previously published results using an electrophysiological model only. The anatomical data-set was given by both ventricles of the Visible Female data-set in a 0.2 mm resolution. Fiber orientation was included. Data decomposition for the distribution onto the distributed memory system was carried out by orthogonal recursive bisection. Load weight ratios for non-tissue vs. tissue elements used in the data decomposition were 1:1, 1:2, 1:5, 1:10, 1:25, 1:38.85, 1:50 and 1:100. The ten Tusscher et al. (2004) electrophysiological cell model was used, together with the Rice et al. (1999) model for the computation of the calcium-transient-dependent force. Scaling results for 512, 1024, 2048, 4096, 8192 and 16,384 processors were obtained for 1 ms simulation time. The simulations were carried out on an IBM Blue Gene/L supercomputer. The results show linear scaling from 512 to 16,384 processors, with speedup factors between 1.82 and 2.14 between partitions. The optimal load ratio was 1:25 on all partitions. However, a shift towards load ratios with higher weight for the tissue elements can be recognized, as can be expected when adding computational complexity to the model while keeping the same communication setup. This work demonstrates that it is potentially possible to run simulations of 0.5 s using the presented electro-mechanical cardiac model within 1.5 hours.
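Orthogonal recursive bisection with weighted elements is straightforward to sketch. The hypothetical routine below recursively halves the point set along its longest axis so each half carries about half of the weighted load; the tissue mask and 1:25-style weights are placeholders, not the study's decomposition code.

```python
# Weighted orthogonal recursive bisection (nparts must be a power of two).
import numpy as np

def orb(points, weights, nparts):
    if nparts == 1:
        return [points]
    axis = np.argmax(points.max(axis=0) - points.min(axis=0))  # longest extent
    order = np.argsort(points[:, axis])
    csum = np.cumsum(weights[order])
    cut = np.searchsorted(csum, csum[-1] / 2.0)                # half the load
    left, right = order[:cut], order[cut:]
    return (orb(points[left], weights[left], nparts // 2) +
            orb(points[right], weights[right], nparts // 2))

rng = np.random.default_rng(4)
pts = rng.random((10000, 3))
w = np.where(pts[:, 0] < 0.3, 25.0, 1.0)       # fake tissue mask, 1:25 weighting
parts = orb(pts, w, 8)
print([len(p) for p in parts])                 # tissue-heavy parts hold fewer cells
```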
Simulation system architecture design for generic communications link
NASA Technical Reports Server (NTRS)
Tsang, Chit-Sang; Ratliff, Jim
1986-01-01
This paper addresses a computer simulation system architecture design for generic digital communications systems. It examines the issues of an overall system architecture in order to achieve a user-friendly, efficient, and yet easily implementable simulation system. The system block diagram and its individual functional components are described in detail. Software implementation is discussed with the VAX/VMS operating system used as a target environment.
Evaluation of a Text Compression Algorithm Against Computer-Aided Instruction (CAI) Material.
ERIC Educational Resources Information Center
Knight, Joseph M., Jr.
This report describes the initial evaluation of a text compression algorithm against computer assisted instruction (CAI) material. A review of some concepts related to statistical text compression is followed by a detailed description of a practical text compression algorithm. A simulation of the algorithm was programmed and used to obtain…
Providing scalable system software for high-end simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Greenberg, D.
1997-12-31
Detailed, full-system, complex physics simulations have been shown to be feasible on systems containing thousands of processors. In order to manage these computer systems it has been necessary to create scalable system services. In this talk Sandia's research on scalable systems will be described. The key concepts of low overhead data movement through portals and of flexible services through multi-partition architectures will be illustrated in detail. The talk will conclude with a discussion of how these techniques can be applied outside of the standard monolithic MPP system.
NASA Technical Reports Server (NTRS)
Assanis, D. N.; Ekchian, J. E.; Frank, R. M.; Heywood, J. B.
1985-01-01
A computer simulation of the turbocharged turbocompounded direct-injection diesel engine system was developed in order to study the performance characteristics of the total system as major design parameters and materials are varied. Quasi-steady flow models of the compressor, turbines, manifolds, intercooler, and ducting are coupled with a multicylinder reciprocator diesel model, where each cylinder undergoes the same thermodynamic cycle. The master cylinder model describes the reciprocator intake, compression, combustion and exhaust processes in sufficient detail to define the mass and energy transfers in each subsystem of the total engine system. Appropriate thermal loading models relate the heat flow through critical system components to material properties and design details. From this information, the simulation predicts the performance gains and assesses the system design trade-offs that would result from the introduction of selected heat transfer reduction materials in key system components, over a range of operating conditions.
Recent advances in computational methodology for simulation of mechanical circulatory assist devices
Marsden, Alison L.; Bazilevs, Yuri; Long, Christopher C.; Behr, Marek
2014-01-01
Ventricular assist devices (VADs) provide mechanical circulatory support to offload the work of one or both ventricles during heart failure. They are used in the clinical setting as destination therapy, as bridge to transplant, or more recently as bridge to recovery to allow for myocardial remodeling. Recent developments in computational simulation allow for detailed assessment of VAD hemodynamics for device design and optimization for both children and adults. Here, we provide a focused review of the recent literature on finite element methods and optimization for VAD simulations. As VAD designs typically fall into two categories, pulsatile and continuous flow devices, we separately address computational challenges of both types of designs, and the interaction with the circulatory system with three representative case studies. In particular, we focus on recent advancements in finite element methodology that has increased the fidelity of VAD simulations. We outline key challenges, which extend to the incorporation of biological response such as thrombosis and hemolysis, as well as shape optimization methods and challenges in computational methodology. PMID:24449607
Capsule modeling of high foot implosion experiments on the National Ignition Facility
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clark, D. S.; Kritcher, A. L.; Milovich, J. L.
2017-03-21
This study summarizes the results of detailed, capsule-only simulations of a set of high foot implosion experiments conducted on the National Ignition Facility (NIF). These experiments span a range of ablator thicknesses, laser powers, and laser energies, and modeling these experiments as a set is important to assess whether the simulation model can reproduce the trends seen experimentally as the implosion parameters were varied. Two-dimensional (2D) simulations have been run including a number of effects—both nominal and off-nominal—such as hohlraum radiation asymmetries, surface roughness, the capsule support tent, and hot electron pre-heat. Selected three-dimensional simulations have also been run to assess the validity of the 2D axisymmetric approximation. As a composite, these simulations represent the current state of understanding of NIF high foot implosion performance using the best and most detailed computational model available. While the most detailed simulations show approximate agreement with the experimental data, it is evident that the model remains incomplete and further refinements are needed. Nevertheless, avenues for improved performance are clearly indicated.
Details of insect wing design and deformation enhance aerodynamic function and flight efficiency.
Young, John; Walker, Simon M; Bomphrey, Richard J; Taylor, Graham K; Thomas, Adrian L R
2009-09-18
Insect wings are complex structures that deform dramatically in flight. We analyzed the aerodynamic consequences of wing deformation in locusts using a three-dimensional computational fluid dynamics simulation based on detailed wing kinematics. We validated the simulation against smoke visualizations and digital particle image velocimetry on real locusts. We then used the validated model to explore the effects of wing topography and deformation, first by removing camber while keeping the same time-varying twist distribution, and second by removing camber and spanwise twist. The full-fidelity model achieved greater power economy than the uncambered model, which performed better than the untwisted model, showing that the details of insect wing topography and deformation are important aerodynamically. Such details are likely to be important in engineering applications of flapping flight.
Aeroelastic-Acoustics Simulation of Flight Systems
NASA Technical Reports Server (NTRS)
Gupta, Kajal K.; Choi, S.; Ibrahim, A.
2009-01-01
This paper describes the details of a numerical finite element (FE) based analysis procedure, and a resulting code, for the simulation of acoustic phenomena arising from aeroelastic interactions. Both CFD and structural simulations are based on FE discretization employing unstructured grids. The sound pressure level (SPL) on structural surfaces is calculated from the root mean square (RMS) of the unsteady pressure, and the acoustic wave frequencies are computed from a fast Fourier transform (FFT) of the unsteady pressure distribution as a function of time. The resulting tool proves to be unique as it is designed to analyze complex practical problems, involving large scale computations, in a routine fashion.
NASA Astrophysics Data System (ADS)
Doronin, Alexander; Meglinski, Igor
2017-02-01
This report considers the development of a unified Monte Carlo (MC) based computational model for simulating the propagation of Laguerre-Gaussian (LG) beams in turbid tissue-like scattering media. With the primary goal of proving the concept of using complex light for tissue diagnosis, we explore the propagation of LG beams in comparison with Gaussian beams for both linear and circular polarization. MC simulations of radially and azimuthally polarized LG beams in turbid media have been performed; classic phenomena such as preservation of the orbital angular momentum, optical memory, and helicity flip are observed, and a detailed comparison is presented and discussed.
Beat frequency interference pattern characteristics study
NASA Technical Reports Server (NTRS)
Ott, J. H.; Rice, J. S.
1981-01-01
The frequency spectra and corresponding beat frequencies created by the relative motions between multiple Solar Power Satellites due to solar wind, lunar gravity, etc. were analyzed. The results were derived mathematically and verified through computer simulation. Frequency spectra plots were computer generated. Detailed computations were made for the seven following locations in the continental US: Houston, Tx.; Seattle, Wa.; Miami, Fl.; Chicago, Il.; New York, NY; Los Angeles, Ca.; and Barberton, Oh.
Simulation based planning of surgical interventions in pediatric cardiology
NASA Astrophysics Data System (ADS)
Marsden, Alison L.
2013-10-01
Hemodynamics plays an essential role in the progression and treatment of cardiovascular disease. However, while medical imaging provides increasingly detailed anatomical information, clinicians often have limited access to hemodynamic data that may be crucial to patient risk assessment and treatment planning. Computational simulations can now provide detailed hemodynamic data to augment clinical knowledge in both adult and pediatric applications. There is a particular need for simulation tools in pediatric cardiology, due to the wide variation in anatomy and physiology in congenital heart disease patients, necessitating individualized treatment plans. Despite great strides in medical imaging, enabling extraction of flow information from magnetic resonance and ultrasound imaging, simulations offer predictive capabilities that imaging alone cannot provide. Patient specific simulations can be used for in silico testing of new surgical designs, treatment planning, device testing, and patient risk stratification. Furthermore, simulations can be performed at no direct risk to the patient. In this paper, we outline the current state of the art in methods for cardiovascular blood flow simulation and virtual surgery. We then step through pressing challenges in the field, including multiscale modeling, boundary condition selection, optimization, and uncertainty quantification. Finally, we summarize simulation results of two representative examples from pediatric cardiology: single ventricle physiology, and coronary aneurysms caused by Kawasaki disease. These examples illustrate the potential impact of computational modeling tools in the clinical setting.
Computational Biochemistry-Enzyme Mechanisms Explored.
Culka, Martin; Gisdon, Florian J; Ullmann, G Matthias
2017-01-01
Understanding enzyme mechanisms is a major task to achieve in order to comprehend how living cells work. Recent advances in biomolecular research provide huge amounts of data on enzyme kinetics and structure. The analysis of diverse experimental results and their combination into an overall picture is, however, often challenging. Microscopic details of the enzymatic processes are often anticipated based on several hints from macroscopic experimental data. Computational biochemistry aims at the creation of a computational model of an enzyme in order to explain microscopic details of the catalytic process and reproduce or predict macroscopic experimental findings. Results of such computations are in part complementary to experimental data and provide an explanation of a biochemical process at the microscopic level. In order to evaluate the mechanism of an enzyme, a structural model is constructed which can be analyzed by several theoretical approaches. Several simulation methods can and should be combined to get a reliable picture of the process of interest. Furthermore, abstract models of biological systems can be constructed combining computational and experimental data. In this review, we discuss structural computational models of enzymatic systems. We first discuss various models to simulate enzyme catalysis. Furthermore, we review various approaches to characterize the enzyme mechanism both qualitatively and quantitatively using different modeling approaches. © 2017 Elsevier Inc. All rights reserved.
Three-dimensional surgical simulation.
Cevidanes, Lucia H C; Tucker, Scott; Styner, Martin; Kim, Hyungmin; Chapuis, Jonas; Reyes, Mauricio; Proffit, William; Turvey, Timothy; Jaskolka, Michael
2010-09-01
In this article, we discuss the development of methods for computer-aided jaw surgery, which allow us to incorporate the high level of precision necessary for transferring virtual plans into the operating room. We also present a complete computer-aided surgery system developed in close collaboration with surgeons. Surgery planning and simulation include construction of 3-dimensional surface models from cone-beam computed tomography, dynamic cephalometry, semiautomatic mirroring, interactive cutting of bone, and bony segment repositioning. A virtual setup can be used to manufacture positioning splints for intraoperative guidance. The system provides further intraoperative assistance with a computer display showing jaw positions and 3-dimensional positioning guides updated in real time during the surgical procedure. The computer-aided surgery system aids in dealing with complex cases, with benefits for the patient, for surgical practice, and for orthodontic finishing. Advanced software tools for diagnosis and treatment planning allow preparation of detailed operative plans, osteotomy repositioning, bone reconstructions, surgical resident training, and assessing the difficulties of the surgical procedures before the surgery. Computer-aided surgery can make the elaboration of the surgical plan a more flexible process, increase the level of detail and accuracy of the plan, yield higher operative precision and control, and enhance documentation of cases. © 2010 American Association of Orthodontists. Published by Mosby, Inc. All rights reserved.
The MasPar MP-1 As a Computer Arithmetic Laboratory
Anuta, Michael A.; Lozier, Daniel W.; Turner, Peter R.
1996-01-01
This paper is a blueprint for the use of a massively parallel SIMD computer architecture for the simulation of various forms of computer arithmetic. The particular system used is a DEC/MasPar MP-1 with 4096 processors in a square array. This architecture has many advantages for such simulations due largely to the simplicity of the individual processors. Arithmetic operations can be spread across the processor array to simulate a hardware chip. Alternatively they may be performed on individual processors to allow simulation of a massively parallel implementation of the arithmetic. Compromises between these extremes permit speed-area tradeoffs to be examined. The paper includes a description of the architecture and its features. It then summarizes some of the arithmetic systems which have been, or are to be, implemented. The implementation of the level-index and symmetric level-index, LI and SLI, systems is described in some detail. An extensive bibliography is included. PMID:27805123
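The level-index (LI) representation mentioned here has a compact definition that a few lines make concrete: the LI image of x ≥ 0 is ψ(x) = l + f, where l counts the repeated natural logarithms needed to bring x below 1 and f ∈ [0, 1) is the residue; the generalized exponential inverts it. This sketch is illustrative only and is unrelated to the paper's MasPar implementation.

```python
import math

def to_level_index(x):
    """psi(x) = l + f for x >= 0: peel logs until the residue drops below 1."""
    level = 0
    while x >= 1.0:
        x = math.log(x)
        level += 1
    return level + x

def from_level_index(psi):
    """Generalized exponential: invert to_level_index."""
    level, f = int(psi), psi % 1.0
    for _ in range(level):
        f = math.exp(f)
    return f

x = 1e300                   # far into the range where fixed exponents struggle
psi = to_level_index(x)     # about 4.63: a small, slowly growing image
assert abs(from_level_index(psi) - x) / x < 1e-10
```

The appeal for arithmetic design is that ψ grows extremely slowly, so overflow and underflow effectively disappear at the cost of more involved add/multiply algorithms.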
Tactical Action Officer Intelligent Tutoring System (TAO ITS)
2006-01-01
scenario. As well as the intrinsic feedback that free-play simulations naturally provide a student, the TAO ITS provides detailed, useful extrinsic feedback...incorporate use of free-play simulators into their curriculum, affordably. This is a major shortcoming of conventional CBT as student manipulation of...tutoring systems are ideal for incorporating desktop free-play simulators into computer-based training since the software can stand in for a human
Pesce, Lorenzo L.; Lee, Hyong C.; Hereld, Mark; ...
2013-01-01
Our limited understanding of the relationship between the behavior of individual neurons and large neuronal networks is an important limitation in current epilepsy research and may be one of the main causes of our inadequate ability to treat it. Addressing this problem directly via experiments is impossibly complex; thus, we have been developing and studying medium-large-scale simulations of detailed neuronal networks to guide us. Flexibility in the connection schemas and a complete description of the cortical tissue seem necessary for this purpose. In this paper we examine some of the basic issues encountered in these multiscale simulations. We have determined the detailed behavior of two such simulators on parallel computer systems. The observed memory and computation-time scaling behavior for a distributed memory implementation was very good over the range studied, both in terms of network sizes (2,000 to 400,000 neurons) and processor pool sizes (1 to 256 processors). Our simulations required between a few megabytes and about 150 gigabytes of RAM and lasted between a few minutes and about a week, well within the capability of most multinode clusters. Therefore, simulations of epileptic seizures on networks with millions of cells should be feasible on current supercomputers.
22 CFR 124.2 - Exemptions for training and military service.
Code of Federal Regulations, 2010 CFR
2010-04-01
... methods and tools include the development and/or use of mockups, computer models and simulations, and test facilities. (iii) Manufacturing know-how, such as: Information that provides detailed manufacturing processes...
Acceleration techniques for dependability simulation. M.S. Thesis
NASA Technical Reports Server (NTRS)
Barnette, James David
1995-01-01
As computer systems increase in complexity, the need to project system performance from the earliest design and development stages increases. We have to employ simulation for detailed dependability studies of large systems. However, as the complexity of the simulation model increases, the time required to obtain statistically significant results also increases. This paper discusses an approach that is application independent and can be readily applied to any process-based simulation model. Topics include background on classical discrete event simulation and techniques for random variate generation and statistics gathering to support simulation.
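As a concrete instance of the random-variate and statistics-gathering machinery such process-based simulators rely on, inverse-transform sampling of exponential inter-event times plus an online mean/variance accumulator fits in a few lines. This is a generic sketch, not code from the thesis.

```python
import math
import random

def exponential_variate(rate, rng=random.random):
    """Inverse-transform sample of an Exponential(rate) inter-event time.

    F(t) = 1 - exp(-rate * t), so t = -ln(1 - U) / rate for U ~ Uniform(0, 1).
    """
    u = rng()
    return -math.log(1.0 - u) / rate

def running_stats(samples):
    """Welford's online mean/variance: the statistics-gathering side."""
    n, mean, m2 = 0, 0.0, 0.0
    for x in samples:
        n += 1
        delta = x - mean
        mean += delta / n
        m2 += delta * (x - mean)
    return mean, m2 / (n - 1)

random.seed(1)
mean, var = running_stats(exponential_variate(2.0) for _ in range(100000))
# Expect mean ~ 0.5 and variance ~ 0.25 for rate = 2.
```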
Hunter, Kendall S.; Lanning, Craig J.; Chen, Shiuh-Yung J.; Zhang, Yanhang; Garg, Ruchira; Ivy, D. Dunbar; Shandas, Robin
2014-01-01
Clinical imaging methods are highly effective in the diagnosis of vascular pathologies, but they do not currently provide enough detail to shed light on the cause or progression of such diseases, and would be hard pressed to foresee the outcome of surgical interventions. Greater detail of and prediction capabilities for vascular hemodynamics and arterial mechanics are obtained here through the coupling of clinical imaging methods with computational techniques. Three-dimensional, patient-specific geometric reconstructions of the pediatric proximal pulmonary vasculature were obtained from x-ray angiogram images and meshed for use with commercial computational software. Two such models from hypertensive patients, one with multiple septal defects, the other who underwent vascular reactivity testing, were each completed with two sets of suitable fluid and structural initial and boundary conditions and used to obtain detailed transient simulations of artery wall motion and hemodynamics in both clinically measured and predicted configurations. The simulation of septal defect closure, in which input flow and proximal vascular stiffness were decreased, exhibited substantial decreases in proximal velocity, wall shear stress (WSS), and pressure in the post-op state. The simulation of vascular reactivity, in which distal vascular resistance and proximal vascular stiffness were decreased, displayed negligible changes in velocity and WSS but a significant drop in proximal pressure in the reactive state. This new patient-specific technique provides much greater detail regarding the function of the pulmonary circuit than can be obtained with current medical imaging methods alone, and holds promise for enabling surgical planning. PMID:16813447
NASA Technical Reports Server (NTRS)
Bailey, D. H.; Barszcz, E.; Barton, J. T.; Carter, R. L.; Lasinski, T. A.; Browning, D. S.; Dagum, L.; Fatoohi, R. A.; Frederickson, P. O.; Schreiber, R. S.
1991-01-01
A new set of benchmarks has been developed for the performance evaluation of highly parallel supercomputers in the framework of the NASA Ames Numerical Aerodynamic Simulation (NAS) Program. These consist of five 'parallel kernel' benchmarks and three 'simulated application' benchmarks. Together they mimic the computation and data movement characteristics of large-scale computational fluid dynamics applications. The principal distinguishing feature of these benchmarks is their 'pencil and paper' specification: all details of these benchmarks are specified only algorithmically. In this way many of the difficulties associated with conventional benchmarking approaches on highly parallel systems are avoided.
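To illustrate the 'pencil and paper' style, the embarrassingly parallel (EP) member of the suite is specified roughly as: generate uniform pairs, accept those inside the unit disk, transform them to Gaussian deviates, and tally the pairs by square annuli. The sketch below follows that outline but substitutes NumPy's generator for the benchmark's prescribed linear congruential sequence, so it is an illustration rather than a conforming implementation.

```python
import numpy as np

def ep_kernel(n_pairs, seed=0):
    """EP-style kernel: count Gaussian deviate pairs in square annuli."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1.0, 1.0, n_pairs)
    y = rng.uniform(-1.0, 1.0, n_pairs)
    t = x * x + y * y
    ok = (t <= 1.0) & (t > 0.0)            # acceptance step
    s = np.sqrt(-2.0 * np.log(t[ok]) / t[ok])
    gx, gy = x[ok] * s, y[ok] * s          # independent Gaussian deviates
    annulus = np.maximum(np.abs(gx), np.abs(gy)).astype(int)
    return np.bincount(annulus, minlength=10)[:10]

print(ep_kernel(1_000_000))  # counts fall off rapidly with annulus index
```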
2014-01-01
Background: Brownian dynamics (BD) simulations can be used to study very large molecular systems, such as models of the intracellular environment, using atomic-detail structures. Such simulations require strategies to contain the computational costs, especially for the computation of interaction forces and energies. A common approach is to compute interaction forces between macromolecules by precomputing their interaction potentials on three-dimensional discretized grids. For long-range interactions, such as electrostatics, grid-based methods are subject to finite size errors. We describe here the implementation of a Debye-Hückel correction to the grid-based electrostatic potential used in the SDA BD simulation software that was applied to simulate solutions of bovine serum albumin and of hen egg white lysozyme. Results: We found that the inclusion of the long-range electrostatic correction increased the accuracy of both the protein-protein interaction profiles and the protein diffusion coefficients at low ionic strength. Conclusions: An advantage of this method is the low additional computational cost required to treat long-range electrostatic interactions in large biomacromolecular systems. Moreover, the implementation described here for BD simulations of protein solutions can also be applied in implicit solvent molecular dynamics simulations that make use of gridded interaction potentials. PMID:25045516
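The Debye-Hückel correction itself is analytically simple: beyond the edges of the precomputed grids, the interaction between two net charges decays as a screened Coulomb potential. The sketch below is generic, not the SDA code; the protein charges, Debye length, and Bjerrum length are illustrative values.

```python
import numpy as np

def debye_huckel_energy(q1, q2, r_nm, kappa_per_nm, bjerrum_nm=0.7):
    """Screened Coulomb energy in units of kT.

    U/kT = lB * q1 * q2 * exp(-kappa * r) / r, where lB is the Bjerrum
    length (~0.7 nm for water at room temperature) and 1/kappa is the
    Debye screening length set by the ionic strength.
    """
    return bjerrum_nm * q1 * q2 * np.exp(-kappa_per_nm * r_nm) / r_nm

# Hypothetical net charges for two proteins; ~1 nm Debye length (~100 mM salt).
r = np.linspace(3.0, 20.0, 50)              # center-to-center distance, nm
u_screened = debye_huckel_energy(-8, +7, r, kappa_per_nm=1.0)
u_bare = debye_huckel_energy(-8, +7, r, kappa_per_nm=0.0)
# The correction matters in the region beyond the grids, where the screened
# energy is far smaller than the bare Coulomb value.
```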
Application of Simulation to Individualized Self-Paced Training. Final Report. TAEG Report No. 11-2.
ERIC Educational Resources Information Center
Lindahl, William H.; Gardner, James H.
Computer simulation is recognized as a valuable systems analysis research tool which enables the detailed examination, evaluation, and manipulation, under stated conditions, of a system without direct action on the system. This technique provides management with quantitative data on system performance and capabilities which can be used to compare…
Temperature control simulation for a microwave transmitter cooling system. [deep space network
NASA Technical Reports Server (NTRS)
Yung, C. S.
1980-01-01
The thermal performance of a temperature control system for the antenna microwave transmitter (klystron tube) of the Deep Space Network antenna tracking system is discussed. In particular, the mathematical model is presented, along with details of a computer program written for the system simulation and performance parameterization. Analytical expressions are also presented.
Computational Investigation of Fluidic Counterflow Thrust Vectoring
NASA Technical Reports Server (NTRS)
Hunter, Craig A.; Deere, Karen A.
1999-01-01
A computational study of fluidic counterflow thrust vectoring has been conducted. Two-dimensional numerical simulations were run using the computational fluid dynamics code PAB3D with two-equation turbulence closure and linear Reynolds stress modeling. For validation, computational results were compared to experimental data obtained at the NASA Langley Jet Exit Test Facility. In general, computational results were in good agreement with experimental performance data, indicating that efficient thrust vectoring can be obtained with low secondary flow requirements (less than 1% of the primary flow). An examination of the computational flowfield has revealed new details about the generation of a countercurrent shear layer, its relation to secondary suction, and its role in thrust vectoring. In addition to providing new information about the physics of counterflow thrust vectoring, this work appears to be the first documented attempt to simulate the counterflow thrust vectoring problem using computational fluid dynamics.
NASA Astrophysics Data System (ADS)
Babaev, A. A.; Pivovarov, Yu L.
2010-04-01
Resonant coherent excitation (RCE) of relativistic hydrogen-like ions is investigated by computer simulation methods. The suggested theoretical model is applied to simulations of recent experiments on RCE of 390 MeV/u Ar17+ ions under (220) planar channeling in a Si crystal performed by T. Azuma et al. at HIMAC (Tokyo). Theoretical results are in good agreement with these experimental data and clearly show the appearance of the doublet structure of RCE peaks. The simulations are also extended to greater ion energies in order to predict new RCE features at the future accelerator facility FAIR (GSI); as an example, RCE of 11 GeV/u U91+ ions is considered in detail.
NASA Astrophysics Data System (ADS)
Engquist, Björn; Frederick, Christina; Huynh, Quyen; Zhou, Haomin
2017-06-01
We present a multiscale approach for identifying features in ocean beds by solving inverse problems in high frequency seafloor acoustics. The setting is based on Sound Navigation And Ranging (SONAR) imaging used in scientific, commercial, and military applications. The forward model incorporates multiscale simulations, by coupling Helmholtz equations and geometrical optics for a wide range of spatial scales in the seafloor geometry. This allows for detailed recovery of seafloor parameters including material type. Simulated backscattered data is generated using numerical microlocal analysis techniques. In order to lower the computational cost of the large-scale simulations in the inversion process, we take advantage of a pre-computed library of representative acoustic responses from various seafloor parameterizations.
A parallel implementation of an off-lattice individual-based model of multicellular populations
NASA Astrophysics Data System (ADS)
Harvey, Daniel G.; Fletcher, Alexander G.; Osborne, James M.; Pitt-Francis, Joe
2015-07-01
As computational models of multicellular populations include ever more detailed descriptions of biophysical and biochemical processes, the computational cost of simulating such models limits their ability to generate novel scientific hypotheses and testable predictions. While developments in microchip technology continue to increase the power of individual processors, parallel computing offers an immediate increase in available processing power. To make full use of parallel computing technology, it is necessary to develop specialised algorithms. To this end, we present a parallel algorithm for a class of off-lattice individual-based models of multicellular populations. The algorithm divides the spatial domain between computing processes and comprises communication routines that ensure the model is correctly simulated on multiple processors. The parallel algorithm is shown to accurately reproduce the results of a deterministic simulation performed using a pre-existing serial implementation. We test the scaling of computation time, memory use and load balancing as more processes are used to simulate a cell population of fixed size. We find approximate linear scaling of both speed-up and memory consumption on up to 32 processor cores. Dynamic load balancing is shown to provide speed-up for non-regular spatial distributions of cells in the case of a growing population.
TH-E-18A-01: Developments in Monte Carlo Methods for Medical Imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Badal, A; Zbijewski, W; Bolch, W
Monte Carlo simulation methods are widely used in medical physics research and are starting to be implemented in clinical applications such as radiation therapy planning systems. Monte Carlo simulations offer the capability to accurately estimate quantities of interest that are challenging to measure experimentally while taking into account the realistic anatomy of an individual patient. Traditionally, practical application of Monte Carlo simulation codes in diagnostic imaging was limited by the need for large computational resources or long execution times. However, recent advancements in high-performance computing hardware, combined with a new generation of Monte Carlo simulation algorithms and novel postprocessing methods, are allowing for the computation of relevant imaging parameters of interest such as patient organ doses and scatter-to-primary ratios in radiographic projections in just a few seconds using affordable computational resources. Programmable Graphics Processing Units (GPUs), for example, provide a convenient, affordable platform for parallelized Monte Carlo executions that yield simulation rates on the order of 10^7 x-rays/s. Even with GPU acceleration, however, Monte Carlo simulation times can be prohibitive for routine clinical practice. To reduce simulation times further, variance reduction techniques can be used to alter the probabilistic models underlying the x-ray tracking process, resulting in lower variance in the results without biasing the estimates. Other complementary strategies for further reductions in computation time are denoising of the Monte Carlo estimates and estimating (scoring) the quantity of interest at a sparse set of sampling locations (e.g. at a small number of detector pixels in a scatter simulation) followed by interpolation. Beyond reduction of the computational resources required for performing Monte Carlo simulations in medical imaging, the use of accurate representations of patient anatomy is crucial to the virtual generation of medical images and accurate estimation of radiation dose and other imaging parameters. For this, detailed computational phantoms of the patient anatomy must be utilized and implemented within the radiation transport code. Computational phantoms presently come in one of three format types, and in one of four morphometric categories. Format types include stylized (mathematical equation-based), voxel (segmented CT/MR images), and hybrid (NURBS and polygon mesh surfaces). Morphometric categories include reference (small library of phantoms by age at 50th height/weight percentile), patient-dependent (larger library of phantoms at various combinations of height/weight percentiles), patient-sculpted (phantoms altered to match the patient's unique outer body contour), and finally, patient-specific (an exact representation of the patient with respect to both body contour and internal anatomy). The existence and availability of these phantoms represent a very important advance for the simulation of realistic medical imaging applications using Monte Carlo methods. New Monte Carlo simulation codes need to be thoroughly validated before they can be used to perform novel research. Ideally, the validation process would involve comparison of results with those of an experimental measurement, but accurate replication of experimental conditions can be very challenging. It is very common to validate new Monte Carlo simulations by replicating previously published simulation results of similar experiments.
This process, however, is commonly problematic due to the lack of sufficient information in the published reports of previous work to be able to replicate the simulation in detail. To aid in this process, the AAPM Task Group 195 prepared a report in which six different imaging research experiments commonly performed using Monte Carlo simulations are described and their results provided. The simulation conditions of all six cases are specified in full detail, with all necessary data on material composition, source, geometry, scoring and other parameters. The results of these simulations when performed with the four most common publicly available Monte Carlo packages are also provided in tabular form. The Task Group 195 Report will be useful for researchers needing to validate their Monte Carlo work, and for trainees needing to learn Monte Carlo simulation methods. In this symposium we will review the recent advancements in high-performance computing hardware enabling the reduction in computational resources needed for Monte Carlo simulations in medical imaging. We will review variance reduction techniques commonly applied in Monte Carlo simulations of medical imaging systems and present implementation strategies for efficient combination of these techniques with GPU acceleration. Trade-offs involved in Monte Carlo acceleration by means of denoising and "sparse sampling" will be discussed. A method for rapid scatter correction in cone-beam CT (<5 min/scan) will be presented as an illustration of the simulation speeds achievable with optimized Monte Carlo simulations. We will also discuss the development, availability, and capability of the various combinations of computational phantoms for Monte Carlo simulation of medical imaging systems. Finally, we will review some examples of experimental validation of Monte Carlo simulations and will present the AAPM Task Group 195 Report. Learning Objectives: (1) Describe the advances in hardware available for performing Monte Carlo simulations in high performance computing environments. (2) Explain variance reduction, denoising and sparse sampling techniques available for reduction of computational time needed for Monte Carlo simulations of medical imaging. (3) List and compare the computational anthropomorphic phantoms currently available for more accurate assessment of medical imaging parameters in Monte Carlo simulations. (4) Describe experimental methods used for validation of Monte Carlo simulations in medical imaging. (5) Describe the AAPM Task Group 195 Report and its use for validation and teaching of Monte Carlo simulations in medical imaging.
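Among the variance reduction techniques discussed, implicit capture (survival weighting) with Russian roulette is the classic example: instead of killing photons by analogue absorption, each collision scales the statistical weight by the scatter probability, and low-weight histories are terminated unbiasedly. The 1D slab sketch below is purely illustrative and is not drawn from any of the packages the Task Group compared.

```python
import math
import random

def transmission_implicit_capture(mu_t, mu_s, thickness, n_histories, seed=1):
    """Estimate photon transmission through a 1D slab with implicit capture."""
    rng = random.Random(seed)
    survive = mu_s / mu_t            # scatter probability per collision
    transmitted = 0.0
    for _ in range(n_histories):
        x, direction, weight = 0.0, 1.0, 1.0
        while True:
            x += direction * (-math.log(rng.random()) / mu_t)  # free flight
            if x >= thickness:
                transmitted += weight        # score and end this history
                break
            if x < 0.0:
                break                        # escaped backwards
            weight *= survive                # implicit capture at collision
            direction = rng.choice((-1.0, 1.0))   # isotropic 1D scatter
            if weight < 0.01:                # Russian roulette on low weights
                if rng.random() < 0.5:
                    break
                weight *= 2.0
    return transmitted / n_histories

print(transmission_implicit_capture(mu_t=1.0, mu_s=0.5, thickness=2.0,
                                    n_histories=50000))
```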
Tsoukias, Nikolaos M; Goldman, Daniel; Vadapalli, Arjun; Pittman, Roland N; Popel, Aleksander S
2007-10-21
A detailed computational model is developed to simulate oxygen transport from a three-dimensional (3D) microvascular network to the surrounding tissue in the presence of hemoglobin-based oxygen carriers. The model accounts for nonlinear O(2) consumption, myoglobin-facilitated diffusion and nonlinear oxyhemoglobin dissociation in the RBCs and plasma. It also includes a detailed description of intravascular resistance to O(2) transport and is capable of incorporating realistic 3D microvascular network geometries. Simulations in this study were performed using a computer-generated microvascular architecture that mimics morphometric parameters for the hamster cheek pouch retractor muscle. Theoretical results are presented next to corresponding experimental data. Phosphorescence quenching microscopy provided PO(2) measurements at the arteriolar and venular ends of capillaries in the hamster retractor muscle before and after isovolemic hemodilution with three different hemodilutents: a non-oxygen-carrying plasma expander and two hemoglobin solutions with different oxygen affinities. Sample results in a microvascular network show an enhancement of diffusive shunting between arterioles, venules and capillaries and a decrease in hemoglobin's effectiveness for tissue oxygenation when its affinity for O(2) is decreased. Model simulations suggest that microvascular network anatomy can affect the optimal hemoglobin affinity for reducing tissue hypoxia. O(2) transport simulations in realistic representations of microvascular networks should provide a theoretical framework for choosing optimal parameter values in the development of hemoglobin-based blood substitutes.
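The nonlinear oxyhemoglobin dissociation such models include is often approximated with the Hill equation; this brief sketch shows the affinity effect the study examines. The P50 for the low-affinity carrier is a made-up illustrative value, not a parameter from the paper.

```python
import numpy as np

def hill_saturation(po2, p50, n=2.7):
    """Fractional hemoglobin O2 saturation via the Hill equation."""
    return po2**n / (p50**n + po2**n)

po2 = np.linspace(1.0, 100.0, 100)        # oxygen tension, mmHg
s_blood = hill_saturation(po2, p50=28.0)  # typical human blood
s_hboc = hill_saturation(po2, p50=50.0)   # hypothetical low-affinity carrier
# A higher P50 (lower affinity) unloads more O2 at tissue-like tensions,
# the trade-off behind the diffusive-shunting results described above.
```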
Novel 3D/VR interactive environment for MD simulations, visualization and analysis.
Doblack, Benjamin N; Allis, Tim; Dávila, Lilian P
2014-12-18
The increasing development of computing (hardware and software) in the last decades has impacted scientific research in many fields including materials science, biology, chemistry and physics among many others. A new computational system for the accurate and fast simulation and 3D/VR visualization of nanostructures is presented here, using the open-source molecular dynamics (MD) computer program LAMMPS. This alternative computational method uses modern graphics processors, NVIDIA CUDA technology and specialized scientific codes to overcome processing speed barriers common to traditional computing methods. In conjunction with a virtual reality system used to model materials, this enhancement allows the addition of accelerated MD simulation capability. The motivation is to provide a novel research environment which simultaneously allows visualization, simulation, modeling and analysis. The research goal is to investigate the structure and properties of inorganic nanostructures (e.g., silica glass nanosprings) under different conditions using this innovative computational system. The work presented outlines a description of the 3D/VR Visualization System and basic components, an overview of important considerations such as the physical environment, details on the setup and use of the novel system, a general procedure for the accelerated MD enhancement, technical information, and relevant remarks. The impact of this work is the creation of a unique computational system combining nanoscale materials simulation, visualization and interactivity in a virtual environment, which is both a research and teaching instrument at UC Merced.
A real time Pegasus propulsion system model for VSTOL piloted simulation evaluation
NASA Technical Reports Server (NTRS)
Mihaloew, J. R.; Roth, S. P.; Creekmore, R.
1981-01-01
A real time propulsion system modeling technique suitable for use in man-in-the-loop simulator studies was developed. This technique provides the system accuracy, stability, and transient response required for integrated aircraft and propulsion control system studies. A Pegasus-Harrier propulsion system was selected as a baseline for developing mathematical modeling and simulation techniques for VSTOL. Initially, static and dynamic propulsion system characteristics were modeled in detail to form a nonlinear aerothermodynamic digital computer simulation of a Pegasus engine. From this high fidelity simulation, a real time propulsion model was formulated by applying a piece-wise linear state variable methodology. A hydromechanical and water injection control system was also simulated. The real time dynamic model includes the detail and flexibility required for the evaluation of critical control parameters and propulsion component limits over a limited flight envelope. The model was programmed for interfacing with a Harrier aircraft simulation. Typical propulsion system simulation results are presented.
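A piecewise-linear state-variable model of this kind amounts to interpolating stored linearizations, x' = A(p)x + B(p)u, between operating points p and integrating them in real time. The two-state example below is hypothetical throughout (the real states, inputs, matrices, and schedule come from the nonlinear Pegasus simulation); it only illustrates the methodology.

```python
import numpy as np

# Hypothetical table of linearizations at three operating points,
# scheduled on normalized fan speed.
SPEEDS = np.array([0.6, 0.8, 1.0])
A_TAB = np.array([[[-2.0, 0.3], [0.1, -1.5]],
                  [[-2.5, 0.4], [0.1, -1.8]],
                  [[-3.0, 0.5], [0.2, -2.2]]])
B_TAB = np.array([[[1.0], [0.2]],
                  [[1.2], [0.3]],
                  [[1.5], [0.4]]])

def interp_matrix(tab, speeds, p):
    """Linearly interpolate a stacked matrix table at operating point p."""
    p = float(np.clip(p, speeds[0], speeds[-1]))
    i = min(int(np.searchsorted(speeds, p, side="right")), len(speeds) - 1)
    lo = max(i - 1, 0)
    w = 0.0 if i == lo else (p - speeds[lo]) / (speeds[i] - speeds[lo])
    return (1.0 - w) * tab[lo] + w * tab[i]

def step(x, u, p, dt=0.01):
    """One explicit Euler step of the scheduled linear model."""
    A = interp_matrix(A_TAB, SPEEDS, p)
    B = interp_matrix(B_TAB, SPEEDS, p)
    return x + dt * (A @ x + B @ u)

x = np.zeros(2)
for _ in range(200):                        # 2 s of simulated time
    x = step(x, u=np.array([1.0]), p=0.8)   # hold a mid-range operating point
```

The table lookup is what keeps the per-frame cost small enough for man-in-the-loop use, at the price of fidelity away from the stored operating points.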
Simulation framework for intelligent transportation systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ewing, T.; Doss, E.; Hanebutte, U.
1996-10-01
A simulation framework has been developed for a large-scale, comprehensive, scalable simulation of an Intelligent Transportation System (ITS). The simulator is designed for running on parallel computers and distributed (networked) computer systems, but can run on standalone workstations for smaller simulations. The simulator currently models instrumented smart vehicles with in-vehicle navigation units capable of optimal route planning and Traffic Management Centers (TMC). The TMC has probe vehicle tracking capabilities (display position and attributes of instrumented vehicles), and can provide two-way interaction with traffic to provide advisories and link times. Both the in-vehicle navigation module and the TMC feature detailed graphical user interfaces to support human-factors studies. Realistic modeling of variations of the posted driving speed is based on human factors studies that take into consideration weather, road conditions, driver personality and behavior, and vehicle type. The prototype has been developed on a distributed system of networked UNIX computers but is designed to run on parallel computers, such as ANL's IBM SP-2, for large-scale problems. A novel feature of the approach is that vehicles are represented by autonomous computer processes which exchange messages with other processes. The vehicles have a behavior model which governs route selection and driving behavior, and can react to external traffic events much like real vehicles. With this approach, the simulation is scalable to take advantage of emerging massively parallel processor (MPP) systems.
Identification of cost effective energy conservation measures
NASA Technical Reports Server (NTRS)
Bierenbaum, H. S.; Boggs, W. H.
1978-01-01
In addition to a successful program of readily implemented conservation actions for reducing building energy consumption at Kennedy Space Center, recent detailed analyses have identified further substantial savings for buildings representative of technical facilities designed when energy costs were low. The techniques employed for determination of these energy savings consisted of facility configuration analysis, power and lighting measurements, detailed computer simulations, and simulation verifications. Use of these methods resulted in identification of projected energy savings as large as $330,000 a year (approximately a two-year break-even period) in a single building. The application of these techniques to other commercial buildings is discussed.
3D numerical simulation of transient processes in hydraulic turbines
NASA Astrophysics Data System (ADS)
Cherny, S.; Chirkov, D.; Bannikov, D.; Lapin, V.; Skorospelov, V.; Eshkunova, I.; Avdushenko, A.
2010-08-01
An approach for numerical simulation of 3D hydraulic turbine flows in transient operating regimes is presented. The method is based on a coupled solution of incompressible RANS equations, runner rotation equation, and water hammer equations. The issue of setting appropriate boundary conditions is considered in detail. As an illustration, the simulation results for runaway process are presented. The evolution of vortex structure and its effect on computed runaway traces are analyzed.
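For reference, the 1D water hammer equations such couplings typically employ relate piezometric head H and discharge Q along the conduit, with a the pressure wave speed, A the cross-sectional area, D the diameter, and f the Darcy friction factor; the exact formulation used in this paper may differ.

```latex
\frac{\partial H}{\partial t} + \frac{a^{2}}{gA}\,\frac{\partial Q}{\partial x} = 0,
\qquad
\frac{\partial Q}{\partial t} + gA\,\frac{\partial H}{\partial x}
  + \frac{f\,Q\,|Q|}{2DA} = 0 .
```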
Decoupled 1D/3D analysis of a hydraulic valve
NASA Astrophysics Data System (ADS)
Mehring, Carsten; Zopeya, Ashok; Latham, Matt; Ihde, Thomas; Massie, Dan
2014-10-01
Analysis approaches during product development of fluid valves and other aircraft fluid delivery components vary greatly depending on the development stage. Traditionally, empirical or simplistic one-dimensional tools are deployed during preliminary design, whereas detailed analysis tools such as CFD (Computational Fluid Dynamics) are used to refine a selected design during the detailed design stage. In recent years, combined 1D/3D co-simulation has been deployed specifically for system level simulations requiring an increased level of analysis detail for one or more components. The present paper presents a decoupled 1D/3D analysis approach where 3D CFD analysis results are utilized to enhance the fidelity of a dynamic 1D model in the context of an aircraft fuel valve.
Advanced Simulation & Computing FY15 Implementation Plan Volume 2, Rev. 0.5
DOE Office of Scientific and Technical Information (OSTI.GOV)
McCoy, Michel; Archer, Bill; Matzen, M. Keith
2014-09-16
The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the surety and reliability of the U.S. nuclear stockpile. The SSP uses nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of experimental facilities and programs, and the computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources that support annual stockpile assessment and certification, study advanced nuclear weapons design and manufacturing processes, analyze accident scenarios and weapons aging, and provide the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balance of resources, including technical staff, hardware, simulation software, and computer science solutions. As the program approaches the end of its second decade, ASC is intently focused on increasing predictive capabilities in a three-dimensional (3D) simulation environment while maintaining support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (sufficient resolution, dimensionality, and scientific details), quantify critical margins and uncertainties, and resolve increasingly difficult analyses needed for the SSP. Where possible, the program also enables the use of high-performance simulation and computing tools to address broader national security needs, such as foreign nuclear weapon assessments and counternuclear terrorism.
Good coupling for the multiscale patch scheme on systems with microscale heterogeneity
NASA Astrophysics Data System (ADS)
Bunder, J. E.; Roberts, A. J.; Kevrekidis, I. G.
2017-05-01
Computational simulation of microscale detailed systems is frequently only feasible over spatial domains much smaller than the macroscale of interest. The 'equation-free' methodology couples many small patches of microscale computations across space to empower efficient computational simulation over macroscale domains of interest. Motivated by molecular or agent simulations, we analyse the performance of various coupling schemes for patches when the microscale is inherently 'rough'. As a canonical problem in this universality class, we systematically analyse the case of heterogeneous diffusion on a lattice. Computer algebra explores how the dynamics of coupled patches predict the large scale emergent macroscale dynamics of the computational scheme. We determine good design for the coupling of patches by comparing the macroscale predictions from patch dynamics with the emergent macroscale on the entire domain, thus minimising the computational error of the multiscale modelling. The minimal error on the macroscale is obtained when the coupling utilises averaging regions which are between a third and a half of the patch. Moreover, when the symmetry of the inter-patch coupling matches that of the underlying microscale structure, patch dynamics predicts the desired macroscale dynamics to any specified order of error. The results confirm that the patch scheme is useful for macroscale computational simulation of a range of systems with microscale heterogeneity.
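A toy version of the microscale problem analysed here, heterogeneous diffusion on a 1D lattice, together with the patch-core averaging used for coupling, can be sketched as follows. The lattice size, log-normal diffusivities, core fraction, and time step are all illustrative, and the paper's computer-algebra analysis of the coupled-patch dynamics is not reproduced.

```python
import numpy as np

def hetero_diffusion_step(u, kappa, dt):
    """One explicit step of du_i/dt = k_{i+1/2}(u_{i+1}-u_i) - k_{i-1/2}(u_i-u_{i-1}).

    kappa[i] is the heterogeneous diffusivity on the bond between sites
    i and i+1; periodic boundaries keep the sketch short.
    """
    flux = kappa * (np.roll(u, -1) - u)      # flux across each bond
    return u + dt * (flux - np.roll(flux, 1))

def core_average(u, frac=0.4):
    """Average over the central fraction of a patch; the paper finds cores
    between a third and a half of the patch give the best coupling."""
    n = len(u)
    lo = int(n * (0.5 - frac / 2))
    hi = int(n * (0.5 + frac / 2))
    return u[lo:hi].mean()

rng = np.random.default_rng(3)
kappa = np.exp(rng.normal(0.0, 0.5, 64))  # log-normal microscale roughness
u = np.sin(2 * np.pi * np.arange(64) / 64)
for _ in range(200):
    u = hetero_diffusion_step(u, kappa, dt=0.05)
print(core_average(u))
```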
Funaki, Ayumu; Ohkubo, Masaki; Wada, Shinichi; Murao, Kohei; Matsumoto, Toru; Niizuma, Shinji
2012-07-01
With the wide dissemination of computed tomography (CT) screening for lung cancer, measuring nodule volume accurately with computer-aided volumetry software is increasingly important. Many studies for determining the accuracy of volumetry software have been performed using a phantom with artificial nodules. These phantom studies are limited, however, in their ability to reproduce nodules both accurately and in the variety of sizes and densities required. Therefore, we propose a new approach of using computer-simulated nodules based on the point spread function measured in a CT system. The validity of the proposed method was confirmed by the excellent agreement obtained between computer-simulated nodules and phantom nodules regarding the volume measurements. A practical clinical evaluation of the accuracy of volumetry software was achieved by adding simulated nodules onto clinical lung images, including noise and artifacts. The tested volumetry software was revealed to be accurate within an error of 20% for nodules >5 mm with a nodule-to-background (lung) CT-value difference of 400-600 HU. Such a detailed analysis can provide clinically useful information on the use of volumetry software in CT screening for lung cancer. We concluded that the proposed method is effective for evaluating the performance of computer-aided volumetry software.
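The nodule-simulation idea, blurring an ideal object with the scanner's point spread function before inserting it into a clinical image, can be sketched in a few lines. Here a Gaussian PSF stands in for the measured one, and the sizes and CT values are hypothetical rather than the paper's protocol.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def simulated_nodule(shape, radius_vox, contrast_hu, psf_sigma_vox):
    """Ideal spherical nodule blurred by the (here Gaussian) system PSF."""
    zz, yy, xx = np.indices(shape)
    c = [(s - 1) / 2.0 for s in shape]
    sphere = ((zz - c[0])**2 + (yy - c[1])**2 + (xx - c[2])**2
              <= radius_vox**2).astype(float)
    return gaussian_filter(contrast_hu * sphere, psf_sigma_vox)

# Insert a 500 HU-contrast, 4-voxel-radius nodule into a lung-like
# background patch (illustrative values only).
patch = np.full((32, 32, 32), -850.0)          # lung-like background, HU
patch += simulated_nodule(patch.shape, 4.0, 500.0, psf_sigma_vox=1.2)
```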
NASA Astrophysics Data System (ADS)
Childers, J. T.; Uram, T. D.; LeCompte, T. J.; Papka, M. E.; Benjamin, D. P.
2017-01-01
As the LHC moves to higher energies and luminosity, the demand for computing resources increases accordingly and will soon outpace the growth of the Worldwide LHC Computing Grid. To meet this greater demand, event generation Monte Carlo was targeted for adaptation to run on Mira, the supercomputer at the Argonne Leadership Computing Facility. Alpgen is a Monte Carlo event generation application that is used by LHC experiments in the simulation of collisions that take place in the Large Hadron Collider. This paper details the process by which Alpgen was adapted from a single-processor serial application to a large-scale parallel application and the performance that was achieved.
Computational Modeling of the Dolphin Kick in Competitive Swimming
NASA Astrophysics Data System (ADS)
Loebbeck, A.; Mark, R.; Bhanot, G.
2005-11-01
Numerical simulations are being used to study the fluid dynamics of the dolphin kick in competitive swimming. This stroke is performed underwater after starts and turns and involves an undulatory motion of the body. Highly detailed laser body scans of elite swimmers are used and the kinematics of the dolphin kick is recreated from videos of Olympic level swimmers. We employ a parallelized immersed boundary method to simulate the flow associated with this stroke in all its complexity. The simulations provide a first of its kind glimpse of the fluid and vortex dynamics associated with this stroke and hydrodynamic force computations allow us to gain a better understanding of the thrust producing mechanisms.
The NASA computer aided design and test system
NASA Technical Reports Server (NTRS)
Gould, J. M.; Juergensen, K.
1973-01-01
A family of computer programs facilitating the design, layout, evaluation, and testing of digital electronic circuitry is described. CADAT (computer aided design and test system) is intended for use by NASA and its contractors and is aimed predominantly at providing cost effective microelectronic subsystems based on custom designed metal oxide semiconductor (MOS) large scale integrated circuits (LSIC's). CADAT software can be easily adopted by installations with a wide variety of computer hardware configurations. Its structure permits ease of update to more powerful component programs and to newly emerging LSIC technologies. The components of the CADAT system are described, stressing the interaction of programs rather than details of coding or algorithms. The CADAT system provides computer aids to derive and document the design intent, includes powerful automatic layout software, permits detailed geometry checks and performance simulation based on mask data, and furnishes test pattern sequences for hardware testing.
Kuiper, L.K.
1985-01-01
A numerical code is documented for the simulation of variable density time dependent groundwater flow in three dimensions. The groundwater density, although variable with distance, is assumed to be constant in time. The Integrated Finite Difference grid elements in the code follow the geologic strata in the modeled area. If appropriate, the determination of hydraulic head in confining beds can be deleted to decrease computation time. The strongly implicit procedure (SIP), successive over-relaxation (SOR), and eight different preconditioned conjugate gradient (PCG) methods are used to solve the approximating equations. The use of the computer program that performs the calculations in the numerical code is emphasized. Detailed instructions are given for using the computer program, including input data formats. An example simulation and the Fortran listing of the program are included. (USGS)
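Of the solvers listed, successive over-relaxation is the simplest to show concretely. The minimal 2D sketch below solves a steady, constant-coefficient head equation on a square grid, which is far simpler than the report's variable-density 3D code and is offered only to illustrate the iteration.

```python
import numpy as np

def sor_head_solve(h, fixed, omega=1.7, tol=1e-6, max_iter=20000):
    """Solve the 5-point Laplace equation for hydraulic head by SOR.

    h     : 2D array holding the initial guess, boundary values preset
    fixed : boolean mask of cells whose head is prescribed
    """
    for it in range(max_iter):
        max_change = 0.0
        for i in range(1, h.shape[0] - 1):
            for j in range(1, h.shape[1] - 1):
                if fixed[i, j]:
                    continue
                new = 0.25 * (h[i-1, j] + h[i+1, j] + h[i, j-1] + h[i, j+1])
                change = omega * (new - h[i, j])   # over-relaxed update
                h[i, j] += change
                max_change = max(max_change, abs(change))
        if max_change < tol:
            return h, it
    return h, max_iter

h = np.zeros((20, 20))
h[:, 0] = 10.0                      # fixed head on the left boundary
fixed = np.zeros_like(h, dtype=bool)
fixed[:, 0] = fixed[:, -1] = fixed[0, :] = fixed[-1, :] = True
h, iters = sor_head_solve(h, fixed)
```

In practice omega near 1.7-1.9 accelerates convergence markedly over Gauss-Seidel (omega = 1) on grids of this kind, which is why SOR appears alongside SIP and the preconditioned conjugate gradient options.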
NASA Technical Reports Server (NTRS)
Benyo, Theresa L.
2002-01-01
Integration of a supersonic inlet simulation with a computer aided design (CAD) system is demonstrated. The integration is performed using the Project Integration Architecture (PIA). PIA provides a common environment for wrapping many types of applications. Accessing geometry data from CAD files is accomplished by incorporating appropriate function calls from the Computational Analysis Programming Interface (CAPRI). CAPRI is a CAD vendor neutral programming interface that aids in acquiring geometry data directly from CAD files. The benefits of wrapping a supersonic inlet simulation into PIA using CAPRI are: direct access to geometry data, accurate capture of geometry data, automatic conversion of data units, CAD vendor neutral operation, and on-line interactive history capture. This paper describes PIA and the CAPRI wrapper and details the supersonic inlet simulation demonstration.
Computational Aerodynamic Simulations of a Spacecraft Cabin Ventilation Fan Design
NASA Technical Reports Server (NTRS)
Tweedt, Daniel L.
2010-01-01
Quieter working environments for astronauts are needed if future long-duration space exploration missions are to be safe and productive. Ventilation and payload cooling fans are known to be dominant sources of noise, with the International Space Station being a good case in point. To address this issue cost effectively, early attention to fan design, selection, and installation has been recommended, leading to an effort by NASA to examine the potential for small-fan noise reduction by improving fan aerodynamic design. As a preliminary part of that effort, the aerodynamics of a cabin ventilation fan designed by Hamilton Sundstrand has been simulated using computational fluid dynamics codes, and the computed solutions analyzed to quantify various aspects of the fan aerodynamics and performance. Four simulations were performed at the design rotational speed: two at the design flow rate and two at off-design flow rates. Following a brief discussion of the computational codes, various aerodynamic- and performance-related quantities derived from the computed flow fields are presented along with relevant flow field details. The results show that the computed fan performance is in generally good agreement with stated design goals.
The role of water molecules in computational drug design.
de Beer, Stephanie B A; Vermeulen, Nico P E; Oostenbrink, Chris
2010-01-01
Although water molecules are small and only consist of two different atom types, they play various roles in cellular systems. This review discusses their influence on the binding process between biomacromolecular targets and small molecule ligands and how this influence can be modeled in computational drug design approaches. Both the structure and the thermodynamics of active site waters will be discussed as these influence the binding process significantly. Structurally conserved waters cannot always be determined experimentally and if observed, it is not clear if they will be replaced upon ligand binding, even if sufficient space is available. Methods to predict the presence of water in protein-ligand complexes will be reviewed. Subsequently, we will discuss methods to include water in computational drug research. Either as an additional factor in automated docking experiments, or explicitly in detailed molecular dynamics simulations, the effect of water on the quality of the simulations is significant, but not easily predicted. The most detailed calculations involve estimates of the free energy contribution of water molecules to protein-ligand complexes. These calculations are computationally demanding, but give insight in the versatility and importance of water in ligand binding.
NASA Astrophysics Data System (ADS)
Sureshkumar, B.; Mary, Y. Sheena; Resmi, K. S.; Panicker, C. Yohannan; Armaković, Stevan; Armaković, Sanja J.; Van Alsenoy, C.; Narayana, B.; Suma, S.
2018-03-01
Two 8-hydroxyquinoline derivatives, 5,7-dichloro-8-hydroxyquinoline (57DC8HQ) and 5-chloro-7-iodo-8-hydroxyquinoline (5CL7I8HQ), have been investigated in detail by means of spectroscopic characterization and computational molecular modelling techniques. FT-IR and FT-Raman experimental spectroscopic approaches have been utilized to obtain detailed spectroscopic signatures of the title compounds, while DFT calculations have been used to visualize and assign the vibrations. The computed values of dipole moment, polarizability and hyperpolarizability indicate that the title molecules exhibit NLO properties. The evaluated HOMO and LUMO energies demonstrate the chemical stability of the molecules. NBO analysis was performed to study the stability of the molecules arising from hyperconjugative interactions and charge delocalization. DFT calculations have also been used jointly with MD simulations to investigate in detail the global and local reactivity properties of the title compounds. Molecular docking has also been used to investigate the affinity of the title compounds as decarboxylase inhibitors, and quinoline derivatives can serve as lead compounds for developing new antiparkinsonian drugs.
Large-eddy simulations of compressible convection on massively parallel computers. [stellar physics
NASA Technical Reports Server (NTRS)
Xie, Xin; Toomre, Juri
1993-01-01
We report preliminary implementation of the large-eddy simulation (LES) technique in 2D simulations of compressible convection carried out on the CM-2 massively parallel computer. The convective flow fields in our simulations possess structures similar to those found in a number of direct simulations, with roll-like flows coherent across the entire depth of the layer that spans several density scale heights. Our detailed assessment of the effects of various subgrid scale (SGS) terms reveals that they may affect the gross character of convection. Yet, somewhat surprisingly, we find that our LES solutions, and another in which the SGS terms are turned off, only show modest differences. The resulting 2D flows realized here are rather laminar in character, and achieving substantial turbulence may require stronger forcing and less dissipation.
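The abstract does not spell out the SGS terms assessed; for orientation, the most widely used closure of this type is the Smagorinsky eddy viscosity, shown here as a generic example rather than the authors' specific formulation:

```latex
\nu_t = (C_s \Delta)^2 \, |\bar{S}|, \qquad
|\bar{S}| = \sqrt{2\,\bar{S}_{ij}\bar{S}_{ij}}, \qquad
\bar{S}_{ij} = \tfrac{1}{2}\left(\partial_j \bar{u}_i + \partial_i \bar{u}_j\right),
```

where C_s is the Smagorinsky constant, Δ the filter width, and S̄_ij the resolved strain-rate tensor; the eddy viscosity ν_t then models the dissipative action of the unresolved scales.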
DOE Office of Scientific and Technical Information (OSTI.GOV)
Borowik, Piotr, E-mail: pborow@poczta.onet.pl; Thobel, Jean-Luc, E-mail: jean-luc.thobel@iemn.univ-lille1.fr; Adamowicz, Leszek, E-mail: adamo@if.pw.edu.pl
Standard computational methods used to incorporate the Pauli exclusion principle into Monte Carlo (MC) simulations of electron transport in semiconductors may give unphysical results in the low-field regime, where the obtained electron distribution function takes values exceeding unity. Modified algorithms have already been proposed that correctly account for electron scattering on phonons or impurities. The present paper extends this approach and proposes an improved simulation scheme that includes the Pauli exclusion principle for electron–electron (e–e) scattering in MC simulations. Simulations with significantly reduced computational cost recreate correct values of the electron distribution function. The proposed algorithm is applied to study the transport properties of degenerate electrons in graphene with e–e interactions. This required adapting the treatment of e–e scattering to the case of a linear band dispersion relation; hence, this part of the simulation algorithm is described in detail.
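The modified schemes build on the standard rejection device for degenerate carriers: a scattering event into final state k' is accepted with probability 1 − f(k'), so nearly full states block transitions. A minimal sketch of that acceptance test, with an assumed discretized distribution function:

```python
import random

def pauli_accept(f, k_final, rng=random):
    """Pauli-blocking rejection step for ensemble Monte Carlo transport:
    accept a transition into state k_final with probability 1 - f[k_final],
    where f is the current estimate of the occupation on a k-grid."""
    return rng.random() < 1.0 - f[k_final]
```

In a full simulator, f is itself re-estimated on the fly from the particle ensemble, which is where unphysical values f > 1 can arise if the estimate and the rejection step are not kept consistent.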
Fluid Simulation in the Movies: Navier and Stokes Must Be Circulating in Their Graves
NASA Astrophysics Data System (ADS)
Tessendorf, Jerry
2010-11-01
Fluid simulations based on the incompressible Navier-Stokes equations are commonplace computer graphics tools in the visual effects industry. These simulations mostly come from custom C++ code written by the visual effects companies. Their significant impact in films was recognized in 2008 with Academy Awards to four visual effects companies for their technical achievement. However, artists are not fluid dynamicists, and fluid dynamics simulations are expensive to use in a deadline-driven production environment. As a result, the simulation algorithms are modified to limit the computational resources required, to adapt them to the production workflow, and to respect the client's vision of the film plot. Eulerian solvers on fixed rectangular grids use a mix of momentum solvers, including Semi-Lagrangian, FLIP, and QUICK. Incompressibility is enforced with FFT, Conjugate Gradient, and Multigrid methods. For liquids, a levelset field tracks the free surface. Smoothed Particle Hydrodynamics is also used, and is part of a hybrid Eulerian-SPH liquid simulator. Artists use all of them in a mix-and-match fashion to control the appearance of the simulation. Specially designed forces and boundary conditions control the flow. The simulation can be an input to artistically driven procedural particle simulations that enhance the flow with more detail and drama. Post-simulation processing increases the visual detail beyond the grid resolution. Ultimately, iterative simulation methods that fit naturally into the production workflow are extremely desirable but not yet successful. Results from some efforts toward iterative methods are shown, and other approaches motivated by the history of production are proposed.
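Among the momentum solvers listed, the Semi-Lagrangian step is the simplest to sketch: each grid point traces the velocity field backward over one time step and samples the advected quantity at the departure point, which is what makes the scheme unconditionally stable. A minimal 1D, periodic-grid sketch (the interpolation choice is illustrative):

```python
import numpy as np

def semi_lagrangian_advect(q, u, dt, dx):
    """Advect scalar q by velocity u (both on the same periodic 1D grid)
    by backward characteristic tracing with linear interpolation."""
    n = q.size
    x = np.arange(n) * dx
    s = ((x - u * dt) / dx) % n        # departure point in grid coordinates
    i0 = np.floor(s).astype(int) % n
    i1 = (i0 + 1) % n
    w = s - np.floor(s)                # linear interpolation weight
    return (1.0 - w) * q[i0] + w * q[i1]
```

The same backward-tracing idea extends to 3D grids; production solvers swap in higher-order interpolation to limit the numerical smearing this first-order version produces.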
Pipe Flow Simulation Software: A Team Approach to Solve an Engineering Education Problem.
ERIC Educational Resources Information Center
Engel, Renata S.; And Others
1996-01-01
A computer simulation program for use in the study of fluid mechanics is described. The package is an interactive tool to explore the fluid flow characteristics of a pipe system by manipulating the physical construction of the system. The motivation, software design requirements, and specific details on how its objectives were met are presented.…
Integrated Multiscale Modeling of Molecular Computing Devices. Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tim Schulze
2012-11-01
The general theme of this research has been to expand the capabilities of a simulation technique, Kinetic Monte Carlo (KMC) and apply it to study self-assembled nano-structures on epitaxial thin films. KMC simulates thin film growth and evolution by replacing the detailed dynamics of the system's evolution, which might otherwise be studied using molecular dynamics, with an appropriate stochastic process.
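For readers unfamiliar with the technique, the core of a rejection-free KMC loop is small: choose an event with probability proportional to its rate, apply it, and advance time by an exponentially distributed increment. A minimal sketch with placeholder events, not the report's actual catalog of surface processes:

```python
import math
import random

def kmc_step(events, t, rng=random):
    """One rejection-free kinetic Monte Carlo step.
    `events` is a list of (rate, apply_fn) pairs."""
    total = sum(rate for rate, _ in events)
    r = rng.random() * total
    acc = 0.0
    for rate, apply_fn in events:       # pick an event with prob rate/total
        acc += rate
        if r < acc:
            apply_fn()                  # e.g. hop an adatom to a neighbor
            break
    return t - math.log(rng.random()) / total   # exponential waiting time
```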
Simulation of Nuclear Reactor Kinetics by the Monte Carlo Method
NASA Astrophysics Data System (ADS)
Gomin, E. A.; Davidenko, V. D.; Zinchenko, A. S.; Kharchenko, I. K.
2017-12-01
The KIR computer code intended for calculations of nuclear reactor kinetics using the Monte Carlo method is described. The algorithm implemented in the code is described in detail. Some results of test calculations are given.
An engineering closure for heavily under-resolved coarse-grid CFD in large applications
NASA Astrophysics Data System (ADS)
Class, Andreas G.; Yu, Fujiang; Jordan, Thomas
2016-11-01
Even though high-performance computation allows a very detailed description of a wide range of scales in scientific computations, engineering simulations used for design studies commonly resolve only the large scales, thus speeding up simulation time. The coarse-grid CFD (CGCFD) methodology is developed for flows with repeated flow patterns, as often observed in heat exchangers or porous structures. It is proposed to use the inviscid Euler equations on a very coarse numerical mesh. This coarse mesh need not conform to the geometry in all details. To reinstate the physics on all smaller scales, cheap subgrid models are employed. Subgrid models are systematically constructed by analyzing well-resolved generic representative simulations. By varying the flow conditions in these simulations, correlations are obtained. These comprise, for each individual coarse-mesh cell, a volume force vector and a volume porosity; moreover, surface porosities are derived for all vertices. CGCFD is related to the immersed boundary method, as both exploit volume forces and non-body-conformal meshes. Yet CGCFD differs with respect to the coarser mesh and the use of the Euler equations. We describe the methodology based on a simple test case and the application of the method to a 127-pin wire-wrap fuel bundle.
Computational analysis of Variable Thrust Engine (VTE) performance
NASA Technical Reports Server (NTRS)
Giridharan, M. G.; Krishnan, A.; Przekwas, A. J.
1993-01-01
The Variable Thrust Engine (VTE) of the Orbital Maneuvering Vehicle (OMV) uses a hypergolic propellant combination of Monomethyl Hydrazine (MMH) and Nitrogen Tetroxide (NTO) as fuel and oxidizer, respectively. The performance of the VTE depends on a number of complex interacting phenomena such as atomization, spray dynamics, vaporization, turbulent mixing, convective/radiative heat transfer, and hypergolic combustion. This study involved the development of a comprehensive numerical methodology to facilitate detailed analysis of the VTE. An existing Computational Fluid Dynamics (CFD) code was extensively modified to include the following models: a two-liquid, two-phase Eulerian-Lagrangian spray model; a chemical equilibrium model; and a discrete ordinate radiation heat transfer model. The modified code was used to conduct a series of simulations to assess the effects of various physical phenomena and boundary conditions on the VTE performance. The details of the models and the results of the simulations are presented.
Simulation of cold magnetized plasmas with the 3D electromagnetic software CST Microwave Studio®
NASA Astrophysics Data System (ADS)
Louche, Fabrice; Křivská, Alena; Messiaen, André; Wauters, Tom
2017-10-01
Detailed designs of ICRF antennas were made possible by the development of sophisticated commercial 3D codes like CST Microwave Studio® (MWS). This program allows for very detailed geometries of the radiating structures, but until recently considered only simple materials, such as equivalent isotropic dielectrics, to simulate the reflection and refraction of RF waves at the vacuum/plasma interface. The code was nevertheless used intensively, notably for computing the coupling properties of the ITER ICRF antenna. It was previously not possible to simulate gyrotropic media like magnetized plasmas, but recent improvements allow programming any material described by a general dielectric and/or diamagnetic tensor. A Visual Basic macro was developed to exploit this feature and was tested for the specific case of a monochromatic plane wave propagating longitudinally with respect to the magnetic field direction. For specific cases the exact solution can be expressed in 1D as the sum of two circularly polarized waves connected by a reflection coefficient that can be computed analytically. Solutions for stratified media can also be derived. This allows for a direct comparison with MWS results. The agreement is excellent, but accurate simulations for realistic geometries require large memory resources, which could significantly restrict the simulation of cold plasmas to small-scale machines.
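For the validation case described, a plane wave propagating along the magnetic field, the two circularly polarized characteristic waves are presumably the textbook right- and left-hand modes of a cold magnetized plasma, with refractive indices (quoted here from standard cold-plasma theory, not from the paper):

```latex
n_{R,L}^{2} \;=\; 1 \;-\; \frac{\omega_p^{2}}{\omega\,(\omega \mp \omega_c)},
```

where ω_p is the plasma frequency and ω_c the electron cyclotron frequency; matching fields across the vacuum/plasma interface then yields the analytic reflection coefficient used for the comparison.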
NASA Astrophysics Data System (ADS)
Klimczak, Marcin; Bojarski, Jacek; Ziembicki, Piotr; Kęskiewicz, Piotr
2017-11-01
The requirements concerning the energy performance of buildings and their internal installations, particularly HVAC systems, have been growing continuously in Poland and all over the world. The existing, traditional calculation methods, which follow from the static heat exchange model, are frequently not sufficient for a reasonable heating design of a building. Both in Poland and elsewhere in the world, methods and software are employed which allow a detailed simulation of the heating and moisture conditions in a building, and also an analysis of the performance of HVAC systems within a building. However, these systems are usually complex and difficult to use. In addition, developing a simulation model that is sufficiently faithful to the real building requires considerable involvement from the designer and is time-consuming and laborious. Simplifying the simulation model of a building makes it possible to reduce the cost of computer simulations. The paper analyses in detail the effect of introducing a number of different variants of the simulation model developed in Design Builder on the quality of the final results. The objective of this analysis is to find simplifications which yield simulation results with an acceptable level of deviation from the detailed model, thus facilitating a quick energy performance analysis of a given building.
NASA Astrophysics Data System (ADS)
Schafhirt, S.; Kaufer, D.; Cheng, P. W.
2014-12-01
In recent years many advanced load simulation tools, allowing an aero-servo-hydro-elastic analysis of an entire offshore wind turbine, have been developed and verified. Nowadays even an offshore wind turbine with a complex support structure, such as a jacket, can be analysed. However, the computational effort rises significantly with an increasing level of detail. This is especially true for offshore wind turbines with lattice support structures, since those models naturally have a higher number of nodes and elements than simpler monopile structures. During the design process multiple load simulations are required to obtain an optimal solution. For pre-design tasks it is crucial to apply load simulations which keep the simulation quality and the computational effort in balance. The paper introduces a reference wind turbine model consisting of the REpower 5M wind turbine and a jacket support structure with a high level of detail. In total, twelve variations of this reference model are derived and presented. The main focus is on simplifying the models of the support structure and the foundation. The reference model and the simplified models are simulated with the coupled simulation tool Flex5-Poseidon and analysed with regard to frequencies, fatigue loads, and ultimate loads. A model was found which achieves an adequate increase in simulation speed while keeping the results within an acceptable range compared to the reference results.
Evaluation of Enthalpy Diagrams for NH3-H2O Absorption Refrigerator
NASA Astrophysics Data System (ADS)
Takei, Toshitaka; Saito, Kiyoshi; Kawai, Sunao
The protection of the environment is becoming a grave problem nowadays, and the absorption refrigerator, which does not use Freon as a refrigerant, is attracting close attention. Among absorption refrigerators, a number of ammonia-water absorption refrigerators are being used in realms such as refrigeration and ice accumulation, since this type of refrigerator can produce below-zero-degree products. It is essential to investigate the characteristics of the ammonia-water absorption refrigerator in detail by means of computer simulation in order to realize low-cost, highly efficient operation. Unfortunately, there have been a number of obstacles to conducting such computer simulations. Firstly, Merkel's enthalpy diagram does not give the relational equations. Secondly, although relational equations were proposed by Ziegler, simpler equations that can be applied in computer simulation were yet to be proposed. In this research, simpler equations based on Ziegler's equations have been derived to make computer simulation of the performance of the ammonia-water absorption refrigerator possible. The results of computer simulations using the simple equations and Merkel's enthalpy diagram, respectively, have both been compared with experimental data from a single-stage ammonia-water absorption refrigerator. Consequently, it is clarified that the results from Ziegler's equations agree with the experimental data better than those from Merkel's enthalpy diagram.
NASA Technical Reports Server (NTRS)
Rochelle, W. C.; Liu, D. K.; Nunnery, W. J., Jr.; Brandli, A. E.
1975-01-01
This paper describes the application of the SINDA (systems improved numerical differencing analyzer) computer program to simulate the operation of the NASA/JSC MIUS integration and subsystems test (MIST) laboratory. The MIST laboratory is designed to test the integration capability of the following subsystems of a modular integrated utility system (MIUS): (1) electric power generation, (2) space heating and cooling, (3) solid waste disposal, (4) potable water supply, and (5) waste water treatment. The SINDA/MIST computer model is designed to simulate the response of these subsystems to externally impressed loads. The computer model determines the amount of recovered waste heat from the prime mover exhaust, water jacket and oil/aftercooler and from the incinerator. This recovered waste heat is used in the model to heat potable water, for space heating, absorption air conditioning, waste water sterilization, and to provide for thermal storage. The details of the thermal and fluid simulation of MIST including the system configuration, modes of operation modeled, SINDA model characteristics and the results of several analyses are described.
High-fidelity simulation capability for virtual testing of seismic and acoustic sensors
NASA Astrophysics Data System (ADS)
Wilson, D. Keith; Moran, Mark L.; Ketcham, Stephen A.; Lacombe, James; Anderson, Thomas S.; Symons, Neill P.; Aldridge, David F.; Marlin, David H.; Collier, Sandra L.; Ostashev, Vladimir E.
2005-05-01
This paper describes development and application of a high-fidelity, seismic/acoustic simulation capability for battlefield sensors. The purpose is to provide simulated sensor data so realistic that they cannot be distinguished by experts from actual field data. This emerging capability provides rapid, low-cost trade studies of unattended ground sensor network configurations, data processing and fusion strategies, and signatures emitted by prototype vehicles. There are three essential components to the modeling: (1) detailed mechanical signature models for vehicles and walkers, (2) high-resolution characterization of the subsurface and atmospheric environments, and (3) state-of-the-art seismic/acoustic models for propagating moving-vehicle signatures through realistic, complex environments. With regard to the first of these components, dynamic models of wheeled and tracked vehicles have been developed to generate ground force inputs to seismic propagation models. Vehicle models range from simple, 2D representations to highly detailed, 3D representations of entire linked-track suspension systems. Similarly detailed models of acoustic emissions from vehicle engines are under development. The propagation calculations for both the seismics and acoustics are based on finite-difference, time-domain (FDTD) methodologies capable of handling complex environmental features such as heterogeneous geologies, urban structures, surface vegetation, and dynamic atmospheric turbulence. Any number of dynamic sources and virtual sensors may be incorporated into the FDTD model. The computational demands of 3D FDTD simulation over tactical distances require massively parallel computers. Several example calculations of seismic/acoustic wave propagation through complex atmospheric and terrain environments are shown.
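As an illustration of the FDTD kernel at the heart of such propagation codes, here is a leapfrog update of the 1D linear acoustics equations on a staggered grid (homogeneous medium, zero-pressure ends; all parameters are illustrative, not those of the battlefield models):

```python
import numpy as np

def fdtd_acoustic_1d(n=400, steps=1000, c=343.0, rho=1.2, dx=0.1):
    """Leapfrog FDTD for dp/dt = -rho*c^2 dv/dx, dv/dt = -(1/rho) dp/dx."""
    dt = 0.5 * dx / c                  # CFL-stable time step
    p = np.zeros(n)                    # pressure at integer nodes
    v = np.zeros(n - 1)                # velocity at half nodes
    p[n // 2] = 1.0                    # initial pressure pulse
    for _ in range(steps):
        v -= dt / (rho * dx) * (p[1:] - p[:-1])
        p[1:-1] -= dt * rho * c**2 / dx * (v[1:] - v[:-1])
    return p                           # ends held at p = 0 (pressure release)
```

The 3D seismic/acoustic codes described above apply the same staggered leapfrog structure to heterogeneous media, which is what drives the massively parallel computational demands.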
Bednar, James A.
2008-01-01
Many neural regions are arranged into two-dimensional topographic maps, such as the retinotopic maps in mammalian visual cortex. Computational simulations have led to valuable insights about how cortical topography develops and functions, but further progress has been hindered by the lack of appropriate tools. It has been particularly difficult to bridge across levels of detail, because simulators are typically geared to a specific level, while interfacing between simulators has been a major technical challenge. In this paper, we show that the Python-based Topographica simulator makes it straightforward to build systems that cross levels of analysis, as well as providing a common framework for evaluating and comparing models implemented in other simulators. These results rely on the general-purpose abstractions around which Topographica is designed, along with the Python interfaces becoming available for many simulators. In particular, we present a detailed, general-purpose example of how to wrap an external spiking PyNN/NEST simulation as a Topographica component using only a dozen lines of Python code, making it possible to use any of the extensive input presentation, analysis, and plotting tools of Topographica. Additional examples show how to interface easily with models in other types of simulators. Researchers simulating topographic maps externally should consider using Topographica's analysis tools (such as preference map, receptive field, or tuning curve measurement) to compare results consistently, and for connecting models at different levels. This seamless interoperability will help neuroscientists and computational scientists to work together to understand how neurons in topographic maps organize and operate. PMID:19352443
Simulation of existing gas-fuelled conventional steam power plant using Cycle Tempo
NASA Astrophysics Data System (ADS)
Jamel, M. S.; Abd Rahman, A.; Shamsuddin, A. H.
2013-06-01
Simulation of a 200 MW gas-fuelled conventional steam power plant located in Basra, Iraq, was carried out. The thermodynamic performance of the considered power plant is estimated by a system simulation. A flow-sheet computer program, "Cycle-Tempo", is used for the study. The plant components and piping systems were considered and described in detail. The simulation results were verified against data gathered from the log sheets obtained from the station during its operating hours, and good agreement was obtained. Operational factors such as the stack exhaust temperature and excess air percentage were studied and discussed, as were environmental factors such as ambient air temperature and inlet water temperature. In addition, detailed exergy losses were illustrated, and the temperature profiles for the main plant components were described. The results prompted many suggestions for improvement of the plant performance.
Mereghetti, Paolo; Wade, Rebecca C
2012-07-26
High macromolecular concentrations are a distinguishing feature of living organisms. Understanding how the high concentration of solutes affects the dynamic properties of biological macromolecules is fundamental for the comprehension of biological processes in living systems. In this paper, we describe the implementation of mean field models of translational and rotational hydrodynamic interactions into an atomically detailed many-protein Brownian dynamics simulation method. Concentrated solutions (30-40% volume fraction) of myoglobin, hemoglobin A, and sickle cell hemoglobin S were simulated, and static structure factors, oligomer formation, and translational and rotational self-diffusion coefficients were computed. Good agreement of computed properties with available experimental data was obtained. The results show the importance of both solvent-mediated interactions and weak protein-protein interactions for accurately describing the dynamics and the association properties of concentrated protein solutions. Specifically, they show a qualitative difference in the translational and rotational dynamics of the systems studied. Although the translational diffusion coefficient is controlled by macromolecular shape and hydrodynamic interactions, the rotational diffusion coefficient is affected by macromolecular shape, direct intermolecular interactions, and both translational and rotational hydrodynamic interactions.
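The translational self-diffusion coefficients reported in such studies are conventionally extracted from the long-time slope of the mean-squared displacement, MSD(t) ≈ 6Dt in 3D. A minimal sketch of that post-processing step, with an assumed trajectory-array layout:

```python
import numpy as np

def self_diffusion(traj, dt):
    """Estimate D from a Brownian dynamics trajectory of shape
    (n_frames, n_particles, 3) with frame spacing dt."""
    disp = traj - traj[0]                        # displacement since t = 0
    msd = (disp ** 2).sum(axis=2).mean(axis=1)   # average over particles
    t = np.arange(traj.shape[0]) * dt
    half = traj.shape[0] // 2                    # fit the linear tail only
    slope = np.polyfit(t[half:], msd[half:], 1)[0]
    return slope / 6.0
```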
Virtual reality neurosurgery: a simulator blueprint.
Spicer, Mark A; van Velsen, Martin; Caffrey, John P; Apuzzo, Michael L J
2004-04-01
This article details preliminary studies undertaken to integrate the most relevant advancements across multiple disciplines in an effort to construct a highly realistic neurosurgical simulator based on a distributed computer architecture. Techniques based on modified computational modeling paradigms incorporating finite element analysis are presented, as are current and projected efforts directed toward the implementation of a novel bidirectional haptic device. Patient-specific data derived from noninvasive magnetic resonance imaging sequences are used to construct a computational model of the surgical region of interest. Magnetic resonance images of the brain may be coregistered with those obtained from magnetic resonance angiography, magnetic resonance venography, and diffusion tensor imaging to formulate models of varying anatomic complexity. The majority of the computational burden is encountered in the presimulation reduction of the computational model and allows realization of the required threshold rates for the accurate and realistic representation of real-time visual animations. Intracranial neurosurgical procedures offer an ideal testing site for the development of a totally immersive virtual reality surgical simulator when compared with the simulations required in other surgical subspecialties. The material properties of the brain as well as the typically small volumes of tissue exposed in the surgical field, coupled with techniques and strategies to minimize computational demands, provide unique opportunities for the development of such a simulator. Incorporation of real-time haptic and visual feedback is approached here and likely will be accomplished soon.
Transportation Analysis and Simulation System Requirements
DOT National Transportation Integrated Search
1973-04-01
This document provides: : a. A brief summary of overall project (PPA OS223) accomplishments during FY 72. : b. A detailed summary of the following two major FY 72 activities: : 1. Analysis of TSC's computation resources and their utilization; : 2. Pr...
Economic assessment photovoltaic/battery systems
NASA Astrophysics Data System (ADS)
Day, J. T.; Hayes, T. P.; Hobbs, W. J.
1981-02-01
The economics of residential PV/battery systems were determined from the utility perspective using detailed computer simulation to determine marginal costs. Brief consideration is also given to the economics of customer ownership, utility distribution system impact, and the implications of PURPA.
The Layer-Oriented Approach to Declarative Languages for Biological Modeling
Raikov, Ivan; De Schutter, Erik
2012-01-01
We present a new approach to modeling languages for computational biology, which we call the layer-oriented approach. The approach stems from the observation that many diverse biological phenomena are described using a small set of mathematical formalisms (e.g. differential equations), while at the same time different domains and subdomains of computational biology require that models are structured according to the accepted terminology and classification of that domain. Our approach uses distinct semantic layers to represent the domain-specific biological concepts and the underlying mathematical formalisms. Additional functionality can be transparently added to the language by adding more layers. This approach is specifically concerned with declarative languages, and throughout the paper we note some of the limitations inherent to declarative approaches. The layer-oriented approach is a way to specify explicitly how high-level biological modeling concepts are mapped to a computational representation, while abstracting away details of particular programming languages and simulation environments. To illustrate this process, we define an example language for describing models of ionic currents, and use a general mathematical notation for semantic transformations to show how to generate model simulation code for various simulation environments. We use the example language to describe a Purkinje neuron model and demonstrate how the layer-oriented approach can be used for solving several practical issues of computational neuroscience model development. We discuss the advantages and limitations of the approach in comparison with other modeling language efforts in the domain of computational biology and outline some principles for extensible, flexible modeling language design. We conclude by describing in detail the semantic transformations defined for our language. PMID:22615554
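To make the layering concrete, here is a small hypothetical sketch in the spirit of the paper's ionic-current example: a declarative, domain-level description (a plain dictionary) is mapped by a generator onto its mathematical layer, a Hodgkin-Huxley-style current I = g_max · m^p · h^q · (V − E_rev). The field names are invented for illustration and are not the paper's actual language:

```python
def make_ionic_current(spec):
    """Map a declarative ionic-current description (domain layer) to an
    executable function (mathematical/computational layer)."""
    gmax, erev = spec["gmax"], spec["erev"]
    p, q = spec["exponents"]
    def current(v, m, h):
        return gmax * (m ** p) * (h ** q) * (v - erev)
    return current

# Domain layer: biological concept, no equations in sight.
na_spec = {"name": "NaT", "gmax": 120.0, "erev": 50.0, "exponents": (3, 1)}
# Generated simulation code, ready for any target environment.
i_na = make_ionic_current(na_spec)
print(i_na(-65.0, 0.05, 0.6))   # current at V = -65 mV
```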
Sorensen, Mads Solvsten; Mosegaard, Jesper; Trier, Peter
2009-06-01
Existing virtual simulators for middle ear surgery are based on 3-dimensional (3D) models from computed tomographic or magnetic resonance imaging data, in which image quality is limited by the lack of detail (maximum approximately 50 voxels/mm3), natural color, and texture of the source material. Virtual training often requires the purchase of a program, a customized computer, and expensive peripherals dedicated exclusively to this purpose. The Visible Ear freeware library of digital images from a fresh-frozen human temporal bone was segmented and volume rendered in real time as a 3D model of high fidelity, true color, and great anatomic detail and realism of the surgically relevant structures. A haptic drilling model was developed for surgical interaction with the 3D model. Realistic visualization in high fidelity (approximately 125 voxels/mm3) and true color, in 2D or optional anaglyph stereoscopic 3D, was achieved on a standard Core 2 Duo personal computer with a GeForce 8800 GTX graphics card, and surgical interaction was provided through a relatively inexpensive (approximately $2,500) Phantom Omni haptic 3D pointing device. This prototype is published for download (approximately 120 MB) as freeware at http://www.alexandra.dk/ves/index.htm. With increasing personal computer performance, future versions may include enhanced resolution (up to 8,000 voxels/mm3) and realistic interaction with deformable soft tissue components such as skin, tympanic membrane, dura, and cholesteatomas, features some of which are not possible with computed tomography/magnetic resonance imaging-based systems.
Analysis of Gas-Particle Flows through Multi-Scale Simulations
NASA Astrophysics Data System (ADS)
Gu, Yile
Multi-scale structures are inherent in gas-solid flows, which render the modeling efforts challenging. On one hand, detailed simulations where the fine structures are resolved and particle properties can be directly specified can account for complex flow behaviors, but they are too computationally expensive to apply for larger systems. On the other hand, coarse-grained simulations demand much less computations but they necessitate constitutive models which are often not readily available for given particle properties. The present study focuses on addressing this issue, as it seeks to provide a general framework through which one can obtain the required constitutive models from detailed simulations. To demonstrate the viability of this general framework in which closures can be proposed for different particle properties, we focus on the van der Waals force of interaction between particles. We start with Computational Fluid Dynamics (CFD) - Discrete Element Method (DEM) simulations where the fine structures are resolved and van der Waals force between particles can be directly specified, and obtain closures for stress and drag that are required for coarse-grained simulations. Specifically, we develop a new cohesion model that appropriately accounts for van der Waals force between particles to be used for CFD-DEM simulations. We then validate this cohesion model and the CFD-DEM approach by showing that it can qualitatively capture experimental results where the addition of small particles to gas fluidization reduces bubble sizes. Based on the DEM and CFD-DEM simulation results, we propose stress models that account for the van der Waals force between particles. Finally, we apply machine learning, specifically neural networks, to obtain a drag model that captures the effects from fine structures and inter-particle cohesion. We show that this novel approach using neural networks, which can be readily applied for other closures other than drag here, can take advantage of the large amount of data generated from simulations, and therefore offer superior modeling performance over traditional approaches.
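As a sketch of that final step, a neural-network drag closure amounts to regressing a drag correction extracted from filtered CFD-DEM data onto coarse-grained flow features. The feature names and network size below are assumptions for illustration, not the dissertation's actual closure:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Stand-ins for training data harvested from fine-grid CFD-DEM runs;
# assumed features: [solid_fraction, slip_Reynolds, cohesion_number].
X = np.random.rand(5000, 3)
y = np.random.rand(5000)          # drag correction factor from filtering

drag_net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000)
drag_net.fit(X, y)

# Inside the coarse-grained solver, the closure is a cheap evaluation:
H = drag_net.predict(np.array([[0.3, 12.0, 0.8]]))
```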
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rider, William J.; Witkowski, Walter R.; Mousseau, Vincent Andrew
2016-04-13
The importance of credible, trustworthy numerical simulations is obvious, especially when the results are used for making high-consequence decisions. Determining the credibility of such numerical predictions is much more difficult and requires a systematic approach to assessing predictive capability, associated uncertainties, and overall confidence in the computational simulation process for the intended use of the model. This process begins with an evaluation of the computational modeling of the identified, important physics of the simulation for its intended use, commonly done through a Phenomena Identification Ranking Table (PIRT). An assessment of the evidence basis supporting the ability to computationally simulate these physics can then be performed using frameworks such as the Predictive Capability Maturity Model (PCMM). Several critical activities follow in the areas of code and solution verification, validation, and uncertainty quantification, which are described in detail in the following sections. Here, we introduce the subject matter for general applications, but specifics are given for the failure-prediction project. In addition, the first task that must be completed in the verification and validation procedure is a credibility assessment to fully understand the requirements and limitations of the current computational simulation capability for the specific intended use. The PIRT and PCMM are tools used at Sandia National Laboratories (SNL) to provide a consistent manner of performing such an assessment. Ideally, all stakeholders should be represented and contribute to an accurate credibility assessment. PIRTs and PCMMs are both described briefly below, and the resulting assessments for an example project are given.
Simulation tools for particle-based reaction-diffusion dynamics in continuous space
2014-01-01
Particle-based reaction-diffusion algorithms facilitate the modeling of the diffusional motion of individual molecules and the reactions between them in cellular environments. A physically realistic model, depending on the system at hand and the questions asked, would require different levels of modeling detail, such as particle diffusion, geometrical confinement, particle volume exclusion, or particle-particle interaction potentials. Higher levels of detail usually correspond to a larger number of parameters and higher computational cost. Certain systems, however, require these investments to be modeled adequately. Here we present a review of the current field of particle-based reaction-diffusion software packages operating in continuous space. Four nested levels of modeling detail are identified that capture incrementally more detail. Their applicability to different biological questions is discussed, ranging from straightforward diffusion simulations to sophisticated and expensive models that bridge towards coarse-grained molecular dynamics. PMID:25737778
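The lowest level of detail the review identifies, free Brownian diffusion with distance-triggered reactions, fits in a few lines; the sketch below uses a Doi-style reaction radius, with all parameters illustrative:

```python
import numpy as np

def brownian_reaction_step(a, b, D, dt, radius, rng):
    """One step of A + B -> 0: free diffusion of both particle sets
    (arrays of shape (n, 3)), then removal of pairs within `radius`."""
    a = a + rng.normal(0.0, np.sqrt(2 * D * dt), a.shape)
    b = b + rng.normal(0.0, np.sqrt(2 * D * dt), b.shape)
    keep_a = np.ones(len(a), bool)
    keep_b = np.ones(len(b), bool)
    for i in range(len(a)):                        # naive O(N^2) pair search;
        d = np.linalg.norm(b - a[i], axis=1)       # real packages use
        hits = np.where(keep_b & (d < radius))[0]  # neighbor lists instead
        if hits.size:
            keep_a[i] = False
            keep_b[hits[0]] = False
    return a[keep_a], b[keep_b]
```

Each further level in the review's hierarchy (confinement, volume exclusion, interaction potentials) adds terms to this update at growing computational cost.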
Simulation Experiment Description Markup Language (SED-ML) Level 1 Version 2.
Bergmann, Frank T; Cooper, Jonathan; Le Novère, Nicolas; Nickerson, David; Waltemath, Dagmar
2015-09-04
The number, size and complexity of computational models of biological systems are growing at an ever increasing pace. It is imperative to build on existing studies by reusing and adapting existing models and parts thereof. The description of the structure of models is not sufficient to enable the reproduction of simulation results. One also needs to describe the procedures the models are subjected to, as recommended by the Minimum Information About a Simulation Experiment (MIASE) guidelines. This document presents Level 1 Version 2 of the Simulation Experiment Description Markup Language (SED-ML), a computer-readable format for encoding simulation and analysis experiments to apply to computational models. SED-ML files are encoded in the Extensible Markup Language (XML) and can be used in conjunction with any XML-based model encoding format, such as CellML or SBML. A SED-ML file includes details of which models to use, how to modify them prior to executing a simulation, which simulation and analysis procedures to apply, which results to extract and how to present them. Level 1 Version 2 extends the format by allowing the encoding of repeated and chained procedures.
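Since SED-ML is plain XML, a document skeleton is easy to emit with standard tooling. The sketch below writes a schematic file with one model, one uniform time course, and one task; element and attribute names follow the SED-ML Level 1 structure described above, but this is an illustrative skeleton, not a validated document:

```python
import xml.etree.ElementTree as ET

root = ET.Element("sedML", level="1", version="2",
                  xmlns="http://sed-ml.org/sed-ml/level1/version2")
models = ET.SubElement(root, "listOfModels")
ET.SubElement(models, "model", id="m1",
              language="urn:sedml:language:sbml", source="model.xml")
sims = ET.SubElement(root, "listOfSimulations")
tc = ET.SubElement(sims, "uniformTimeCourse", id="sim1",
                   initialTime="0", outputStartTime="0",
                   outputEndTime="100", numberOfPoints="1000")
ET.SubElement(tc, "algorithm", kisaoID="KISAO:0000019")  # CVODE
tasks = ET.SubElement(root, "listOfTasks")
ET.SubElement(tasks, "task", id="t1",
              modelReference="m1", simulationReference="sim1")
ET.ElementTree(root).write("experiment.sedml",
                           xml_declaration=True, encoding="utf-8")
```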
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, J.; Mowrey, J.
1995-12-01
This report describes the design, development, and testing of process controls for selected system operations in the Browns Ferry Nuclear Plant (BFNP) Reactor Water Cleanup System (RWCU) using a Computer Simulation Platform which simulates the RWCU system and the BFNP Integrated Computer System (ICS). This system was designed to demonstrate the feasibility of soft control (video touch screen) of nuclear plant systems through an operator console. The BFNP Integrated Computer System, which has recently been installed at BFNP Unit 2, was simulated to allow for operator control functions of the modeled RWCU system. The BFNP Unit 2 RWCU system was simulated using the RELAP5 Thermal/Hydraulic Simulation Model, which provided the steady-state and transient RWCU process variables and simulated the response of the system to control system inputs. Descriptions of the hardware and software developed are also included in this report, as are the testing and acceptance program and its results. A discussion of potential installation of an actual RWCU process control system in BFNP Unit 2 is included. Finally, this report contains a section on industry issues associated with the installation of process control systems in nuclear power plants.
'Towers in the Tempest' Computer Animation Submission
NASA Technical Reports Server (NTRS)
Shirah, Greg
2008-01-01
The following describes a computer animation that has been submitted to the ACM/SIGGRAPH 2008 computer graphics conference: 'Towers in the Tempest' clearly communicates recent scientific research into how hurricanes intensify. This intensification can be caused by a phenomenon called a 'hot tower.' For the first time, research meteorologists have run complex atmospheric simulations at a very fine temporal resolution of 3 minutes. Combining this simulation data with satellite observations enables detailed study of 'hot towers.' The science of 'hot towers' is described using satellite observation data, conceptual illustrations, and volumetric atmospheric simulation data. The movie starts by showing a 'hot tower' observed in three-dimensional precipitation radar data of Hurricane Bonnie from NASA's Tropical Rainfall Measuring Mission (TRMM) spacecraft. Next, the dynamics of a hurricane and the formation of 'hot towers' are briefly explained using conceptual illustrations. Finally, volumetric cloud, wind, and vorticity data from a supercomputer simulation of Hurricane Bonnie are shown using volume-rendering techniques such as ray marching.
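Ray marching, the volume technique named above, accumulates emission and opacity along each view ray through the simulated cloud field. A minimal front-to-back compositing loop (the sampling and transfer function are illustrative):

```python
import numpy as np

def march_ray(density, origin, direction, step, n_steps, absorb=2.0):
    """Front-to-back compositing along one ray through a scalar field
    sampled by `density(p) -> float`; returns (color, opacity)."""
    color, transmittance = 0.0, 1.0
    p = np.asarray(origin, float)
    d = np.asarray(direction, float)
    d /= np.linalg.norm(d)
    for _ in range(n_steps):
        rho = density(p)
        alpha = 1.0 - np.exp(-absorb * rho * step)  # slab opacity
        color += transmittance * alpha * rho        # simple emission term
        transmittance *= 1.0 - alpha
        if transmittance < 1e-3:                    # early ray termination
            break
        p += d * step
    return color, 1.0 - transmittance
```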
1982-12-01
A detailed description is given of a computer program which simulates the PATRIOT battalion UH1F communication system; the source document is a Master of Science thesis by Gregory H. Swanson, Captain, USA.
ECON-KG: A Code for Computation of Electrical Conductivity Using Density Functional Theory
2017-10-01
A code for the computation of electrical conductivity using density functional theory is presented. Details of the implementation and instructions for execution are given, along with an example calculation of the frequency-dependent electrical conductivity. Conductivity has been shown to depend on carbon content, and electrical conductivity models have become a requirement for input into continuum-level simulations. The frequency-dependent electrical conductivity is computed as a weighted sum over k-points: σ(ω) = Σ_k W(k) σ_k(ω), where W(k) is the weight of k-point k.
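The k-point average reconstructed above is a one-line reduction in practice; a minimal sketch with assumed array shapes:

```python
import numpy as np

def conductivity(sigma_k, weights):
    """sigma(omega) = sum_k W(k) * sigma_k(omega), with sigma_k of shape
    (n_kpoints, n_frequencies) and weights of shape (n_kpoints,)."""
    return np.tensordot(weights, sigma_k, axes=1)
```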
NASA Technical Reports Server (NTRS)
Hesser, R. J.; Gershman, R.
1975-01-01
A valve opening-response problem encountered during development of a control valve for the Skylab thruster attitude control system (TACS) is described. The problem involved effects of dynamic interaction among valves in the quad-redundant valve package. Also described is a detailed computer simulation of the quad-valve package which was helpful in resolving the problem.
Introducing DeBRa: a detailed breast model for radiological studies
NASA Astrophysics Data System (ADS)
Ma, Andy K. W.; Gunn, Spencer; Darambara, Dimitra G.
2009-07-01
Currently, x-ray mammography is the method of choice in breast cancer screening programmes. As the mammography technology moves from 2D imaging modalities to 3D, conventional computational phantoms do not have sufficient detail to support the studies of these advanced imaging systems. Studies of these 3D imaging systems call for a realistic and sophisticated computational model of the breast. DeBRa (Detailed Breast model for Radiological studies) is the most advanced, detailed, 3D computational model of the breast developed recently for breast imaging studies. A DeBRa phantom can be constructed to model a compressed breast, as in film/screen, digital mammography and digital breast tomosynthesis studies, or a non-compressed breast as in positron emission mammography and breast CT studies. Both the cranial-caudal and mediolateral oblique views can be modelled. The anatomical details inside the phantom include the lactiferous duct system, the Cooper ligaments and the pectoral muscle. The fibroglandular tissues are also modelled realistically. In addition, abnormalities such as microcalcifications, irregular tumours and spiculated tumours are inserted into the phantom. Existing sophisticated breast models require specialized simulation codes. Unlike its predecessors, DeBRa has elemental compositions and densities incorporated into its voxels including those of the explicitly modelled anatomical structures and the noise-like fibroglandular tissues. The voxel dimensions are specified as needed by any study and the microcalcifications are embedded into the voxels so that the microcalcification sizes are not limited by the voxel dimensions. Therefore, DeBRa works with general-purpose Monte Carlo codes. Furthermore, general-purpose Monte Carlo codes allow different types of imaging modalities and detector characteristics to be simulated with ease. DeBRa is a versatile and multipurpose model specifically designed for both x-ray and γ-ray imaging studies.
Development of the KOSMS management simulation training system and its application
NASA Astrophysics Data System (ADS)
Takatsu, Yoshiki
The use of games which simulate actual corporate management has recently become more common and is now utilized in various ways for in-house corporate training courses. KOSMS (Kobe Steel Management Simulation System), a training system designed to help improve the management skills of senior management staff, is a unique management simulation training system in which the participants, using personal computers, must make decisions concerning a variety of management activities, in simulated competition with other corporations. This report outlines the KOSMS system, and describes the basic structure and detailed contents of the management simulation models, and actual application of the KOSMS management simulation training.
Parallel discrete event simulation using shared memory
NASA Technical Reports Server (NTRS)
Reed, Daniel A.; Malony, Allen D.; Mccredie, Bradley D.
1988-01-01
With traditional event-list techniques, evaluating a detailed discrete-event simulation model can often require hours or even days of computation time. By eliminating the event list and maintaining only sufficient synchronization to ensure causality, parallel simulation can potentially provide speedups that are linear in the number of processors. A set of shared-memory experiments, using the Chandy-Misra distributed simulation algorithm to simulate networks of queues, is presented. Parameters of the study include queueing network topology and routing probabilities, number of processors, and assignment of network nodes to processors. These experiments show that Chandy-Misra distributed simulation is a questionable alternative to sequential simulation of most queueing network models.
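In the Chandy-Misra scheme each logical process may safely execute any message whose timestamp does not exceed the minimum clock over its input channels, so causality holds without a global event list. A compressed, single-threaded sketch of that safety rule (the process/channel structure is illustrative):

```python
import heapq

class LogicalProcess:
    """Minimal Chandy-Misra-style logical process: timestamped messages
    arrive on per-predecessor FIFO channels; an event is safe to execute
    only when its timestamp is within every channel's clock."""
    def __init__(self, channels):
        self.pending = {c: [] for c in channels}   # channel -> msg heap
        self.clock = {c: 0.0 for c in channels}    # last timestamp seen

    def receive(self, channel, ts, payload=""):
        heapq.heappush(self.pending[channel], (ts, payload))
        self.clock[channel] = ts   # null messages also advance this clock

    def safe_events(self):
        horizon = min(self.clock.values())         # causality bound
        for heap in self.pending.values():
            while heap and heap[0][0] <= horizon:
                yield heapq.heappop(heap)
```

Null messages, timestamped but content-free, exist purely to advance the channel clocks and avoid the deadlocks that otherwise arise when a channel goes quiet.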
A comprehensive defect data bank for no. 2 common oak lumber
Edwin L. Lucas; Leathern R.R. Catron
1973-01-01
Computer simulation of rough mill cut-up operations allows low-cost evaluation of furniture rough mill cut-up procedures. The defect data bank serves as input to such simulation programs. The data bank contains a detailed accounting of defect data taken from 637 No. 2 Common oak boards. Included is a description of each defect (location, size, and type), as well as the...
3RIP Evaluation of the Performance of the Search System Using a Realtime Simulation Technique.
ERIC Educational Resources Information Center
Lofstrom, Mats
This report describes a real-time simulation experiment to evaluate the performance of the search and editing system 3RIP, an interactive system written in the language BLISS on a DEC-10 computer. The test vehicle, preliminary test runs, and capacity test are detailed, and the following conclusions are reported: (1) 3RIP performs well up to the…
GillesPy: A Python Package for Stochastic Model Building and Simulation.
Abel, John H; Drawert, Brian; Hellander, Andreas; Petzold, Linda R
2016-09-01
GillesPy is an open-source Python package for model construction and simulation of stochastic biochemical systems. GillesPy consists of a Python framework for model building and an interface to the StochKit2 suite of efficient simulation algorithms based on the Gillespie stochastic simulation algorithms (SSA). To enable intuitive model construction and seamless integration into the scientific Python stack, we present an easy to understand, action-oriented programming interface. Here, we describe the components of this package and provide a detailed example relevant to the computational biology community.
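A model in this style takes only a few lines; the sketch below follows the documented pattern of the gillespy2 successor package (a simple degradation reaction; treat the exact API as an assumption if package versions differ):

```python
import numpy as np
import gillespy2

model = gillespy2.Model(name="decay")
a = gillespy2.Species(name="A", initial_value=300)
k = gillespy2.Parameter(name="k", expression=0.05)
model.add_species([a])
model.add_parameter([k])
model.add_reaction([gillespy2.Reaction(name="degrade",
                                       reactants={a: 1}, products={},
                                       rate=k)])
model.timespan(np.linspace(0, 100, 101))

results = model.run(number_of_trajectories=10)  # SSA trajectories
print(results[0]["A"][-1])                      # copies of A at t = 100
```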
NASA Technical Reports Server (NTRS)
Lansing, F. L.; Chai, V. W.; Lascu, D.; Urbenajo, R.; Wong, P.
1978-01-01
The engineering manual provides complete companion documentation on the structure of the main program and subroutines, the preparation of input data, the interpretation of output results, access and use of the program, and a detailed description of all the analytic and logical expressions and flow charts used in the computations and program structure. A numerical example is provided and solved completely to show the sequence of computations followed. The program is carefully structured to reduce both the user's time and costs without sacrificing accuracy. The user can expect a CPU-time cost of approximately $5.00 per building zone, excluding printing costs. The accuracy, measured by the deviation of simulated consumption from watt-hour meter readings, was found in many simulation tests not to exceed a ±10 percent margin.
Computational aeroacoustics and numerical simulation of supersonic jets
NASA Technical Reports Server (NTRS)
Morris, Philip J.; Long, Lyle N.
1996-01-01
The research project has been a computational study of computational aeroacoustics algorithms and numerical simulations of the flow and noise of supersonic jets. During this study a new method for the implementation of solid wall boundary conditions for complex geometries in three dimensions has been developed. In addition, a detailed study of the simulation of the flow in and noise from supersonic circular and rectangular jets has been conducted. Extensive comparisons have been made with experimental measurements. A summary of the results of the research program are attached as the main body of this report in the form of two publications. Also, the report lists the names of the students who were supported by this grant, their degrees, and the titles of their dissertations. In addition, a list of presentations and publications made by the Principal Investigators and the research students is also included.
Generating Neuron Geometries for Detailed Three-Dimensional Simulations Using AnaMorph.
Mörschel, Konstantin; Breit, Markus; Queisser, Gillian
2017-07-01
Generating realistic and complex computational domains for numerical simulations is often a challenging task. In neuroscientific research, more and more one-dimensional morphology data is becoming publicly available through databases. This data, however, only contains point and diameter information not suitable for detailed three-dimensional simulations. In this paper, we present a novel framework, AnaMorph, that automatically generates water-tight surface meshes from one-dimensional point-diameter files. These surface triangulations can be used to simulate the electrical and biochemical behavior of the underlying cell. In addition to morphology generation, AnaMorph also performs quality control of the semi-automatically reconstructed cells coming from anatomical reconstructions. This toolset allows an extension from the classical dimension-reduced modeling and simulation of cellular processes to a full three-dimensional and morphology-including method, leading to novel structure-function interplay studies in the medical field. The developed numerical methods can further be employed in other areas where complex geometries are an essential component of numerical simulations.
NASA Astrophysics Data System (ADS)
Borowik, Piotr; Thobel, Jean-Luc; Adamowicz, Leszek
2017-07-01
Standard computational methods used to incorporate the Pauli exclusion principle into Monte Carlo (MC) simulations of electron transport in semiconductors may give unphysical results in the low-field regime, where the obtained electron distribution function takes values exceeding unity. Modified algorithms have already been proposed that correctly account for electron scattering on phonons or impurities. The present paper extends this approach and proposes an improved simulation scheme that includes the Pauli exclusion principle for electron-electron (e-e) scattering in MC simulations. Simulations with significantly reduced computational cost reproduce correct values of the electron distribution function. The proposed algorithm is applied to study the transport properties of degenerate electrons in graphene with e-e interactions. This required adapting the treatment of e-e scattering to the case of a linear band dispersion relation. Hence, this part of the simulation algorithm is described in detail.
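The core of such schemes is a rejection step: a proposed transition into a final state k' is accepted with probability 1 - f(k'), which suppresses scattering into already-occupied states. A minimal sketch of that step (the grid and values are illustrative, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def accept_scattering(f_grid, k_final_index):
    """Pauli-blocking rejection: accept a proposed transition to the
    final state with probability 1 - f(k'), where f is the current
    estimate of the electron distribution function on a k-space grid."""
    occupancy = f_grid[k_final_index]          # current f(k'), in [0, 1]
    return rng.random() < (1.0 - occupancy)    # blocked if state is full

# Illustrative use: a nearly degenerate state is almost always rejected.
f = np.array([0.95, 0.40, 0.02])
accepted = [accept_scattering(f, 0) for _ in range(1000)]
print(sum(accepted) / 1000.0)  # roughly 0.05
```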
Effective description of a 3D object for photon transportation in Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Suganuma, R.; Ogawa, K.
2000-06-01
Photon transport simulation by means of the Monte Carlo method is an indispensable technique for examining scatter and absorption correction methods in SPECT and PET. The authors have developed a method for object description with maximum size regions (maximum rectangular regions: MRRs) to speed up photon transport simulation, and compared the computation time with that for conventional object description methods, a voxel-based (VB) method and an octree method, in simulations of two kinds of phantoms. The simulation results showed that the computation time with the proposed method was about 50% of that with the VB method and about 70% of that with the octree method for a high-resolution MCAT phantom. Here, details of the expansion of the MRR method to three dimensions are given. Moreover, the effectiveness of the proposed method was compared with that of the VB and octree methods.
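The paper's MRR construction is not reproduced here, but the underlying idea, covering a homogeneous region with a few maximal boxes so a photon can traverse many voxels in one step, can be sketched with a simple greedy procedure (illustrative only, and in 2D for brevity):

```python
import numpy as np

def greedy_rectangles(mask):
    """Greedily cover a homogeneous 2D region with large rectangles.
    Each rectangle grows from an uncovered seed cell as far as it can
    while staying inside the region; photon tracking can then step
    rectangle-to-rectangle instead of voxel-to-voxel."""
    remaining = mask.copy()
    rects = []
    while remaining.any():
        r0, c0 = np.argwhere(remaining)[0]          # first uncovered cell
        r1, c1 = r0 + 1, c0 + 1
        while r1 < mask.shape[0] and remaining[r1, c0:c1].all():
            r1 += 1                                  # grow downward
        while c1 < mask.shape[1] and remaining[r0:r1, c1].all():
            c1 += 1                                  # grow rightward
        remaining[r0:r1, c0:c1] = False
        rects.append((r0, r1, c0, c1))
    return rects

mask = np.ones((4, 6), dtype=bool)
mask[0, 5] = False
print(greedy_rectangles(mask))  # two boxes instead of 23 voxels
```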
Quantum-assisted biomolecular modelling.
Harris, Sarah A; Kendon, Vivien M
2010-08-13
Our understanding of the physics of biological molecules, such as proteins and DNA, is limited because the approximations we usually apply to model inert materials are not, in general, applicable to soft, chemically inhomogeneous systems. The configurational complexity of biomolecules means the entropic contribution to the free energy is a significant factor in their behaviour, requiring detailed dynamical calculations to fully evaluate. Computer simulations capable of taking all interatomic interactions into account are therefore vital. However, even with the best current supercomputing facilities, we are unable to capture enough of the most interesting aspects of their behaviour to properly understand how they work. This limits our ability to design new molecules, for example to treat diseases. Progress in biomolecular simulation depends crucially on increasing the computing power available. Faster classical computers are in the pipeline, but these provide only incremental improvements. Quantum computing, when it becomes available, offers the possibility of performing huge numbers of calculations in parallel. We discuss the current open questions in biomolecular simulation, how these might be addressed using quantum computation, and speculate on the future importance of quantum-assisted biomolecular modelling.
Simulation and visualization of face seal motion stability by means of computer generated movies
NASA Technical Reports Server (NTRS)
Etsion, I.; Auer, B. M.
1980-01-01
A computer aided design method for mechanical face seals is described. Based on computer simulation, the actual motion of the flexibly mounted element of the seal can be visualized. This is achieved by solving the equations of motion of this element, calculating the displacements in its various degrees of freedom vs. time, and displaying the transient behavior in the form of a motion picture. Incorporating such a method in the design phase allows one to detect instabilities and to correct undesirable behavior of the seal. A theoretical background is presented. Details of the motion display technique are described, and the usefulness of the method is demonstrated by an example of a noncontacting conical face seal.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Childers, J. T.; Uram, T. D.; LeCompte, T. J.
2016-09-29
As the LHC moves to higher energies and luminosity, the demand for computing resources increases accordingly and will soon outpace the growth of the Worldwide LHC Computing Grid. To meet this greater demand, event generation Monte Carlo was targeted for adaptation to run on Mira, the supercomputer at the Argonne Leadership Computing Facility. Alpgen is a Monte Carlo event generation application that is used by LHC experiments in the simulation of collisions that take place in the Large Hadron Collider. This paper details the process by which Alpgen was adapted from a single-processor serial application to a large-scale parallel application and the performance that was achieved.
Tieleman, D Peter
2006-10-01
A key function of biological membranes is to provide mechanisms for the controlled transport of ions, nutrients, metabolites, peptides and proteins between a cell and its environment. We are using computer simulations to study several processes involved in transport. In model membranes, the distribution of small molecules can be accurately calculated; we are making progress towards understanding the factors that determine the partitioning behaviour in the inhomogeneous lipid environment, with implications for drug distribution, membrane protein folding and the energetics of voltage gating. Lipid bilayers can be simulated at a scale that is sufficiently large to study significant defects, such as those caused by electroporation. Computer simulations of complex membrane proteins, such as potassium channels and ATP-binding cassette (ABC) transporters, can give detailed information about the atomistic dynamics that form the basis of ion transport, selectivity, conformational change and the molecular mechanism of ATP-driven transport. This is illustrated in the present review with recent simulation studies of the voltage-gated potassium channel KvAP and the ABC transporter BtuCD.
NASA Astrophysics Data System (ADS)
Paganini, Michela; de Oliveira, Luke; Nachman, Benjamin
2018-01-01
Physicists at the Large Hadron Collider (LHC) rely on detailed simulations of particle collisions to build expectations of what experimental data may look like under different theoretical modeling assumptions. Petabytes of simulated data are needed to develop analysis techniques, though they are expensive to generate using existing algorithms and computing resources. The modeling of detectors and the precise description of particle cascades as they interact with the material in the calorimeter are the most computationally demanding steps in the simulation pipeline. We therefore introduce a deep neural network-based generative model to enable high-fidelity, fast, electromagnetic calorimeter simulation. There are still challenges for achieving precision across the entire phase space, but our current solution can reproduce a variety of particle shower properties while achieving speedup factors of up to 100,000×. This opens the door to a new era of fast simulation that could save significant computing time and disk space, while extending the reach of physics searches and precision measurements at the LHC and beyond.
Bravyi-Kitaev Superfast simulation of electronic structure on a quantum computer.
Setia, Kanav; Whitfield, James D
2018-04-28
Present quantum computers often work with distinguishable qubits as their computational units. In order to simulate indistinguishable fermionic particles, the fermionic state must first be mapped to the state of the qubits. The Bravyi-Kitaev Superfast (BKSF) algorithm can be used to accomplish this mapping. The BKSF mapping has connections to quantum error correction and opens the door to new ways of understanding fermionic simulation in a topological context. Here, we present the first detailed exposition of the BKSF algorithm for molecular simulation. We provide the BKSF-transformed qubit operators and report on our implementation of the BKSF fermion-to-qubit transform in OpenFermion. In this initial study of the hydrogen molecule we have compared the BKSF, Jordan-Wigner, and Bravyi-Kitaev transforms under the Trotter approximation. The gate count to implement BKSF is lower than Jordan-Wigner but higher than Bravyi-Kitaev. We considered different orderings of the exponentiated terms and found lower Trotter errors than those previously reported for the Jordan-Wigner and Bravyi-Kitaev algorithms. These results open the door to further study of the BKSF algorithm for quantum simulation.
Parallelized direct execution simulation of message-passing parallel programs
NASA Technical Reports Server (NTRS)
Dickens, Phillip M.; Heidelberger, Philip; Nicol, David M.
1994-01-01
As massively parallel computers proliferate, there is growing interest in finding ways by which the performance of massively parallel codes can be efficiently predicted. This problem arises in diverse contexts such as parallelizing compilers, parallel performance monitoring, and parallel algorithm development. In this paper we describe one solution where one directly executes the application code, but uses a discrete-event simulator to model details of the presumed parallel machine, such as operating system and communication network behavior. Because this approach is computationally expensive, we are interested in its own parallelization, specifically the parallelization of the discrete-event simulator. We describe methods suitable for parallelized direct execution simulation of message-passing parallel programs, and report on the performance of such a system, the Large Application Parallel Simulation Environment (LAPSE), which we have built on the Intel Paragon. On all codes measured to date, LAPSE predicts performance well, typically within 10 percent relative error. Depending on the nature of the application code, we have observed low slowdowns (relative to natively executing code) and high relative speedups using up to 64 processors.
Local rules simulation of the kinetics of virus capsid self-assembly.
Schwartz, R; Shor, P W; Prevelige, P E; Berger, B
1998-12-01
A computer model is described for studying the kinetics of the self-assembly of icosahedral viral capsids. Solution of this problem is crucial to an understanding of the viral life cycle, which currently cannot be adequately addressed through laboratory techniques. The abstract simulation model employed to address this is based on the local rules theory (Proc. Natl. Acad. Sci. USA 91:7732-7736). It is shown that the principle of local rules, generalized with a model of kinetics and other extensions, can be used to simulate complicated problems in self-assembly. This approach allows for a computationally tractable molecular dynamics-like simulation of coat protein interactions while retaining many relevant features of capsid self-assembly. Three simple simulation experiments are presented to illustrate the use of this model. These show the dependence of growth and malformation rates on the energetics of binding interactions, the tolerance of errors in binding positions, and the concentration of subunits in the examples. These experiments demonstrate a tradeoff within the model between growth rate and fidelity of assembly for the three parameters. A detailed discussion of the computational model is also provided.
Digital systems design language
NASA Technical Reports Server (NTRS)
Shiva, S. G.
1979-01-01
The Digital Systems Design Language (DDL) is implemented on the SEL-32 computer systems. The details of the language, the translator, and the simulator programs are given. Several example descriptions and a tutorial on hardware description languages are provided to guide the user.
NASA Technical Reports Server (NTRS)
Gale, R. L.; Nease, A. W.; Nelson, D. J.
1978-01-01
Computer program mathematically describes complete hydraulic systems to study their dynamic performance. Program employs subroutines that simulate components of hydraulic system, which are then controlled by main program. Program is useful to engineers working with detailed performance results of aircraft, spacecraft, or similar hydraulic systems.
Automatic temperature computation for realistic IR simulation
NASA Astrophysics Data System (ADS)
Le Goff, Alain; Kersaudy, Philippe; Latger, Jean; Cathala, Thierry; Stolte, Nilo; Barillot, Philippe
2000-07-01
Polygon temperature computation in 3D virtual scenes is fundamental for IR image simulation. This article describes in detail the temperature calculation software and its current extensions, briefly presented in [1]. This software, called MURET, is used by the simulation workshop CHORALE of the French DGA. MURET is a one-dimensional thermal software package, which accurately takes into account the material thermal attributes of the three-dimensional scene and the variation of the environment characteristics (atmosphere) as a function of time. Concerning the environment, absorbed incident fluxes are computed wavelength by wavelength, every half hour, during the 24 hours preceding the time of the simulation. For each polygon, incident fluxes are composed of direct solar fluxes and sky illumination (including diffuse solar fluxes). Concerning the materials, classical thermal attributes, such as conductivity, absorption, spectral emissivity, density, specific heat, thickness, and convection coefficients, are associated with several layers and taken into account. In the future, MURET will be able to simulate permeable natural materials (water influence) and vegetation (woods). This model of thermal attributes yields a very accurate polygon temperature computation for the complex 3D databases often found in CHORALE simulations. The kernel of MURET consists of an efficient ray tracer, which computes the history (over 24 hours) of the shadowed parts of the 3D scene, and a library responsible for the thermal computations. The main originality concerns the way the heating fluxes are computed. Using ray tracing, the flux received at each 3D point of the scene accurately takes into account the masking (hidden surfaces) between objects. This library also supplies other thermal modules, such as a thermal shadows computation tool.
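The one-dimensional, layered heat balance MURET solves can be illustrated with an explicit finite-difference update for a single slab driven by an absorbed-flux history (material values and the flux profile are placeholders, not CHORALE data):

```python
import numpy as np

# Illustrative slab: 2 cm thick, 10 nodes, generic material values.
n = 10
dx = 0.02 / n
k, rho, cp = 1.0, 2000.0, 900.0         # W/m/K, kg/m^3, J/kg/K
alpha = k / (rho * cp)
dt = 0.4 * dx**2 / alpha                # stable explicit time step
h, t_air = 10.0, 290.0                  # convection coeff. (W/m^2/K), air temp (K)

T = np.full(n, 290.0)
for step in range(int(86400 / dt)):     # 24 hours of flux history
    t = step * dt
    q_abs = max(0.0, 600.0 * np.sin(2 * np.pi * t / 86400.0))  # solar-like flux
    Tn = T.copy()
    # Interior conduction (explicit central difference).
    Tn[1:-1] = T[1:-1] + alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
    # Front face: absorbed flux plus convection; back face: adiabatic.
    Tn[0] = T[0] + dt / (rho * cp * dx) * (q_abs + h * (t_air - T[0])
                                           - k * (T[0] - T[1]) / dx)
    Tn[-1] = Tn[-2]
    T = Tn
print(f"front-face temperature after 24 h: {T[0]:.1f} K")
```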
Advanced Simulation and Computing Fiscal Year 14 Implementation Plan, Rev. 0.5
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meisner, Robert; McCoy, Michel; Archer, Bill
2013-09-11
The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the surety and reliability of the U.S. nuclear stockpile. The SSP uses nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of experimental facilities and programs, and the computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources that support annual stockpile assessment and certification, study advanced nuclear weapons design and manufacturing processes, analyze accident scenarios and weapons aging, and provide the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balanced resource, including technical staff, hardware, simulation software, and computer science solutions. In its first decade, the ASC strategy focused on demonstrating simulation capabilities of unprecedented scale in three spatial dimensions. In its second decade, ASC is now focused on increasing predictive capabilities in a three-dimensional (3D) simulation environment while maintaining support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (sufficient resolution, dimensionality, and scientific details), quantify critical margins and uncertainties, and resolve increasingly difficult analyses needed for the SSP. Moreover, ASC’s business model is integrated and focused on requirements-driven products that address long-standing technical questions related to enhanced predictive capability in the simulation tools.
Simulation of a navigator algorithm for a low-cost GPS receiver
NASA Technical Reports Server (NTRS)
Hodge, W. F.
1980-01-01
The analytical structure of an existing navigator algorithm for a low-cost global positioning system receiver is described in detail to facilitate its implementation on in-house digital computers and real-time simulators. The material presented includes a simulation of GPS pseudorange measurements, based on a two-body representation of the NAVSTAR spacecraft orbits, and a four-component model of the receiver bias errors. A simpler test for loss of pseudorange measurements due to spacecraft shielding is also noted.
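The heart of such a simulation is the pseudorange model: geometric range plus a clock-bias term and measurement noise. A minimal sketch (positions are made-up ECEF values, and a single clock term stands in for the paper's four-component bias model):

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s
rng = np.random.default_rng(1)

def pseudorange(sat_pos, rcv_pos, clock_bias_s, sigma_m=5.0):
    """Simulated pseudorange: true range + c * receiver clock bias + noise."""
    geometric = np.linalg.norm(sat_pos - rcv_pos)
    return geometric + C * clock_bias_s + rng.normal(0.0, sigma_m)

# Illustrative NAVSTAR-like geometry (ECEF coordinates, metres).
sat = np.array([15_600e3, 7_540e3, 20_140e3])
rcv = np.array([6_378e3, 0.0, 0.0])
print(pseudorange(sat, rcv, clock_bias_s=1e-6))
```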
DOE Office of Scientific and Technical Information (OSTI.GOV)
Elber, Ron
Atomically detailed computer simulations of complex molecular events have attracted the imagination of many researchers in the field as a source of comprehensive information on chemical, biological, and physical processes. However, one of the greatest limitations of these simulations is that of time scales. The physical time scales accessible to straightforward simulations are too short to address many interesting and important molecular events. In the last decade significant advances were made in different directions (theory, software, and hardware) that significantly expand the capabilities and accuracies of these techniques. This perspective describes and critically examines some of these advances.
Hamiltonian and potentials in derivative pricing models: exact results and lattice simulations
NASA Astrophysics Data System (ADS)
Baaquie, Belal E.; Corianò, Claudio; Srikant, Marakani
2004-03-01
The pricing of options, warrants and other derivative securities is one of the great successes of financial economics. These financial products can be modeled and simulated using quantum mechanical instruments based on a Hamiltonian formulation. We show here some applications of these methods for various potentials, which we have simulated via lattice Langevin and Monte Carlo algorithms, to the pricing of options. We focus on barrier or path-dependent options, showing in some detail the computational strategies involved.
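For the barrier options discussed, the computational pattern of a path-dependent valuation can be shown with a plain Monte Carlo sketch under geometric Brownian motion (parameters are arbitrary; the Hamiltonian/Langevin machinery of the paper is not reproduced):

```python
import numpy as np

rng = np.random.default_rng(2)

def down_and_out_call(s0, strike, barrier, r, sigma, T,
                      n_steps=252, n_paths=100_000):
    """Monte Carlo price of a down-and-out call: the payoff is voided on
    any path that touches the barrier before expiry."""
    dt = T / n_steps
    s = np.full(n_paths, float(s0))
    alive = np.ones(n_paths, dtype=bool)
    for _ in range(n_steps):
        z = rng.standard_normal(n_paths)
        s *= np.exp((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z)
        alive &= s > barrier             # path dependence: barrier check
    payoff = np.where(alive, np.maximum(s - strike, 0.0), 0.0)
    return np.exp(-r * T) * payoff.mean()

print(down_and_out_call(s0=100, strike=100, barrier=90,
                        r=0.05, sigma=0.2, T=1.0))
```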
MODFLOW-2005 : the U.S. Geological Survey modular ground-water model--the ground-water flow process
Harbaugh, Arlen W.
2005-01-01
This report presents MODFLOW-2005, which is a new version of the finite-difference ground-water model commonly called MODFLOW. Ground-water flow is simulated using a block-centered finite-difference approach. Layers can be simulated as confined or unconfined. Flow associated with external stresses, such as wells, areal recharge, evapotranspiration, drains, and rivers, also can be simulated. The report includes detailed explanations of physical and mathematical concepts on which the model is based, an explanation of how those concepts are incorporated in the modular structure of the computer program, instructions for using the model, and details of the computer code. The modular structure consists of a MAIN Program and a series of highly independent subroutines. The subroutines are grouped into 'packages.' Each package deals with a specific feature of the hydrologic system that is to be simulated, such as flow from rivers or flow into drains, or with a specific method of solving the set of simultaneous equations resulting from the finite-difference method. Several solution methods are incorporated, including the Preconditioned Conjugate-Gradient method. The division of the program into packages permits the user to examine specific hydrologic features of the model independently. This also facilitates development of additional capabilities because new packages can be added to the program without modifying the existing packages. The input and output systems of the computer program also are designed to permit maximum flexibility. The program is designed to allow other capabilities, such as transport and optimization, to be incorporated, but this report is limited to describing the ground-water flow capability. The program is written in Fortran 90 and will run without modification on most computers that have a Fortran 90 compiler.
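A toy illustration of the block-centered finite-difference idea (a minimal sketch, not MODFLOW's Fortran 90 code or its package structure): steady confined flow with uniform transmissivity reduces to the Laplace equation for head, solved here by simple iteration.

```python
import numpy as np

# 2D steady confined flow, uniform transmissivity: Laplace equation for head.
# Fixed-head boundaries on the left/right; no-flow on top/bottom.
ny, nx = 20, 40
h = np.zeros((ny, nx))
h[:, 0], h[:, -1] = 10.0, 5.0           # constant-head cells

for _ in range(5000):                    # Jacobi-style sweep
    h[1:-1, 1:-1] = 0.25 * (h[:-2, 1:-1] + h[2:, 1:-1]
                            + h[1:-1, :-2] + h[1:-1, 2:])
    h[0, 1:-1] = h[1, 1:-1]              # no-flow: mirror rows
    h[-1, 1:-1] = h[-2, 1:-1]
print(h[ny // 2, ::8])                   # head declines from left to right
```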
Stone, John E; Hallock, Michael J; Phillips, James C; Peterson, Joseph R; Luthey-Schulten, Zaida; Schulten, Klaus
2016-05-01
Many of the continuing scientific advances achieved through computational biology are predicated on the availability of ongoing increases in computational power required for detailed simulation and analysis of cellular processes on biologically-relevant timescales. A critical challenge facing the development of future exascale supercomputer systems is the development of new computing hardware and associated scientific applications that dramatically improve upon the energy efficiency of existing solutions, while providing increased simulation, analysis, and visualization performance. Mobile computing platforms have recently become powerful enough to support interactive molecular visualization tasks that were previously only possible on laptops and workstations, creating future opportunities for their convenient use for meetings, remote collaboration, and as head mounted displays for immersive stereoscopic viewing. We describe early experiences adapting several biomolecular simulation and analysis applications for emerging heterogeneous computing platforms that combine power-efficient system-on-chip multi-core CPUs with high-performance massively parallel GPUs. We present low-cost power monitoring instrumentation that provides sufficient temporal resolution to evaluate the power consumption of individual CPU algorithms and GPU kernels. We compare the performance and energy efficiency of scientific applications running on emerging platforms with results obtained on traditional platforms, identify hardware and algorithmic performance bottlenecks that affect the usability of these platforms, and describe avenues for improving both the hardware and applications in pursuit of the needs of molecular modeling tasks on mobile devices and future exascale computers.
NASA Astrophysics Data System (ADS)
Mikkili, Suresh; Panda, Anup Kumar; Prattipati, Jayanthi
2015-06-01
Nowadays researchers want to develop their models in a real-time environment. Simulation tools have been widely used for the design and improvement of electrical systems since the mid twentieth century. The evolution of simulation tools has progressed in step with the evolution of computing technologies. In recent years, computing technologies have improved dramatically in performance and become widely available at a steadily decreasing cost. Consequently, simulation tools have also seen dramatic performance gains and steady cost decreases. Researchers and engineers now have access to affordable, high-performance simulation tools that were previously too cost-prohibitive for all but the largest manufacturers. This work introduces a specific class of digital simulator known as a real-time simulator by answering the questions "what is real-time simulation", "why is it needed" and "how does it work". The latest trend in real-time simulation consists of exporting simulation models to FPGAs. In this article, the steps involved in the implementation of a model from MATLAB to real time are described in detail.
NASA Technical Reports Server (NTRS)
Brown, S. C.
1973-01-01
A computer simulation of the YF-12 aircraft motions and propulsion system dynamics is presented. The propulsion system was represented in sufficient detail so that interactions between aircraft motions and the propulsion system dynamics could be investigated. Six-degree-of-freedom aircraft motions together with the three-axis stability augmentation system were represented. The mixed-compression inlets and their controls were represented in the started mode for a range of flow conditions up to the inlet unstart boundary. Effects of inlet moving geometry on aircraft forces and moments, as well as effects of aircraft motions on inlet behavior, were simulated. The engines, which are straight turbojets, were represented in the afterburning mode, with effects of changes in aircraft flight conditions included. The simulation was capable of operating in real time.
Baseline process description for simulating plutonium oxide production for precalc project
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pike, J. A.
Savannah River National Laboratory (SRNL) started a multi-year project, the PreCalc Project, to develop a computational simulation of a plutonium oxide (PuO2) production facility with the objective of studying the fundamental relationships between morphological and physicochemical properties. This report provides a detailed baseline process description to be used by SRNL personnel and collaborators to facilitate the initial design and construction of the simulation. The PreCalc Project team selected the HB-Line Plutonium Finishing Facility as the basis for a nominal baseline process since the facility is operational and significant model validation data can be obtained. The process boundary, as well as process and facility design details necessary for multi-scale, multi-physics models, are provided.
Drach, Andrew; Khalighi, Amir H; Sacks, Michael S
2018-02-01
Multiple studies have demonstrated that the pathological geometries unique to each patient can affect the durability of mitral valve (MV) repairs. While computational modeling of the MV is a promising approach to improve the surgical outcomes, the complex MV geometry precludes use of simplified models. Moreover, the lack of complete in vivo geometric information presents significant challenges in the development of patient-specific computational models. There is thus a need to determine the level of detail necessary for predictive MV models. To address this issue, we have developed a novel pipeline for building attribute-rich computational models of MV with varying fidelity directly from the in vitro imaging data. The approach combines high-resolution geometric information from loaded and unloaded states to achieve a high level of anatomic detail, followed by mapping and parametric embedding of tissue attributes to build a high-resolution, attribute-rich computational models. Subsequent lower resolution models were then developed and evaluated by comparing the displacements and surface strains to those extracted from the imaging data. We then identified the critical levels of fidelity for building predictive MV models in the dilated and repaired states. We demonstrated that a model with a feature size of about 5 mm and mesh size of about 1 mm was sufficient to predict the overall MV shape, stress, and strain distributions with high accuracy. However, we also noted that more detailed models were found to be needed to simulate microstructural events. We conclude that the developed pipeline enables sufficiently complex models for biomechanical simulations of MV in normal, dilated, repaired states. Copyright © 2017 John Wiley & Sons, Ltd.
Grand canonical ensemble Monte Carlo simulation of the dCpG/proflavine crystal hydrate.
Resat, H; Mezei, M
1996-09-01
The grand canonical ensemble Monte Carlo molecular simulation method is used to investigate hydration patterns in the crystal hydrate structure of the dCpG/proflavine intercalated complex. The objective of this study is to show by example that the recently advocated grand canonical ensemble simulation is a computationally efficient method for determining the positions of the hydrating water molecules in protein and nucleic acid structures. A detailed molecular simulation convergence analysis and an analogous comparison of the theoretical results with experiments clearly show that the grand ensemble simulations can be far more advantageous than the comparable canonical ensemble simulations.
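The moves that distinguish the grand canonical ensemble from the canonical one are particle insertions and deletions, accepted with the standard GCMC criteria; a schematic sketch of those acceptance tests (the activity, volume, and energy change are placeholder values, not the paper's water model):

```python
import numpy as np

rng = np.random.default_rng(3)

def try_insert(n, volume, activity, beta, dU):
    """GCMC insertion: accept with min(1, z*V/(N+1) * exp(-beta*dU)),
    where z = exp(beta*mu)/Lambda^3 is the activity."""
    acc = activity * volume / (n + 1) * np.exp(-beta * dU)
    return rng.random() < min(1.0, acc)

def try_delete(n, volume, activity, beta, dU):
    """GCMC deletion: accept with min(1, N/(z*V) * exp(-beta*dU))."""
    acc = n / (activity * volume) * np.exp(-beta * dU)
    return rng.random() < min(1.0, acc)

# Illustrative call: a favourable insertion (dU < 0) is usually accepted.
print(try_insert(n=100, volume=1000.0, activity=0.1, beta=1.0, dU=-1.0))
```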
Abstractions for DNA circuit design.
Lakin, Matthew R; Youssef, Simon; Cardelli, Luca; Phillips, Andrew
2012-03-07
DNA strand displacement techniques have been used to implement a broad range of information processing devices, from logic gates, to chemical reaction networks, to architectures for universal computation. Strand displacement techniques enable computational devices to be implemented in DNA without the need for additional components, allowing computation to be programmed solely in terms of nucleotide sequences. A major challenge in the design of strand displacement devices has been to enable rapid analysis of high-level designs while also supporting detailed simulations that include known forms of interference. Another challenge has been to design devices capable of sustaining precise reaction kinetics over long periods, without relying on complex experimental equipment to continually replenish depleted species over time. In this paper, we present a programming language for designing DNA strand displacement devices, which supports progressively increasing levels of molecular detail. The language allows device designs to be programmed using a common syntax and then analysed at varying levels of detail, with or without interference, without needing to modify the program. This allows a trade-off to be made between the level of molecular detail and the computational cost of analysis. We use the language to design a buffered architecture for DNA devices, capable of maintaining precise reaction kinetics for a potentially unbounded period. We test the effectiveness of buffered gates to support long-running computation by designing a DNA strand displacement system capable of sustained oscillations.
Reduced order models for assessing CO2 impacts in shallow unconfined aquifers
Keating, Elizabeth H.; Harp, Dylan H.; Dai, Zhenxue; ...
2016-01-28
Risk assessment studies of potential CO2 sequestration projects consider many factors, including the possibility of brine and/or CO2 leakage from the storage reservoir. Detailed multiphase reactive transport simulations have been developed to predict the impact of such leaks on shallow groundwater quality; however, these simulations are computationally expensive and thus difficult to directly embed in a probabilistic risk assessment analysis. Here we present a process for developing computationally fast reduced-order models which emulate key features of the more detailed reactive transport simulations. A large ensemble of simulations that take into account uncertainty in aquifer characteristics and CO2/brine leakage scenarios was performed. Twelve simulation outputs of interest were used to develop response surfaces (RSs) using a MARS (multivariate adaptive regression splines) algorithm (Milborrow, 2015). A key part of this study is to compare different measures of ROM accuracy. We then show that for some computed outputs, MARS performs very well in matching the simulation data. The capability of the RS to predict simulation outputs for parameter combinations not used in RS development was tested using cross-validation. Again, for some outputs, these results were quite good. For other outputs, however, the method performs relatively poorly. Performance was best for predicting the volume of depressed-pH plumes, and was relatively poor for predicting organic and trace metal plume volumes. We believe several factors, including the non-linearity of the problem, the complexity of the geochemistry, and granularity in the simulation results, contribute to this varied performance. The reduced order models were developed principally to be used in probabilistic performance analysis where a large range of scenarios are considered and ensemble performance is calculated. We demonstrate that they effectively predict the ensemble behavior. However, the performance of the RSs is much less accurate when used to predict time-varying outputs from a single simulation. If an analysis requires only a small number of scenarios to be investigated, computationally expensive physics-based simulations would likely provide more reliable results. Finally, if the aggregate behavior of a large number of realizations is the focus, as will be the case in probabilistic quantitative risk assessment, the methodology presented here is relatively robust.
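The response-surface step can be illustrated with a MARS fit (a sketch assuming the py-earth package's scikit-learn-style Earth estimator; the inputs stand in for uncertain aquifer parameters and the output for a plume volume, all synthetic):

```python
import numpy as np
from pyearth import Earth  # MARS implementation in the py-earth package

rng = np.random.default_rng(4)

# Stand-ins: 500 simulator runs, 3 uncertain inputs, 1 scalar output.
X = rng.uniform(size=(500, 3))
y = (2.0 * X[:, 0] + 4.0 * np.maximum(0.0, X[:, 1] - 0.5)
     + 0.05 * rng.standard_normal(500))

rom = Earth(max_degree=2)   # allow two-way hinge interactions
rom.fit(X, y)
print(rom.predict(X[:5]))   # fast surrogate evaluations for risk assessment
```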
IDENTIFICATION OF AN IDEAL REACTOR MODEL IN A SECONDARY COMBUSTION CHAMBER
Tracer analysis was applied to a secondary combustion chamber of a rotary kiln incinerator simulator to develop a computationally inexpensive networked ideal reactor model and allow for the later incorporation of detailed reaction mechanisms. Tracer data from sulfur dioxide trace...
Digital systems design language. Design synthesis of digital systems
NASA Technical Reports Server (NTRS)
Shiva, S. G.
1979-01-01
The Digital Systems Design Language (DDL) is implemented on the SEL-32 computer systems. The details of the language, translator and simulator programs are included. Several example descriptions and a tutorial on hardware description languages are provided, to guide the user.
The Bravyi-Kitaev transformation for quantum computation of electronic structure
NASA Astrophysics Data System (ADS)
Seeley, Jacob T.; Richard, Martin J.; Love, Peter J.
2012-12-01
Quantum simulation is an important application of future quantum computers with applications in quantum chemistry, condensed matter, and beyond. Quantum simulation of fermionic systems presents a specific challenge. The Jordan-Wigner transformation allows for representation of a fermionic operator by O(n) qubit operations. Here, we develop an alternative method of simulating fermions with qubits, first proposed by Bravyi and Kitaev [Ann. Phys. 298, 210 (2002), 10.1006/aphy.2002.6254; e-print arXiv:quant-ph/0003137v2], that reduces the simulation cost to O(log n) qubit operations for one fermionic operation. We apply this new Bravyi-Kitaev transformation to the task of simulating quantum chemical Hamiltonians, and give a detailed example for the simplest possible case of molecular hydrogen in a minimal basis. We show that the quantum circuit for simulating a single Trotter time step of the Bravyi-Kitaev derived Hamiltonian for H2 requires fewer gate applications than the equivalent circuit derived from the Jordan-Wigner transformation. Since the scaling of the Bravyi-Kitaev method is asymptotically better than the Jordan-Wigner method, this result for molecular hydrogen in a minimal basis demonstrates the superior efficiency of the Bravyi-Kitaev method for all quantum computations of electronic structure.
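A minimal illustration of the two mappings using OpenFermion, which implements both transforms (the single hopping term below is an arbitrary stand-in, not the H2 Hamiltonian of the paper):

```python
from openfermion.ops import FermionOperator
from openfermion.transforms import jordan_wigner, bravyi_kitaev

# A single hopping term a_0^dagger a_3 + h.c. on four modes.
hop = FermionOperator("0^ 3") + FermionOperator("3^ 0")

jw = jordan_wigner(hop)
bk = bravyi_kitaev(hop, n_qubits=4)

# Jordan-Wigner strings act on O(n) qubits; Bravyi-Kitaev on O(log n).
print("JW terms:", jw)
print("BK terms:", bk)
```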
BlazeDEM3D-GPU A Large Scale DEM simulation code for GPUs
NASA Astrophysics Data System (ADS)
Govender, Nicolin; Wilke, Daniel; Pizette, Patrick; Khinast, Johannes
2017-06-01
Accurately predicting the dynamics of particulate materials is of importance to numerous scientific and industrial areas, with applications ranging across particle scales from powder flow to ore crushing. Computational discrete element simulation is a viable option to aid in the understanding of particulate dynamics and the design of devices such as mixers, silos and ball mills, as laboratory-scale tests come at a significant cost. However, the computational time required to run an industrial-scale simulation consisting of tens of millions of particles can be months on large CPU clusters, making the Discrete Element Method (DEM) unfeasible for industrial applications. Simulations are therefore typically restricted to tens of thousands of particles with highly detailed particle shapes, or a few million particles with often oversimplified particle shapes. However, a number of applications require an accurate representation of the particle shape to capture the macroscopic behaviour of the particulate system. In this paper we give an overview of the recent extensions to the open source GPU-based DEM code, BlazeDEM3D-GPU, which can simulate millions of polyhedra and tens of millions of spheres on a desktop computer with a single or multiple GPUs.
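The kernel such GPU codes evaluate for every contact can be sketched with a linear spring-dashpot normal-force model for two spheres (stiffness and damping values are arbitrary; the polyhedral contact detection that is the hard part of BlazeDEM3D-GPU is not shown):

```python
import numpy as np

def normal_contact_force(x1, x2, v1, v2, r1, r2, k=1e5, c=5.0):
    """Linear spring-dashpot DEM contact: repulsive force proportional to
    overlap, damped by the relative normal velocity. Returns force on sphere 1."""
    d = x2 - x1
    dist = np.linalg.norm(d)
    overlap = (r1 + r2) - dist
    if overlap <= 0.0:
        return np.zeros(3)               # no contact
    n = d / dist                         # unit normal from sphere 1 to 2
    v_n = np.dot(v2 - v1, n)             # relative normal speed
    return -(k * overlap - c * v_n) * n  # pushes sphere 1 away from 2

f = normal_contact_force(np.zeros(3), np.array([0.0, 0.0, 0.018]),
                         np.zeros(3), np.zeros(3), r1=0.01, r2=0.01)
print(f)
```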
NASA Astrophysics Data System (ADS)
Aviat, Félix; Lagardère, Louis; Piquemal, Jean-Philip
2017-10-01
In a recent paper [F. Aviat et al., J. Chem. Theory Comput. 13, 180-190 (2017)], we proposed the Truncated Conjugate Gradient (TCG) approach to compute the polarization energy and forces in polarizable molecular simulations. The method consists in truncating the conjugate gradient algorithm at a fixed predetermined order leading to a fixed computational cost and can thus be considered "non-iterative." This gives the possibility to derive analytical forces avoiding the usual energy conservation (i.e., drifts) issues occurring with iterative approaches. A key point concerns the evaluation of the analytical gradients, which is more complex than that with a usual solver. In this paper, after reviewing the present state of the art of polarization solvers, we detail a viable strategy for the efficient implementation of the TCG calculation. The complete cost of the approach is then measured as it is tested using a multi-time step scheme and compared to timings using usual iterative approaches. We show that the TCG methods are more efficient than traditional techniques, making it a method of choice for future long molecular dynamics simulations using polarizable force fields where energy conservation matters. We detail the various steps required for the implementation of the complete method by software developers.
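The essence of the TCG scheme, stopping conjugate gradient at a fixed, predetermined order so that the cost is constant rather than convergence-dependent, can be sketched on a generic symmetric positive-definite system standing in for the polarization equations (illustrative only; the analytical force expressions of the paper are not shown):

```python
import numpy as np

def truncated_cg(A, b, order):
    """Conjugate gradient truncated at a fixed order: the cost is
    deterministic (order matrix-vector products), unlike an iterative
    solve run to a convergence threshold."""
    x = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    rs = r @ r
    for _ in range(order):               # e.g. TCG-2 -> order=2
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

rng = np.random.default_rng(5)
M = rng.standard_normal((50, 50))
A = M @ M.T + 50 * np.eye(50)            # SPD stand-in for the dipole matrix
b = rng.standard_normal(50)
print(np.linalg.norm(A @ truncated_cg(A, b, order=2) - b))  # fixed-cost residual
```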
Cevidanes, Lucia; Tucker, Scott; Styner, Martin; Kim, Hyungmin; Chapuis, Jonas; Reyes, Mauricio; Proffit, William; Turvey, Timothy; Jaskolka, Michael
2009-01-01
This paper discusses the development of methods for computer-aided jaw surgery. Computer-aided jaw surgery allows us to incorporate the high level of precision necessary for transferring virtual plans into the operating room. We also present a complete computer-aided surgery (CAS) system developed in close collaboration with surgeons. Surgery planning and simulation include construction of 3D surface models from cone-beam CT (CBCT), dynamic cephalometry, semi-automatic mirroring, interactive cutting of bone, and bony segment repositioning. A virtual setup can be used to manufacture positioning splints for intra-operative guidance. The system provides further intra-operative assistance with the help of a computer display showing jaw positions and 3D positioning guides updated in real time during the surgical procedure. The CAS system aids in dealing with complex cases, with benefits for the patient, for surgical practice, and for orthodontic finishing. Advanced software tools for diagnosis and treatment planning allow preparation of detailed operative plans, osteotomy repositioning, bone reconstructions, surgical resident training, and assessment of the difficulties of the surgical procedures prior to surgery. CAS has the potential to make the elaboration of the surgical plan a more flexible process, increase the level of detail and accuracy of the plan, yield higher operative precision and control, and enhance documentation of cases. Supported by NIDCR DE017727, and DE018962 PMID:20816308
Autonomous Robot Control via Autonomy Levels (ARCAL)
2015-08-21
VRF includes a detailed graphical user interface (GUI) front end that subscribes to simulated objects over HLA and renders them.
Blood Pump Development Using Rocket Engine Flow Simulation Technology
NASA Technical Reports Server (NTRS)
Kiris, Cetin C.; Kwak, Dochan
2002-01-01
This viewgraph presentation provides information on the transfer of rocket engine flow simulation technology to work involving the development of blood pumps. Details are offered regarding the design and requirements of mechanical heart assist devices, or VADs (ventricular assist device). There are various computational fluid dynamics issues involved in the visualization of flow in such devices, and these are highlighted and compared to those of rocket turbopumps.
NASA Astrophysics Data System (ADS)
Lucas, Charles E.; Walters, Eric A.; Jatskevich, Juri; Wasynczuk, Oleg; Lamm, Peter T.
2003-09-01
In this paper, a new technique useful for the numerical simulation of large-scale systems is presented. This approach enables the overall system simulation to be formed by the dynamic interconnection of the various interdependent simulations, each representing a specific component or subsystem such as control, electrical, mechanical, hydraulic, or thermal. Each simulation may be developed separately using possibly different commercial-off-the-shelf simulation programs thereby allowing the most suitable language or tool to be used based on the design/analysis needs. These subsystems communicate the required interface variables at specific time intervals. A discussion concerning the selection of appropriate communication intervals is presented herein. For the purpose of demonstration, this technique is applied to a detailed simulation of a representative aircraft power system, such as that found on the Joint Strike Fighter (JSF). This system is comprised of ten component models each developed using MATLAB/Simulink, EASY5, or ACSL. When the ten component simulations were distributed across just four personal computers (PCs), a greater than 15-fold improvement in simulation speed (compared to the single-computer implementation) was achieved.
The Development and Comparison of Molecular Dynamics Simulation and Monte Carlo Simulation
NASA Astrophysics Data System (ADS)
Chen, Jundong
2018-03-01
Molecular dynamics is an integrated technology that combines physics, mathematics and chemistry. The molecular dynamics method is a computer simulation technique and a powerful tool for studying condensed matter systems. This technique not only yields the trajectories of the atoms, but also allows observation of the microscopic details of atomic motion. By studying the numerical integration algorithms used in molecular dynamics simulation, we can analyze the microstructure, the motion of particles and the macroscopic relationship between them and the material, and can also study the relationship between the interactions and the macroscopic properties more conveniently. The Monte Carlo Simulation, similar to molecular dynamics, is a tool for studying the nature of micro-molecules and particles. In this paper, the theoretical background of computer numerical simulation is introduced, and the specific methods of numerical integration are summarized, including the Verlet method, the Leap-frog method and the Velocity Verlet method. At the same time, the method and principle of Monte Carlo Simulation are introduced. Finally, similarities and differences between Monte Carlo Simulation and molecular dynamics simulation are discussed.
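Of the integrators named here, velocity Verlet is the most widely used; a minimal sketch for a one-dimensional harmonic oscillator (force model and parameters are illustrative):

```python
import numpy as np

def velocity_verlet(x, v, force, mass, dt, n_steps):
    """Velocity Verlet: advance positions with current forces, then
    velocities with the average of old and new forces."""
    f = force(x)
    traj = [x]
    for _ in range(n_steps):
        x = x + v * dt + 0.5 * (f / mass) * dt**2
        f_new = force(x)
        v = v + 0.5 * (f + f_new) / mass * dt
        f = f_new
        traj.append(x)
    return np.array(traj), v

# Harmonic oscillator, k = m = 1: the period is 2*pi.
xs, _ = velocity_verlet(x=1.0, v=0.0, force=lambda x: -x,
                        mass=1.0, dt=0.05, n_steps=126)
print(xs[-1])  # close to +1 after one full period
```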
Lai, Canhai; Xu, Zhijie; Li, Tingwen; ...
2017-08-05
In virtual design and scale-up of pilot-scale carbon capture systems, the coupled reactive multiphase flow problem must be solved to predict the adsorber's performance and capture efficiency under various operating conditions. This paper focuses on the detailed computational fluid dynamics (CFD) modeling of a pilot-scale fluidized bed adsorber equipped with vertical cooling tubes. Multiphase Flow with Interphase eXchanges (MFiX), an open-source multiphase flow CFD solver, is used for the simulations, with custom code to simulate the chemical reactions and filtered sub-grid models to capture the effect of the unresolved details in the coarser mesh, yielding simulations with reasonable accuracy and manageable computational effort. Previously developed filtered models for horizontal cylinder drag, heat transfer, and reaction kinetics have been modified to derive the 2D filtered models representing vertical cylinders in the coarse-grid CFD simulations. The effects of the heat exchanger configurations (i.e., horizontal or vertical tubes) on the adsorber's hydrodynamics and CO2 capture performance are then examined. A one-dimensional three-region process model is briefly introduced for comparison purposes. The CFD model matches reasonably well with the process model while providing additional information about the flow field that is not available with the process model.
NASA Technical Reports Server (NTRS)
OBrien, T. Kevin (Technical Monitor); Krueger, Ronald; Minguet, Pierre J.
2004-01-01
The application of a shell/3D modeling technique for the simulation of skin/stringer debond in a specimen subjected to tension and three-point bending was studied. The global structure was modeled with shell elements. A local three-dimensional model, extending to about three specimen thicknesses on either side of the delamination front was used to model the details of the damaged section. Computed total strain energy release rates and mixed-mode ratios obtained from shell/3D simulations were in good agreement with results obtained from full solid models. The good correlation of the results demonstrated the effectiveness of the shell/3D modeling technique for the investigation of skin/stiffener separation due to delamination in the adherents. In addition, the application of the submodeling technique for the simulation of skin/stringer debond was also studied. Global models made of shell elements and solid elements were studied. Solid elements were used for local submodels, which extended between three and six specimen thicknesses on either side of the delamination front to model the details of the damaged section. Computed total strain energy release rates and mixed-mode ratios obtained from the simulations using the submodeling technique were not in agreement with results obtained from full solid models.
Modular use of human body models of varying levels of complexity: Validation of head kinematics.
Decker, William; Koya, Bharath; Davis, Matthew L; Gayzik, F Scott
2017-05-29
The significant computational resources required to execute detailed human body finite-element models have motivated the development of faster running, simplified models (e.g., GHBMC M50-OS). Previous studies have demonstrated the ability to modularly incorporate the validated GHBMC M50-O brain model into the simplified model (GHBMC M50-OS+B), which allows for localized analysis of the brain in a fraction of the computation time required for the detailed model. The objective of this study is to validate the head and neck kinematics of the GHBMC M50-O and M50-OS (detailed and simplified versions of the same model) against human volunteer test data in frontal and lateral loading. Furthermore, the effect of modular insertion of the detailed brain model into the M50-OS is quantified. Data from the Navy Biodynamics Laboratory (NBDL) human volunteer studies, including a 15g frontal, 8g frontal, and 7g lateral impact, were reconstructed and simulated using LS-DYNA. A five-point restraint system was used for all simulations, and initial positions of the models were matched with volunteer data using settling and positioning techniques. Both the frontal and lateral simulations were run with the M50-O, M50-OS, and M50-OS+B with active musculature for a total of nine runs. Normalized run times for the models used in this study were 8.4 min/ms for the M50-O, 0.26 min/ms for the M50-OS, and 0.97 min/ms for the M50-OS+B, a 32- and 9-fold reduction in run time, respectively. Corridors were reanalyzed for head and T1 kinematics from the NBDL studies. Qualitative evaluation of head rotational accelerations and linear resultant acceleration, as well as linear resultant T1 acceleration, showed reasonable agreement between all models and the experimental data. Objective evaluation of the results for head center of gravity (CG) accelerations was completed via ISO TS 18571, and indicated scores of 0.673 (M50-O), 0.638 (M50-OS), and 0.656 (M50-OS+B) for the 15g frontal impact. Scores at lower g levels yielded similar results: 0.667 (M50-O), 0.675 (M50-OS), and 0.710 (M50-OS+B) for the 8g frontal impact. The 7g lateral simulations also compared fairly, with average ISO scores of 0.565 for the M50-O, 0.634 for the M50-OS, and 0.606 for the M50-OS+B. The three HBMs experienced similar head and neck motion in the frontal simulations, but the M50-O predicted significantly greater head rotation in the lateral simulation. The greatest departure from the detailed occupant model was noted in lateral flexion, potentially indicating the need for further study. Precise modeling of the belt system, however, was limited by the available data. A sensitivity study of these parameters in the frontal condition showed that belt slack and muscle activation have a modest effect on the ISO score. The reduction in computation time of the M50-OS+B reduces the burden of high computational requirements when handling detailed HBMs. Future work will focus on harmonizing the lateral head response of the models and studying localized injury criteria within the brain from the M50-O and M50-OS+B.
Boosting flood warning schemes with fast emulator of detailed hydrodynamic models
NASA Astrophysics Data System (ADS)
Bellos, V.; Carbajal, J. P.; Leitao, J. P.
2017-12-01
Floods are among the most destructive catastrophic events and their frequency has increased over the last decades. To reduce flood impact and risks, flood warning schemes are installed in flood-prone areas. Frequently, these schemes are based on numerical models which quickly provide predictions of water levels and other relevant observables. However, the high complexity of flood wave propagation in the real world and the need for accurate predictions in urban environments or on floodplains hinder the use of detailed simulators. This is the difficulty: we need fast predictions that also meet the accuracy requirements. Most physics-based detailed simulators, although accurate, cannot meet the speed demand even when High Performance Computing techniques are used (the required simulation time is on the order of minutes to hours). As a consequence, most flood warning schemes are based on coarse ad-hoc approximations that cannot take advantage of a detailed hydrodynamic simulation. In this work, we present a methodology for developing a flood warning scheme using a Gaussian-process-based emulator of a detailed hydrodynamic model. The methodology consists of two main stages: 1) an offline stage to build the emulator; 2) an online stage using the emulator to predict and generate warnings. The offline stage consists of the following steps: a) definition of the critical sites of the area under study, and specification of the observables to predict at those sites, e.g. water depth, flow velocity, etc.; b) generation of a detailed simulation dataset to train the emulator, as sketched below; c) calibration of the required parameters (if measurements are available). The online stage is carried out using the emulator to predict the relevant observables quickly, while the detailed simulator is used in parallel to verify key predictions of the emulator. The speed gain given by the emulator also allows uncertainty in the predictions to be quantified using ensemble methods. The above methodology is applied in a real-world scenario.
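Steps b) and c) of the offline stage, fitting a fast emulator to detailed simulator runs, can be sketched with a Gaussian-process regressor from scikit-learn (the inputs standing in for forcing variables and the output for water depth at a critical site are made-up values):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(6)

# Offline stage: 80 detailed-simulator runs (here a made-up response).
X = rng.uniform(0.0, 50.0, size=(80, 2))   # e.g. rain intensity, duration
y = 0.02 * X[:, 0] * np.sqrt(X[:, 1]) + 0.05 * rng.standard_normal(80)

gp = GaussianProcessRegressor(kernel=RBF(10.0) + WhiteKernel(1e-3),
                              normalize_y=True).fit(X, y)

# Online stage: near-instant prediction with uncertainty for warnings.
depth, sigma = gp.predict(np.array([[30.0, 12.0]]), return_std=True)
print(f"predicted depth {depth[0]:.2f} m +/- {sigma[0]:.2f}")
```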
Computational complexity of the landscape II-Cosmological considerations
NASA Astrophysics Data System (ADS)
Denef, Frederik; Douglas, Michael R.; Greene, Brian; Zukowski, Claire
2018-05-01
We propose a new approach for multiverse analysis based on computational complexity, which leads to a new family of "computational" measure factors. By defining a cosmology as a space-time containing a vacuum with specified properties (for example small cosmological constant) together with rules for how time evolution will produce the vacuum, we can associate global time in a multiverse with clock time on a supercomputer which simulates it. We argue for a principle of "limited computational complexity" governing early universe dynamics as simulated by this supercomputer, which translates to a global measure for regulating the infinities of eternal inflation. The rules for time evolution can be thought of as a search algorithm, whose details should be constrained by a stronger principle of "minimal computational complexity". Unlike previously studied global measures, ours avoids standard equilibrium considerations and the well-known problems of Boltzmann Brains and the youngness paradox. We also give various definitions of the computational complexity of a cosmology, and argue that there are only a few natural complexity classes.
Mori, Yoshikazu; Ogawa, Kazuo; Warabi, Eiji; Yamamoto, Masahiro; Hirokawa, Takatsugu
2016-01-01
Transient receptor potential vanilloid type 1 (TRPV1) is a non-selective cation channel and a multimodal sensor protein. Since the precise structure of TRPV1 was obtained by electron cryo-microscopy, the binding mode of representative agonists such as capsaicin and resiniferatoxin (RTX) has been extensively characterized; however, detailed information on the binding mode of other vanilloids remains lacking. In this study, mutational analysis of human TRPV1 was performed, and four agonists (capsaicin, RTX, [6]-shogaol and [6]-gingerol) were used to identify amino acid residues involved in ligand binding and/or modulation of proton sensitivity. The detailed binding mode of each ligand was then simulated by computational analysis. As a result, three amino acids (L518, F591 and L670) were newly identified as being involved in ligand binding and/or modulation of proton sensitivity. In addition, in silico docking simulation and a subsequent mutational study suggested that [6]-gingerol might bind to and activate TRPV1 in a unique manner. These results provide novel insights into the binding mode of various vanilloids to the channel and will be helpful in developing a TRPV1 modulator. PMID:27606946
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bostani, Maryam, E-mail: mbostani@mednet.ucla.edu; McMillan, Kyle; Cagnon, Chris H.
2014-11-01
Purpose: Monte Carlo (MC) simulation methods have been widely used in patient dosimetry in computed tomography (CT), including estimating patient organ doses. However, most simulation methods have undergone a limited set of validations, often using homogeneous phantoms with simple geometries. As clinical scanning has become more complex and the use of tube current modulation (TCM) has become pervasive in the clinic, MC simulations should include these techniques in their methodologies and therefore should also be validated using a variety of phantoms with different shapes and material compositions that result in a variety of differently modulated tube current profiles. The purpose of this work is to perform the measurements and simulations needed to validate a Monte Carlo model under a variety of test conditions where fixed tube current (FTC) and TCM were used. Methods: A previously developed MC model for estimating dose from CT scans that models TCM, built on the MCNPX platform, was used for CT dose quantification. In order to validate the suitability of this model to accurately simulate patient dose from FTC and TCM CT scans, measurements and simulations were compared over a wide range of conditions. Phantoms used for testing ranged from simple geometries with homogeneous composition (16 and 32 cm computed tomography dose index phantoms) to more complex phantoms, including a rectangular homogeneous water-equivalent phantom, an elliptical phantom with three sections (each section homogeneous but of a different material), and a heterogeneous, complex-geometry anthropomorphic phantom. Each phantom elicits varying levels of x-, y- and z-modulation. Each phantom was scanned on a multidetector-row CT (Sensation 64) scanner under both FTC and TCM conditions. Dose measurements were made at various surface and depth positions within each phantom. Simulations using each phantom were performed for FTC, detailed x-y-z TCM, and z-axis-only TCM to obtain dose estimates. This allowed direct comparisons between measured and simulated dose values under each condition of phantom, location, and scan. Results: For FTC scans, the percent root mean square (RMS) difference between measurements and simulations was within 5% across all phantoms. For TCM scans, the percent RMS of the difference between measured and simulated values when using detailed TCM and z-axis-only TCM simulations was 4.5% and 13.2%, respectively. For the anthropomorphic phantom, the difference between TCM measurements and detailed TCM and z-axis-only TCM simulations was 1.2% and 8.9%, respectively. For FTC measurements and simulations, the percent RMS of the difference was 5.0%. Conclusions: This work demonstrated that the Monte Carlo model developed provided good agreement between measured and simulated values for both simple and complex geometries, including an anthropomorphic phantom. This work also showed the increased dose differences for z-axis-only TCM simulations, where considerable modulation in the x-y plane was present due to the shape of the rectangular water phantom. Results from this investigation highlight details that need to be included in Monte Carlo simulations of TCM CT scans in order to yield accurate, clinically viable assessments of patient dosimetry.
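For reference, a standard definition of the percent root mean square difference over N paired measured and simulated doses, consistent with how such comparisons are usually reported (the paper's exact formula is not given in the abstract), is

\[
\mathrm{RMS}\,[\%] = 100\,\sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(\frac{D_i^{\mathrm{sim}} - D_i^{\mathrm{meas}}}{D_i^{\mathrm{meas}}}\right)^{2}}.
\]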
Ligand Binding: Molecular Mechanics Calculation of the Streptavidin-Biotin Rupture Force
NASA Astrophysics Data System (ADS)
Grubmuller, Helmut; Heymann, Berthold; Tavan, Paul
1996-02-01
The force required to rupture the streptavidin-biotin complex was calculated here by computer simulations. The computed force agrees well with that obtained by recent single-molecule atomic force microscope experiments. These simulations suggest a detailed multiple-pathway rupture mechanism involving five major unbinding steps. Binding forces and specificity are attributed to a hydrogen bond network between the biotin ligand and residues within the binding pocket of streptavidin. During rupture, additional water bridges substantially enhance the stability of the complex and even dominate the binding interactions. In contrast, steric restraints do not appear to contribute to the binding forces, although conformational motions were observed.
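Force-probe simulations of this kind typically model the AFM cantilever as a harmonic spring whose anchor retracts at constant velocity; in generic notation (not the paper's exact parameters),

\[
V_{\mathrm{spring}}(x,t) = \frac{k}{2}\bigl(x - x_0 - vt\bigr)^2, \qquad F(t) = -k\bigl(x - x_0 - vt\bigr),
\]

where x is the pulled ligand coordinate, k the cantilever stiffness, and v the retraction velocity; the rupture force is then read off as the maximum of F(t) along the unbinding trajectory.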
NASA Astrophysics Data System (ADS)
Herrick, Gregory Paul
The quest to accurately capture flow phenomena with length-scales both short and long and to accurately represent complex flow phenomena within disparately sized geometry inspires a need for an efficient, high-fidelity, multi-block structured computational fluid dynamics (CFD) parallel computational scheme. This research presents and demonstrates a more efficient computational method by which to perform multi-block structured CFD parallel computational simulations, thus facilitating higher-fidelity solutions of complicated geometries (due to the inclusion of grids for "small" flow areas which are often merely modeled) and their associated flows. This computational framework offers greater flexibility and user-control in allocating the resource balance between process count and wall-clock computation time. The principal modifications implemented in this revision consist of a "multiple grid block per processing core" software infrastructure and an analytic computation of viscous flux Jacobians. The development of this scheme is largely motivated by the desire to simulate axial compressor stall inception with more complete gridding of the flow passages (including rotor tip clearance regions) than has been previously done while maintaining high computational efficiency (i.e., minimal consumption of computational resources), and thus this paradigm shall be demonstrated with an examination of instability in a transonic axial compressor. However, the paradigm presented herein facilitates CFD simulation of myriad previously impractical geometries and flows and is not limited to detailed analyses of axial compressor flows. While the simulations presented herein were technically possible under the previous structure of the subject software, they were much less computationally efficient and thus not pragmatically feasible; the previous research using this software to perform three-dimensional, full-annulus, time-accurate, unsteady, full-stage (with sliding-interface) simulations of rotating stall inception in axial compressors utilized tip clearance periodic models, while the scheme here is demonstrated by a simulation of axial compressor stall inception utilizing gridded rotor tip clearance regions. As will be discussed, much previous research (experimental, theoretical, and computational) has suggested that understanding clearance flow behavior is critical to understanding stall inception, and previous computational research efforts which have used tip clearance models have begged the question, "What about the clearance flows?". This research begins to address that question.
Multi-threaded ATLAS simulation on Intel Knights Landing processors
NASA Astrophysics Data System (ADS)
Farrell, Steven; Calafiura, Paolo; Leggett, Charles; Tsulaia, Vakhtang; Dotti, Andrea; ATLAS Collaboration
2017-10-01
The Knights Landing (KNL) release of the Intel Many Integrated Core (MIC) Xeon Phi line of processors is a potential game changer for HEP computing. With 72 cores and deep vector registers, the KNL cards promise significant performance benefits for highly parallel, compute-heavy applications. Cori, the newest supercomputer at the National Energy Research Scientific Computing Center (NERSC), was delivered to its users in two phases, with the first phase online at the end of 2015 and the second phase online at the end of 2016. Cori Phase 2 is based on the KNL architecture and contains over 9,000 compute nodes with 96 GB DDR4 memory. ATLAS simulation with the multithreaded Athena Framework (AthenaMT) is a good potential use-case for the KNL architecture and supercomputers like Cori. ATLAS simulation jobs have a high ratio of CPU computation to disk I/O and have been shown to scale well in multi-threading and across many nodes. In this paper we give an overview of the ATLAS simulation application with details on its multi-threaded design. Then, we present a performance analysis of the application on KNL devices and compare it to a traditional x86 platform to demonstrate the capabilities of the architecture and evaluate the benefits of utilizing KNL platforms like Cori for ATLAS production.
NASA Astrophysics Data System (ADS)
Srinath, Srikar; Poyneer, Lisa A.; Rudy, Alexander R.; Ammons, S. M.
2014-08-01
The advent of expensive, large-aperture telescopes and complex adaptive optics (AO) systems has strengthened the need for detailed simulation of such systems from the top of the atmosphere to control algorithms. The credibility of any simulation is underpinned by the quality of the atmosphere model used for introducing phase variations into the incident photons. Hitherto, simulations which incorporate wind layers have relied upon phase screen generation methods that tax the computation and memory capacities of the platforms on which they run. This places limits on parameters of a simulation, such as exposure time or resolution, thus compromising its utility. As aperture sizes and fields of view increase the problem will only get worse. We present an autoregressive method for evolving atmospheric phase that is efficient in its use of computation resources and allows for variability in the power contained in frozen flow or stochastic components of the atmosphere. Users have the flexibility of generating atmosphere datacubes in advance of runs where memory constraints allow to save on computation time or of computing the phase at each time step for long exposure times. Preliminary tests of model atmospheres generated using this method show power spectral density and rms phase in accordance with established metrics for Kolmogorov models.
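A minimal sketch of such a first-order autoregressive phase update is shown below in Python. The white-noise innovation stands in for a properly Kolmogorov-filtered screen, and all names are illustrative; the scaling keeps the phase variance stationary while the coefficient alpha sets the balance between frozen-flow and stochastic ("boiling") components.

import numpy as np

def evolve_phase(phi, alpha, new_screen):
    # AR(1) step: alpha near 1 approaches pure frozen flow;
    # smaller alpha injects more stochastic evolution.
    return alpha * phi + np.sqrt(1.0 - alpha**2) * new_screen

rng = np.random.default_rng(1)
phi = rng.standard_normal((64, 64))      # stand-in for an initial phase screen
for _ in range(100):                     # one step per simulation frame
    phi = evolve_phase(phi, 0.995, rng.standard_normal((64, 64)))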
Low Thrust Orbital Maneuvers Using Ion Propulsion
NASA Astrophysics Data System (ADS)
Ramesh, Eric
2011-10-01
Low-thrust maneuver options, such as electric propulsion, pose specific challenges within mission-level Modeling, Simulation, and Analysis (MS&A) tools. This project seeks to transition techniques for simulating low-thrust maneuvers from detailed engineering-level simulations, such as AGI's Satellite ToolKit (STK) Astrogator, to mission-level simulations such as the System Effectiveness Analysis Simulation (SEAS). Our project goals are as follows: A) Assess different low-thrust options to achieve various orbital changes; B) Compare such approaches to more conventional, high-thrust profiles; C) Compare computational cost and accuracy of various approaches to calculate and simulate low-thrust maneuvers; D) Recommend methods for implementing low-thrust maneuvers in high-level mission simulations; E) Prototype recommended solutions.
Patient-Specific Simulation of Cardiac Blood Flow From High-Resolution Computed Tomography.
Lantz, Jonas; Henriksson, Lilian; Persson, Anders; Karlsson, Matts; Ebbers, Tino
2016-12-01
Cardiac hemodynamics can be computed from medical imaging data, and results could potentially aid in cardiac diagnosis and treatment optimization. However, simulations are often based on simplified geometries, ignoring features such as papillary muscles and trabeculae due to their complex shape, limitations in image acquisitions, and challenges in computational modeling. This severely hampers the use of computational fluid dynamics in clinical practice. The overall aim of this study was to develop a novel numerical framework that incorporated these geometrical features. The model included the left atrium, ventricle, ascending aorta, and heart valves. The framework used image registration to obtain patient-specific wall motion, automatic remeshing to handle topological changes due to the complex trabeculae motion, and a fast interpolation routine to obtain intermediate meshes during the simulations. Velocity fields and residence time were evaluated, and they indicated that papillary muscles and trabeculae strongly interacted with the blood, which could not be observed in a simplified model. The framework resulted in a model with outstanding geometrical detail, demonstrating the feasibility as well as the importance of a framework that is capable of simulating blood flow in physiologically realistic hearts.
Mohiuddin, Syed; Busby, John; Savović, Jelena; Richards, Alison; Northstone, Kate; Hollingworth, William; Donovan, Jenny L; Vasilakis, Christos
2017-01-01
Objectives Overcrowding in the emergency department (ED) is common in the UK as in other countries worldwide. Computer simulation is one approach used for understanding the causes of ED overcrowding and assessing the likely impact of changes to the delivery of emergency care. However, little is known about the usefulness of computer simulation for analysis of ED patient flow. We undertook a systematic review to investigate the different computer simulation methods and their contribution for analysis of patient flow within EDs in the UK. Methods We searched eight bibliographic databases (MEDLINE, EMBASE, COCHRANE, WEB OF SCIENCE, CINAHL, INSPEC, MATHSCINET and ACM DIGITAL LIBRARY) from date of inception until 31 March 2016. Studies were included if they used a computer simulation method to capture patient progression within the ED of an established UK National Health Service hospital. Studies were summarised in terms of simulation method, key assumptions, input and output data, conclusions drawn and implementation of results. Results Twenty-one studies met the inclusion criteria. Of these, 19 used discrete event simulation and 2 used system dynamics models. The purpose of many of these studies (n=16; 76%) centred on service redesign. Seven studies (33%) provided no details about the ED being investigated. Most studies (n=18; 86%) used specific hospital models of ED patient flow. Overall, the reporting of underlying modelling assumptions was poor. Nineteen studies (90%) considered patient waiting or throughput times as the key outcome measure. Twelve studies (57%) reported some involvement of stakeholders in the simulation study. However, only three studies (14%) reported on the implementation of changes supported by the simulation. Conclusions We found that computer simulation can provide a means to pretest changes to ED care delivery before implementation in a safe and efficient manner. However, the evidence base is small and poorly developed. There are some methodological, data, stakeholder, implementation and reporting issues, which must be addressed by future studies. PMID:28487459
Efficient Computation Of Behavior Of Aircraft Tires
NASA Technical Reports Server (NTRS)
Tanner, John A.; Noor, Ahmed K.; Andersen, Carl M.
1989-01-01
NASA technical paper discusses challenging application of computational structural mechanics to numerical simulation of responses of aircraft tires during taxiing, takeoff, and landing. Presents details of three main elements of computational strategy: use of special three-field, mixed-finite-element models; use of operator splitting; and application of technique reducing substantially number of degrees of freedom. Proposed computational strategy applied to two quasi-symmetric problems: linear analysis of anisotropic tires through use of two-dimensional-shell finite elements and nonlinear analysis of orthotropic tires subjected to unsymmetric loading. Three basic types of symmetry and combinations exhibited by response of tire identified.
Enabling parallel simulation of large-scale HPC network systems
Mubarak, Misbah; Carothers, Christopher D.; Ross, Robert B.; ...
2016-04-07
Here, with the increasing complexity of today’s high-performance computing (HPC) architectures, simulation has become an indispensable tool for exploring the design space of HPC systems—in particular, networks. In order to make effective design decisions, simulations of these systems must possess the following properties: (1) have high accuracy and fidelity, (2) produce results in a timely manner, and (3) be able to analyze a broad range of network workloads. Most state-of-the-art HPC network simulation frameworks, however, are constrained in one or more of these areas. In this work, we present a simulation framework for modeling two important classes of networks used in today’s IBM and Cray supercomputers: torus and dragonfly networks. We use the Co-Design of Multi-layer Exascale Storage Architecture (CODES) simulation framework to simulate these network topologies at a flit-level detail using the Rensselaer Optimistic Simulation System (ROSS) for parallel discrete-event simulation. Our simulation framework meets all the requirements of a practical network simulation and can assist network designers in design space exploration. First, it uses validated and detailed flit-level network models to provide an accurate and high-fidelity network simulation. Second, instead of relying on serial time-stepped or traditional conservative discrete-event simulations that limit simulation scalability and efficiency, we use the optimistic event-scheduling capability of ROSS to achieve efficient and scalable HPC network simulations on today’s high-performance cluster systems. Third, our models give network designers a choice in simulating a broad range of network workloads, including HPC application workloads using detailed network traces, an ability that is rarely offered in parallel with high-fidelity network simulations.
DOT National Transportation Integrated Search
1999-02-01
This report details a project to study the relationship between highway design and human behavior as influenced by roadside environments. The project was developed in two phases. In the visualization phase, computer simulation was used to model an ac...
Quantum Molecular Dynamics Simulations of Nanotube Tip Assisted Reactions
NASA Technical Reports Server (NTRS)
Menon, Madhu
1998-01-01
In this report we detail the development and application of an efficient quantum molecular dynamics computational algorithm and its application to the nanotube-tip assisted reactions on silicon and diamond surfaces. The calculations shed interesting insights into the microscopic picture of tip surface interactions.
David A. Marquis; Richard L. Ernst
1992-01-01
Describes the purpose and function of the SILVAH computer program in general terms; provides detailed instructions on use of the program; and provides information on program organization, data formats, and the basis of processing algorithms.
Parallel discrete event simulation: A shared memory approach
NASA Technical Reports Server (NTRS)
Reed, Daniel A.; Malony, Allen D.; Mccredie, Bradley D.
1987-01-01
With traditional event-list techniques, evaluating a detailed discrete event simulation model can often require hours or even days of computation time. Parallel simulation mimics the interacting servers and queues of a real system by assigning each simulated entity to a processor. By eliminating the event list and maintaining only sufficient synchronization to ensure causality, parallel simulation can potentially provide speedups that are linear in the number of processors. A set of shared memory experiments is presented using the Chandy-Misra distributed simulation algorithm to simulate networks of queues. Parameters include queueing network topology and routing probabilities, number of processors, and assignment of network nodes to processors. These experiments show that Chandy-Misra distributed simulation is a questionable alternative to sequential simulation of most queueing network models.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xiao, A.; Borland, M.
Both intra-beam scattering (IBS) and the Touschek effect become prominent for multi-bend-achromat- (MBA-) based ultra-low-emittance storage rings. To mitigate the transverse emittance degradation and obtain a reasonably long beam lifetime, a higher harmonic rf cavity (HHC) is often proposed to lengthen the bunch. The use of such a cavity results in a non-Gaussian longitudinal distribution. However, common methods for computing IBS and Touschek scattering assume Gaussian distributions. Modifications have been made to several simulation codes that are part of the elegant [1] toolkit to allow these computations for arbitrary longitudinal distributions. After describing these modifications, we review the results of detailed simulations for the proposed hybrid seven-bend-achromat (H7BA) upgrade lattice [2] for the Advanced Photon Source.
Convergence of sampling in protein simulations
NASA Astrophysics Data System (ADS)
Hess, Berk
2002-03-01
With molecular dynamics, protein dynamics can be simulated in atomic detail. Current computers are not fast enough to probe all available conformations, but fluctuations around one conformation can be sampled to a reasonable extent. The motions with the largest fluctuations can be filtered out of a simulation using covariance or principal component analysis. A problem with this analysis is that random diffusion can appear as correlated motion. An analysis is presented of how long a simulation should be to obtain relevant results for global motions. The analysis reveals that the cosine content of the principal components is a good indicator of bad sampling.
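For reference, the cosine content of principal component i over a trajectory of length T is commonly defined as (generic notation, consistent with this line of analysis)

\[
c_i = \frac{2}{T}\left(\int_0^T \cos\!\left(\frac{i\pi t}{T}\right) p_i(t)\,dt\right)^{2} \left(\int_0^T p_i^2(t)\,dt\right)^{-1},
\]

which approaches 1 when p_i(t) resembles a cosine, as it does for pure random diffusion, and stays near 0 for converged sampling of a genuine global motion.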
Principles of magnetohydrodynamic simulation in space plasmas
NASA Technical Reports Server (NTRS)
Sato, T.
1985-01-01
Attention is given to the philosophical as well as physical principles that are essential to the establishment of MHD simulation studies for solar plasma research, assuming the capabilities of state-of-the-art computers and emphasizing the importance of 'local' MHD simulation. Solar-terrestrial plasma space is divided into several elementary regions where a macroscopic elementary energy conversion process could conceivably occur; the local MHD simulation is defined as self-contained in each of the regions. The importance of, and the difficulties associated with, the boundary condition are discussed in detail. The roles of diagnostics and of the finite difference method are noted.
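For context, the ideal MHD system that such simulations typically advance in time is (standard form, not equations quoted from the paper)

\[
\frac{\partial \rho}{\partial t} = -\nabla\cdot(\rho\mathbf{v}), \qquad
\rho\frac{d\mathbf{v}}{dt} = -\nabla p + \frac{1}{\mu_0}(\nabla\times\mathbf{B})\times\mathbf{B}, \qquad
\frac{\partial \mathbf{B}}{\partial t} = \nabla\times(\mathbf{v}\times\mathbf{B}),
\]

closed by an equation of state; a resistive term is added to the induction equation when reconnection physics is of interest, as in several of the energy conversion regions discussed.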
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wardle, Kent E.; Frey, Kurt; Pereira, Candido
2014-02-02
This task is aimed at predictive modeling of solvent extraction processes in typical extraction equipment through multiple simulation methods at various scales of resolution. We have conducted detailed continuum fluid dynamics simulations at the process unit level as well as simulations of the molecular-level physical interactions which govern extraction chemistry. By combining information gained through simulations at each of these two tiers with advanced techniques such as the Lattice Boltzmann Method (LBM), which can bridge the two scales, we can develop the tools to work towards predictive simulation for solvent extraction on the equipment scale (Figure 1). The goal of such a tool, along with enabling optimized design and operation of extraction units, would be to allow prediction of stage extraction efficiency under specified conditions. Simulation efforts on each of the two scales are described below. As the initial application of FELBM in the work performed during FY10 was on annular mixing, it is discussed in the context of the continuum scale. In the future, however, it is anticipated that the real value of FELBM will be in its use as a tool for sub-grid model development through highly refined DNS-like multiphase simulations, facilitating exploration and development of droplet models, including breakup and coalescence, which will be needed for the large-scale simulations where droplet-level physics cannot be resolved. In this area, it can have a significant advantage over traditional CFD methods, as its high computational efficiency allows exploration of significantly greater physical detail, especially as computational resources increase in the future.
Analysis of vibrational-translational energy transfer using the direct simulation Monte Carlo method
NASA Technical Reports Server (NTRS)
Boyd, Iain D.
1991-01-01
A new model is proposed for energy transfer between the vibrational and translational modes for use in the direct simulation Monte Carlo method (DSMC). The model modifies the Landau-Teller theory for a harmonic oscillator and the rate transition is related to an experimental correlation for the vibrational relaxation time. Assessment of the model is made with respect to three different computations: relaxation in a heat bath, a one-dimensional shock wave, and hypersonic flow over a two-dimensional wedge. These studies verify that the model achieves detailed balance, and excellent agreement with experimental data is obtained in the shock wave calculation. The wedge flow computation reveals that the usual phenomenological method for simulating vibrational nonequilibrium in the DSMC technique predicts much higher vibrational temperatures in the wake region.
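The Landau-Teller relaxation law underlying this model can be written as

\[
\frac{dE_v}{dt} = \frac{E_v^{*}(T) - E_v}{\tau_v},
\]

where E_v is the vibrational energy, E_v^{*}(T) its equilibrium value at the translational temperature T, and \tau_v the vibrational relaxation time taken from an experimental correlation (e.g., one of the Millikan-White type, though the abstract does not name the specific correlation used).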
Advanced Methodology for Simulation of Complex Flows Using Structured Grid Systems
NASA Technical Reports Server (NTRS)
Steinthorsson, Erlendur; Modiano, David
1995-01-01
Detailed simulations of viscous flows in complicated geometries pose a significant challenge to current capabilities of Computational Fluid Dynamics (CFD). To enable routine application of CFD to this class of problems, advanced methodologies are required that employ (a) automated grid generation, (b) adaptivity, (c) accurate discretizations and efficient solvers, and (d) advanced software techniques. Each of these ingredients contributes to increased accuracy, efficiency (in terms of human effort and computer time), and/or reliability of CFD software. In the long run, methodologies employing structured grid systems will remain a viable choice for routine simulation of flows in complex geometries only if genuinely automatic grid generation techniques for structured grids can be developed and if adaptivity is employed more routinely. More research in both these areas is urgently needed.
Landázuri, Andrea C.; Sáez, A. Eduardo; Anthony, T. Renée
2016-01-01
This work presents fluid flow and particle trajectory simulation studies to determine the aspiration efficiency of a horizontally oriented occupational air sampler using computational fluid dynamics (CFD). Grid adaption and manual scaling of the grids were applied to two sampler prototypes based on a 37-mm cassette. The standard k–ε model was used to simulate the turbulent air flow, and a second-order streamline-upwind discretization scheme was used to stabilize the convective terms of the Navier–Stokes equations. Successively scaled grids for each configuration were created manually and by means of grid adaption using the velocity gradient in the main flow direction. Solutions were verified to assess iterative convergence, grid independence, and monotonic convergence. Particle aspiration efficiencies determined for the two prototype samplers were indistinguishable, indicating that the porous filter does not play a noticeable role in particle aspiration. The results show that grid adaption is a powerful tool for refining the grid in specific regions that require greater detail, thereby better resolving the flow. It was verified that adaptive grids provided a higher number of locations with monotonic convergence than the manual grids and required the least computational effort. PMID:26949268
Complete geometric computer simulation of a classical guitar
NASA Astrophysics Data System (ADS)
Bader, Rolf
2005-04-01
The aim of formulating a complete model of a classical guitar body as a transient-time geometry is to gain detailed insight into the vibrating and coupling behavior of the time-dependent guitar system. In particular, the evolution of the guitar's initial transient can be examined in great detail, and the sounds produced by this computer implementation can be listened to. A stand-alone software package was therefore developed to build, calculate, and visualize the guitar. The model splits the guitar body into top plate, back plate, ribs, neck, enclosed air, and strings and couples these parts together, including the coupling of bending waves and in-plane waves in the plates, to serve for a better understanding of the coupling between the guitar parts and between these two kinds of waves. The resulting waveforms are integrated over the geometry, and the resulting sounds reveal the different roles and contributions of the guitar body parts to the overall guitar sound. Cooperation with guitar makers has been established, as the effect of changes to the guitar's geometry on the resulting sound can be explored in the computer simulation, and promising new sound qualities can then be used in real instrument production.
NASA Astrophysics Data System (ADS)
Li, Xiaoyi; Soteriou, Marios C.
2016-08-01
Recent advances in numerical methods coupled with the substantial enhancements in computing power and the advent of high performance computing have presented first principle, high fidelity simulation as a viable tool in the prediction and analysis of spray atomization processes. The credibility and potential impact of such simulations, however, has been hampered by the relative absence of detailed validation against experimental evidence. The numerical stability and accuracy challenges arising from the need to simulate the high liquid-gas density ratio across the sharp interfaces encountered in these flows are key reasons for this. In this work we challenge this status quo by presenting a numerical model able to deal with these challenges, employing it in simulations of liquid jet in crossflow atomization and performing extensive validation of its results against a carefully executed experiment with detailed measurements in the atomization region. We then proceed to the detailed analysis of the flow physics. The computational model employs the coupled level set and volume of fluid approach to directly capture the spatiotemporal evolution of the liquid-gas interface and the sharp-interface ghost fluid method to stably handle high liquid-air density ratio. Adaptive mesh refinement and Lagrangian droplet models are shown to be viable options for computational cost reduction. Moreover, high performance computing is leveraged to manage the computational cost. The experiment selected for validation eliminates the impact of inlet liquid and gas turbulence and focuses on the impact of the crossflow aerodynamic forces on the atomization physics. Validation is demonstrated by comparing column surface wavelengths, deformation, breakup locations, column trajectories and droplet sizes, velocities, and mass rates for a range of intermediate Weber numbers. Analysis of the physics is performed in terms of the instability and breakup characteristics and the features of downstream flow recirculation, and vortex shedding. Formation of "Λ" shape windward column waves is observed and explained by the combined upward and lateral surface motion. The existence of Rayleigh-Taylor instability as the primary mechanism for the windward column waves is verified for this case by comparing wavelengths from the simulations to those predicted by linear stability analyses. Physical arguments are employed to postulate that the type of instability manifested may be related to conditions such as the gas Weber number and the inlet turbulence level. The decreased column wavelength with increasing Weber number is found to cause enhanced surface stripping and early depletion of liquid core at higher Weber number. A peculiar "three-streak-two-membrane" liquid structure is identified at the lowest Weber number and explained as the consequence of the symmetric recirculation zones behind the jet column. It is found that the vortical flow downstream of the liquid column resembles a von Karman vortex street and that the coupling between the gas flow and droplet transport is weak for the conditions explored.
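For reference, the interface capturing at the core of the coupled level set and volume of fluid approach advects a signed-distance function \phi and a liquid volume fraction F with the local velocity field (standard formulation, in generic notation):

\[
\frac{\partial \phi}{\partial t} + \mathbf{u}\cdot\nabla\phi = 0, \qquad
\frac{\partial F}{\partial t} + \nabla\cdot(F\,\mathbf{u}) = 0,
\]

with the interface located at \phi = 0. The level set supplies accurate normals and curvature for the surface tension and ghost fluid treatment, while the volume fraction enforces mass conservation.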
Extended Operating Configuration 2 (EOC-2) Design Document
NASA Technical Reports Server (NTRS)
Barkai, David; Blaylock, Bruce T. (Technical Monitor)
1994-01-01
This document describes the design and plan of the Extended Operating Configuration 2 (EOC-2) for the Numerical Aerodynamic Simulation division (NAS). It covers the changes in the computing environment for the period of '93-'94. During this period the computation capability at NAS will have quadrupled. The first section summarizes this paper: the NAS mission is to provide, by the year 2000, a computing system capable of simulating an entire aerospace vehicle in a few hours. This will require 100 GigaFlops sustained performance. The second section contains information about the NAS user community and the computational model used for projecting future requirements. In the third section, the overall requirements are presented, followed by a summary of the target EOC-2 system. The following sections cover, in more detail, each major component that will have undergone change during EOC-2: the high speed processor, mass storage, workstations, and networks.
Surgical robot setup simulation with consistent kinematics and haptics for abdominal surgery.
Hayashibe, Mitsuhiro; Suzuki, Naoki; Hattori, Asaki; Suzuki, Shigeyuki; Konishi, Kozo; Kakeji, Yoshihiro; Hashizume, Makoto
2005-01-01
Preoperative simulation and planning of surgical robot setup should accompany advanced robotic surgery if their advantages are to be further pursued. Feedback from the planning system will play an essential role in computer-aided robotic surgery, in addition to preoperative detailed geometric information from patient CT/MRI images. Surgical robot setup simulation systems for appropriate trocar site placement have been developed, especially for abdominal surgery. The motion of the surgical robot can be simulated and rehearsed with kinematic constraints at the trocar site and the inverse kinematics of the robot. Results from simulation using clinical patient data verify the effectiveness of the proposed system.
SpectraPLOT, Visualization Package with a User-Friendly Graphical Interface
NASA Astrophysics Data System (ADS)
Sebald, James; Macfarlane, Joseph; Golovkin, Igor
2017-10-01
SPECT3D is a collisional-radiative spectral analysis package designed to compute detailed emission, absorption, or x-ray scattering spectra, filtered images, XRD signals, and other synthetic diagnostics. The spectra and images are computed for virtual detectors by post-processing the results of hydrodynamics simulations in 1D, 2D, and 3D geometries. SPECT3D can account for a variety of instrumental response effects so that direct comparisons between simulations and experimental measurements can be made. SpectraPLOT is a user-friendly graphical interface for viewing a wide variety of results from SPECT3D simulations, and applying various instrumental effects to the simulated images and spectra. We will present SpectraPLOT's ability to display a variety of data, including spectra, images, light curves, streaked spectra, space-resolved spectra, and drilldown plasma property plots, for an argon-doped capsule implosion experiment example. Future SpectraPLOT features and enhancements will also be discussed.
Reduction of Simulation Times for High-Q Structures using the Resonance Equation
Hall, Thomas Wesley; Bandaru, Prabhakar R.; Rees, Daniel Earl
2015-11-17
Simulating the steady-state performance of high quality factor (Q) resonant RF structures is computationally difficult for structures larger than a few wavelengths because of the long times (on the order of ~0.1 ms) required to achieve steady state in comparison with the maximum time step that can be used in the simulation (typically on the order of ~1 ps). This paper presents analytical and computational approaches that can be used to accelerate the simulation of the steady-state performance of such structures. The basis of the proposed approach is the use of a larger-amplitude signal at the beginning of the simulation to achieve steady state earlier than with the nominal input signal. The methodology for finding the necessary input signal is discussed in detail, and the validity of the approach is evaluated.
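The idea can be illustrated with the standard single-pole cavity fill model. Assuming the field envelope obeys V(t) = V_ss(1 - e^{-t/\tau}) with time constant \tau = 2Q_L/\omega, driving at k times the nominal amplitude and switching back to the nominal drive at time t_s chosen so that

\[
k\left(1 - e^{-t_s/\tau}\right) = 1 \quad\Longrightarrow\quad t_s = -\tau\,\ln\!\left(1 - \frac{1}{k}\right)
\]

brings the envelope to its nominal steady-state value at t_s, after which the nominal drive holds it there. This is a textbook illustration of the principle, with k and t_s as free parameters rather than values from the paper.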
NASA Technical Reports Server (NTRS)
Rosenberg, L. S.; Revere, W. R.; Selcuk, M. K.
1981-01-01
A computer simulation code was employed to evaluate several generic types of solar power systems (up to 10 MWe). Details of the simulation methodology and the solar plant concepts are given along with cost and performance results. The Solar Energy Simulation computer code (SESII) was used, which optimizes the size of the collector field and energy storage subsystem for given engine-generator and energy-transport characteristics. Nine plant types were examined, which employed combinations of different technology options, such as: distributed or central receivers with one- or two-axis tracking or no tracking; point- or line-focusing concentrators; central or distributed power conversion; Rankine, Brayton, or Stirling thermodynamic cycles; and thermal or electrical storage. Optimal cost curves were plotted as a function of levelized busbar energy cost and annualized plant capacity. Point-focusing distributed receiver systems were found to be most efficient (17-26 percent).
NASA Technical Reports Server (NTRS)
Gotsis, Pascal K.; Chamis, Christos C.
1992-01-01
The nonlinear behavior of a high-temperature metal-matrix composite (HT-MMC) was simulated by using the metal matrix composite analyzer (METCAN) computer code. The simulation started with the fabrication process, proceeded to thermomechanical cyclic loading, and ended with the application of a monotonic load. Classical laminate theory and composite micromechanics and macromechanics are used in METCAN, along with a multifactor interaction model for the constituents' behavior. The simulation of the stress-strain behavior from the macromechanical and the micromechanical points of view, as well as the initiation and final failure of the constituents and the plies in the composite, were examined in detail. It was shown that, when the fibers and the matrix were perfectly bonded, the fracture started in the matrix and then propagated with increasing load to the fibers. After the fibers fractured, the composite lost its capacity to carry additional load and fractured.
Accelerated simulation of stochastic particle removal processes in particle-resolved aerosol models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Curtis, J.H.; Michelotti, M.D.; Riemer, N.
2016-10-01
Stochastic particle-resolved methods have proven useful for simulating multi-dimensional systems such as composition-resolved aerosol size distributions. While particle-resolved methods have substantial benefits for highly detailed simulations, these techniques suffer from high computational cost, motivating efforts to improve their algorithmic efficiency. Here we formulate an algorithm for accelerating particle removal processes by aggregating particles of similar size into bins. We present the Binned Algorithm for particle removal processes and analyze its performance with application to the atmospherically relevant process of aerosol dry deposition. We show that the Binned Algorithm can dramatically improve the efficiency of particle removals, particularly for low removal rates, and that the computational cost is reduced without introducing additional error. In simulations of aerosol particle removal by dry deposition under atmospherically relevant conditions, we demonstrate about a 50-fold increase in algorithm efficiency.
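A minimal sketch of bin-accelerated removal via rejection sampling is given below in Python; the binning by size, the rate function, and all names are illustrative assumptions rather than code from the paper.

import numpy as np

def remove_step(bins, rate, rate_max, dt, rng):
    # One removal time step. Each bin i holds particles whose per-particle
    # removal rate is bounded above by rate_max[i], so candidates can be
    # drawn cheaply per bin and thinned by rejection against rate(p).
    survivors = []
    for i, particles in enumerate(bins):
        n = len(particles)
        n_events = rng.binomial(n, min(1.0, rate_max[i] * dt)) if n else 0
        doomed = set(rng.choice(n, size=n_events, replace=False)) if n_events else set()
        survivors.append([p for j, p in enumerate(particles)
                          if j not in doomed or rng.random() > rate(p) / rate_max[i]])
    return survivors

# Illustrative use: two size bins with a size-proportional removal rate.
rng = np.random.default_rng(0)
bins = [list(rng.uniform(0.1, 1.0, 1000)), list(rng.uniform(1.0, 2.0, 1000))]
bins = remove_step(bins, rate=lambda p: 0.01 * p, rate_max=[0.01, 0.02], dt=1.0, rng=rng)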
Simulation-Based Approach for Site-Specific Optimization of Hydrokinetic Turbine Arrays
NASA Astrophysics Data System (ADS)
Sotiropoulos, F.; Chawdhary, S.; Yang, X.; Khosronejad, A.; Angelidis, D.
2014-12-01
A simulation-based approach has been developed to enable site-specific optimization of tidal and current turbine arrays in real-life waterways. The computational code is based on the St. Anthony Falls Laboratory Virtual StreamLab (VSL3D), which is able to carry out high-fidelity simulations of turbulent flow and sediment transport processes in rivers and streams taking into account the arbitrary geometrical complexity characterizing natural waterways. The computational framework can be used either in turbine-resolving mode, to take into account all geometrical details of the turbine, or with the turbines parameterized as actuator disks or actuator lines. Locally refined grids are employed to dramatically increase the resolution of the simulation and enable efficient simulations of multi-turbine arrays. Turbine/sediment interactions are simulated using the coupled hydro-morphodynamic module of VSL3D. The predictive capabilities of the resulting computational framework will be demonstrated by applying it to simulate turbulent flow past a tri-frame configuration of hydrokinetic turbines in a rigid-bed turbulent open channel flow as well as turbines mounted on mobile bed open channels to investigate turbine/sediment interactions. The utility of the simulation-based approach for guiding the optimal development of turbine arrays in real-life waterways will also be discussed and demonstrated. This work was supported by NSF grant IIP-1318201. Simulations were carried out at the Minnesota Supercomputing Institute.
Spurious Numerical Solutions Of Differential Equations
NASA Technical Reports Server (NTRS)
Lafon, A.; Yee, H. C.
1995-01-01
Paper presents detailed study of spurious steady-state numerical solutions of differential equations that contain nonlinear source terms. Main objectives of this study are (1) to investigate how well numerical steady-state solutions of model nonlinear reaction/convection boundary-value problem mimic true steady-state solutions and (2) to relate findings of this investigation to implications for interpretation of numerical results from computational-fluid-dynamics algorithms and computer codes used to simulate reacting flows.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gustavsen, Arlid; Kohler, Christian; Dalehaug, Arvid
2008-12-01
This paper assesses the accuracy of the simplified frame-cavity conduction/convection and radiation models presented in ISO 15099 and used in software for rating and labeling window products. Temperatures and U-factors for typical horizontal window frames with internal cavities are compared; results from Computational Fluid Dynamics (CFD) simulations with detailed radiation modeling are used as a reference. Four different frames were studied. Two were made of polyvinyl chloride (PVC) and two of aluminum. For each frame, six different simulations were performed: two with a CFD code and four with a building-component thermal-simulation tool using the Finite Element Method (FEM). This FEM tool addresses convection using correlations from ISO 15099; it addresses radiation with either correlations from ISO 15099 or with a detailed, view-factor-based radiation model. Calculations were performed using the CFD code with and without fluid flow in the window frame cavities; the calculations without fluid flow were performed to verify that the CFD code and the building-component thermal-simulation tool produced consistent results. With the FEM code, the practice of subdividing small frame cavities was examined: in some cases not subdividing, in some cases subdividing cavities with interconnections smaller than five millimeters (mm) (ISO 15099), and in some cases subdividing cavities with interconnections smaller than seven mm (a breakpoint that has been suggested in other studies). For the various frames, the calculated U-factors were found to be quite comparable (the maximum difference between the reference CFD simulation and the other simulations was 13.2 percent). A maximum difference of 8.5 percent was found between the CFD simulation and the FEM simulation using ISO 15099 procedures. The ISO 15099 correlation works best for frames with high U-factors; for more efficient frames, the relative differences among the various simulations are larger. Temperatures were also compared at selected locations on the frames; small differences were found in the results from model to model. Finally, the effectiveness of the ISO cavity radiation algorithms was examined by comparing results from these algorithms to detailed radiation calculations (from both programs). Our results suggest that improvements in cavity heat transfer calculations can be obtained by using detailed radiation modeling (i.e., view-factor or ray-tracing models), and that incorporation of these strategies may be more important for improving the accuracy of results than the use of CFD modeling for horizontal cavities.
NASA Technical Reports Server (NTRS)
Yau, A. W.; Whalen, B. A.; Harris, F. R.; Gattinger, R. L.; Pongratz, M. B.
1985-01-01
Observations of plasma depletion, ion composition modification, and airglow emissions in the Waterhole experiments are presented. The detailed ion chemistry and airglow emission processes related to the ionospheric hole formation in the experiment are examined, and observations are compared with computer simulation results. The latter indicate that the overall depletion rates in different parts of the depletion region are governed by different parameters.
Stone, John E.; Hallock, Michael J.; Phillips, James C.; Peterson, Joseph R.; Luthey-Schulten, Zaida; Schulten, Klaus
2016-01-01
Many of the continuing scientific advances achieved through computational biology are predicated on the availability of ongoing increases in computational power required for detailed simulation and analysis of cellular processes on biologically-relevant timescales. A critical challenge facing the development of future exascale supercomputer systems is the development of new computing hardware and associated scientific applications that dramatically improve upon the energy efficiency of existing solutions, while providing increased simulation, analysis, and visualization performance. Mobile computing platforms have recently become powerful enough to support interactive molecular visualization tasks that were previously only possible on laptops and workstations, creating future opportunities for their convenient use for meetings, remote collaboration, and as head mounted displays for immersive stereoscopic viewing. We describe early experiences adapting several biomolecular simulation and analysis applications for emerging heterogeneous computing platforms that combine power-efficient system-on-chip multi-core CPUs with high-performance massively parallel GPUs. We present low-cost power monitoring instrumentation that provides sufficient temporal resolution to evaluate the power consumption of individual CPU algorithms and GPU kernels. We compare the performance and energy efficiency of scientific applications running on emerging platforms with results obtained on traditional platforms, identify hardware and algorithmic performance bottlenecks that affect the usability of these platforms, and describe avenues for improving both the hardware and applications in pursuit of the needs of molecular modeling tasks on mobile devices and future exascale computers. PMID:27516922
Numerical Propulsion System Simulation: An Overview
NASA Technical Reports Server (NTRS)
Lytle, John K.
2000-01-01
Implementing new technology in aerospace propulsion systems is becoming prohibitively expensive and time consuming. The main contributors to the high cost and lengthy schedule are the need to perform many large-scale hardware tests and the inability to integrate all appropriate subsystems early in the design process. The NASA Glenn Research Center is developing the technologies required to enable simulations of full aerospace propulsion systems in sufficient detail to resolve critical design issues early in the design process, before hardware is built. This concept, called the Numerical Propulsion System Simulation (NPSS), is focused on the integration of multiple disciplines such as aerodynamics, structures, and heat transfer with computing and communication technologies to capture complex physical processes in a timely and cost-effective manner. The vision for NPSS, as illustrated, is to be a "numerical test cell" that enables full engine simulation overnight on cost-effective computing platforms. There are several key elements within NPSS that are required to achieve this capability: 1) clear data interfaces through the development and/or use of data exchange standards, 2) modular and flexible program construction through the use of object-oriented programming, 3) integrated multiple-fidelity analysis (zooming) techniques that capture the appropriate physics at the appropriate fidelity for the engine systems, 4) multidisciplinary coupling techniques, and 5) high performance parallel and distributed computing. The current state of development in these five areas focuses on air-breathing gas turbine engines and is reported in this paper. However, many of the technologies are generic and can be readily applied to rocket-based systems and combined cycles currently being considered for low-cost access-to-space applications. Recent accomplishments include: (1) the development of an industry-standard engine cycle analysis program and plug 'n play architecture, called NPSS Version 1; (2) a full engine simulation that combines a 3D low-pressure subsystem with a 0D high-pressure core simulation, demonstrating the ability to integrate analyses at different levels of detail and to aerodynamically couple components, the fan/booster and low-pressure turbine, through a 3D computational fluid dynamics simulation; (3) simulation of all of the turbomachinery in a modern turbofan engine on a parallel computing platform for rapid and cost-effective execution, a capability that can also be used to generate a full compressor map, requiring both design and off-design simulation; (4) three levels of coupling characterizing the multidisciplinary analysis under NPSS: loosely coupled, process coupled, and tightly coupled, where the loosely coupled and process coupled approaches require a common geometry definition to link CAD to analysis tools, and the tightly coupled approach is currently validating the use of an arbitrary Lagrangian/Eulerian formulation for rotating turbomachinery, with validation covering both centrifugal and axial compression systems (the results of the validation are reported in the paper); and (5) the demonstration of significant computing cost/performance reduction for turbine engine applications using PC clusters. The NPSS Project is supported under the NASA High Performance Computing and Communications Program.
NASA Technical Reports Server (NTRS)
Cole, Gary L.; Richard, Jacques C.
1991-01-01
An approach to simulating the internal flows of supersonic propulsion systems is presented. The approach is based on a fairly simple modification of the Large Perturbation Inlet (LAPIN) computer code. LAPIN uses a quasi-one dimensional, inviscid, unsteady formulation of the continuity, momentum, and energy equations. The equations are solved using a shock capturing, finite difference algorithm. The original code, developed for simulating supersonic inlets, includes engineering models of unstart/restart, bleed, bypass, and variable duct geometry, by means of source terms in the equations. The source terms also provide a mechanism for incorporating, with the inlet, propulsion system components such as compressor stages, combustors, and turbine stages. This requires each component to be distributed axially over a number of grid points. Because of the distributed nature of such components, this representation should be more accurate than a lumped parameter model. Components can be modeled by performance map(s), which in turn are used to compute the source terms. The general approach is described. Then, simulation of a compressor/fan stage is discussed to show the approach in detail.
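The abstract describes the formulation but no implementation; as a minimal sketch of a quasi-one-dimensional, source-term-based solver of this general kind, the Python fragment below advances the 1D Euler equations with a shock-capturing (here, simple Lax-Friedrichs) step and a source-term hook of the sort used to represent bleed, bypass, or distributed component models. The flux choice and all names are illustrative assumptions, not LAPIN's actual algorithm.

```python
import numpy as np

def euler_flux(U, gamma=1.4):
    """Flux vector of the 1D Euler equations; rows of U are [rho, rho*u, E]."""
    rho, mom, E = U
    u = mom / rho
    p = (gamma - 1.0) * (E - 0.5 * rho * u**2)
    return np.array([mom, mom * u + p, (E + p) * u])

def step(U, dx, dt, source):
    """One shock-capturing Lax-Friedrichs step plus a source term -- the slot
    where engineering models (bleed, bypass, compressor stages) would enter."""
    F = euler_flux(U)
    Up, Um = np.roll(U, -1, axis=1), np.roll(U, 1, axis=1)
    Fp, Fm = np.roll(F, -1, axis=1), np.roll(F, 1, axis=1)
    return 0.5 * (Up + Um) - dt / (2.0 * dx) * (Fp - Fm) + dt * source(U)

# toy check: a uniform flow with zero sources stays uniform (periodic domain)
nx, dx, dt = 100, 0.01, 1e-4
U = np.tile([[1.0], [0.5], [2.5]], (1, nx))
for _ in range(10):
    U = step(U, dx, dt, lambda U: np.zeros_like(U))
print(U[:, :3])
```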
Opportunities for Breakthroughs in Large-Scale Computational Simulation and Design
NASA Technical Reports Server (NTRS)
Alexandrov, Natalia; Alter, Stephen J.; Atkins, Harold L.; Bey, Kim S.; Bibb, Karen L.; Biedron, Robert T.; Carpenter, Mark H.; Cheatwood, F. McNeil; Drummond, Philip J.; Gnoffo, Peter A.
2002-01-01
Opportunities for breakthroughs in the large-scale computational simulation and design of aerospace vehicles are presented. Computational fluid dynamics tools to be used within multidisciplinary analysis and design methods are emphasized. The opportunities stem from speedups and robustness improvements in the underlying unit operations associated with simulation (geometry modeling, grid generation, physical modeling, analysis, etc.). Further, an improved programming environment can synergistically integrate these unit operations to leverage the gains. The speedups result from reducing the problem setup time through geometry modeling and grid generation operations, and reducing the solution time through the operation counts associated with solving the discretized equations to a sufficient accuracy. The opportunities are addressed only at a general level here, but an extensive list of references containing further details is included. The opportunities discussed are being addressed through the Fast Adaptive Aerospace Tools (FAAST) element of the Advanced Systems Concept to Test (ASCoT) and the third Generation Reusable Launch Vehicles (RLV) projects at NASA Langley Research Center. The overall goal is to enable greater inroads into the design process with large-scale simulations.
ANOPP/VMS HSCT ground contour system
NASA Technical Reports Server (NTRS)
Rawls, John, Jr.; Glaab, Lou
1992-01-01
This viewgraph shows the integration of the Visual Motion Simulator with ANOPP. ANOPP is an acronym for the Aircraft NOise Prediction Program. It is a computer code consisting of dedicated noise prediction modules for jet, propeller, and rotor powered aircraft along with flight support and noise propagation modules, all executed under the control of an executive system. The Visual Motion Simulator (VMS) is a ground based motion simulator with six degrees of freedom. The transport-type cockpit is equipped with conventional flight and engine-thrust controls and with flight instrument displays. Control forces on the wheel, column, and rudder pedals are provided by a hydraulic system coupled with an analog computer. The simulator provides variable-feel characteristics of stiffness, damping, coulomb friction, breakout forces, and inertia. The VMS provides a wide range of realistic flight trajectories necessary for computing accurate ground contours. The NASA VMS will be discussed in detail later in this presentation. An equally important part of the system for both ANOPP and VMS is the engine performance. This will also be discussed in the presentation.
Review of Airport Ground Traffic Models Including an Evaluation of the ASTS Computer Program
DOT National Transportation Integrated Search
1972-12-01
The report covers an evaluation of Airport Ground Traffic models for the purpose of simulating an Autonomous Local Intersection Controller. All known models were reviewed and a detailed study was performed on the two in-house models, the ASTS and ROSS...
A Limited-Vocabulary, Multi-Speaker Automatic Isolated Word Recognition System.
ERIC Educational Resources Information Center
Paul, James E., Jr.
Techniques for automatic recognition of isolated words are investigated, and a computer simulation of a word recognition system is effected. Considered in detail are data acquisition and digitizing, word detection, amplitude and time normalization, short-time spectral estimation including spectral windowing, spectral envelope approximation,…
Simulation of the communication system between an AUV group and a surface station
NASA Astrophysics Data System (ADS)
Burtovaya, D.; Demin, A.; Demeshko, M.; Moiseev, A.; Kudryashova, A.
2017-01-01
An object model for simulating the communication system between a group of autonomous underwater vehicles (AUVs) and a surface station is proposed in the paper. The model is implemented on the basis of the software package "Object Distribution Simulation". All structural relationships and behavior details are described. An application was developed on the basis of the proposed model and is now used for computational experiments on the simulation of the communication system between the AUV group and the surface station.
Slat Noise Predictions Using Higher-Order Finite-Difference Methods on Overset Grids
NASA Technical Reports Server (NTRS)
Housman, Jeffrey A.; Kiris, Cetin
2016-01-01
Computational aeroacoustic simulations using the structured overset grid approach and higher-order finite difference methods within the Launch Ascent and Vehicle Aerodynamics (LAVA) solver framework are presented for slat noise predictions. The simulations are part of a collaborative study comparing noise generation mechanisms between a conventional slat and a Krueger leading edge flap. Simulation results are compared with experimental data acquired during an aeroacoustic test in the NASA Langley Quiet Flow Facility. Details of the structured overset grid, numerical discretization, and turbulence model are provided.
Coupling all-atom molecular dynamics simulations of ions in water with Brownian dynamics.
Erban, Radek
2016-02-01
Molecular dynamics (MD) simulations of ions (K⁺, Na⁺, Ca²⁺ and Cl⁻) in aqueous solutions are investigated. Water is described using the SPC/E model. A stochastic coarse-grained description for ion behaviour is presented and parametrized using MD simulations. It is given as a system of coupled stochastic and ordinary differential equations, describing the ion position, velocity and acceleration. The stochastic coarse-grained model provides an intermediate description between all-atom MD simulations and Brownian dynamics (BD) models. It is used to develop a multiscale method which uses all-atom MD simulations in parts of the computational domain and (less detailed) BD simulations in the remainder of the domain.
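As a rough illustration of such a coupled position-velocity-acceleration description, the toy Euler-Maruyama integration below puts the stochastic forcing in the acceleration equation while position and velocity obey ODEs. The relaxation form and all parameter values are placeholders, not the MD-fitted coefficients of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# illustrative parameters (friction, acceleration memory, noise strength)
eta, tau, sigma = 50.0, 0.05, 200.0
dt, nsteps = 1e-4, 10000

x, v, a = 0.0, 0.0, 0.0
traj = np.empty(nsteps)
for n in range(nsteps):
    x += v * dt                      # ODE: position follows velocity
    v += a * dt                      # ODE: velocity follows acceleration
    # SDE: acceleration relaxes toward -eta*v and carries the noise
    a += -(a + eta * v) / tau * dt + (sigma / tau) * np.sqrt(dt) * rng.standard_normal()
    traj[n] = x

print("mean square displacement:", np.mean((traj - traj[0]) ** 2))
```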
Grand canonical ensemble Monte Carlo simulation of the dCpG/proflavine crystal hydrate.
Resat, H; Mezei, M
1996-01-01
The grand canonical ensemble Monte Carlo molecular simulation method is used to investigate hydration patterns in the crystal hydrate structure of the dCpG/proflavine intercalated complex. The objective of this study is to show by example that the recently advocated grand canonical ensemble simulation is a computationally efficient method for determining the positions of the hydrating water molecules in protein and nucleic acid structures. A detailed molecular simulation convergence analysis and an analogous comparison of the theoretical results with experiments clearly show that the grand ensemble simulations can be far more advantageous than the comparable canonical ensemble simulations. PMID:8873992
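For orientation, the textbook grand canonical Metropolis acceptance rules that such a simulation is built on (not the authors' specific implementation) are compact enough to quote as code; `Lambda3` is the cubed thermal de Broglie wavelength, `mu` the chemical potential, and `dU` the energy change of the trial move.

```python
import numpy as np

def gcmc_acceptance(move, N, V, dU, mu, beta, Lambda3=1.0):
    """Acceptance probability for a grand canonical MC trial move:
    'insert' adds a water molecule, 'delete' removes one of the N present."""
    if move == "insert":
        return min(1.0, V / (Lambda3 * (N + 1)) * np.exp(beta * (mu - dU)))
    return min(1.0, Lambda3 * N / V * np.exp(-beta * (mu + dU)))

# example: trial insertion into a box of volume 1000 A^3 holding 30 waters
print(gcmc_acceptance("insert", N=30, V=1000.0, dU=-2.0, mu=-6.0, beta=1.7))
```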
A computer simulation model to compute the radiation transfer of mountainous regions
NASA Astrophysics Data System (ADS)
Li, Yuguang; Zhao, Feng; Song, Rui
2011-11-01
In mountainous regions, the radiometric signal recorded at the sensor depends on a number of factors such as sun angle, atmospheric conditions, surface cover type, and topography. In this paper, a computer simulation model of radiation transfer is designed and evaluated. This model implements Monte Carlo ray-tracing techniques and is specifically dedicated to the study of light propagation in mountainous regions. The radiative processes between sunlight and the objects within the mountainous region are realized by using forward Monte Carlo ray-tracing methods. The performance of the model is evaluated through detailed comparisons with the well-established 3D computer simulation model RGM (Radiosity-Graphics combined Model), based on the same scenes and identical spectral parameters, which show good agreement between the two models' results. Using the newly developed computer model, a series of typical mountainous scenes is generated to analyze the physical mechanism of mountainous radiation transfer. The results show that the effects of adjacent slopes are important for deep valleys and particularly affect shadowed pixels, and that the topographic effect needs to be considered in mountainous terrain before accurate inferences can be made from remotely sensed data.
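One elementary building block of such a forward Monte Carlo ray tracer is the sampling of new photon directions at each terrain interaction. The sketch below shows a standard cosine-weighted (Lambertian) bounce about a local surface normal; it is a generic ingredient assumed for illustration, not code from the model above.

```python
import numpy as np

rng = np.random.default_rng(1)

def lambertian_bounce(normal):
    """Sample a cosine-weighted outgoing direction about a unit surface
    normal -- the usual forward Monte Carlo choice for diffuse terrain."""
    u1, u2 = rng.random(2)
    r, phi = np.sqrt(u1), 2.0 * np.pi * u2
    local = np.array([r * np.cos(phi), r * np.sin(phi), np.sqrt(1.0 - u1)])
    # build an orthonormal frame (t, b, normal) around the normal
    t = np.cross(normal, [1.0, 0.0, 0.0])
    if np.linalg.norm(t) < 1e-8:                 # normal parallel to x-axis
        t = np.cross(normal, [0.0, 1.0, 0.0])
    t /= np.linalg.norm(t)
    b = np.cross(normal, t)
    return local[0] * t + local[1] * b + local[2] * normal

print(lambertian_bounce(np.array([0.0, 0.0, 1.0])))  # upper-hemisphere direction
```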
NASA Technical Reports Server (NTRS)
Haakensen, Erik Edward
1998-01-01
The desire for low-cost reliable computing is increasing. Most current fault tolerant computing solutions are not very flexible, i.e., they cannot adapt to the reliability requirements of newly emerging applications in business, commerce, and manufacturing. It is important that users have a flexible, reliable platform to support both critical and noncritical applications. Chameleon, under development at the Center for Reliable and High-Performance Computing at the University of Illinois, is a software framework for supporting cost-effective, adaptable, networked fault tolerant service. This thesis details a simulation of fault injection, detection, and recovery in Chameleon. The simulation was written in C++ using the DEPEND simulation library. The results obtained from the simulation included the amount of overhead incurred by the fault detection and recovery mechanisms supported by Chameleon. In addition, information about fault scenarios from which Chameleon cannot recover was gained. The results of the simulation showed that both critical and noncritical applications can be executed in the Chameleon environment with a fairly small amount of overhead. No single point of failure from which Chameleon could not recover was found. Chameleon was also found to be capable of recovering from several multiple-failure scenarios.
Rational design of an enzyme mutant for anti-cocaine therapeutics
NASA Astrophysics Data System (ADS)
Zheng, Fang; Zhan, Chang-Guo
2008-09-01
(-)-Cocaine is a widely abused drug and there is no available anti-cocaine therapeutic. The disastrous medical and social consequences of cocaine addiction have made the development of an effective pharmacological treatment a high priority. An ideal anti-cocaine medication would accelerate (-)-cocaine metabolism, producing biologically inactive metabolites. The main metabolic pathway of cocaine in the body is hydrolysis at its benzoyl ester group. Reviewed in this article is the state-of-the-art computational design of high-activity mutants of human butyrylcholinesterase (BChE) against (-)-cocaine. The computational design of BChE mutants has been based not only on the structure of the enzyme, but also on the detailed catalytic mechanisms for BChE-catalyzed hydrolysis of (-)-cocaine and (+)-cocaine. Computational studies of the detailed catalytic mechanisms and the structure-and-mechanism-based computational design have been carried out through the combined use of a variety of state-of-the-art molecular modeling techniques. By using the computational insights into the catalytic mechanisms, a recently developed unique computational design strategy based on simulation of the rate-determining transition state has been employed to design high-activity mutants of human BChE for hydrolysis of (-)-cocaine, leading to the discovery of BChE mutants with a considerably improved catalytic efficiency against (-)-cocaine. One of the discovered BChE mutants (i.e., A199S/S287G/A328W/Y332G) has a ~456-fold improved catalytic efficiency against (-)-cocaine. The encouraging outcome of the computational design and discovery effort demonstrates that the unique computational design approach based on transition-state simulation is promising for rational enzyme redesign and drug discovery.
Performance Improvement Through Indexing of Turbine Airfoils. Part 2; Numerical Simulation
NASA Technical Reports Server (NTRS)
Griffin, Lisa W.; Huber, Frank W.; Sharma, Om P.
1996-01-01
An experimental/analytical study has been conducted to determine the performance improvements achievable by circumferentially indexing succeeding rows of turbine stator airfoils. A series of tests was conducted to experimentally investigate stator wake clocking effects on the performance of the space shuttle main engine (SSME) alternate turbopump development (ATD) fuel turbine test article (TTA). The results from this study indicate that significant increases in stage efficiency can be attained through application of this airfoil clocking concept. Details of the experiment and its results are documented in part 1 of this paper. In order to gain insight into the mechanisms of the performance improvement, extensive computational fluid dynamics (CFD) simulations were executed. The subject of the present paper is the initial results from the CFD investigation of the configurations and conditions detailed in part 1 of the paper. To characterize the aerodynamic environments in the experimental test series, two-dimensional (2D), time accurate, multistage, viscous analyses were performed at the TTA midspan. Computational analyses for five different circumferential positions of the first stage stator have been completed. Details of the computational procedure and the results are presented. The analytical results verify the experimentally demonstrated performance improvement and are compared with data whenever possible. Predictions of time-averaged turbine efficiencies as well as gas conditions throughout the flow field are presented. An initial understanding of the turbine performance improvement mechanism based on the results from this investigation is described.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Strout, Michelle
Programming parallel machines is fraught with difficulties: the obfuscation of algorithms due to implementation details such as communication and synchronization, the need for transparency between language constructs and performance, the difficulty of performing program analysis to enable automatic parallelization techniques, and the existence of important "dusty deck" codes. The SAIMI project developed abstractions that enable the orthogonal specification of algorithms and implementation details within the context of existing DOE applications. The main idea is to enable the injection of small programming models such as expressions involving transcendental functions, polyhedral iteration spaces with sparse constraints, and task graphs into full programs through the use of pragmas. These smaller, more restricted programming models enable orthogonal specification of many implementation details such as how to map the computation onto parallel processors, how to schedule the computation, and how to allocate storage for the computation. At the same time, these small programming models enable the expression of the most computationally intense and communication-heavy portions of many scientific simulations. The ability to orthogonally manipulate the implementation for such computations will significantly ease performance programming efforts and expose transformation possibilities and parameters to automated approaches such as autotuning. At Colorado State University, the SAIMI project was supported through DOE grant DE-SC3956 from April 2010 through August 2015. The SAIMI project has contributed a number of important results to programming abstractions that enable the orthogonal specification of implementation details in scientific codes. This final report summarizes the research that was funded by the SAIMI project.
Runtime visualization of the human arterial tree.
Insley, Joseph A; Papka, Michael E; Dong, Suchuan; Karniadakis, George; Karonis, Nicholas T
2007-01-01
Large-scale simulation codes typically execute for extended periods of time and often on distributed computational resources. Because these simulations can run for hours, or even days, scientists like to get feedback about the state of the computation and the validity of its results as it runs. It is also important that these capabilities be made available with little impact on the performance and stability of the simulation. Visualizing and exploring data in the early stages of the simulation can help scientists identify problems early, potentially avoiding a situation where a simulation runs for several days, only to discover that an error with an input parameter caused both time and resources to be wasted. We describe an application that aids in the monitoring and analysis of a simulation of the human arterial tree. The application provides researchers with high-level feedback about the state of the ongoing simulation and enables them to investigate particular areas of interest in greater detail. The application also offers monitoring information about the amount of data produced and data transfer performance among the various components of the application.
NASA Astrophysics Data System (ADS)
Zhuo, Congshan; Zhong, Chengwen
2016-11-01
In this paper, a three-dimensional filter-matrix lattice Boltzmann (FMLB) model based on large eddy simulation (LES) was verified for simulating wall-bounded turbulent flows. The Vreman subgrid-scale model was employed in the present FMLB-LES framework, which had been proved capable of predicting the turbulent near-wall region accurately. Fully developed turbulent channel flows were simulated at a friction Reynolds number Reτ of 180. The turbulence statistics computed from the present FMLB-LES simulations, including the mean stream velocity profile, Reynolds stress profile, and root-mean-square velocity fluctuations, agreed well with the LES results of the multiple-relaxation-time (MRT) LB model; some discrepancies in comparison with the direct numerical simulation (DNS) data of Kim et al. were also observed, due to the relatively low grid resolution. Moreover, to investigate the influence of grid resolution on the present LES simulation, a DNS simulation on a finer grid was also carried out with the present FMLB-D3Q19 model. Detailed comparisons of the computed turbulence statistics with available DNS benchmark data showed quite good agreement.
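For reference, the Vreman subgrid-scale eddy viscosity named above can be evaluated pointwise from the resolved velocity-gradient tensor. The sketch follows the published Vreman (2004) formula with a commonly used model constant; it illustrates the closure itself, independent of the FMLB implementation.

```python
import numpy as np

def vreman_nu_t(grad_u, delta, c=0.07):
    """Vreman eddy viscosity at one grid point.
    grad_u[i, j] = du_j / dx_i (resolved), delta = filter width,
    c ~ 2.5 * Cs**2 with Smagorinsky Cs ~ 0.17 (common literature value)."""
    a = grad_u
    aa = np.sum(a * a)
    if aa < 1e-12:
        return 0.0                      # uniform flow: zero eddy viscosity
    beta = delta**2 * (a.T @ a)         # beta_ij = delta^2 * a_mi * a_mj
    B = (beta[0, 0] * beta[1, 1] - beta[0, 1]**2
         + beta[0, 0] * beta[2, 2] - beta[0, 2]**2
         + beta[1, 1] * beta[2, 2] - beta[1, 2]**2)
    return c * np.sqrt(max(B, 0.0) / aa)

# pure shear du_x/dy = 1: B vanishes, so the model returns zero here,
# a built-in property that makes Vreman well-behaved near walls
g = np.zeros((3, 3)); g[1, 0] = 1.0
print(vreman_nu_t(g, delta=0.1))
```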
Dynamic computer simulations of electrophoresis: three decades of active research.
Thormann, Wolfgang; Caslavska, Jitka; Breadmore, Michael C; Mosher, Richard A
2009-06-01
Dynamic models for electrophoresis are based upon model equations derived from the transport concepts in solution together with user-inputted conditions. They are able to predict theoretically the movement of ions and are as such the most versatile tool to explore the fundamentals of electrokinetic separations. Since its inception three decades ago, the state of dynamic computer simulation software and its use has progressed significantly and Electrophoresis played a pivotal role in that endeavor as a large proportion of the fundamental and application papers were published in this periodical. Software is available that simulates all basic electrophoretic systems, including moving boundary electrophoresis, zone electrophoresis, ITP, IEF and EKC, and their combinations under almost exactly the same conditions used in the laboratory. This has been employed to show the detailed mechanisms of many of the fundamental phenomena that occur in electrophoretic separations. Dynamic electrophoretic simulations are relevant for separations on any scale and instrumental format, including free-fluid preparative, gel, capillary and chip electrophoresis. This review includes a historical overview, a survey of current simulators, simulation examples and a discussion of the applications and achievements of dynamic simulation.
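At their core, such dynamic simulators integrate coupled electromigration-diffusion transport equations for each species. The deliberately stripped-down sketch below propagates a single sample zone in a uniform field (real simulators couple many species through electroneutrality and a spatially varying field); all parameter values are illustrative.

```python
import numpy as np

def step(c, v, D, dx, dt):
    """One explicit step of dc/dt = -v dc/dx + D d2c/dx2: first-order upwind
    advection (diffusive but stable) plus central diffusion, periodic ends."""
    adv = -v * (c - np.roll(c, 1)) / dx
    diff = D * (np.roll(c, -1) - 2.0 * c + np.roll(c, 1)) / dx**2
    return c + dt * (adv + diff)

# toy zone electrophoresis: a Gaussian sample plug in a 5 cm channel
x = np.linspace(0.0, 0.05, 200)
c = np.exp(-(((x - 0.01) / 0.001) ** 2))
v = 5e-8 * 3e4                  # mobility [m^2/(V s)] times field [V/m]
for _ in range(2000):           # 20 s of migration
    c = step(c, v, D=1e-9, dx=x[1] - x[0], dt=0.01)
print("plug peak now near x =", x[np.argmax(c)])
```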
Flexible Inhibitor Fluid-Structure Interaction Simulation in RSRM.
NASA Astrophysics Data System (ADS)
Wasistho, Bono
2005-11-01
We employ our tightly coupled fluid/structure/combustion simulation code 'Rocstar-3' for solid propellant rocket motors to study 3D flows past rigid and flexible inhibitors in the Reusable Solid Rocket Motor (RSRM). We perform high resolution simulations of a section of the rocket near the center joint slot at 100 seconds after ignition, using inflow conditions based on less detailed 3D simulations of the full RSRM. Our simulations include both inviscid and turbulent flows (using an LES dynamic subgrid-scale model), and explore the interaction between the inhibitor and the resulting fluid flow. The response of the solid components is computed by an implicit finite element solver. The internal mesh motion scheme in our block-structured fluid solver enables our code to handle significant changes in geometry. We compute turbulent statistics and determine the compound instabilities originating from the natural hydrodynamic instabilities and the inhibitor motion. The ultimate goal is to study the effect of inhibitor flexing on the turbulent field.
Extravehicular mobility unit thermal simulator
NASA Technical Reports Server (NTRS)
Hixon, C. W.; Phillips, M. A.
1973-01-01
The analytical methods, thermal model, and user's instructions for the SIM bay extravehicular mobility unit (EMU) routine are presented. This digital computer program was developed for detailed thermal performance predictions of the crewman performing a command module extravehicular activity during transearth coast. It accounts for conductive, convective, and radiative heat transfer as well as fluid flow and associated flow control components. The program is a derivative of the Apollo lunar surface EMU digital simulator. It has the operational flexibility to accept card or magnetic tape for both the input data and program logic. Output can be tabular and/or plotted and the mission simulation can be stopped and restarted at the discretion of the user. The program was developed for the NASA-JSC Univac 1108 computer system and several of the capabilities represent utilization of unique features of that system. Analytical methods used in the computer routine are based on finite difference approximations to differential heat and mass balance equations which account for temperature or time dependent thermo-physical properties.
Khakhaleva-Li, Zimu; Gnedin, Nickolay Y.
2016-03-30
In this study, we compare the properties of stellar populations of model galaxies from the Cosmic Reionization On Computers (CROC) project with the existing UV and IR data. Since CROC simulations do not follow cosmic dust directly, we adopt two variants of the dust-follows-metals ansatz to populate model galaxies with dust. Using the dust radiative transfer code Hyperion, we compute synthetic stellar spectra, UV continuum slopes, and IR fluxes for simulated galaxies. We find that the simulation results generally match observational measurements, but, perhaps, not in full detail. The differences seem to indicate that our adopted dust-follows-metals ansatzes are not fully sufficient. While the discrepancies with the existing data are marginal, the future JWST data will be of much higher precision, rendering highly significant any tentative difference between theory and observations. It is therefore likely that, in order to fully utilize the precision of JWST observations, fully dynamical modeling of dust formation, evolution, and destruction may be required.
Overview of ICE Project: Integration of Computational Fluid Dynamics and Experiments
NASA Technical Reports Server (NTRS)
Stegeman, James D.; Blech, Richard A.; Babrauckas, Theresa L.; Jones, William H.
2001-01-01
Researchers at the NASA Glenn Research Center have developed a prototype integrated environment for interactively exploring, analyzing, and validating information from computational fluid dynamics (CFD) computations and experiments. The Integrated CFD and Experiments (ICE) project is a first attempt at providing a researcher with a common user interface for control, manipulation, analysis, and data storage for both experiments and simulation. ICE can be used as a live, on-line system that displays and archives data as they are gathered; as a postprocessing system for dataset manipulation and analysis; and as a control interface or "steering mechanism" for simulation codes while visualizing the results. Although the full capabilities of ICE have not been completely demonstrated, this report documents the current system. Various applications of ICE are discussed: a low-speed compressor, a supersonic inlet, real-time data visualization, and a parallel-processing simulation code interface. A detailed data model for the compressor application is included in the appendix.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wong, Michael K.; Davidson, Megan
As part of Sandia’s nuclear deterrence mission, the B61-12 Life Extension Program (LEP) aims to modernize the aging weapon system. Modernization requires requalification, and Sandia is using high performance computing to perform advanced computational simulations to better understand, evaluate, and verify weapon system performance in conjunction with limited physical testing. The Nose Bomb Subassembly (NBSA) of the B61-12 is responsible for producing a fuzing signal upon ground impact. The fuzing signal is dependent upon electromechanical impact sensors producing valid electrical fuzing signals at impact. Computer generated models were used to assess the timing between the impact sensor’s response to the deceleration of impact and damage to major components and system subassemblies. The modeling and simulation team worked alongside the physical test team to design a large-scale reverse ballistic test to not only assess system performance, but to also validate their computational models. The reverse ballistic test conducted at Sandia’s sled test facility sent a rocket sled with a representative target into a stationary B61-12 NBSA to characterize the nose crush and functional response of NBSA components. Data obtained from data recorders and high-speed photometrics were integrated with previously generated computer models in order to refine and validate the models’ ability to reliably simulate real-world effects. Large-scale tests are impractical to conduct for every single impact scenario. By creating reliable computer models, we can perform simulations that identify trends and produce estimates of outcomes over the entire range of required impact conditions. Sandia’s HPCs enable geometric resolution that was unachievable before, allowing for more fidelity and detail, and creating simulations that can provide insight to support evaluation of requirements and performance margins. As computing resources continue to improve, researchers at Sandia hope to improve these simulations so they provide increasingly credible analysis of the system response and performance over the full range of conditions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
McCoy, Michel; Archer, Bill; Hendrickson, Bruce
The Stockpile Stewardship Program (SSP) is an integrated technical program for maintaining the safety, surety, and reliability of the U.S. nuclear stockpile. The SSP uses nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of experimental facilities and programs, and the computational capabilities to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources that support annual stockpile assessment and certification, study advanced nuclear weapons design and manufacturing processes, analyze accident scenarios and weapons aging, and provide the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balance of resources, including technical staff, hardware, simulation software, and computer science solutions. ASC is now focused on increasing predictive capabilities in a three-dimensional (3D) simulation environment while maintaining support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (sufficient resolution, dimensionality, and scientific details) and for quantifying critical margins and uncertainties. Resolving each issue requires increasingly difficult analyses because the aging process has progressively moved the stockpile further away from the original test base. Where possible, the program also enables the use of high performance computing (HPC) and simulation tools to address broader national security needs, such as foreign nuclear weapon assessments and counter nuclear terrorism.
Simulation of turbulent separated flows using a novel, evolution-based, eddy-viscosity formulation
NASA Astrophysics Data System (ADS)
Castellucci, Paul
Currently, there exists a lack of confidence in the computational simulation of turbulent separated flows at large Reynolds numbers. The most accurate methods available are too computationally costly to use in engineering applications. Thus, inexpensive models, developed using the Reynolds-averaged Navier-Stokes (RANS) equations, are often extended beyond their applicability. Although these methods will often reproduce integrated quantities within engineering tolerances, such metrics are often insensitive to details within a separated wake, and therefore, poor indicators of simulation fidelity. Using concepts borrowed from large-eddy simulation (LES), a two-equation RANS model is modified to simulate the turbulent wake behind a circular cylinder. This modification involves the computation of one additional scalar field, adding very little to the overall computational cost. When properly inserted into the baseline RANS model, this modification mimics LES in the separated wake, yet reverts to the unmodified form at the cylinder surface. In this manner, superior predictive capability may be achieved without the additional cost of fine spatial resolution associated with LES near solid boundaries. Simulations using modified and baseline RANS models are benchmarked against both LES and experimental data for a circular cylinder wake at Reynolds number 3900. In addition, the computational tool used in this investigation is subject to verification via the Method of Manufactured Solutions. Post-processing of the resultant flow fields includes both mean value and triple-decomposition analysis. These results reveal substantial improvements using the modified system and appear to drive the baseline wake solution toward that of LES, as intended.
Toward Theory-Based Instruction in Scientific Problem Solving.
ERIC Educational Resources Information Center
Heller, Joan I.; And Others
Several empirical and theoretical analyses related to scientific problem-solving are reviewed, including: detailed studies of individuals at different levels of expertise, and computer models simulating some aspects of human information processing during problem solving. Analysis of these studies has revealed many facets about the nature of the…
Computerized technique for recording board defect data
R. Bruce Anderson; R. Edward Thomas; Charles J. Gatchell; Neal D. Bennett
1993-01-01
A computerized technique for recording board defect data has been developed that is faster and more accurate than manual techniques. The lumber database generated by this technique is a necessary input to computer simulation models that estimate potential cutting yields from various lumber breakdown sequences. The technique allows collection of detailed information...
High performance computing applications in neurobiological research
NASA Technical Reports Server (NTRS)
Ross, Muriel D.; Cheng, Rei; Doshay, David G.; Linton, Samuel W.; Montgomery, Kevin; Parnas, Bruce R.
1994-01-01
The human nervous system is a massively parallel processor of information. The vast numbers of neurons, synapses, and circuits are daunting to those seeking to understand the neural basis of consciousness and intellect. Pervasive obstacles are the lack of knowledge of the detailed, three-dimensional (3-D) organization of even a simple neural system and the paucity of large scale, biologically relevant computer simulations. We use high performance graphics workstations and supercomputers to study the 3-D organization of gravity sensors as a prototype architecture foreshadowing more complex systems. Scaled-down simulations run on a Silicon Graphics workstation and scaled-up, three-dimensional versions run on the Cray Y-MP and CM5 supercomputers.
HEP - A semaphore-synchronized multiprocessor with central control. [Heterogeneous Element Processor
NASA Technical Reports Server (NTRS)
Gilliland, M. C.; Smith, B. J.; Calvert, W.
1976-01-01
The paper describes the design concept of the Heterogeneous Element Processor (HEP), a system tailored to the special needs of scientific simulation. In order to achieve high-speed computation required by simulation, HEP features a hierarchy of processes executing in parallel on a number of processors, with synchronization being largely accomplished by hardware. A full-empty-reserve scheme of synchronization is realized by zero-one-valued hardware semaphores. A typical system has, besides the control computer and the scheduler, an algebraic module, a memory module, a first-in first-out (FIFO) module, an integrator module, and an I/O module. The architecture of the scheduler and the algebraic module is examined in detail.
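HEP's full-empty synchronization can be mimicked in software for illustration. The sketch below is a Python analogy of a full-empty memory cell (the hardware 'reserve' state is omitted for brevity); HEP, of course, realized this with zero-one-valued hardware semaphores rather than threads and locks.

```python
import threading

class FullEmptyCell:
    """A register with full/empty semantics: reads block until the cell is
    full and leave it empty; writes block until it is empty and set it full."""
    def __init__(self):
        self._cv = threading.Condition()
        self._full = False
        self._value = None

    def write(self, value):
        with self._cv:
            self._cv.wait_for(lambda: not self._full)
            self._value, self._full = value, True
            self._cv.notify_all()

    def read(self):
        with self._cv:
            self._cv.wait_for(lambda: self._full)
            self._full = False
            self._cv.notify_all()
            return self._value

# a producer and a consumer synchronize purely through the cell
cell = FullEmptyCell()
producer = threading.Thread(target=lambda: [cell.write(i) for i in range(3)])
producer.start()
print([cell.read() for _ in range(3)])   # -> [0, 1, 2]
producer.join()
```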
Time-Accurate Simulations and Acoustic Analysis of Slat Free-Shear Layer
NASA Technical Reports Server (NTRS)
Khorrami, Mehdi R.; Singer, Bart A.; Berkman, Mert E.
2001-01-01
A detailed computational aeroacoustic analysis of a high-lift flow field is performed. Time-accurate Reynolds Averaged Navier-Stokes (RANS) computations simulate the free shear layer that originates from the slat cusp. Both unforced and forced cases are studied. Preliminary results show that the shear layer is a good amplifier of disturbances in the low to mid-frequency range. The Ffowcs-Williams and Hawkings equation is solved to determine the acoustic field using the unsteady flow data from the RANS calculations. The noise radiated from the excited shear layer has a spectral shape qualitatively similar to that obtained from measurements in a corresponding experimental study of the high-lift system.
Engineering studies related to Skylab program. [assessment of automatic gain control data
NASA Technical Reports Server (NTRS)
Hayne, G. S.
1973-01-01
The relationship between the S-193 Automatic Gain Control data and the magnitude of received signal power was studied in order to characterize performance parameters for Skylab equipment. The r-factor was used for the assessment; it is defined to be less than unity and is a function of off-nadir angle, ocean surface roughness, and receiver signal-to-noise ratio. A digital computer simulation was also used to assess the additive receiver noise (white noise). The system model for the digital simulation is described, along with the intermediate frequency and video impulse response functions used, details of the input waveforms, and results to date. Specific discussion of the digital computer programs used is also provided.
Computational modeling of carbohydrate recognition in protein complex
NASA Astrophysics Data System (ADS)
Ishida, Toyokazu
2017-11-01
To understand the mechanistic principle of carbohydrate recognition in proteins, we propose a systematic computational modeling strategy to identify complex carbohydrate chains on the reduced 2D free energy surface (2D-FES), determined by MD sampling combined with QM/MM energy corrections. In this article, we first report a detailed atomistic simulation study of the norovirus capsid proteins with carbohydrate antigens based on ab initio QM/MM combined with MD-FEP simulations. The present results clearly show that the binding geometries of a complex carbohydrate antigen are determined not by one single, rigid carbohydrate structure, but rather by the sum of averaged conformations mapped onto the minimum free energy region of the QM/MM 2D-FES.
NASA Astrophysics Data System (ADS)
Peruchena, Carlos M. Fernández; García-Barberena, Javier; Guisado, María Vicenta; Gastón, Martín
2016-05-01
The design of Concentrating Solar Thermal Power (CSTP) systems requires detailed knowledge of the dynamic behavior of the meteorology at the site of interest. Meteorological series are often condensed into one representative year, known as a Typical Meteorological Year (TMY), with the aim of reducing data volume and speeding up energy system simulations. This approach is appropriate for rather detailed simulations of a specific plant; however, in earlier stages of the design of a power plant, especially during the optimization of the large number of plant parameters before a final design is reached, a huge number of simulations is needed. Even with today's technology, the computational effort to simulate solar energy system performance with one year of data at high frequency (such as 1-min) can become colossal if a multivariable optimization has to be performed. This work presents a simple and efficient methodology for selecting a small number of individual days able to represent the electrical production of the plant throughout the complete year. To achieve this objective, a new procedure for determining a reduced set of typical weather data for evaluating the long-term performance of a solar energy system is proposed. The proposed methodology is based on cluster analysis and permits a drastic reduction in the computational effort related to the calculation of a CSTP plant's energy yield by simulating a reduced number of days from a high frequency TMY.
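A generic version of such cluster-based day selection is easy to sketch with k-means; the clustering variables, number of clusters, and weighting scheme below are assumptions for illustration, not the specific procedure of the paper.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# stand-in data: 365 daily irradiance profiles at 1-min resolution
days = rng.random((365, 1440))

k = 12                                   # number of representative days kept
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(days)

# keep the real day closest to each cluster centre, weighted by cluster size;
# annual yield ~ 365 * sum_j weight[j] * simulated_daily_output(day[j])
chosen, weights = [], []
for j in range(k):
    members = np.where(km.labels_ == j)[0]
    dist = np.linalg.norm(days[members] - km.cluster_centers_[j], axis=1)
    chosen.append(int(members[np.argmin(dist)]))
    weights.append(len(members) / len(days))

print("representative days:", chosen)
print("weights (fraction of the year):", np.round(weights, 3))
```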
MCC level C formulation requirements. Shuttle TAEM guidance and flight control, STS-1 baseline
NASA Technical Reports Server (NTRS)
Carman, G. L.; Montez, M. N.
1980-01-01
The TAEM guidance and body rotational dynamics models required for the MCC simulation of the TAEM mission phase are defined. This simulation begins at the end of the entry phase and terminates at TAEM autoland interface. The logic presented is the required configuration for the first shuttle orbital flight (STS-1). The TAEM guidance is simulated in detail. The rotational dynamics simulation is a simplified model that assumes that the commanded rotational rates can be achieved in the integration interval. Thus, the rotational dynamics simulation is essentially a simulation of the autopilot commanded rates and integration of these rates to determine orbiter attitude. The rotational dynamics simulation also includes a simulation of the speedbrake deflection. The body flap and elevon deflections are computed in the orbiter aerodynamic simulation.
Permanent bending and alignment of ZnO nanowires.
Borschel, Christian; Spindler, Susann; Lerose, Damiana; Bochmann, Arne; Christiansen, Silke H; Nietzsche, Sandor; Oertel, Michael; Ronning, Carsten
2011-05-06
Ion beams can be used to permanently bend and re-align nanowires after growth. We have irradiated ZnO nanowires with energetic ions, achieving bending and alignment in different directions. Not only is the bending of single nanowires studied in detail, but also the simultaneous alignment of large ensembles of ZnO nanowires. Computer simulations reveal how the bending is initiated by ion beam induced damage. Detailed structural characterization identifies dislocations that relax stresses and make the bending and alignment permanent, even surviving annealing procedures.
Computable general equilibrium model fiscal year 2013 capability development report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Edwards, Brian Keith; Rivera, Michael Kelly; Boero, Riccardo
This report documents progress made on continued development of the National Infrastructure Simulation and Analysis Center (NISAC) Computable General Equilibrium Model (NCGEM), developed in fiscal year 2012. In fiscal year 2013, NISAC refined the treatment of the labor market and performed tests with the model to examine the properties of the solutions it computes. To examine these, developers conducted a series of 20 simulations for 20 U.S. States. Each of these simulations compared an economic baseline simulation with an alternative simulation that assumed a 20-percent reduction in overall factor productivity in the manufacturing industries of each State. Differences in the simulation results between the baseline and alternative simulations capture the economic impact of the reduction in factor productivity. While not every State is affected in precisely the same way, the reduction in manufacturing industry productivity negatively affects the manufacturing industries in each State to an extent proportional to the reduction in overall factor productivity. Moreover, overall economic activity decreases when manufacturing sector productivity is reduced. Developers ran two additional simulations: (1) a version of the model for the State of Michigan, with manufacturing divided into two sub-industries (automobile and other vehicle manufacturing as one sub-industry and the rest of manufacturing as the other sub-industry); and (2) a version of the model for the United States, divided into 30 industries. NISAC conducted these simulations to illustrate the flexibility of industry definitions in NCGEM and to examine the simulation properties in more detail.
Third-order accurate conservative method on unstructured meshes for gasdynamic simulations
NASA Astrophysics Data System (ADS)
Shirobokov, D. A.
2017-04-01
A third-order accurate finite-volume method on unstructured meshes is proposed for solving viscous gasdynamic problems. The method is described as applied to the advection equation. The accuracy of the method is verified by computing the evolution of a vortex on meshes of various degrees of detail with variously shaped cells. Additionally, unsteady flows around a cylinder and a symmetric airfoil are computed. The numerical results are presented in the form of plots and tables.
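Since the abstract introduces the method via the advection equation, a small sketch in that setting may help: a third-order upwind-biased finite-volume reconstruction for u_t + a u_x = 0 on a 1D periodic mesh with SSP-RK3 time stepping. This shows the scheme class only; the paper's unstructured-mesh, viscous-flow method is necessarily more involved.

```python
import numpy as np

def rhs(u, a, dx):
    """Semi-discrete finite-volume RHS for u_t + a u_x = 0 (a > 0), using a
    third-order upwind-biased face reconstruction on a periodic mesh."""
    # face value at i+1/2 from cells i-1, i, i+1
    uf = (2.0 * np.roll(u, -1) + 5.0 * u - np.roll(u, 1)) / 6.0
    return -a * (uf - np.roll(uf, 1)) / dx

def ssp_rk3(u, a, dx, dt):
    """Three-stage strong-stability-preserving Runge-Kutta step."""
    u1 = u + dt * rhs(u, a, dx)
    u2 = 0.75 * u + 0.25 * (u1 + dt * rhs(u1, a, dx))
    return u / 3.0 + (2.0 / 3.0) * (u2 + dt * rhs(u2, a, dx))

# advect a smooth profile once around the domain and check the error
n = 100
x = np.linspace(0.0, 1.0, n, endpoint=False)
u0 = np.sin(2.0 * np.pi * x)
u, a, dx = u0.copy(), 1.0, 1.0 / n
dt = 0.4 * dx                        # 250 steps -> exactly one period
for _ in range(250):
    u = ssp_rk3(u, a, dx, dt)
print("max error after one period:", np.abs(u - u0).max())
```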
2010-07-22
dependent, providing a natural bandwidth match between compute cores and the memory subsystem. • High Bandwidth Density. Waveguides crossing the chip...simulate this memory access architecture on a 256-core chip with a concentrated 64-node network using detailed traces of high-performance embedded...memory modules, we place memory access points (MAPs) around the periphery of the chip connected to the network. These MAPs, shown in Figure 4, contain
Argonne simulation framework for intelligent transportation systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ewing, T.; Doss, E.; Hanebutte, U.
1996-04-01
A simulation framework has been developed which defines a high-level architecture for a large-scale, comprehensive, scalable simulation of an Intelligent Transportation System (ITS). The simulator is designed to run on parallel computers and distributed (networked) computer systems; however, a version for a stand-alone workstation is also available. The ITS simulator includes an Expert Driver Model (EDM) of instrumented "smart" vehicles with in-vehicle navigation units. The EDM is capable of performing optimal route planning and communicating with Traffic Management Centers (TMC). A dynamic road map database is used for optimum route planning, where the data is updated periodically to reflect any changes in road or weather conditions. The TMC has probe vehicle tracking capabilities (display position and attributes of instrumented vehicles) and can provide 2-way interaction with traffic to provide advisories and link times. Both the in-vehicle navigation module and the TMC feature detailed graphical user interfaces that incorporate human-factors studies to support safety and operational research. Realistic modeling of variations of the posted driving speed is based on human factor studies that take into consideration weather, road conditions, the driver's personality and behavior, and vehicle type. The simulator has been developed on a distributed system of networked UNIX computers, but is designed to run on ANL's IBM SP-X parallel computer system for large scale problems. A novel feature of the developed simulator is that vehicles are represented by autonomous computer processes, each with a behavior model which performs independent route selection and reacts to external traffic events much like real vehicles. Vehicle processes interact with each other and with ITS components by exchanging messages. With this approach, one can take advantage of emerging massively parallel processor (MPP) systems.
Sirry, Mazin S.; Davies, Neil H.; Kadner, Karen; Dubuis, Laura; Saleh, Muhammad G.; Meintjes, Ernesta M.; Spottiswoode, Bruce S.; Zilla, Peter; Franz, Thomas
2013-01-01
Biomaterial injection based therapies have shown cautious success in the restoration of cardiac function and the prevention of adverse remodelling into heart failure after myocardial infarction (MI). However, the underlying mechanisms are not well understood, and computational studies have utilised simplified representations of the therapeutic myocardial injectates. Wistar rats underwent experimental infarction followed by immediate injection of polyethylene glycol hydrogel in the infarct region. Hearts were explanted and cryo-sectioned, and the region containing the injectate was histologically analysed. Histological micrographs were used to reconstruct the dispersed hydrogel injectate. Cardiac magnetic resonance imaging (CMRI) data from a healthy rat were used to obtain an end-diastolic biventricular geometry, which was subsequently adjusted and combined with the injectate model. The computational geometry of the injectate exhibited the microscopic structural details found in situ. The combination of injectate and cardiac geometry provides realistic geometries for multiscale computational studies of intra-myocardial injectate therapies in the rat model that has been widely used for MI research. PMID:23682845
Pérès, Sabine; Felicori, Liza; Rialle, Stéphanie; Jobard, Elodie; Molina, Franck
2010-01-01
Motivation: In the available databases, biological processes are described from molecular and cellular points of view, but these descriptions are represented with text annotations that make them difficult to handle computationally. Consequently, there is an obvious need for formal descriptions of biological processes. Results: We present a formalism that uses the BioΨ concepts to model biological processes from molecular details to networks. This computational approach, based on elementary bricks of actions, allows us to calculate on biological functions (e.g. process comparison, mapping structure–function relationships, etc.). We illustrate its application with two examples: the functional comparison of proteases and the functional description of the glycolysis network. This computational approach is compatible with detailed biological knowledge and can be applied to different kinds of simulation systems. Availability: www.sysdiag.cnrs.fr/publications/supplementary-materials/BioPsi_Manager/ Contact: sabine.peres@sysdiag.cnrs.fr; franck.molina@sysdiag.cnrs.fr Supplementary information: Supplementary data are available at Bioinformatics online. PMID:20448138
Modeling Physiological Systems in the Human Body as Networks of Quasi-1D Fluid Flows
NASA Astrophysics Data System (ADS)
Staples, Anne
2008-11-01
Extensive research has been done on modeling human physiology. Most of this work has been aimed at developing detailed, three-dimensional models of specific components of physiological systems, such as a cell, a vein, a molecule, or a heart valve. While efforts such as these are invaluable to our understanding of human biology, if we were to construct a global model of human physiology with this level of detail, computing even a nanosecond in this computational being's life would certainly be prohibitively expensive. With this in mind, we derive the Pulsed Flow Equations, a set of coupled one-dimensional partial differential equations, specifically designed to capture two-dimensional viscous, transport, and other effects, and aimed at providing accurate and fast-to-compute global models for physiological systems represented as networks of quasi one-dimensional fluid flows. Our goal is to be able to perform faster-than-real time simulations of global processes in the human body on desktop computers.
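The Pulsed Flow Equations themselves are not reproduced in the abstract. For orientation only, quasi-one-dimensional network flow models in this family typically start from cross-sectionally averaged mass and momentum balances in a compliant vessel of the generic form below, where A is the cross-sectional area, Q the flow rate, α a momentum-flux correction factor, and K_r a profile-dependent friction coefficient (K_r = 8πν for a parabolic profile); the authors' equations add further viscous and transport effects beyond this baseline.

```latex
\begin{align}
  \frac{\partial A}{\partial t} + \frac{\partial Q}{\partial x} &= 0,\\
  \frac{\partial Q}{\partial t}
    + \frac{\partial}{\partial x}\!\left(\alpha\,\frac{Q^{2}}{A}\right)
    + \frac{A}{\rho}\,\frac{\partial p}{\partial x}
    &= -\,K_r\,\frac{Q}{A},\\
  p &= p_{\mathrm{ext}} + \beta\left(\sqrt{A}-\sqrt{A_{0}}\right).
\end{align}
```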
NASA Technical Reports Server (NTRS)
Ash, A. G.
1985-01-01
This paper contains details of computer models of shower development which have been used to investigate the experimental data on shower cores observed in the Leeds 35 sq m and Sacramento Peak (New Mexico) 20 sq m arrays of current limited spark (discharge) chambers. The simulations include predictions for primaries ranging from protons to iron nuclei (with heavy nuclei treated using both superposition and fragmentation models).
Hybrid Parallelization of Adaptive MHD-Kinetic Module in Multi-Scale Fluid-Kinetic Simulation Suite
Borovikov, Sergey; Heerikhuisen, Jacob; Pogorelov, Nikolai
2013-04-01
The Multi-Scale Fluid-Kinetic Simulation Suite has a computational tool set for solving partially ionized flows. In this paper we focus on recent developments of the kinetic module which solves the Boltzmann equation using the Monte-Carlo method. The module has been recently redesigned to utilize intra-node hybrid parallelization. We describe in detail the redesign process, implementation issues, and modifications made to the code. Finally, we conduct a performance analysis.
The flight robotics laboratory
NASA Technical Reports Server (NTRS)
Tobbe, Patrick A.; Williamson, Marlin J.; Glaese, John R.
1988-01-01
The Flight Robotics Laboratory of the Marshall Space Flight Center is described in detail. This facility, containing an eight degree of freedom manipulator, precision air bearing floor, teleoperated motion base, reconfigurable operator's console, and VAX 11/750 computer system, provides simulation capability to study human/system interactions of remote systems. The facility hardware, software and subsequent integration of these components into a real time man-in-the-loop simulation for the evaluation of spacecraft contact proximity and dynamics are described.
Development and testing of a fast conceptual river water quality model.
Keupers, Ingrid; Willems, Patrick
2017-04-15
Modern, model-based river quality management strongly relies on river water quality models to simulate the temporal and spatial evolution of pollutant concentrations in the water body. Such models are typically constructed by extending detailed hydrodynamic models with a component describing the advection-diffusion and water quality transformation processes in a detailed, physically based way. This approach is too computationally demanding, especially when simulating the long time periods needed for statistical analysis of the results or when model sensitivity analysis, calibration, and validation require a large number of model runs. To overcome this problem, a structure identification method to set up a conceptual river water quality model has been developed. Instead of calculating the water quality concentrations at each water level and discharge node, the river branch is divided into conceptual reservoirs based on user information such as locations of interest and boundary inputs. These reservoirs are modelled as Plug Flow Reactors (PFRs) and Continuously Stirred Tank Reactors (CSTRs) to describe advection and diffusion processes. The same water quality transformation processes as in the detailed models are considered, but with adjusted residence times based on the hydrodynamic simulation results and calibrated to the detailed water quality simulation results. The developed approach allows for a much faster calculation time (a factor of 10⁵) without significant loss of accuracy, making it feasible to perform time-demanding scenario runs.
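A minimal sketch of the CSTR half of such a conceptual model is shown below (the plug-flow delay is omitted, a single pollutant decays with an illustrative first-order rate, and the residence times stand in for values that would be calibrated from the hydrodynamic runs).

```python
import numpy as np

def cstr_cascade_step(c, c_in, tau, k, dt):
    """Explicit step for a chain of CSTRs: reservoir i receives the outflow
    of reservoir i-1, has residence time tau[i], and first-order decay k."""
    upstream = np.concatenate(([c_in], c[:-1]))
    return c + dt * ((upstream - c) / tau - k * c)

# toy river: 5 conceptual reservoirs, a constant pollutant load upstream
n, dt, t_end = 5, 60.0, 2 * 86400.0        # 1-min steps over two days
c = np.zeros(n)
tau = np.full(n, 6 * 3600.0)               # 6 h residence time per reach
for _ in range(int(t_end / dt)):
    c = cstr_cascade_step(c, c_in=10.0, tau=tau, k=2e-5, dt=dt)
print("near-steady concentrations along the river:", np.round(c, 2))
```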
Atomic Detail Visualization of Photosynthetic Membranes with GPU-Accelerated Ray Tracing
Vandivort, Kirby L.; Barragan, Angela; Singharoy, Abhishek; Teo, Ivan; Ribeiro, João V.; Isralewitz, Barry; Liu, Bo; Goh, Boon Chong; Phillips, James C.; MacGregor-Chatwin, Craig; Johnson, Matthew P.; Kourkoutis, Lena F.; Hunter, C. Neil
2016-01-01
The cellular process responsible for providing energy for most life on Earth, namely photosynthetic light-harvesting, requires the cooperation of hundreds of proteins across an organelle, involving length and time scales spanning several orders of magnitude over quantum and classical regimes. Simulation and visualization of this fundamental energy conversion process pose many unique methodological and computational challenges. We present, in two accompanying movies, light-harvesting in the photosynthetic apparatus found in purple bacteria, the so-called chromatophore. The movies are the culmination of three decades of modeling efforts, featuring the collaboration of theoretical, experimental, and computational scientists. We describe the techniques that were used to build, simulate, analyze, and visualize the structures shown in the movies, and we highlight cases where scientific needs spurred the development of new parallel algorithms that efficiently harness GPU accelerators and petascale computers. PMID:27274603
TRIM—3D: a three-dimensional model for accurate simulation of shallow water flow
Casulli, Vincenzo; Bertolazzi, Enrico; Cheng, Ralph T.
1993-01-01
A semi-implicit finite difference formulation for the numerical solution of three-dimensional tidal circulation is discussed. The governing equations are the three-dimensional Reynolds equations in which the pressure is assumed to be hydrostatic. A minimal degree of implicitness has been introduced in the finite difference formula so that the resulting algorithm permits the use of large time steps at a minimal computational cost. This formulation includes the simulation of flooding and drying of tidal flats, and is fully vectorizable for an efficient implementation on modern vector computers. The high computational efficiency of this method has made it possible to provide the fine details of circulation structure in complex regions that previous studies were unable to obtain. For proper interpretation of the model results suitable interactive graphics is also an essential tool.
Numerical investigation of turbulent channel flow
NASA Technical Reports Server (NTRS)
Moin, P.; Kim, J.
1981-01-01
Fully developed turbulent channel flow was simulated numerically at Reynolds number 13800, based on centerline velocity and channel half-width. The large-scale flow field was obtained by directly integrating the filtered, three-dimensional, time-dependent Navier-Stokes equations. The small-scale field motions were simulated through an eddy viscosity model. The calculations were carried out on the ILLIAC IV computer with up to 516,096 grid points. The computed flow field was used to study the statistical properties of the flow as well as its time dependent features. The agreement of the computed mean velocity profile, turbulence statistics, and detailed flow structures with experimental data is good. The resolvable portion of the statistical correlations appearing in the Reynolds stress equations is calculated. Particular attention is given to the examination of the flow structure in the vicinity of the wall.
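For context, eddy-viscosity subgrid closures of the type referred to here are commonly of the Smagorinsky form shown below; the exact variant, near-wall damping, and constant C_s used in the simulations are not specified in the abstract, so this is a generic statement rather than the authors' exact model.

```latex
\nu_t = (C_s \,\Delta)^2 \,\lvert \bar{S} \rvert,
\qquad
\lvert \bar{S} \rvert = \sqrt{2\,\bar{S}_{ij}\,\bar{S}_{ij}},
\qquad
\bar{S}_{ij} = \frac{1}{2}\left(
  \frac{\partial \bar{u}_i}{\partial x_j}
  + \frac{\partial \bar{u}_j}{\partial x_i}\right)
```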
Systems Biology in Immunology – A Computational Modeling Perspective
Germain, Ronald N.; Meier-Schellersheim, Martin; Nita-Lazar, Aleksandra; Fraser, Iain D. C.
2011-01-01
Systems biology is an emerging discipline that combines high-content, multiplexed measurements with informatic and computational modeling methods to better understand biological function at various scales. Here we present a detailed review of the methods used to create computational models and conduct simulations of immune function. We provide descriptions of the key data-gathering techniques employed to generate the quantitative and qualitative data required for such modeling and simulation and summarize the progress to date in applying these tools and techniques to questions of immunological interest, including infectious disease. We include comments on what insights modeling can provide that complement information obtained from the more familiar experimental discovery methods used by most investigators and why quantitative methods are needed to eventually produce a better understanding of immune system operation in health and disease. PMID:21219182
Wang, Qing; Xue, Tuo; Song, Chunnian; Wang, Yan; Chen, Guangju
2016-01-01
Free energy calculations of the potential of mean force (PMF) based on the combination of targeted molecular dynamics (TMD) simulations and umbrella samplings as a function of physical coordinates have been applied to explore the detailed pathways and the corresponding free energy profiles for the conformational transition processes of the butane molecule and the 35-residue villin headpiece subdomain (HP35). Accurate PMF profiles describing the dihedral rotation of butane under both coordinates of dihedral rotation and root mean square deviation (RMSD) variation were obtained from different umbrella samplings based on the same TMD simulations. The initial structures for the umbrella samplings can be conveniently selected from the TMD trajectories. For the application of this computational method to the unfolding process of the HP35 protein, the PMF calculation along the radius of gyration (Rg) coordinate shows a gradual increase in free energy of about 1 kcal/mol, with energy fluctuations. The conformational transition during the unfolding of the HP35 protein shows that the spherical structure extends and the middle α-helix unfolds first, followed by the unfolding of the other α-helices. The computational method for PMF calculations based on the combination of TMD simulations and umbrella samplings provides a valuable strategy for investigating detailed conformational transition pathways in other allosteric processes. PMID:27171075
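The umbrella-sampling machinery referred to above rests on standard relations of the following form, where the window centres ξ_i and force constants k_i are generic rather than the paper's values: a harmonic bias is applied in each window, and the unbiased free energy is recovered from the biased distribution up to a per-window constant C_i, with the constants fixed by matching overlapping windows (e.g. with WHAM).

```latex
w_i(\xi) = \frac{k_i}{2}\,\bigl(\xi - \xi_i\bigr)^2,
\qquad
F_i(\xi) = -k_B T \,\ln \rho_i^{\,b}(\xi) - w_i(\xi) + C_i
```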
NASA Technical Reports Server (NTRS)
Brauer, G. L.; Cornick, D. E.; Stevenson, R.
1977-01-01
The capabilities and applications of the three-degree-of-freedom (3DOF) version and the six-degree-of-freedom (6DOF) version of the Program to Optimize Simulated Trajectories (POST) are summarized. The document supplements the detailed program manuals by providing additional information that motivates and clarifies basic capabilities, input procedures, applications, and computer requirements of these programs. The information will enable prospective users to evaluate the programs and to determine if they are applicable to their problems, and gives managerial personnel enough information to evaluate the programs' capabilities. The report describes the POST structure, formulation, input and output procedures, sample cases, and computer requirements. It also provides answers to basic questions concerning planet and vehicle modeling, simulation accuracy, optimization capabilities, and general input rules. Several sample cases are presented.
An Experimental and Numerical Study of a Supersonic Burner for CFD Model Development
NASA Technical Reports Server (NTRS)
Magnotti, G.; Cutler, A. D.
2008-01-01
A laboratory scale supersonic burner has been developed for validation of computational fluid dynamics models. Detailed numerical simulations were performed for the flow inside the combustor, and coupled with finite element thermal analysis to obtain more accurate outflow conditions. A database of nozzle exit profiles for a wide range of conditions of interest was generated to be used as boundary conditions for simulation of the external jet, or for validation of non-intrusive measurement techniques. A set of experiments was performed to validate the numerical results. In particular, temperature measurements obtained by using an infrared camera show that the computed heat transfer was larger than the measured value. Relaminarization in the convergent part of the nozzle was found to be responsible for this discrepancy, and further numerical simulations supported this conclusion.
NASA Technical Reports Server (NTRS)
Haley, D. C.; Almand, B. J.; Thomas, M. M.; Krauze, L. D.; Gremban, K. D.; Sanborn, J. C.; Kelley, J. H.; Depkovich, T. M.; Wolfe, W. J.; Nguyen, T.
1986-01-01
The purpose of the Robotics Simulation Program is to provide a broad range of computer capabilities to assist in the design, verification, simulation, and study of robotics systems. ROBSIM is a FORTRAN 77 program for use on a VAX 11/750 computer under the VMS operating system. This user's guide describes the capabilities of the ROBSIM programs, including the system definition function, the analysis tools function, and the postprocessor function. The options a user may encounter with each of these executables are explained in detail, and the different program prompts appearing to the user are included. Some useful suggestions concerning the appropriate answers to be given by the user are provided. An example interactive run is included for each of the main program services, and some of the capabilities are illustrated.
Accelerating Sequential Gaussian Simulation with a constant path
NASA Astrophysics Data System (ADS)
Nussbaumer, Raphaël; Mariethoz, Grégoire; Gravey, Mathieu; Gloaguen, Erwan; Holliger, Klaus
2018-03-01
Sequential Gaussian Simulation (SGS) is a stochastic simulation technique commonly employed for generating realizations of Gaussian random fields. Arguably, the main limitation of this technique is the high computational cost associated with determining the kriging weights. This problem is compounded by the fact that often many realizations are required to allow for an adequate uncertainty assessment. A seemingly simple way to address this problem is to keep the same simulation path for all realizations. This results in identical neighbourhood configurations and hence the kriging weights only need to be determined once and can then be re-used in all subsequent realizations. This approach is generally not recommended because it is expected to result in correlation between the realizations. Here, we challenge this common preconception and make the case for the use of a constant path approach in SGS by systematically evaluating the associated benefits and limitations. We present a detailed implementation, particularly regarding parallelization and memory requirements. Extensive numerical tests demonstrate that using a constant path allows for substantial computational gains with very limited loss of simulation accuracy. This is especially the case for a constant multi-grid path. The computational savings can be used to increase the neighbourhood size, thus allowing for a better reproduction of the spatial statistics. The outcome of this study is a recommendation for an optimal implementation of SGS that maximizes accurate reproduction of the covariance structure as well as computational efficiency.
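To make the computational saving concrete, here is a minimal sketch (not the authors' implementation) of how precomputed kriging weights are reused across realizations under a constant path; the covariance model, neighbourhood search, and toy numbers below are assumptions.

```python
import numpy as np

def simulate_all(path, neighbours, weights, sigma, n_real, rng):
    """Generate n_real SGS realizations re-using precomputed weights.

    path       : node indices in (constant) simulation order
    neighbours : neighbours[k] = indices of previously simulated nodes
                 conditioning node path[k]
    weights    : weights[k]    = kriging weights for those neighbours
    sigma      : sigma[k]      = kriging standard deviation at node path[k]
    """
    fields = np.zeros((n_real, len(path)))
    for r in range(n_real):
        z = fields[r]
        for k, node in enumerate(path):
            mean = weights[k] @ z[neighbours[k]] if len(neighbours[k]) else 0.0
            z[node] = mean + sigma[k] * rng.standard_normal()
    return fields

# two-node toy example: node 1 conditioned on node 0 with weight 0.5
rng = np.random.default_rng(0)
fields = simulate_all(path=[0, 1], neighbours=[[], [0]],
                      weights=[np.array([]), np.array([0.5])],
                      sigma=np.array([1.0, 0.866]), n_real=3, rng=rng)
```

The kriging systems (the dominant cost) are solved once, before the realization loop, rather than once per node per realization.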
Detailed Multidimensional Simulations of the Structure and Dynamics of Flames
NASA Technical Reports Server (NTRS)
Patnaik, G.; Kailasanath, K.
1999-01-01
Numerical simulations in which the various physical and chemical processes can be independently controlled can significantly advance our understanding of the structure, stability, dynamics and extinction of flames. Therefore, our approach has been to use detailed time-dependent, multidimensional, multispecies numerical models to perform carefully designed computational experiments of flames on Earth and in microgravity environments. Some of these computational experiments are complementary to physical experiments performed under the Microgravity Program while others provide a fundamental understanding that cannot be obtained from physical experiments alone. In this report, we provide a brief summary of our recent research highlighting the contributions since the previous microgravity combustion workshop. There are a number of mechanisms that can cause flame instabilities and result in the formation of dynamic multidimensional structures. In the past, we have used numerical simulations to show that it is the thermo-diffusive instability rather than an instability due to preferential diffusion that is the dominant mechanism for the formation of cellular flames in lean hydrogen-air mixtures. Other studies have explored the role of gravity on flame dynamics and extinguishment, multi-step kinetics and radiative losses on flame instabilities in rich hydrogen-air flames, and heat losses on burner-stabilized flames in microgravity. The recent emphasis of our work has been on exploring flame-vortex interactions and further investigating the structure and dynamics of lean hydrogen-air flames in microgravity. These topics are discussed following a brief description of our computational approach for solving these problems.
Current Lewis Turbomachinery Research: Building on our Legacy of Excellence
NASA Technical Reports Server (NTRS)
Povinelli, Louis A.
1997-01-01
This Wu Chang-Hua lecture is concerned with the development of analysis and computational capability for turbomachinery flows which is based on detailed flow field physics. A brief review of the work of Professor Wu is presented as well as a summary of the current NASA aeropropulsion programs. Two major areas of research are described in order to determine our predictive capabilities using modern day computational tools evolved from the work of Professor Wu. In one of these areas, namely transonic rotor flow, it is demonstrated that a high level of accuracy is obtainable provided sufficient geometric detail is simulated. In the second case, namely turbine heat transfer, our capability is lacking for rotating blade rows and experimental correlations will provide needed information in the near term. It is believed that continuing progress will allow us to realize the full computational potential and its impact on design time and cost.
Experimental, Theoretical, and Computational Investigation of Separated Nozzle Flows
NASA Technical Reports Server (NTRS)
Hunter, Craig A.
2004-01-01
A detailed experimental, theoretical, and computational study of separated nozzle flows has been conducted. Experimental testing was performed at the NASA Langley 16-Foot Transonic Tunnel Complex. As part of a comprehensive static performance investigation, force, moment, and pressure measurements were made and schlieren flow visualization was obtained for a sub-scale, non-axisymmetric, two-dimensional, convergent-divergent nozzle. In addition, two-dimensional numerical simulations were run using the computational fluid dynamics code PAB3D with two-equation turbulence closure and algebraic Reynolds stress modeling. For reference, experimental and computational results were compared with theoretical predictions based on one-dimensional gas dynamics and an approximate integral momentum boundary layer method. Experimental results from this study indicate that off-design overexpanded nozzle flow was dominated by shock induced boundary layer separation, which was divided into two distinct flow regimes: three-dimensional separation with partial reattachment, and fully detached two-dimensional separation. The test nozzle was observed to go through a marked transition in passing from one regime to the other. In all cases, separation provided a significant increase in static thrust efficiency compared to the ideal prediction. Results indicate that with controlled separation, the entire overexpanded range of nozzle performance would be within 10% of the peak thrust efficiency. By offering savings in weight and complexity over a conventional mechanical exhaust system, this may allow a fixed geometry nozzle to cover an entire flight envelope. The computational simulation was in excellent agreement with experimental data over most of the test range, and did a good job of modeling internal flow and thrust performance. An exception occurred at low nozzle pressure ratios, where the two-dimensional computational model was inconsistent with the three-dimensional separation observed in the experiment. In general, the computation captured the physics of the shock boundary layer interaction and shock induced boundary layer separation in the nozzle, though there were some differences in shock structure compared to experiment. Though minor, these differences could be important for studies involving flow control or thrust vectoring of separated nozzles. Combined with other observations, this indicates that more detailed, three-dimensional computational modeling needs to be conducted to more realistically simulate shock-separated nozzle flows.
High-fidelity simulations of blast loadings in urban environments using an overset meshing strategy
NASA Astrophysics Data System (ADS)
Wang, X.; Remotigue, M.; Arnoldus, Q.; Janus, M.; Luke, E.; Thompson, D.; Weed, R.; Bessette, G.
2017-05-01
Detailed blast propagation and evolution through multiple structures representing an urban environment were simulated using the code Loci/BLAST, which employs an overset meshing strategy. The use of overset meshes simplifies mesh generation by allowing meshes for individual component geometries to be generated independently. Detailed blast propagation and evolution through multiple structures, wave reflection and interaction between structures, and blast loadings on structures were simulated and analyzed. Predicted results showed good agreement with experimental data generated by the US Army Engineer Research and Development Center. Loci/BLAST results were also found to compare favorably to simulations obtained using the Second-Order Hydrodynamic Automatic Mesh Refinement Code (SHAMRC). The results obtained demonstrated that blast reflections in an urban setting significantly increased the blast loads on adjacent buildings. Correlations of computational results with experimental data yielded valuable insights into the physics of blast propagation, reflection, and interaction in an urban setting and verified the use of Loci/BLAST as a viable tool for urban blast analysis.
Thermo-Mechanical Analyses of Dynamically Loaded Rubber Cylinders
NASA Technical Reports Server (NTRS)
Johnson, Arthur R.; Chen, Tzi-Kang
2002-01-01
Thick rubber components are employed by the Army to carry large loads. In tanks, rubber covers road wheels and track systems to protect roadways. It is difficult for design engineers to simulate the details of the hysteretic heating for large strain viscoelastic deformations. In this study, an approximation to the viscoelastic energy dissipated per unit time is investigated for use in estimating mechanically induced viscoelastic heating. Coupled thermo-mechanical simulations of large cyclic deformations of rubber cylinders are presented. The cylinders are first compressed axially and then cyclically loaded about the compressed state. Details of the algorithm and some computational issues are discussed. The coupled analyses are conducted for tall and short rubber cylinders both with and without imbedded metal disks.
Pawlowski, Roger P.; Phipps, Eric T.; Salinger, Andrew G.; ...
2012-01-01
A template-based generic programming approach was presented in Part I of this series of papers [Sci. Program. 20 (2012), 197–219] that separates the development effort of programming a physical model from that of computing additional quantities, such as derivatives, needed for embedded analysis algorithms. In this paper, we describe the implementation details for using the template-based generic programming approach for simulation and analysis of partial differential equations (PDEs). We detail several of the hurdles that we have encountered, and some of the software infrastructure developed to overcome them. We end with a demonstration where we present shape optimization and uncertainty quantification results for a 3D PDE application.
Large eddy simulations and direct numerical simulations of high speed turbulent reacting flows
NASA Technical Reports Server (NTRS)
Givi, Peyman; Madnia, Cyrus K.; Steinberger, Craig J.
1990-01-01
This research is involved with the implementation of advanced computational schemes based on large eddy simulations (LES) and direct numerical simulations (DNS) to study the phenomenon of mixing and its coupling with chemical reactions in compressible turbulent flows. In the efforts related to LES, a research program to extend the present capabilities of this method was initiated for the treatment of chemically reacting flows. In the DNS efforts, the focus is on detailed investigations of the effects of compressibility, heat release, and non-equilibrium kinetics modeling in high speed reacting flows. Emphasis was on the simulations of simple flows, namely homogeneous compressible flows, and temporally developing high speed mixing layers.
NASA Technical Reports Server (NTRS)
Taylor, B. K.; Casasent, D. P.
1989-01-01
The use of simplified error models to accurately simulate and evaluate the performance of an optical linear-algebra processor is described. The optical architecture used to perform banded matrix-vector products is reviewed, along with a linear dynamic finite-element case study. The laboratory hardware and ac-modulation technique used are presented. The individual processor error-source models and their simulator implementation are detailed. Several significant simplifications are introduced to ease the computational requirements and complexity of the simulations. The error models are verified with a laboratory implementation of the processor, and are used to evaluate its potential performance.
Vierheller, Janine; Neubert, Wilhelm; Falcke, Martin; Gilbert, Stephen H.; Chamakuri, Nagaiah
2015-01-01
Mathematical modeling of excitation-contraction coupling (ECC) in ventricular cardiac myocytes is a multiscale problem, and it is therefore difficult to develop spatially detailed simulation tools. ECC involves gradients on the length scale of 100 nm in dyadic spaces and concentration profiles along the 100 μm of the whole cell, as well as the sub-millisecond time scale of local concentration changes and the change of lumenal Ca2+ content within tens of seconds. Our concept for a multiscale mathematical model of Ca2+ -induced Ca2+ release (CICR) and whole cardiomyocyte electrophysiology incorporates stochastic simulation of individual LC- and RyR-channels, spatially detailed concentration dynamics in dyadic clefts, rabbit membrane potential dynamics, and a system of partial differential equations for myoplasmic and lumenal free Ca2+ and Ca2+-binding molecules in the bulk of the cell. We developed a novel computational approach to resolve the concentration gradients from dyadic space to cell level by using a quasistatic approximation within the dyad and finite element methods for integrating the partial differential equations. We show whole cell Ca2+-concentration profiles using three previously published RyR-channel Markov schemes. PMID:26441674
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuhn, J K; von Fuchs, G F; Zob, A P
1980-05-01
Two water tank component simulation models have been selected and upgraded. These models are called the CSU Model and the Extended SOLSYS Model. The models have been standardized and links have been provided for operation in the TRNSYS simulation program. The models are described in analytical terms as well as in computer code. Specific water tank tests were performed for the purpose of model validation. Agreement between model data and test data is excellent. A description of the limitations has also been included. Streamlining results and criteria for the reduction of computer time have also been shown for both water tank computer models. Computer codes for the models and instructions for operating these models in TRNSYS have also been included, making the models readily available for DOE and industry use. Rock bed component simulation models have been reviewed and a model selected and upgraded. This model is a logical extension of the Mumma-Marvin model. Specific rock bed tests have been performed for the purpose of validation. Data have been reviewed for consistency. Details of the test results concerned with rock characteristics and pressure drop through the bed have been explored and are reported.
NASA Astrophysics Data System (ADS)
Ryu, Hoon; Jeong, Yosang; Kang, Ji-Hoon; Cho, Kyu Nam
2016-12-01
Modelling of multi-million-atom semiconductor structures is important as it not only predicts properties of physically realizable novel materials, but can also accelerate advanced device designs. This work describes a new Technology Computer-Aided Design (TCAD) tool for nanoelectronics modelling, which uses a sp3d5s∗ tight-binding approach to describe multi-million-atom structures and simulate their electronic structures with high performance computing (HPC), including atomic effects such as alloy and dopant disorders. Named the Quantum simulation tool for Advanced Nanoscale Devices (Q-AND), the tool shows good scalability on traditional multi-core HPC clusters, indicating a strong capability for large-scale electronic structure simulations, with particularly remarkable performance enhancement on recent clusters of Intel Xeon PhiTM coprocessors. A review of a recent modelling study conducted to understand an experimental work on highly phosphorus-doped silicon nanowires is presented to demonstrate the utility of Q-AND. Developed through an Intel Parallel Computing Center project, Q-AND will be opened to the public to establish a sound framework for nanoelectronics modelling on many-core HPC clusters. With details of the development methodology and an exemplary study of dopant electronics, this work presents a practical guideline for TCAD development for researchers in the field of computational nanoelectronics.
Computational Physics' Greatest Hits
NASA Astrophysics Data System (ADS)
Bug, Amy
2011-03-01
The digital computer has worked its way so effectively into our profession that now, roughly 65 years after its invention, it is virtually impossible to find a field of experimental or theoretical physics unaided by computational innovation. It is tough to think of another device about which one can make that claim. In the session ``What is computational physics?'' speakers will distinguish computation within the field of computational physics from this ubiquitous importance across all subfields of physics. This talk will recap the invited session ``Great Advances...Past, Present and Future'' in which five dramatic areas of discovery (five of our ``greatest hits'') are chronicled: the physics of many-boson systems via Path Integral Monte Carlo, the thermodynamic behavior of a huge number of diverse systems via Monte Carlo methods, the discovery of new pharmaceutical agents via molecular dynamics, predictive simulations of global climate change via detailed, cross-disciplinary earth system models, and an understanding of the formation of the first structures in our universe via galaxy formation simulations. The talk will also identify ``greatest hits'' in our field from the teaching and research perspectives of other members of DCOMP, including its Executive Committee.
NASA Technical Reports Server (NTRS)
Wright, Jeffrey; Thakur, Siddharth
2006-01-01
Loci-STREAM is an evolving computational fluid dynamics (CFD) software tool for simulating possibly chemically reacting, possibly unsteady flows in diverse settings, including rocket engines, turbomachines, oil refineries, etc. Loci-STREAM implements a pressure- based flow-solving algorithm that utilizes unstructured grids. (The benefit of low memory usage by pressure-based algorithms is well recognized by experts in the field.) The algorithm is robust for flows at all speeds from zero to hypersonic. The flexibility of arbitrary polyhedral grids enables accurate, efficient simulation of flows in complex geometries, including those of plume-impingement problems. The present version - Loci-STREAM version 0.9 - includes an interface with the Portable, Extensible Toolkit for Scientific Computation (PETSc) library for access to enhanced linear-equation-solving programs therein that accelerate convergence toward a solution. The name "Loci" reflects the creation of this software within the Loci computational framework, which was developed at Mississippi State University for the primary purpose of simplifying the writing of complex multidisciplinary application programs to run in distributed-memory computing environments including clusters of personal computers. Loci has been designed to relieve application programmers of the details of programming for distributed-memory computers.
Large-Scale Simulations of Plastic Neural Networks on Neuromorphic Hardware
Knight, James C.; Tully, Philip J.; Kaplan, Bernhard A.; Lansner, Anders; Furber, Steve B.
2016-01-01
SpiNNaker is a digital, neuromorphic architecture designed for simulating large-scale spiking neural networks at speeds close to biological real-time. Rather than using bespoke analog or digital hardware, the basic computational unit of a SpiNNaker system is a general-purpose ARM processor, allowing it to be programmed to simulate a wide variety of neuron and synapse models. This flexibility is particularly valuable in the study of biological plasticity phenomena. A recently proposed learning rule based on the Bayesian Confidence Propagation Neural Network (BCPNN) paradigm offers a generic framework for modeling the interaction of different plasticity mechanisms using spiking neurons. However, it can be computationally expensive to simulate large networks with BCPNN learning since it requires multiple state variables for each synapse, each of which needs to be updated every simulation time-step. We discuss the trade-offs in efficiency and accuracy involved in developing an event-based BCPNN implementation for SpiNNaker based on an analytical solution to the BCPNN equations, and detail the steps taken to fit this within the limited computational and memory resources of the SpiNNaker architecture. We demonstrate this learning rule by learning temporal sequences of neural activity within a recurrent attractor network which we simulate at scales of up to 2.0 × 10⁴ neurons and 5.1 × 10⁷ plastic synapses: the largest plastic neural network ever to be simulated on neuromorphic hardware. We also run a comparable simulation on a Cray XC-30 supercomputer system and find that, if it is to match the run-time of our SpiNNaker simulation, the supercomputer system uses approximately 45× more power. This suggests that cheaper, more power efficient neuromorphic systems are becoming useful discovery tools in the study of plasticity in large-scale brain models. PMID:27092061
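The event-based idea mentioned above can be illustrated with a single exponentially decaying trace that is updated only at spike times, using the analytical solution of its decay between events. A full BCPNN synapse keeps several coupled traces (z, e, and p traces) and derives the weight from the p traces, e.g. w_ij = log(p_ij / (p_i p_j)); this is a deliberately reduced sketch with an illustrative time constant.

```python
import numpy as np

tau_z = 0.010   # trace time constant [s] (illustrative)

def on_spike(trace, t_last, t_now):
    """Decay `trace` analytically from t_last to t_now, then add a spike."""
    trace *= np.exp(-(t_now - t_last) / tau_z)
    return trace + 1.0, t_now

trace, t_last = 0.0, 0.0
for t_spike in [0.002, 0.005, 0.030]:
    trace, t_last = on_spike(trace, t_last, t_spike)
```

Because the state is advanced in closed form only when a spike arrives, no per-time-step update of every synapse is needed, which is what makes the approach fit the limited compute budget of the hardware.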
Simulation Experiment Description Markup Language (SED-ML) Level 1 Version 3 (L1V3).
Bergmann, Frank T; Cooper, Jonathan; König, Matthias; Moraru, Ion; Nickerson, David; Le Novère, Nicolas; Olivier, Brett G; Sahle, Sven; Smith, Lucian; Waltemath, Dagmar
2018-03-19
The creation of computational simulation experiments to inform modern biological research poses challenges to reproduce, annotate, archive, and share such experiments. Efforts such as SBML or CellML standardize the formal representation of computational models in various areas of biology. The Simulation Experiment Description Markup Language (SED-ML) describes what procedures the models are subjected to, and the details of those procedures. These standards, together with further COMBINE standards, describe models sufficiently well for the reproduction of simulation studies among users and software tools. The Simulation Experiment Description Markup Language (SED-ML) is an XML-based format that encodes, for a given simulation experiment, (i) which models to use; (ii) which modifications to apply to models before simulation; (iii) which simulation procedures to run on each model; (iv) how to post-process the data; and (v) how these results should be plotted and reported. SED-ML Level 1 Version 1 (L1V1) implemented support for the encoding of basic time course simulations. SED-ML L1V2 added support for more complex types of simulations, specifically repeated tasks and chained simulation procedures. SED-ML L1V3 extends L1V2 by means to describe which datasets and subsets thereof to use within a simulation experiment.
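As a rough illustration of the five ingredients enumerated above, the sketch below assembles a skeletal SED-ML-like document with Python's standard ElementTree. The element and attribute names follow the general shape of SED-ML but are simplified and not guaranteed to validate against the official schema.

```python
import xml.etree.ElementTree as ET

root = ET.Element("sedML", level="1", version="3")
models = ET.SubElement(root, "listOfModels")
ET.SubElement(models, "model", id="m1", source="model.xml")          # (i)
# (ii) model modifications would be children of the model element
sims = ET.SubElement(root, "listOfSimulations")
ET.SubElement(sims, "uniformTimeCourse", id="sim1",                  # (iii)
              initialTime="0", outputStartTime="0",
              outputEndTime="100", numberOfPoints="1000")
tasks = ET.SubElement(root, "listOfTasks")
ET.SubElement(tasks, "task", id="t1", modelReference="m1",
              simulationReference="sim1")
ET.SubElement(root, "listOfDataGenerators")                          # (iv)
ET.SubElement(root, "listOfOutputs")                                 # (v)

print(ET.tostring(root, encoding="unicode"))
```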
NASA Astrophysics Data System (ADS)
Moon, Hongsik
What is the impact of multicore and associated advanced technologies on computational software for science? Most researchers and students have multicore laptops or desktops for their research and they need computing power to run computational software packages. Computing power was initially derived from Central Processing Unit (CPU) clock speed. That changed when increases in clock speed became constrained by power requirements. Chip manufacturers turned to multicore CPU architectures and associated technological advancements to create the CPUs for the future. Most software applications benefited from the increased computing power the same way that increases in clock speed helped applications run faster. However, for Computational ElectroMagnetics (CEM) software developers, this change was not an obvious benefit - it appeared to be a detriment. Developers were challenged to find a way to correctly utilize the advancements in hardware so that their codes could benefit. The solution was parallelization, and this dissertation details the investigation undertaken to address these challenges. Prior to multicore CPUs, advanced computer technologies were compared using benchmark software, and the metric was FLoating-point Operations Per Second (FLOPS), which indicates system performance for scientific applications that make heavy use of floating-point calculations. Is FLOPS an effective metric for parallelized CEM simulation tools on new multicore systems? Parallel CEM software needs to be benchmarked not only by FLOPS but also by the performance of other parameters related to type and utilization of the hardware, such as CPU, Random Access Memory (RAM), hard disk, network, etc. The codes need to be optimized for more than just FLOPS, and new parameters must be included in benchmarking. In this dissertation, the parallel CEM software named High Order Basis Based Integral Equation Solver (HOBBIES) is introduced. This code was developed to address the needs of the changing computer hardware platforms in order to provide fast, accurate and efficient solutions to large, complex electromagnetic problems. The research in this dissertation proves that the performance of parallel code is intimately related to the configuration of the computer hardware and can be maximized for different hardware platforms. To benchmark and optimize the performance of parallel CEM software, a variety of large, complex projects are created and executed on a variety of computer platforms. The computer platforms used in this research are detailed in this dissertation. The projects run as benchmarks are also described in detail and results are presented. The parameters that affect parallel CEM software on High Performance Computing Clusters (HPCC) are investigated. This research demonstrates methods to maximize the performance of parallel CEM software code.
NASA Astrophysics Data System (ADS)
Llovet, X.; Salvat, F.
2018-01-01
The accuracy of Monte Carlo simulations of EPMA measurements is primarily determined by that of the adopted interaction models and atomic relaxation data. The code PENEPMA implements the most reliable general models available, and it is known to provide a realistic description of electron transport and X-ray emission. Nonetheless, efficiency (i.e., the simulation speed) of the code is determined by a number of simulation parameters that define the details of the electron tracking algorithm, which may also have an effect on the accuracy of the results. In addition, to reduce the computer time needed to obtain X-ray spectra with a given statistical accuracy, PENEPMA allows the use of several variance-reduction techniques, defined by a set of specific parameters. In this communication we analyse and discuss the effect of using different values of the simulation and variance-reduction parameters on the speed and accuracy of EPMA simulations. We also discuss the effectiveness of using multi-core computers along with a simple practical strategy implemented in PENEPMA.
Physical Processes and Applications of the Monte Carlo Radiative Energy Deposition (MRED) Code
NASA Astrophysics Data System (ADS)
Reed, Robert A.; Weller, Robert A.; Mendenhall, Marcus H.; Fleetwood, Daniel M.; Warren, Kevin M.; Sierawski, Brian D.; King, Michael P.; Schrimpf, Ronald D.; Auden, Elizabeth C.
2015-08-01
MRED is a Python-language scriptable computer application that simulates radiation transport. It is the computational engine for the on-line tool CRÈME-MC. MRED is based on C++ code from Geant4 with additional Fortran components to simulate electron transport and nuclear reactions with high precision. We provide a detailed description of the structure of MRED and the implementation of the simulation of physical processes used to simulate radiation effects in electronic devices and circuits. Extensive discussion and references are provided that illustrate the validation of models used to implement specific simulations of relevant physical processes. Several applications of MRED are summarized that demonstrate its ability to predict and describe basic physical phenomena associated with irradiation of electronic circuits and devices. These include effects from single particle radiation (including both direct ionization and indirect ionization effects), dose enhancement effects, and displacement damage effects. MRED simulations have also helped to identify new single event upset mechanisms not previously observed by experiment, but since confirmed, including upsets due to muons and energetic electrons.
Time-Accurate Simulations and Acoustic Analysis of Slat Free-Shear-Layer. Part 2
NASA Technical Reports Server (NTRS)
Khorrami, Mehdi R.; Singer, Bart A.; Lockard, David P.
2002-01-01
Unsteady computational simulations of a multi-element, high-lift configuration are performed. Emphasis is placed on accurate spatiotemporal resolution of the free shear layer in the slat-cove region. The excessive dissipative effects of the turbulence model, so prevalent in previous simulations, are circumvented by switching off the turbulence-production term in the slat cove region. The justifications and physical arguments for taking such a step are explained in detail. The removal of this excess damping allows the shear layer to amplify large-scale structures, to achieve a proper non-linear saturation state, and to permit vortex merging. The large-scale disturbances are self-excited, and unlike our prior fully turbulent simulations, no external forcing of the shear layer is required. To obtain the farfield acoustics, the Ffowcs Williams and Hawkings equation is evaluated numerically using the simulated time-accurate flow data. The present comparison between the computed and measured farfield acoustic spectra shows much better agreement for the amplitude and frequency content than past calculations. The effects of the angle of attack on the slat's flow features and radiated acoustic field are also simulated and presented.
Concurrent heterogeneous neural model simulation on real-time neuromimetic hardware.
Rast, Alexander; Galluppi, Francesco; Davies, Sergio; Plana, Luis; Patterson, Cameron; Sharp, Thomas; Lester, David; Furber, Steve
2011-11-01
Dedicated hardware is becoming increasingly essential to simulate emerging very-large-scale neural models. Equally, however, it needs to be able to support multiple models of the neural dynamics, possibly operating simultaneously within the same system. This may be necessary either to simulate large models with heterogeneous neural types, or to simplify simulation and analysis of detailed, complex models in a large simulation by isolating the new model to a small subpopulation of a larger overall network. The SpiNNaker neuromimetic chip is a dedicated neural processor able to support such heterogeneous simulations. Implementing these models on-chip uses an integrated library-based tool chain incorporating the emerging PyNN interface that allows a modeller to input a high-level description and use an automated process to generate an on-chip simulation. Simulations using both LIF and Izhikevich models demonstrate the ability of the SpiNNaker system to generate and simulate heterogeneous networks on-chip, while illustrating, through the network-scale effects of wavefront synchronisation and burst gating, methods that can provide effective behavioural abstractions for large-scale hardware modelling. SpiNNaker's asynchronous virtual architecture permits greater scope for model exploration, with scalable levels of functional and temporal abstraction, than conventional (or neuromorphic) computing platforms. The complete system illustrates a potential path to understanding the neural model of computation, by building (and breaking) neural models at various scales, connecting the blocks, then comparing them against the biology: computational cognitive neuroscience.
Development of Comprehensive Reduced Kinetic Models for Supersonic Reacting Shear Layer Simulations
NASA Technical Reports Server (NTRS)
Zambon, A. C.; Chelliah, H. K.; Drummond, J. P.
2006-01-01
Large-scale simulations of multi-dimensional unsteady turbulent reacting flows with detailed chemistry and transport can be computationally extremely intensive even on distributed computing architectures. With the development of suitable reduced chemical kinetic models, the number of scalar variables to be integrated can be decreased, leading to a significant reduction in the computational time required for the simulation with limited loss of accuracy in the results. A general MATLAB-based automated mechanism reduction procedure is presented to reduce any complex starting mechanism (detailed or skeletal) with minimal human intervention. Based on the application of the quasi steady-state (QSS) approximation for certain chemical species and on the elimination of the fast reaction rates in the mechanism, several comprehensive reduced models, capable of handling different fuels such as C2H4, CH4 and H2, have been developed and thoroughly tested for several combustion problems (ignition, propagation and extinction) and physical conditions (reactant compositions, temperatures, and pressures). A key feature of the present reduction procedure is the explicit solution of the concentrations of the QSS species, needed for the evaluation of the elementary reaction rates. In contrast, previous approaches relied on an implicit solution due to the strong coupling between QSS species, requiring computationally expensive inner iterations. A novel algorithm, based on the definition of a QSS species coupling matrix, is presented to (i) introduce appropriate truncations to the QSS algebraic relations and (ii) identify the optimal sequence for the explicit solution of the concentration of the QSS species. With the automatic generation of the relevant source code, the resulting reduced models can be readily implemented into numerical codes.
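The coupling-matrix idea can be sketched as follows: mark which QSS species' algebraic relations reference which other QSS species, then order them so each concentration can be evaluated explicitly from already-solved ones (cycles are where the truncations mentioned above would be applied). This is a schematic illustration in Python, not the MATLAB procedure described in the paper.

```python
import numpy as np

def explicit_solution_order(C):
    """Kahn-style ordering from a QSS coupling matrix.

    C[i, j] = 1 if the QSS relation for species i involves the
    concentration of QSS species j.
    """
    n = C.shape[0]
    remaining = set(range(n))
    order = []
    while remaining:
        # species whose relations depend on no still-unsolved QSS species
        free = [i for i in remaining
                if not any(C[i, j] for j in remaining if j != i)]
        if not free:
            raise ValueError(f"coupled cycle among species {sorted(remaining)}")
        order.extend(free)
        remaining -= set(free)
    return order

C = np.array([[0, 1, 0],    # species 0 needs species 1
              [0, 0, 0],    # species 1 is independent
              [1, 1, 0]])   # species 2 needs 0 and 1
print(explicit_solution_order(C))   # -> [1, 0, 2]
```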
Waltemath, Dagmar; Adams, Richard; Bergmann, Frank T; Hucka, Michael; Kolpakov, Fedor; Miller, Andrew K; Moraru, Ion I; Nickerson, David; Sahle, Sven; Snoep, Jacky L; Le Novère, Nicolas
2011-12-15
The increasing use of computational simulation experiments to inform modern biological research creates new challenges to annotate, archive, share and reproduce such experiments. The recently published Minimum Information About a Simulation Experiment (MIASE) proposes a minimal set of information that should be provided to allow the reproduction of simulation experiments among users and software tools. In this article, we present the Simulation Experiment Description Markup Language (SED-ML). SED-ML encodes in a computer-readable exchange format the information required by MIASE to enable reproduction of simulation experiments. It has been developed as a community project and it is defined in a detailed technical specification and additionally provides an XML schema. The version of SED-ML described in this publication is Level 1 Version 1. It covers the description of the most frequent type of simulation experiments in the area, namely time course simulations. SED-ML documents specify which models to use in an experiment, modifications to apply on the models before using them, which simulation procedures to run on each model, what analysis results to output, and how the results should be presented. These descriptions are independent of the underlying model implementation. SED-ML is a software-independent format for encoding the description of simulation experiments; it is not specific to particular simulation tools. Here, we demonstrate that with the growing software support for SED-ML we can effectively exchange executable simulation descriptions. With SED-ML, software can exchange simulation experiment descriptions, enabling the validation and reuse of simulation experiments in different tools. Authors of papers reporting simulation experiments can make their simulation protocols available for other scientists to reproduce the results. Because SED-ML is agnostic about exact modeling language(s) used, experiments covering models from different fields of research can be accurately described and combined.
NASA Astrophysics Data System (ADS)
Shi, Yu; Liang, Long; Ge, Hai-Wen; Reitz, Rolf D.
2010-03-01
Acceleration of the chemistry solver for engine combustion is of much interest due to the fact that in practical engine simulations extensive computational time is spent solving the fuel oxidation and emission formation chemistry. A dynamic adaptive chemistry (DAC) scheme based on a directed relation graph error propagation (DRGEP) method has been applied to study homogeneous charge compression ignition (HCCI) engine combustion with detailed chemistry (over 500 species) previously using an R-value-based breadth-first search (RBFS) algorithm, which significantly reduced computational times (by as much as 30-fold). The present paper extends the use of this on-the-fly kinetic mechanism reduction scheme to model combustion in direct-injection (DI) engines. It was found that the DAC scheme becomes less efficient when applied to DI engine simulations using a kinetic mechanism of relatively small size, and the accuracy of the original DAC scheme decreases for conventional non-premixed engine combustion. The present study also focuses on determination of search-initiating species, involvement of the NOx chemistry, selection of a proper error tolerance, as well as treatment of the interaction of chemical heat release and the fuel spray. Both the DAC schemes were integrated into the ERC KIVA-3v2 code, and simulations were conducted to compare the two schemes. In general, the present DAC scheme has better efficiency and similar accuracy compared to the previous DAC scheme. The efficiency depends on the size of the chemical kinetics mechanism used and the engine operating conditions. For cases using a small n-heptane kinetic mechanism of 34 species, 30% of the computational time is saved, and 50% for a larger n-heptane kinetic mechanism of 61 species. The paper also demonstrates that by combining the present DAC scheme with an adaptive multi-grid chemistry (AMC) solver, it is feasible to simulate a direct-injection engine using a detailed n-heptane mechanism with 543 species in practical computer time.
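A sketch of the graph-search step at the heart of such DRGEP-based reduction is shown below: overall importance coefficients are propagated outward from the search-initiating species as the maximum path product of direct interaction coefficients, and species falling below the error tolerance are dropped. The coefficients and species names are invented, and this is not the KIVA-3v2 implementation.

```python
import heapq

def drgep_active_species(r, targets, eps):
    """Select species to keep: R(path product of r) >= eps from a target.

    r : r[a][b] in [0, 1], direct interaction coefficient of b on a;
        in practice computed from instantaneous reaction rates per cell.
    """
    R = {t: 1.0 for t in targets}
    heap = [(-1.0, t) for t in targets]     # max-heap via negated values
    while heap:
        neg, a = heapq.heappop(heap)
        if -neg < R.get(a, 0.0):            # stale heap entry
            continue
        for b, rab in r.get(a, {}).items():
            cand = -neg * rab               # path product through a
            if cand >= eps and cand > R.get(b, 0.0):
                R[b] = cand
                heapq.heappush(heap, (-cand, b))
    return set(R)

r = {"fuel": {"O2": 0.9, "NO": 0.05}, "O2": {"OH": 0.6}}
print(drgep_active_species(r, targets=["fuel"], eps=0.1))
# -> {'fuel', 'O2', 'OH'}  (NO dropped at this tolerance)
```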
Construction of the real patient simulator system.
Chan, Richard; Sun, C T
2012-05-01
Simulation for perfusion education has been used for at least the past 25 years. The earlier models were either electronic (computer games) or fluid dynamic models and provided invaluable adjuncts to perfusion training and education. In 2009, the North Shore-LIJ Health System at Great Neck, New York, opened an innovative "Bioskill Center" dedicated to simulated, virtual-reality, advanced hands-on surgical training as well as perfusion simulation. Professional cardiac surgical organizations now show great interest in using simulation for training and recertification. Simulation will continue to be the direction for future perfusion training and education. This manuscript introduces a cost-effective system developed from discarded perfusion products; it is not intended to detail the actual lengthy process of its construction.
Study to design and develop remote manipulator system. [computer simulation of human performance
NASA Technical Reports Server (NTRS)
Hill, J. W.; Mcgovern, D. E.; Sword, A. J.
1974-01-01
Modeling of human performance in remote manipulation tasks is reported, using automated computer procedures to analyze and count motions during a manipulation task. Performance is monitored by an on-line computer capable of measuring the joint angles of both master and slave and, in some cases, the trajectory and velocity of the hand itself. In this way the operator's strategies with different transmission delays, displays, tasks, and manipulators can be analyzed in detail for comparison. Some progress is described in obtaining a set of standard tasks and difficulty measures for evaluating manipulator performance.
A computer tool to support in design of industrial Ethernet.
Lugli, Alexandre Baratella; Santos, Max Mauro Dias; Franco, Lucia Regina Horta Rodrigues
2009-04-01
This paper presents a computer tool to support the design and development of an industrial Ethernet network. The tool verifies the physical layer (cable resistance and capacitance, scan time, network power supply based on the Power over Ethernet (PoE) concept, and wireless links) and the occupation rate (the amount of information transmitted on the network versus the controller network scan time). These functions are accomplished without a single physical element installed in the network, using only simulation. The tool's software presents a detailed view of the network to the user, points out possible problems in the network, and offers an extremely friendly environment.
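The occupation-rate check described above amounts to a simple budget comparison, sketched here with invented figures (frame overhead approximated by standard Ethernet framing, and a hypothetical device count, payload size, and scan time).

```python
OVERHEAD_BITS = 38 * 8       # preamble+SFD, MAC header, CRC, interframe gap
payload_bits = 46 * 8        # minimum Ethernet payload per device frame
n_devices = 40
scan_time_s = 1e-3           # 1 ms controller scan
link_bps = 100e6             # Fast Ethernet

bits_per_scan = n_devices * (payload_bits + OVERHEAD_BITS)
occupation = bits_per_scan / (link_bps * scan_time_s)
print(f"occupation rate: {occupation:.1%}")  # >100% means the scan cannot be met
```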
Algorithm implementation on the Navier-Stokes computer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krist, S.E.; Zang, T.A.
1987-03-01
The Navier-Stokes Computer is a multi-purpose parallel-processing supercomputer which is currently under development at Princeton University. It consists of multiple local memory parallel processors, called Nodes, which are interconnected in a hypercube network. Details of the procedures involved in implementing an algorithm on the Navier-Stokes computer are presented. The particular finite difference algorithm considered in this analysis was developed for simulation of laminar-turbulent transition in wall bounded shear flows. Projected timing results for implementing this algorithm indicate that operation rates in excess of 42 GFLOPS are feasible on a 128 Node machine.
Modeling Effects of RNA on Capsid Assembly Pathways via Coarse-Grained Stochastic Simulation
Smith, Gregory R.; Xie, Lu; Schwartz, Russell
2016-01-01
The environment of a living cell is vastly different from that of an in vitro reaction system, an issue that presents great challenges to the use of in vitro models, or computer simulations based on them, for understanding biochemistry in vivo. Virus capsids make an excellent model system for such questions because they typically have few distinct components, making them amenable to in vitro and modeling studies, yet their assembly can involve complex networks of possible reactions that cannot be resolved in detail by any current experimental technology. We previously fit kinetic simulation parameters to bulk in vitro assembly data to yield a close match between simulated and real data, and then used the simulations to study features of assembly that cannot be monitored experimentally. The present work seeks to project how assembly in these simulations fit to in vitro data would be altered by computationally adding features of the cellular environment to the system, specifically the presence of nucleic acid about which many capsids assemble. The major challenge of such work is computational: simulating fine-scale assembly pathways on the scale and in the parameter domains of real viruses is far too computationally costly to allow for explicit models of nucleic acid interaction. We bypass that limitation by applying analytical models of nucleic acid effects to adjust kinetic rate parameters learned from in vitro data to see how these adjustments, singly or in combination, might affect fine-scale assembly progress. The resulting simulations exhibit surprising behavioral complexity, with distinct effects often acting synergistically to drive efficient assembly and alter pathways relative to the in vitro model. The work demonstrates how computer simulations can help us understand how assembly might differ between the in vitro and in vivo environments and what features of the cellular environment account for these differences. PMID:27244559
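As a schematic of the kind of stochastic kinetics underlying such simulations, the sketch below runs a Gillespie-style simulation of a single association/dissociation step; the real models track full networks of intermediates, and the nucleic-acid effects described above enter as analytical adjustments to rate constants such as k_on. All rates and counts are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

def assemble(n_subunits, k_on, k_off, t_end):
    """Gillespie simulation of 2*free <-> bound (a dimerization-like step)."""
    free, bound, t = n_subunits, 0, 0.0
    while t < t_end:
        a_on = k_on * free * max(free - 1, 0)   # association propensity
        a_off = k_off * bound                   # dissociation propensity
        a_tot = a_on + a_off
        if a_tot == 0.0:
            break
        t += rng.exponential(1.0 / a_tot)       # time to next event
        if rng.random() < a_on / a_tot:
            free -= 2; bound += 1               # two subunits associate
        else:
            free += 2; bound -= 1
    return free, bound

print(assemble(n_subunits=1000, k_on=1e-5, k_off=1e-2, t_end=10.0))
```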
Dyadic brain modelling, mirror systems and the ontogenetic ritualization of ape gesture
Arbib, Michael; Ganesh, Varsha; Gasser, Brad
2014-01-01
The paper introduces dyadic brain modelling, offering both a framework for modelling the brains of interacting agents and a general framework for simulating and visualizing the interactions generated when the brains (and the two bodies) are each coded up in computational detail. It models selected neural mechanisms in ape brains supportive of social interactions, including putative mirror neuron systems inspired by macaque neurophysiology but augmented by increased access to proprioceptive state. Simulation results for a reduced version of the model show ritualized gesture emerging from interactions between a simulated child and mother ape. PMID:24778382
Monte Carlo errors with less errors
NASA Astrophysics Data System (ADS)
Wolff, Ulli; Alpha Collaboration
2004-01-01
We explain in detail how to estimate mean values and assess statistical errors for arbitrary functions of elementary observables in Monte Carlo simulations. The method is to estimate and sum the relevant autocorrelation functions, which is argued to produce more certain error estimates than binning techniques and hence to help toward a better exploitation of expensive simulations. An effective integrated autocorrelation time is computed which is suitable to benchmark efficiencies of simulation algorithms with regard to specific observables of interest. A Matlab code is offered for download that implements the method. It can also combine independent runs (replica), allowing one to judge their consistency.
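The gist of the method, summing the estimated autocorrelation function out to a window that grows with the integrated autocorrelation time, can be sketched as follows. The paper's Matlab code handles derived observables, automatic windowing, and replica; this simplified Python version, with an invented cutoff rule, covers only a single primary observable.

```python
import numpy as np

def tau_int_and_error(x, c=6.0, t_max=1000):
    """Integrated autocorrelation time and error of the mean of series x."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    d = x - x.mean()
    t_max = min(t_max, n // 2)
    # autocorrelation function Gamma(t); Gamma(0) is the plain variance
    gamma = np.array([d[: n - t] @ d[t:] / (n - t) for t in range(t_max)])
    rho = gamma / gamma[0]
    tau = 0.5
    for W in range(1, t_max):
        tau = 0.5 + rho[1 : W + 1].sum()
        if W >= c * tau:          # stop once the window exceeds c * tau_int
            break
    err = np.sqrt(2.0 * tau / n * gamma[0])
    return tau, err

rng = np.random.default_rng(1)
x = np.empty(100_000)
x[0] = 0.0
for i in range(1, len(x)):        # AR(1) chain with rho = 0.9
    x[i] = 0.9 * x[i - 1] + rng.standard_normal()
print(tau_int_and_error(x))       # tau_int should come out near 9.5
```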
NASA Technical Reports Server (NTRS)
Bailey, David (Editor); Barton, John (Editor); Lasinski, Thomas (Editor); Simon, Horst (Editor)
1993-01-01
A new set of benchmarks was developed for the performance evaluation of highly parallel supercomputers. These benchmarks consist of a set of kernels, the 'Parallel Kernels,' and a simulated application benchmark. Together they mimic the computation and data movement characteristics of large scale computational fluid dynamics (CFD) applications. The principal distinguishing feature of these benchmarks is their 'pencil and paper' specification - all details of these benchmarks are specified only algorithmically. In this way many of the difficulties associated with conventional benchmarking approaches on highly parallel systems are avoided.
2013-12-01
Combustion instabilities were studied for different equivalence ratios and fuel injector locations. Comparisons of the computational and experimental results were carried out using the full geometry, including the fuel injector and swirler, which is the same geometry as the one used in the experiments. Combustion instabilities were observed in both the simulations and the experiments; the fuel injector sits in the converging-diverging section connecting the air …
NASA Technical Reports Server (NTRS)
Wasynczuk, O.; Krause, P. C.; Biess, J. J.; Kapustka, R.
1990-01-01
A detailed computer simulation was used to illustrate the steady-state and dynamic operating characteristics of a 20-kHz resonant spacecraft power system. The simulated system consists of a parallel-connected set of DC-inductor resonant inverters (drivers), a 440-V cable, a node transformer, a 220-V cable, and a transformer-rectifier-filter (TRF) AC-to-DC receiver load. Also included in the system are a 1-kW 0.8-pf RL load and a double-LC filter connected at the receiving end of the 20-kHz AC system. The detailed computer simulation was used to illustrate the normal steady-state operating characteristics and the dynamic system performance following, for example, TRF startup. It is shown that without any filtering the given system exhibits harmonic resonances due to an interaction between the switching of the source and/or load converters and the AC system. However, the double-LC filter at the receiving-end of the AC system and harmonic traps connected in series with each of the drivers significantly reduce the harmonic distortion of the 20-kHz bus voltage. Significant additional improvement in the waveform quality can be achieved by including a double-LC filter with each driver.
Creation of anatomical models from CT data
NASA Astrophysics Data System (ADS)
Alaytsev, Innokentiy K.; Danilova, Tatyana V.; Manturov, Alexey O.; Mareev, Gleb O.; Mareev, Oleg V.
2018-04-01
Computed tomography is a rich source of biomedical data because it allows detailed exploration of complex anatomical structures. Some structures are not visible on CT scans, and some are hard to distinguish due to the partial volume effect. CT datasets require preprocessing before they can be used as anatomical models in a simulation system. This work describes segmentation and data transformation methods for creating anatomical models from CT data. The resulting models may be used for visual and haptic rendering and drilling simulation in a virtual surgery system.
Plane-Wave DFT Methods for Chemistry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bylaska, Eric J.
Modern plane-wave DFT methods and software (contained in the NWChem package) are described in detail; they allow for both geometry optimization and ab initio molecular dynamics simulations. Significant emphasis is placed on aspects of these methods that are of interest to computational chemists and useful for simulating chemistry, including techniques for calculating charged systems, exact exchange (i.e., hybrid DFT methods), and highly efficient AIMD/MM methods. Sample applications on the structure of the goethite+water interface and the hydrolysis of nitroaromatic molecules are described.
1979-08-01
frag orders for tactical considerations. Frag orders issued by simulated modules will be "edited" by the same procedure as that used with populated...record and distributed as required. Queries transmitted from any staff module will be reviewed and edited at event time for technical accuracy. If an...of this kind will have to be carefully edited and interpreted by the controller(s) and/or computer before the change is instituted in the real world
Large-Eddy Simulations of Dust Devils and Convective Vortices
NASA Astrophysics Data System (ADS)
Spiga, Aymeric; Barth, Erika; Gu, Zhaolin; Hoffmann, Fabian; Ito, Junshi; Jemmett-Smith, Bradley; Klose, Martina; Nishizawa, Seiya; Raasch, Siegfried; Rafkin, Scot; Takemi, Tetsuya; Tyler, Daniel; Wei, Wei
2016-11-01
In this review, we address the use of numerical computations called Large-Eddy Simulations (LES) to study dust devils, and the more general class of atmospheric phenomena they belong to (convective vortices). We describe the main elements of the LES methodology. We review the properties, statistics, and variability of dust devils and convective vortices resolved by LES in both terrestrial and Martian environments. The current challenges faced by modelers using LES for dust devils are also discussed in detail.
ERIC Educational Resources Information Center
Chambers, Jay G.; Parrish, Thomas B.
The Resource Cost Model (RCM) is a resource management system that combines the technical advantages of sophisticated computer simulation software with the practical benefits of group decision making to provide detailed information about educational program costs. The first section of this document introduces the conceptual framework underlying…
LOGSIM user's manual. [Logic Simulation Program for computer aided design of logic circuits
NASA Technical Reports Server (NTRS)
Mitchell, C. L.; Taylor, J. F.
1972-01-01
The user's manual for the LOGSIM Program is presented. All program options are explained, and a detailed definition of the format of each input card is given. LOGSIM Program operations and the preparation of LOGSIM input data are discussed, along with data card formats, postprocessor data cards, and output interpretation.
Driscoll, Mark; Mac-Thiong, Jean-Marc; Labelle, Hubert; Parent, Stefan
2013-01-01
A large spectrum of medical devices exists to correct deformities associated with spinal disorders. The development of a detailed volumetric finite element model of the osteoligamentous spine would serve as a valuable tool to assess, compare, and optimize spinal devices. Thus the purpose of the study was to develop and initiate validation of a detailed osteoligamentous finite element model of the spine with simulated correction from spinal instrumentation. A finite element model of the spine from T1 to L5 was developed using properties and geometry from the published literature and patient data. Spinal instrumentation, consisting of segmental translation of a scoliotic spine, was emulated. Postoperative patient data and relevant published data on intervertebral disc stress, screw/vertebra pullout forces, and spinal profiles were used to evaluate the model's validity. Intervertebral disc and vertebral reaction stresses agreed with published in vivo, ex vivo, and in silico values. Screw/vertebra reaction forces agreed with accepted pullout threshold values. Cobb angle measurements of spinal deformity following simulated surgical instrumentation corroborated patient data. This computational biomechanical analysis validated a detailed volumetric spine model. Future studies seek to exploit the model to explore the performance of corrective spinal devices. PMID:23991426
NASA Astrophysics Data System (ADS)
Babu, C. Rajesh; Kumar, P.; Rajamohan, G.
2017-07-01
Fluid flow and heat transfer in an economizer are simulated by a porous medium approach, with plain tubes in a horizontal in-line arrangement and a cross-flow arrangement in a coal-fired thermal power plant. The economizer is a thermo-mechanical device that captures waste heat from the exhaust flue gases through heat transfer surfaces to preheat boiler feed water. In order to evaluate the fluid flow and heat transfer over the tubes, a numerical analysis of heat transfer performance is carried out on a 110 t/h MCR (maximum continuous rating) boiler unit. In this study, thermal performance is investigated using computational fluid dynamics (CFD) simulation in ANSYS FLUENT. The fouling factor ε and the overall heat transfer coefficient ψ are employed to evaluate the fluid flow and heat transfer. The model demands significant computational detail for geometric modeling, grid generation, and numerical calculation to evaluate the thermal performance of an economizer. The simulation results show that the overall heat transfer coefficient of 37.76 W/(m²K) and the economizer coil-side pressure drop of 0.2 kg/cm² are within tolerable limits when compared with existing industrial economizer data.
Computer-simulated laboratory explorations for middle school life, earth, and physical Science
NASA Astrophysics Data System (ADS)
von Blum, Ruth
1992-06-01
Explorations in Middle School Science is a set of 72 computer-simulated laboratory lessons in life, earth, and physical science for grades 6-9, developed by Jostens Learning Corporation with grants from the California State Department of Education and the National Science Foundation. At the heart of each lesson is a computer-simulated laboratory that actively involves students in doing science, improving their: (1) understanding of science concepts by applying critical thinking to solve real problems; (2) skills in scientific processes and communications; and (3) attitudes about science. Students use on-line tools (notebook, calculator, word processor) to undertake in-depth investigations of phenomena (like motion in outer space, disease transmission, volcanic eruptions, or the structure of the atom) that would be too difficult, dangerous, or outright impossible to do in a "live" laboratory. Suggested extension activities lead students to hands-on investigations, away from the computer. This article presents the underlying rationale, instructional model, and process by which Explorations was designed and developed. It also describes the general courseware structure and three lessons in detail, and presents preliminary data from the evaluation. Finally, it suggests a model for incorporating technology into the science classroom.
Parallel Tensor Compression for Large-Scale Scientific Data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kolda, Tamara G.; Ballard, Grey; Austin, Woody Nathan
As parallel computing trends towards the exascale, scientific data produced by high-fidelity simulations are growing increasingly massive. For instance, a simulation on a three-dimensional spatial grid with 512 points per dimension that tracks 64 variables per grid point for 128 time steps yields 8 TB of data. By viewing the data as a dense five-way tensor, we can compute a Tucker decomposition to find inherent low-dimensional multilinear structure, achieving compression ratios of up to 10000 on real-world data sets with negligible loss in accuracy. So that we can operate on such massive data, we present the first-ever distributed-memory parallel implementation of the Tucker decomposition, whose key computations correspond to parallel linear algebra operations, albeit with nonstandard data layouts. Our approach specifies a data distribution for tensors that avoids any tensor data redistribution, either locally or in parallel. We provide accompanying analysis of the computation and communication costs of the algorithms. To demonstrate the compression and accuracy of the method, we apply our approach to real-world data sets from combustion science simulations. We also provide detailed performance results, including parallel performance in both weak and strong scaling experiments.
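For orientation, the core compression step can be sketched serially with a truncated higher-order SVD (the paper's contribution is the distributed-memory version; nothing below reflects its data layouts or API):

    import numpy as np

    def hosvd(X, ranks):
        # Truncated Tucker/HOSVD: per-mode factors from the leading left
        # singular vectors of each unfolding, then contract to the core.
        factors = []
        for mode, r in enumerate(ranks):
            Xm = np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)
            factors.append(np.linalg.svd(Xm, full_matrices=False)[0][:, :r])
        core = X
        for mode, U in enumerate(factors):
            core = np.moveaxis(np.tensordot(U.T, core, axes=(1, mode)), 0, mode)
        return core, factors

    X = np.random.rand(40, 40, 40)       # stand-in for gridded simulation data
    core, factors = hosvd(X, (5, 5, 5))
    ratio = X.size / (core.size + sum(U.size for U in factors))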
2011-01-01
Background Computational models play an increasingly important role in the assessment and control of public health crises, as demonstrated during the 2009 H1N1 influenza pandemic. Much research has been done in recent years in the development of sophisticated data-driven models for realistic computer-based simulations of infectious disease spreading. However, only a few computational tools are presently available for assessing scenarios, predicting epidemic evolutions, and managing health emergencies that can benefit a broad audience of users including policy makers and health institutions. Results We present "GLEaMviz", a publicly available software system that simulates the spread of emerging human-to-human infectious diseases across the world. The GLEaMviz tool comprises three components: the client application, the proxy middleware, and the simulation engine. The latter two components constitute the GLEaMviz server. The simulation engine leverages the Global Epidemic and Mobility (GLEaM) framework, a stochastic computational scheme that integrates worldwide high-resolution demographic and mobility data to simulate disease spread on the global scale. The GLEaMviz design aims at maximizing flexibility in defining the disease compartmental model and configuring the simulation scenario; it allows the user to set a variety of parameters including: compartment-specific features, transition values, and environmental effects. The output is a dynamic map and a corresponding set of charts that quantitatively describe the geo-temporal evolution of the disease. The software is designed as a client-server system. The multi-platform client, which can be installed on the user's local machine, is used to set up simulations that will be executed on the server, thus avoiding specific requirements for large computational capabilities on the user side. Conclusions The user-friendly graphical interface of the GLEaMviz tool, along with its high level of detail and the realism of its embedded modeling approach, opens up the platform to simulate realistic epidemic scenarios. These features make the GLEaMviz computational tool a convenient teaching/training tool as well as a first step toward the development of a computational tool aimed at facilitating the use and exploitation of computational models for the policy making and scenario analysis of infectious disease outbreaks. PMID:21288355
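The compartmental machinery at the heart of such tools can be conveyed by a single-population stochastic SEIR step with binomially sampled transitions (a toy sketch: GLEaM additionally couples thousands of subpopulations through mobility data, and none of the parameter values below come from the paper):

    import numpy as np

    rng = np.random.default_rng(0)

    def seir_step(S, E, I, R, beta=0.5, sigma=0.25, gamma=0.1, dt=1.0):
        # Chain-binomial update: each transition count is a binomial draw.
        N = S + E + I + R
        new_E = rng.binomial(S, 1 - np.exp(-beta * I / N * dt))  # infection
        new_I = rng.binomial(E, 1 - np.exp(-sigma * dt))         # incubation
        new_R = rng.binomial(I, 1 - np.exp(-gamma * dt))         # recovery
        return S - new_E, E + new_E - new_I, I + new_I - new_R, R + new_R

    state = (999_000, 0, 1_000, 0)
    for day in range(300):
        state = seir_step(*state)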
NASA Astrophysics Data System (ADS)
Saghafian, Amirreza; Pitsch, Heinz
2012-11-01
A compressible flamelet/progress variable approach (CFPV) has been devised for high-speed flows. Temperature is computed from the transported total energy and tabulated species mass fractions, and the source term of the progress variable is rescaled with pressure and temperature. The combustion is thus modeled by three additional scalar equations and a chemistry table that is computed in a pre-processing step. Three-dimensional direct numerical simulation (DNS) databases of a reacting supersonic turbulent mixing layer with detailed chemistry are analyzed to assess the underlying assumptions of CFPV. Large eddy simulations (LES) of the same configuration using the CFPV method have been performed and compared with the DNS results. The LES computations are based on presumed subgrid PDFs of mixture fraction and progress variable (a beta function and a delta function, respectively), which are assessed using the DNS databases. The flamelet equation budget is also computed to verify the validity of the CFPV method for high-speed flows.
Trace contaminant control simulation computer program, version 8.1
NASA Technical Reports Server (NTRS)
Perry, J. L.
1994-01-01
The Trace Contaminant Control Simulation computer program is a tool for assessing the performance of various process technologies for removing trace chemical contamination from a spacecraft cabin atmosphere. Included in the simulation are chemical and physical adsorption by activated charcoal, chemical adsorption by lithium hydroxide, absorption by humidity condensate, and low- and high-temperature catalytic oxidation. Means are provided for simulating regenerable as well as nonregenerable systems. The program provides an overall mass balance of chemical contaminants in a spacecraft cabin given specified generation rates. Removal rates are based on device flow rates specified by the user and calculated removal efficiencies based on cabin concentration and removal technology experimental data. Versions 1.0 through 8.0 are documented in NASA TM-108409. TM-108409 also contains a source file listing for version 8.0. Changes to version 8.0 are documented in this technical memorandum and a source file listing for the modified version, version 8.1, is provided. Detailed descriptions for the computer program subprograms are extracted from TM-108409 and modified as necessary to reflect version 8.1. Version 8.1 supersedes version 8.0. Information on a separate user's guide is available from the author.
A novel medical image data-based multi-physics simulation platform for computational life sciences.
Neufeld, Esra; Szczerba, Dominik; Chavannes, Nicolas; Kuster, Niels
2013-04-06
Simulating and modelling complex biological systems in computational life sciences requires specialized software tools that can perform medical image data-based modelling, jointly visualize the data and computational results, and handle large, complex, realistic and often noisy anatomical models. The required novel solvers must provide the power to model the physics, biology and physiology of living tissue within the full complexity of the human anatomy (e.g. neuronal activity, perfusion and ultrasound propagation). A multi-physics simulation platform satisfying these requirements has been developed for applications including device development and optimization, safety assessment, basic research, and treatment planning. This simulation platform consists of detailed, parametrized anatomical models, a segmentation and meshing tool, a wide range of solvers and optimizers, a framework for the rapid development of specialized and parallelized finite element method solvers, a visualization toolkit-based visualization engine, a Python scripting interface for customized applications, a coupling framework, and more. Core components are cross-platform compatible and use open formats. Several examples of applications are presented: hyperthermia cancer treatment planning, tumour growth modelling, evaluating the magneto-haemodynamic effect as a biomarker and physics-based morphing of anatomical models.
Numerical Simulation of Vitiation Effects on a Hydrogen-Fueled Dual-Mode Scramjet
NASA Technical Reports Server (NTRS)
Vyas, Manan A.; Engblom, William A.; Georgiadis, Nicholas J.; Trefny, Charles J.; Bhagwandin, Vishal A.
2010-01-01
The Wind-US computational fluid dynamics (CFD) flow solver was used to simulate dual-mode direct-connect ramjet/scramjet engine flowpath tests conducted in the University of Virginia (UVa) Supersonic Combustion Facility (SCF). The objective was to develop a computational capability within Wind-US to aid current hypersonic research and provide insight into flow and chemistry details that are not resolved by the available instrumentation. Computational results are compared with experimental data to validate the accuracy of the numerical modeling. These results include two fuel-off non-reacting and eight fuel-on reacting cases with different equivalence ratios, split between one set with a clean (non-vitiated) air supply and the other set with a vitiated air supply (12 percent H2O vapor). The Peters and Rogg hydrogen-air chemical kinetics model was selected for the scramjet simulations. A limited sensitivity study was done to investigate the choice of turbulence model and inviscid flux scheme and led to the selection of the k-epsilon model and the Harten, Lax and van Leer (for contact waves) (HLLC) scheme for general use. Simulation results show reasonably good agreement with experimental data, and the overall vitiation effects were captured.
Development of an Efficient Binaural Simulation for the Analysis of Structural Acoustic Data
NASA Technical Reports Server (NTRS)
Johnson, Marty E.; Lalime, Aimee L.; Grosveld, Ferdinand W.; Rizzi, Stephen A.; Sullivan, Brenda M.
2003-01-01
Applying binaural simulation techniques to structural acoustic data can be very computationally intensive because the number of discrete noise sources can be very large. Typically, Head Related Transfer Functions (HRTFs) are used to individually filter the signals from each of the sources in the acoustic field. Therefore, creating a binaural simulation implies the use of potentially hundreds of real-time filters. This paper details two methods of reducing the number of real-time computations required: (i) using the singular value decomposition (SVD) to reduce the complexity of the HRTFs by breaking them into dominant singular values and vectors, and (ii) using equivalent source reduction (ESR) to reduce the number of sources to be analyzed in real time by replacing sources on the scale of a structural wavelength with sources on the scale of an acoustic wavelength. The ESR and SVD reduction methods can be combined to provide an estimated computation time reduction of 99.4% for the structural acoustic data tested. In addition, preliminary tests have shown that there is a 97% correlation between the results of the combined reduction methods and the results found with the current binaural simulation techniques.
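The SVD step can be pictured as a low-rank factorization of the HRTF matrix: a handful of shared filters replace one filter per source, and each source then contributes only scalar weights. A hedged NumPy sketch with random stand-in data (real HRTFs are measured):

    import numpy as np

    nfreq, nsrc = 256, 400               # frequency bins x source directions
    H = np.random.randn(nfreq, nsrc) + 1j * np.random.randn(nfreq, nsrc)

    U, s, Vh = np.linalg.svd(H, full_matrices=False)
    k = 8                                # keep the dominant singular values
    Hk = U[:, :k] * s[:k] @ Vh[:k, :]    # rank-k approximation of all HRTFs

    # Run only the k shared filters U[:, :k]; source i then needs just the
    # k scalar weights s[:k] * Vh[:k, i] applied to its signal.
    rel_err = np.linalg.norm(H - Hk) / np.linalg.norm(H)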
Wall Modeled Large Eddy Simulation of Airfoil Trailing Edge Noise
NASA Astrophysics Data System (ADS)
Kocheemoolayil, Joseph; Lele, Sanjiva
2014-11-01
Large eddy simulation (LES) of airfoil trailing edge noise has largely been restricted to low Reynolds numbers due to prohibitive computational cost. Wall modeled LES (WMLES) is a computationally cheaper alternative that makes full-scale Reynolds numbers relevant to large wind turbines accessible. A systematic investigation of trailing edge noise prediction using WMLES is conducted. Detailed comparisons are made with experimental data. The stress boundary condition from a wall model does not constrain the fluctuating velocity to vanish at the wall. This limitation has profound implications for trailing edge noise prediction. The simulation over-predicts the intensity of fluctuating wall pressure and far-field noise. An improved wall model formulation that minimizes the over-prediction of fluctuating wall pressure is proposed and carefully validated. The flow configurations chosen for the study are from the workshop on benchmark problems for airframe noise computations. The large eddy simulation database is used to examine the adequacy of scaling laws that quantify the dependence of trailing edge noise on Mach number, Reynolds number and angle of attack. Simplifying assumptions invoked in engineering approaches towards predicting trailing edge noise are critically evaluated. We gratefully acknowledge financial support from GE Global Research and thank Cascade Technologies Inc. for providing access to their massively-parallel large eddy simulation framework.
Evaluation of Airframe Noise Reduction Concepts via Simulations Using a Lattice Boltzmann Approach
NASA Technical Reports Server (NTRS)
Fares, Ehab; Casalino, Damiano; Khorrami, Mehdi R.
2015-01-01
Unsteady computations are presented for a high-fidelity, 18% scale, semi-span Gulfstream aircraft model in landing configuration, i.e., flap deflected at 39 degrees and main landing gear deployed. The simulations employ the lattice Boltzmann solver PowerFLOW® to simultaneously capture the flow physics and acoustics in the near field. Sound propagation to the far field is obtained using a Ffowcs Williams and Hawkings acoustic analogy approach. In addition to the baseline geometry, which was presented previously, various noise reduction concepts for the flap and main landing gear are simulated. In particular, care is taken to fully resolve the complex geometrical details associated with these concepts in order to capture the resulting intricate local flow field, thus enabling accurate prediction of their acoustic behavior. To determine aeroacoustic performance, the far-field noise predicted with the concepts applied is compared to high-fidelity simulations of the untreated baseline configurations. To assess the accuracy of the computed results, the aerodynamic and aeroacoustic impact of the noise reduction concepts is evaluated numerically and compared to experimental results for the same model. The trends and effectiveness of the simulated noise reduction concepts compare well with measured values and demonstrate that the computational approach is capable of capturing the primary effects of the acoustic treatment on a full aircraft model.
Multi-phase models for water and thermal management of proton exchange membrane fuel cell: A review
NASA Astrophysics Data System (ADS)
Zhang, Guobin; Jiao, Kui
2018-07-01
The 3D (three-dimensional) multi-phase CFD (computational fluid dynamics) model is widely utilized in optimizing the water and thermal management of PEM (proton exchange membrane) fuel cells. However, a satisfactory 3D multi-phase CFD model that can simulate the detailed gas-liquid two-phase flow in channels and precisely reflect its effect on performance has yet to be developed, owing to coupling difficulties and the computational burden. Meanwhile, the agglomerate model of the CL (catalyst layer) should also be added to 3D CFD models so as to better reflect the concentration loss and to optimize CL structure at the macroscopic scale. Moreover, the effect of thermal management is perhaps underestimated in current 3D multi-phase CFD simulations because of the lack of a coolant channel in the computational domain and the use of constant-temperature boundary conditions. Therefore, 3D CFD simulations at the cell and stack levels with convection boundary conditions are suggested, to simulate the water and thermal management more accurately. Nevertheless, with the rapid development of PEM fuel cells, current 3D CFD simulations fall far short of practical demands, especially at high current density and low to zero humidity, and for recently developed novel designs such as metal foam flow fields, 3D fine-mesh flow fields, and anode circulation.
Mannava, Sandeep; Plate, Johannes F; Tuohy, Christopher J; Seyler, Thorsten M; Whitlock, Patrick W; Curl, Walton W; Smith, Thomas L; Saul, Katherine R
2013-07-01
The purpose of this article is to review basic science studies using various animal models for rotator cuff research and to describe structural, biomechanical, and functional changes to muscle following rotator cuff tears. The use of computational simulations to translate the findings from animal models to human scale is further detailed. A comprehensive review was performed of the basic science literature describing the use of animal models and simulation analysis to examine muscle function following rotator cuff injury and repair in the ageing population. The findings from various studies of rotator cuff pathology emphasize the importance of preventing permanent muscular changes with detrimental results. In vivo muscle function, electromyography, and passive muscle-tendon unit properties were studied before and after supraspinatus tenotomy in a rodent rotator cuff injury model (acute vs chronic). Then, a series of simulation experiments were conducted using a validated computational human musculoskeletal shoulder model to assess both passive and active tension of rotator cuff repairs based on surgical positioning. Outcomes of rotator cuff repair may be improved by earlier surgical intervention, with lower surgical repair tensions and fewer electromyographic neuromuscular changes. An integrated approach of animal experiments, computer simulation analyses, and clinical studies may allow us to gain a fundamental understanding of the underlying pathology and interpret the results for clinical translation.
On the applicability of density dependent effective interactions in cluster-forming systems
NASA Astrophysics Data System (ADS)
Montes-Saralegui, Marta; Kahl, Gerhard; Nikoubashman, Arash
2017-02-01
We systematically studied the validity and transferability of the force-matching algorithm for computing effective pair potentials in a system of dendritic polymers, i.e., a particular class of ultrasoft colloids. We focused on amphiphilic dendrimers, macromolecules which can aggregate into clusters of overlapping particles to minimize the contact area with the surrounding implicit solvent. Simulations were performed for both the monomeric and coarse-grained models in the liquid phase at densities ranging from infinite dilution up to values close to the freezing point. The effective pair potentials for the coarse-grained simulations were computed from the monomeric simulations both in the zero-density limit (Φeff0) and at each investigated finite density (Φeff). Conducting the coarse-grained simulations with Φeff0 at higher densities is not appropriate, as they failed to reproduce the structural properties of the monomeric simulations. In contrast, we found excellent agreement between the spatial dendrimer distributions obtained from the coarse-grained simulations with Φeff and the microscopically detailed simulations at low densities, where the macromolecules were distributed homogeneously in the system. However, the reliability of the coarse-grained simulations deteriorated significantly as the density was increased further and the cluster occupation became more polydisperse. Under these conditions, the effective pair potential of the coarse-grained model can no longer be computed by averaging over the whole system, but the local density needs to be taken into account instead.
Automation of a Wave-Optics Simulation and Image Post-Processing Package on Riptide
NASA Astrophysics Data System (ADS)
Werth, M.; Lucas, J.; Thompson, D.; Abercrombie, M.; Holmes, R.; Roggemann, M.
Detailed wave-optics simulations and image post-processing algorithms are computationally expensive and benefit from the massively parallel hardware available at supercomputing facilities. We created an automated system that interfaces with the Maui High Performance Computing Center (MHPCC) Distributed MATLAB® Portal interface to submit massively parallel wave-optics simulations to the IBM iDataPlex (Riptide) supercomputer. This system subsequently post-processes the output images with an improved version of physically constrained iterative deconvolution (PCID) and analyzes the results using a series of modular algorithms written in Python. With this architecture, a single person can simulate thousands of unique scenarios and produce analyzed, archived, and briefing-compatible output products with very little effort. This research was developed with funding from the Defense Advanced Research Projects Agency (DARPA). The views, opinions, and/or findings expressed are those of the author(s) and should not be interpreted as representing the official views or policies of the Department of Defense or the U.S. Government.
NASA Astrophysics Data System (ADS)
Fischer, T.; Naumov, D.; Sattler, S.; Kolditz, O.; Walther, M.
2015-11-01
We offer a versatile workflow to convert geological models built with the Paradigm GOCAD (Geological Object Computer Aided Design) software into the open-source VTU (Visualization Toolkit unstructured grid) format for usage in numerical simulation models. Tackling relevant scientific questions or engineering tasks often involves multidisciplinary approaches. Conversion workflows are needed as a way of communication between the diverse tools of the various disciplines. Our approach offers an open-source, platform-independent, robust, and comprehensible method that is potentially useful for a multitude of environmental studies. With two application examples in the Thuringian Syncline, we show how a heterogeneous geological GOCAD model including multiple layers and faults can be used for numerical groundwater flow modeling, in our case employing the OpenGeoSys open-source numerical toolbox for groundwater flow simulations. The presented workflow offers the chance to incorporate increasingly detailed data, utilizing the growing availability of computational power to simulate numerical models.
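As a minimal illustration of such a conversion (the published workflow is more general, handling layered volume models, faults, and property data), a single GOCAD TSurf surface stored as ASCII VRTX/TRGL records could be rewritten as VTU with the third-party meshio package:

    import numpy as np
    import meshio  # third-party package, assumed available

    def tsurf_to_vtu(path_in, path_out):
        # Parse VRTX/PVRTX vertex records and TRGL triangle records
        # (GOCAD ids are arbitrary, so remap them to 0-based indices).
        points, tris, id_map = [], [], {}
        with open(path_in) as f:
            for line in f:
                tok = line.split()
                if not tok:
                    continue
                if tok[0] in ("VRTX", "PVRTX"):
                    id_map[tok[1]] = len(points)
                    points.append([float(v) for v in tok[2:5]])
                elif tok[0] == "TRGL":
                    tris.append([id_map[t] for t in tok[1:4]])
        mesh = meshio.Mesh(np.array(points), [("triangle", np.array(tris))])
        meshio.write(path_out, mesh)

    tsurf_to_vtu("surface.ts", "surface.vtu")  # hypothetical file names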
Progressive Fracture of Composite Structures
NASA Technical Reports Server (NTRS)
Minnetyan, Levon
2001-01-01
This report presents the results of research in which the COmposite Durability STRuctural ANalysis (CODSTRAN) computational simulation capabilities were augmented and applied to various structures for demonstration of the new features and verification. The first chapter of this report provides an introduction to the computational simulation, or virtual laboratory, approach for the assessment of damage and fracture progression characteristics in composite structures. The second chapter outlines the details of the overall methodology used, including the failure criteria and the incremental/iterative loading procedure with the definitions of damage, fracture, and equilibrium states. The subsequent chapters each contain an augmented feature of the code and/or demonstration examples. All but one of the presented examples contain laminated composite structures with various fiber/matrix constituents. For each structure simulated, damage initiation and progression mechanisms are identified and the structural damage tolerance is quantified at various degradation stages. Many chapters contain simulations of defective and defect-free structures to evaluate the effects of existing defects on structural durability.
The NASA Ames 16-Inch Shock Tunnel Nozzle Simulations and Experimental Comparison
NASA Technical Reports Server (NTRS)
Tokarcik-Polsky, S.; Papadopoulos, P.; Venkatapathy, E.; Deiwert, G. S.; Edwards, Thomas A. (Technical Monitor)
1995-01-01
The 16-Inch Shock Tunnel at NASA Ames Research Center is a unique test facility used for hypersonic propulsion testing. To provide information necessary to understand the hypersonic testing of the combustor model, computational simulations of the facility nozzle were performed and results are compared with available experimental data, namely static pressure along the nozzle walls and pitot pressure at the exit of the nozzle section. Both quasi-one-dimensional and axisymmetric approaches were used to study the numerous modeling issues involved. The facility nozzle flow was examined for three hypersonic test conditions, and the computational results are presented in detail. The effects of variations in reservoir conditions, boundary layer growth, and parameters of numerical modeling are explored.
An automatic frequency control loop using overlapping DFTs (Discrete Fourier Transforms)
NASA Technical Reports Server (NTRS)
Aguirre, S.
1988-01-01
An automatic frequency control (AFC) loop is introduced and analyzed in detail. The new scheme is a generalization of the well-known Cross Product AFC loop that uses running overlapping discrete Fourier transforms (DFTs) to create a discriminator curve. Linear analysis is included and supported with computer simulations. The algorithm is tested in a low carrier-to-noise ratio (CNR) dynamic environment, and the probability of loss of lock is estimated via computer simulations. The algorithm discussed is a suboptimum tracking scheme with a larger frequency error variance compared to an optimum strategy, but offers simplicity of implementation and a very low operating threshold CNR. This technique can be applied during the carrier acquisition and re-acquisition process in the Advanced Receiver.
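The discriminator idea can be sketched as follows: two overlapping DFTs of the received signal, evaluated at the current frequency estimate, differ in phase by an amount proportional to the residual frequency error. An illustrative NumPy sketch (the loop gain and signal parameters are made up, and this is not the Advanced Receiver implementation):

    import numpy as np

    def dft_at(x, start, nfft, f, fs):
        n = np.arange(start, start + nfft)
        return np.sum(x[start:start + nfft] * np.exp(-2j * np.pi * f * n / fs))

    def afc(x, fs, f_est, nfft=256, hop=64, gain=0.1):
        for s in range(0, len(x) - nfft - hop, hop):
            X0 = dft_at(x, s, nfft, f_est, fs)
            X1 = dft_at(x, s + hop, nfft, f_est, fs)
            # cross/dot products of successive DFTs give the phase rotation
            df = np.arctan2(np.imag(X1 * np.conj(X0)),
                            np.real(X1 * np.conj(X0))) * fs / (2 * np.pi * hop)
            f_est += gain * df           # first-order loop update
        return f_est

    fs, f_true = 1.0e4, 1234.0
    t = np.arange(50_000) / fs
    x = np.exp(2j * np.pi * f_true * t) + 0.5 * (
        np.random.randn(t.size) + 1j * np.random.randn(t.size))
    f_hat = afc(x, fs, f_est=1200.0)     # converges toward f_true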
Cycle-averaged dynamics of a periodically driven, closed-loop circulation model
NASA Technical Reports Server (NTRS)
Heldt, T.; Chang, J. L.; Chen, J. J. S.; Verghese, G. C.; Mark, R. G.
2005-01-01
Time-varying elastance models have been used extensively in the past to simulate the pulsatile nature of cardiovascular waveforms. Frequently, however, one is interested in dynamics that occur over longer time scales, in which case a detailed simulation of each cardiac contraction becomes computationally burdensome. In this paper, we apply circuit-averaging techniques to a periodically driven, closed-loop, three-compartment recirculation model. The resultant cycle-averaged model is linear and time invariant, and greatly reduces the computational burden. It is also amenable to systematic order reduction methods that lead to further efficiencies. Despite its simplicity, the averaged model captures the dynamics relevant to the representation of a range of cardiovascular reflex mechanisms. © 2004 Elsevier Ltd. All rights reserved.
Using virtualization to protect the proprietary material science applications in volunteer computing
NASA Astrophysics Data System (ADS)
Khrapov, Nikolay P.; Rozen, Valery V.; Samtsevich, Artem I.; Posypkin, Mikhail A.; Sukhomlin, Vladimir A.; Oganov, Artem R.
2018-04-01
USPEX is a world-leading software package for computational material design. In essence, USPEX splits a simulation into a large number of workunits that can be processed independently. This scheme ideally fits the desktop grid architecture. Workunit processing is done by a simulation package aimed at energy minimization. Many such packages are proprietary and should be protected from unauthorized access when running on a volunteer PC. In this paper we present an original approach based on virtualization. In a nutshell, the proprietary code and input files are stored in an encrypted folder and run inside a virtual machine image that is also password protected. The paper describes this approach in detail and discusses its application in the USPEX@home volunteer project.
Aerodynamic study of different cyclist positions: CFD analysis and full-scale wind-tunnel tests.
Defraeye, Thijs; Blocken, Bert; Koninckx, Erwin; Hespel, Peter; Carmeliet, Jan
2010-05-07
Three different cyclist positions were evaluated with Computational Fluid Dynamics (CFD), and wind-tunnel experiments were used to provide reliable data to evaluate the accuracy of the CFD simulations. Specific features of this study are: (1) both steady Reynolds-averaged Navier-Stokes (RANS) and unsteady flow modelling, with more advanced turbulence modelling techniques (Large-Eddy Simulation - LES), were evaluated; (2) the boundary layer on the cyclist's surface was resolved entirely with low-Reynolds-number modelling, instead of modelling it with wall functions; (3) apart from drag measurements, surface pressure measurements on the cyclist's body were also performed in the wind-tunnel experiment, which provided the basis for a more detailed evaluation of the flow field predicted by CFD. The results show that the simulated and measured drag areas differed by about 11% (RANS) and 7% (LES), which is considered to be a close agreement in CFD studies. A fair agreement with wind-tunnel data was obtained for the predicted surface pressures, especially with LES. Despite the higher accuracy of LES, its much higher computational cost could make RANS more attractive for practical use in some situations. CFD is found to be a valuable tool to evaluate the drag of different cyclist positions and to investigate the influence of small adjustments in the cyclist's position. A strong advantage of CFD is that detailed flow field information is obtained, which cannot easily be obtained from wind-tunnel tests. This detailed information allows more insight into the causes of the drag force and provides better guidance for position improvements. Copyright 2010 Elsevier Ltd. All rights reserved.
Atomic detail visualization of photosynthetic membranes with GPU-accelerated ray tracing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stone, John E.; Sener, Melih; Vandivort, Kirby L.
The cellular process responsible for providing energy for most life on Earth, namely, photosynthetic light-harvesting, requires the cooperation of hundreds of proteins across an organelle, involving length and time scales spanning several orders of magnitude over quantum and classical regimes. Simulation and visualization of this fundamental energy conversion process pose many unique methodological and computational challenges. In this paper, we present, in two accompanying movies, light-harvesting in the photosynthetic apparatus found in purple bacteria, the so-called chromatophore. The movies are the culmination of three decades of modeling efforts, featuring the collaboration of theoretical, experimental, and computational scientists. Finally, we describe the techniques that were used to build, simulate, analyze, and visualize the structures shown in the movies, and we highlight cases where scientific needs spurred the development of new parallel algorithms that efficiently harness GPU accelerators and petascale computers.
Simulation procedure for modeling transient water table and artesian stress and response
Reed, J.E.; Bedinger, M.S.; Terry, J.E.
1976-01-01
The series of computer programs described in this report were designed specifically to model the ground-water regime in sufficient detail to determine the effects of the imposition of various types of stress upon the system, and to display the results in a convenient manner during calibration and when presenting projected data. SUPERMOCK simulates the ground-water system and DATE and HYDROG aid in the display of computed data. During calibration, DATE is especially useful because it has the optional feature of comparing computed data with observed data. Although the programs can be run independently, experience dictates that for best results the three should be run as steps in the same job. English units of inches, feet, and days are used in each of the programs. The units for any parameters not given in the text are clearly specified in the instructions for input to the individual programs. (Woodard-USGS)
Equation-free multiscale computation: algorithms and applications.
Kevrekidis, Ioannis G; Samaey, Giovanni
2009-01-01
In traditional physicochemical modeling, one derives evolution equations at the (macroscopic, coarse) scale of interest; these are used to perform a variety of tasks (simulation, bifurcation analysis, optimization) using an arsenal of analytical and numerical techniques. For many complex systems, however, although one observes evolution at a macroscopic scale of interest, accurate models are only given at a more detailed (fine-scale, microscopic) level of description (e.g., lattice Boltzmann, kinetic Monte Carlo, molecular dynamics). Here, we review a framework for computer-aided multiscale analysis, which enables macroscopic computational tasks (over extended spatiotemporal scales) using only appropriately initialized microscopic simulation on short time and length scales. The methodology bypasses the derivation of macroscopic evolution equations when these equations conceptually exist but are not available in closed form; hence the term equation-free. We selectively discuss basic algorithms and underlying principles and illustrate the approach through representative applications. We also discuss potential difficulties and outline areas for future research.
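The flavor of the framework is captured by coarse projective integration: run the microscopic simulator for a short burst, estimate the coarse time derivative from the burst, then extrapolate over a much larger step. A toy sketch in which the "microscopic" model is an ensemble of stochastic particles whose mean happens to obey the (assumed unavailable) coarse ODE dm/dt = -m:

    import numpy as np

    rng = np.random.default_rng(1)

    def micro_burst(p, t_burst, dt=1e-3):
        # stand-in fine-scale simulator: particles follow dX = -X dt + 0.1 dW
        ts, ms, t = [], [], 0.0
        while t < t_burst:
            p += -p * dt + 0.1 * np.sqrt(dt) * rng.standard_normal(p.size)
            t += dt
            ts.append(t)
            ms.append(p.mean())
        return np.array(ts), np.array(ms), p

    p = np.full(20_000, 1.0)     # "lifting": microscopic state with mean 1
    t_now, dt_proj = 0.0, 0.2
    for _ in range(10):
        ts, ms, p = micro_burst(p, t_burst=0.05)
        slope = np.polyfit(ts, ms, 1)[0]    # "restriction": coarse derivative
        m = ms[-1] + dt_proj * slope        # projective (extrapolation) step
        p = p - p.mean() + m                # re-lift to the extrapolated mean
        t_now += 0.05 + dt_proj
    # the coarse trajectory tracks exp(-t_now) without a closed-form model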
A design framework for teleoperators with kinesthetic feedback
NASA Technical Reports Server (NTRS)
Hannaford, Blake
1989-01-01
The application of a hybrid two-port model to teleoperators with force and velocity sensing at the master and slave is presented. The interfaces between human operator and master, and between environment and slave, are ports through which the teleoperator is designed to exchange energy between the operator and the environment. By computing or measuring the input-output properties of this two-port network, the hybrid two-port model of an actual or simulated teleoperator system can be obtained. It is shown that the hybrid model (as opposed to other two-port forms) leads to an intuitive representation of ideal teleoperator performance and applies to several teleoperator architectures. Thus measured values of the h matrix, or values computed from a simulation, can be used to compare performance with the ideal. The frequency-dependent h matrix is computed from a detailed SPICE model of an actual system, and the method is applied to a proposed architecture.
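In this formulation the teleoperator is summarized at each frequency by a 2x2 hybrid matrix H relating (master force, slave velocity) to (master velocity, environment force); in one common sign convention the ideal response is [[0, 1], [-1, 0]]. A hedged sketch of recovering H from two independent excitation conditions (the numbers are illustrative, not the paper's SPICE results):

    import numpy as np

    def hybrid_matrix(exc, resp):
        # Solve [Fm; vs] = H [vm; Fe] given two independent experiments:
        # columns of exc are (vm, Fe); columns of resp are (Fm, vs).
        return resp @ np.linalg.inv(exc)

    H_true = np.array([[0.05j, 0.95], [-0.90, 0.02]])   # slightly non-ideal
    exc = np.array([[1.0, 0.3 + 0.1j],
                    [0.0, 1.0]])                        # two excitation cases
    resp = H_true @ exc
    H_est = hybrid_matrix(exc, resp)
    H_ideal = np.array([[0.0, 1.0], [-1.0, 0.0]])
    deviation = np.abs(H_est - H_ideal)                 # distance from ideal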
Accelerating Time Integration for the Shallow Water Equations on the Sphere Using GPUs
Archibald, R.; Evans, K. J.; Salinger, A.
2015-06-01
The push towards larger and larger computational platforms has made it possible for climate simulations to resolve climate dynamics across multiple spatial and temporal scales. This direction in climate simulation has created a strong need to develop scalable time-stepping methods capable of accelerating throughput on high performance computing. This study details recent advances in the implementation of implicit time stepping of the spectral element dynamical core within the United States Department of Energy (DOE) Accelerated Climate Model for Energy (ACME) on graphical processing unit (GPU) based machines. We demonstrate how solvers in the Trilinos project are interfaced with ACME and GPU kernels to increase the computational speed of the residual calculations in the implicit time stepping method for the atmosphere dynamics. We demonstrate the optimization gains and the data structure reorganization that facilitates the performance improvements.
A simulation model for wind energy storage systems. Volume 1: Technical report
NASA Technical Reports Server (NTRS)
Warren, A. W.; Edsinger, R. W.; Chan, Y. K.
1977-01-01
A comprehensive computer program for the modeling of wind energy and storage systems utilizing any combination of five types of storage (pumped hydro, battery, thermal, flywheel, and pneumatic) was developed. The level of detail of the Simulation Model for Wind Energy Storage (SIMWEST) is consistent with its role of evaluating the economic feasibility as well as the general performance of wind energy systems. The software package consists of two basic programs and a library of system, environmental, and load components. The first program is a precompiler which generates computer models (in FORTRAN) of complex wind source/storage application systems from user specifications, using the respective library components. The second program provides the techno-economic system analysis with the respective I/O, the integration of system dynamics, and the iteration for convergence of variables. The SIMWEST program, as described, runs on the UNIVAC 1100 series computers.
NASA Technical Reports Server (NTRS)
Williams, Jessica L.; Bhat, Ramachandra S.; You, Tung-Han
2012-01-01
The Soil Moisture Active Passive (SMAP) mission will perform soil moisture content and freeze/thaw state observations from a low-Earth orbit. The observatory is scheduled to launch in October 2014 and will perform observations from a near-polar, frozen, and sun-synchronous Science Orbit for a 3-year data collection mission. At launch, the observatory is delivered to an Injection Orbit that is biased below the Science Orbit; the spacecraft will maneuver to the Science Orbit during the mission Commissioning Phase. The delta V needed to maneuver from the Injection Orbit to the Science Orbit is computed statistically via a Monte Carlo simulation; the 99th percentile delta V (delta V99) is carried as a line item in the mission delta V budget. This paper details the simulation and analysis performed to compute this figure and the delta V99 computed per current mission parameters.
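The statistical computation amounts to propagating dispersed injection states through the maneuver plan and reading off the 99th percentile of the resulting delta-V distribution. A heavily simplified sketch (the dispersions, small-maneuver approximations, and orbit constants below are illustrative placeholders, not SMAP values):

    import numpy as np

    rng = np.random.default_rng(2014)
    n = 100_000
    a_nom, v_orb = 6900.0, 7.5                  # km, km/s (illustrative)
    da = rng.normal(0.0, 5.0, n)                # km, semi-major-axis dispersion
    di = np.radians(rng.normal(0.0, 0.02, n))   # rad, inclination dispersion

    # small-maneuver approximations: dv_a ~ (v/2)(|da|/a), dv_i ~ v*|di|
    dv = v_orb / 2.0 * np.abs(da) / a_nom + v_orb * np.abs(di)
    dv99 = np.percentile(dv, 99.0) * 1000.0     # m/s, the budgeted delta V99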
Cardiovascular system simulation in biomedical engineering education.
NASA Technical Reports Server (NTRS)
Rideout, V. C.
1972-01-01
The use of complex cardiovascular system models, in conjunction with a large hybrid computer, in biomedical engineering courses is described. A cardiovascular blood pressure-flow model, driving a compartment model for the study of dye transport, was set up on the computer for use as a laboratory exercise by students who did not have the computer experience or skill to easily set up such a simulation, which involves some 27 differential equations running at 'real time' rate. The students were given detailed instructions regarding the model and were then able to study effects such as those due to septal and valve defects upon the pressure, flow, and dye dilution curves. The success of this experiment with the use of involved models in engineering courses suggests that this type of laboratory exercise might also be considered for use in physiology courses as an adjunct to animal experiments.
NASA Technical Reports Server (NTRS)
Strutzenberg, L. L.; Dougherty, N. S.; Liever, P. A.; West, J. S.; Smith, S. D.
2007-01-01
This paper details advances being made in the development of Reynolds-Averaged Navier-Stokes numerical simulation tools, models, and methods for the integrated Space Shuttle Vehicle at launch. The conceptual model and modeling approach described includes the development of multiple computational models to appropriately analyze the potential debris transport for critical debris sources at Lift-Off. The conceptual model described herein involves the integration of propulsion analysis for the nozzle/plume flow with the overall 3D vehicle flowfield at Lift-Off. Debris Transport Analyses are being performed using the Shuttle Lift-Off models to assess the risk to the vehicle from Lift-Off debris and appropriately prioritized mitigation of potential debris sources to continue to reduce vehicle risk. These integrated simulations are being used to evaluate plume-induced debris environments where the multi-plume interactions with the launch facility can potentially accelerate debris particles toward the vehicle.
Use of Monte Carlo simulation for the interpretation and analysis of diffuse scattering
NASA Astrophysics Data System (ADS)
Welberry, T. R.; Chan, E. J.; Goossens, D. J.; Heerdegen, A. P.
2010-02-01
With the development of computer simulation methods there is, for the first time, the possibility of having a single general method that can be used for any diffuse scattering problem in any type of system. As computers get ever faster it is expected that current methods will become increasingly powerful and applicable to a wider and wider range of problems and materials and provide results in increasingly fine detail. In this article we discuss two contrasting recent examples. The first is concerned with the two polymorphic forms of the pharmaceutical compound benzocaine. The strong and highly structured diffuse scattering in these is shown to be symptomatic of the presence of highly correlated molecular motions. The second concerns Ag+ fast ion conduction in the pearceite/polybasite family of mineral solid electrolytes. Here Monte Carlo simulation is used to model the diffuse scattering and gain insight into how the ionic conduction arises.
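A minimal version of the second type of study might generate occupational short-range order with a Metropolis Monte Carlo loop and read off the diffuse intensity as the squared Fourier transform of the occupancy fluctuations (an Ising-like toy model on a small 2D lattice, not the pearceite/polybasite simulation itself):

    import numpy as np

    rng = np.random.default_rng(7)
    L, J, beta = 64, -1.0, 0.5       # J < 0 favours like nearest neighbours
    s = rng.choice([-1, 1], size=(L, L))
    for sweep in range(100):         # Metropolis sweeps for E = J * sum s_i s_j
        for _ in range(L * L):
            i, j = rng.integers(L), rng.integers(L)
            nb = (s[(i + 1) % L, j] + s[(i - 1) % L, j]
                  + s[i, (j + 1) % L] + s[i, (j - 1) % L])
            dE = -2.0 * J * s[i, j] * nb
            if dE <= 0.0 or rng.random() < np.exp(-beta * dE):
                s[i, j] = -s[i, j]

    # diffuse scattering: |FT|^2 of occupancy fluctuations (Bragg term removed)
    F = np.fft.fft2(s - s.mean())
    I_diffuse = np.abs(F) ** 2 / s.size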
An Empirical Model for Vane-Type Vortex Generators in a Navier-Stokes Code
NASA Technical Reports Server (NTRS)
Dudek, Julianne C.
2005-01-01
An empirical model which simulates the effects of vane-type vortex generators in ducts was incorporated into the Wind-US Navier-Stokes computational fluid dynamics code. The model enables the effects of the vortex generators to be simulated without defining the details of the geometry within the grid, and makes it practical for researchers to evaluate multiple combinations of vortex generator arrangements. The model determines the strength of each vortex based on the generator geometry and the local flow conditions. Validation results are presented for flow in a straight pipe with a counter-rotating vortex generator arrangement, and the results are compared with experimental data and computational simulations using a gridded vane generator. Results are also presented for vortex generator arrays in two S-duct diffusers, along with accompanying experimental data. The effects of grid resolution and turbulence model are also examined.
Dislocation dynamics in non-convex domains using finite elements with embedded discontinuities
NASA Astrophysics Data System (ADS)
Romero, Ignacio; Segurado, Javier; LLorca, Javier
2008-04-01
The standard strategy developed by Van der Giessen and Needleman (1995 Modelling Simul. Mater. Sci. Eng. 3 689) to simulate dislocation dynamics in two-dimensional finite domains was modified to account for the effect of dislocations leaving the crystal through a free surface in the case of arbitrary non-convex domains. The new approach incorporates the displacement jumps across the slip segments of the dislocations that have exited the crystal within the finite element analysis carried out to compute the image stresses on the dislocations due to the finite boundaries. This is done in a simple computationally efficient way by embedding the discontinuities in the finite element solution, a strategy often used in the numerical simulation of crack propagation in solids. Two academic examples are presented to validate and demonstrate the extended model and its implementation within a finite element program is detailed in the appendix.
Simulation of rockfalls triggered by earthquakes
Kobayashi, Y.; Harp, E.L.; Kagawa, T.
1990-01-01
A computer program to simulate the downslope movement of boulders in rolling or bouncing modes has been developed and applied to actual rockfalls triggered by the Mammoth Lakes, California, earthquake sequence in 1980 and the Central Idaho earthquake in 1983. In order to reproduce a movement mode where bouncing predominated, we introduced an artificial unevenness to the slope surface by adding a small random number to the interpolated value of the mid-points between the adjacent surveyed points. Three hundred simulations were computed for each site by changing the random number series, which determined distances and bouncing intervals. The movement of the boulders was, in general, rather erratic, depending on the random numbers employed, and the results should be regarded as stochastic rather than deterministic. The closest agreement between calculated and actual movements was obtained at the site with the most detailed and accurate topographic measurements. © 1990 Springer-Verlag.
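A stripped-down two-dimensional sketch of such a program: the boulder flies ballistically, each impact reflects its velocity about the local surface normal with restitution losses, and the random unevenness enters through perturbed midpoints of the surveyed profile (all coefficients below are illustrative, not the calibrated values of the study):

    import numpy as np

    rng = np.random.default_rng(3)

    # surveyed profile plus randomly perturbed midpoints (the "unevenness")
    xs = np.linspace(0.0, 200.0, 21)
    zs = 100.0 - 0.5 * xs
    xm = 0.5 * (xs[:-1] + xs[1:])
    zm = np.interp(xm, xs, zs) + rng.normal(0.0, 0.3, xm.size)
    order = np.argsort(np.concatenate([xs, xm]))
    xp = np.concatenate([xs, xm])[order]
    zp = np.concatenate([zs, zm])[order]
    surf = lambda x: np.interp(x, xp, zp)

    dt, g, en, et = 0.01, 9.81, 0.4, 0.8  # step, gravity, restitution (n, t)
    x, z, vx, vz = 0.0, surf(0.0) + 2.0, 1.0, 0.0
    for _ in range(2_000_000):
        x += vx * dt; z += vz * dt; vz -= g * dt
        if z < surf(x):                   # impact: reflect about the normal
            tx, tz = 1.0, surf(x + 0.5) - surf(x - 0.5)
            nrm = np.hypot(tx, tz); tx, tz = tx / nrm, tz / nrm
            nx, nz = -tz, tx
            vn, vt = vx * nx + vz * nz, vx * tx + vz * tz
            vx = et * vt * tx - en * vn * nx
            vz = et * vt * tz - en * vn * nz
            z = surf(x)
        if x >= xp[-1] or abs(vx) + abs(vz) < 0.05:
            break
    runout = x                            # travel distance for this realization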
CG2AA: backmapping protein coarse-grained structures.
Lombardi, Leandro E; Martí, Marcelo A; Capece, Luciana
2016-04-15
Coarse-grain (CG) models allow long-scale simulations at a much lower computational cost than that of all-atom simulations. However, the absence of atomistic detail impedes the analysis of specific atomic interactions that are determinant in most interesting biomolecular processes. In order to study these phenomena, it is necessary to reconstruct the atomistic structure from the CG representation. This structure can be analyzed by itself or be used as a starting point for atomistic molecular dynamics simulations. In this work, we present a computer program that accurately reconstructs the atomistic structure from a CG model for proteins, using a simple geometrical algorithm. The software is free and available online at http://www.ic.fcen.uba.ar/cg2aa/cg2aa.py. Supplementary data are available at Bioinformatics online. Contact: lula@qi.fcen.uba.ar. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Unsteady numerical simulations of the stability and dynamics of flames
NASA Technical Reports Server (NTRS)
Kailasanath, K.; Patnaik, G.; Oran, E. S.
1995-01-01
In this report we describe the research performed at the Naval Research Laboratory in support of the NASA Microgravity Science and Applications Program over the past three years (from Feb. 1992), with emphasis on the work performed since the last microgravity combustion workshop. The primary objective of our research is to develop an understanding of the differences in the structure, stability, dynamics, and extinction of flames in earth gravity and in microgravity environments. Numerical simulations, in which the various physical and chemical processes can be independently controlled, can significantly advance our understanding of these differences. Therefore, our approach is to use detailed time-dependent, multi-dimensional, multispecies numerical models to perform carefully designed computational experiments. The basic issues we have addressed, a general description of the numerical approach, and a summary of the results are described in this report. More detailed discussions are available in the published papers referenced herein. Some of the basic issues we have addressed recently are: (1) the relative importance of wall losses and gravity on the extinguishment of downward-propagating flames; (2) the role of hydrodynamic instabilities in the formation of cellular flames; (3) effects of gravity on burner-stabilized flames; and (4) effects of radiative losses and chemical kinetics on flames near flammability limits. We have also expanded our efforts to include hydrocarbon flames in addition to hydrogen flames and to perform simulations in support of other on-going efforts in the microgravity combustion sciences program. Modeling hydrocarbon flames typically involves a larger number of species and a much larger number of reactions when compared to hydrogen. In addition, more complex radiation models may also be needed. In order to efficiently compute such complex flames, recent developments in parallel computing have been utilized to develop a state-of-the-art parallel flame code. This is discussed below in some detail after a brief discussion of the numerical models.
Computable General Equilibrium Model Fiscal Year 2013 Capability Development Report - April 2014
DOE Office of Scientific and Technical Information (OSTI.GOV)
Edwards, Brian Keith; Rivera, Michael K.; Boero, Riccardo
2014-04-01
This report documents progress made on continued development of the National Infrastructure Simulation and Analysis Center (NISAC) Computable General Equilibrium Model (NCGEM), developed in fiscal year 2012. In fiscal year 2013, NISAC refined the treatment of the labor market and performed tests with the model to examine the properties of the solutions it computes. To examine these, developers conducted a series of 20 simulations for 20 U.S. States. Each of these simulations compared an economic baseline simulation with an alternative simulation that assumed a 20-percent reduction in overall factor productivity in the manufacturing industries of each State. Differences in the simulation results between the baseline and alternative simulations capture the economic impact of the reduction in factor productivity. While not every State is affected in precisely the same way, the reduction in manufacturing industry productivity negatively affects the manufacturing industries in each State to an extent proportional to the reduction in overall factor productivity. Moreover, overall economic activity decreases when manufacturing sector productivity is reduced. Developers ran two additional simulations: (1) a version of the model for the State of Michigan, with manufacturing divided into two sub-industries (automobile and other vehicle manufacturing as one sub-industry and the rest of manufacturing as the other sub-industry); and (2) a version of the model for the United States, divided into 30 industries. NISAC conducted these simulations to illustrate the flexibility of industry definitions in NCGEM and to examine the simulation properties of the model in more detail.
NASA Astrophysics Data System (ADS)
Ervik, Åsmund; Serratos, Guadalupe Jiménez; Müller, Erich A.
2017-03-01
We describe here raaSAFT, a Python code that enables the setup and running of coarse-grained molecular dynamics simulations in a systematic and efficient manner. The code is built on top of the popular HOOMD-blue code, and as such harnesses the computational power of GPUs. The methodology makes use of the SAFT-γ Mie force field, so the resulting coarse-grained pair potentials are both closely linked to and consistent with the macroscopic thermodynamic properties of the simulated fluid. In raaSAFT both homonuclear and heteronuclear models are implemented for a wide range of compounds spanning from linear alkanes, to more complicated fluids such as water and alcohols, all the way up to nonionic surfactants and models of asphaltenes and resins. Adding new compounds as well as new features is made straightforward by the modularity of the code. To demonstrate the ease-of-use of raaSAFT, we give a detailed walkthrough of how to simulate liquid-liquid equilibrium of a hydrocarbon with water. We describe in detail how both homonuclear and heteronuclear compounds are implemented. To demonstrate the performance and versatility of raaSAFT, we simulate a large polymer-solvent mixture with 300 polystyrene molecules dissolved in 42,700 molecules of heptane, reproducing the experimentally observed temperature-dependent solubility of polystyrene. For this case we obtain a speedup of more than three orders of magnitude as compared to atomistically-detailed simulations.
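For reference, the Mie pair potential underlying the SAFT-γ approach has the standard form sketched below; per-pair parameters come from force-field tables, and the 12-6 default exponents here are only placeholders:

```python
def mie(r, eps, sigma, n=12.0, m=6.0):
    """Mie potential U(r) = C*eps*[(sigma/r)^n - (sigma/r)^m], where the
    prefactor C normalizes the well depth to eps (C = 4 for n=12, m=6)."""
    c = (n / (n - m)) * (n / m) ** (m / (n - m))
    return c * eps * ((sigma / r) ** n - (sigma / r) ** m)
```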
NASA Astrophysics Data System (ADS)
Tucker, Laura Jane
Under the harsh conditions of limited nutrients and a hard growth surface, Paenibacillus dendritiformis forms two classes of patterns (morphotypes) in agar plates. The first class, called the dendritic morphotype, has radially directed branches. The second class, called the chiral morphotype, exhibits uniform handedness. The dendritic morphotype has been modeled successfully using a continuum model on a regular lattice; however, a suitable computational approach was not known for solving a continuum chiral model. This work details a new computational approach to solving the chiral continuum model of pattern formation in P. dendritiformis. The approach utilizes a random computational lattice and new methods for calculating certain derivative terms found in the model.
NASA Astrophysics Data System (ADS)
Lele, Sanjiva K.
2002-08-01
Funds were received in April 2001 under the Department of Defense DURIP program for construction of a 48-processor high-performance computing cluster. This report details the hardware that was purchased and how it has been used to enable and enhance research activities directly supported by, and of interest to, the Air Force Office of Scientific Research and the Department of Defense. The report is divided into two major sections. The first section after this summary describes the computer cluster, its setup, and some cluster performance benchmark results. The second section explains ongoing research efforts which have benefited from the cluster hardware, and presents highlights of those efforts since installation of the cluster.
NASA Astrophysics Data System (ADS)
Ethier, Stephane; Lin, Zhihong
2001-10-01
Earlier this year, the National Energy Research Scientific Computing Center (NERSC) took delivery of the second most powerful computer in the world. With its 2,528 processors running at a peak performance of 1.5 GFlops each, this IBM SP machine has a theoretical performance of almost 3.8 TFlops. To efficiently harness such computing power in one single code is not an easy task and requires a good knowledge of the computer's architecture. Here we present the steps that we followed to improve our gyrokinetic micro-turbulence code GTC in order to take advantage of the new 16-way shared-memory nodes of the NERSC IBM SP. Performance results are shown, as well as details about the improved mixed-mode MPI-OpenMP model that we use. The enhancements to the code allowed us to tackle much bigger problem sizes, getting closer to our goal of simulating an ITER-size tokamak with both kinetic ions and electrons. (This work is supported by DOE Contract No. DE-AC02-76CH03073 (PPPL), and in part by the DOE Fusion SciDAC Project.)
Experimental Study and CFD Simulation of a 2D Circulating Fluidized Bed
NASA Astrophysics Data System (ADS)
Kallio, S.; Guldén, M.; Hermanson, A.
Computational fluid dynamics (CFD) is gaining popularity in fluidized bed modeling. For model validation, there is a need for detailed measurements under well-defined conditions. In the present study, experiments were carried out in a 40 cm wide and 3 m high 2D circulating fluidized bed. Two experiments were simulated by means of the Eulerian multiphase models of the Fluent CFD software. The vertical pressure and solids volume fraction profiles and the solids circulation rate obtained from the simulation were compared to the experimental results. In addition, lateral volume fraction profiles could be compared. The simulated CFB flow patterns and the profiles obtained from the simulations were, in general, in good agreement with the experimental results.
Virtual Observation System for Earth System Model: An Application to ACME Land Model Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Dali; Yuan, Fengming; Hernandez, Benjamin
Investigating and evaluating physical-chemical-biological processes within an Earth system model (ESM) can be very challenging due to the complexity of both model design and software implementation. A virtual observation system (VOS) is presented to enable interactive observation of these processes during system simulation. Based on advanced computing technologies, such as compiler-based software analysis, automatic code instrumentation, and high-performance data transport, the VOS provides run-time observation capability and in-situ data analytics for Earth system model simulation, as well as model behavior adjustment opportunities through simulation steering. A VOS for a terrestrial land model simulation within the Accelerated Climate Modeling for Energy model is also presented to demonstrate the implementation details and system innovations.
Large eddy simulations and direct numerical simulations of high speed turbulent reacting flows
NASA Technical Reports Server (NTRS)
Givi, Peyman; Madnia, C. K.; Steinberger, C. J.; Tsai, A.
1991-01-01
This research involves the implementation of advanced computational schemes based on large eddy simulations (LES) and direct numerical simulations (DNS) to study the phenomenon of mixing and its coupling with chemical reactions in compressible turbulent flows. In the efforts related to LES, a research program was initiated to extend the present capabilities of this method for the treatment of chemically reacting flows, whereas in the DNS efforts, the focus was on detailed investigations of the effects of compressibility, heat release, and nonequilibrium kinetics modeling in high-speed reacting flows. The efforts to date were primarily focused on simulations of simple flows, namely, homogeneous compressible flows and temporally developing high-speed mixing layers. A summary of the accomplishments is provided.
Mars Aerocapture Systems Study
NASA Technical Reports Server (NTRS)
Wright, Henry S.; Oh, David Y.; Westhelle, Carlos H.; Fisher, Jody L.; Dyke, R. Eric; Edquist, Karl T.; Brown, James L.; Justh, Hilary L.; Munk, Michelle M.
2006-01-01
Mars Aerocapture Systems Study (MASS) is a detailed study of the application of aerocapture to a large Mars robotic orbiter to assess and identify key technology gaps. The study addressed the use of an Opposition-class return segment in the Mars Sample Return architecture, covering mission architecture issues as well as system design. Key trade studies focused on the design of the aerocapture aeroshell; spacecraft design and packaging; guidance, navigation, and control with simulation; computational fluid dynamics; and thermal protection system sizing. Detailed master equipment lists are included, as well as a cursory cost assessment.
Integrated approach for stress analysis of high performance diesel engine cylinder head
NASA Astrophysics Data System (ADS)
Chainov, N. D.; Myagkov, L. L.; Malastowski, N. S.; Blinov, A. S.
2018-03-01
Growing thermal and mechanical loads due to the development of engines with high mean effective pressures impose requirements on cylinder head durability. In this paper, computational schemes for thermal and mechanical stress analysis of a high-performance diesel engine cylinder head are described. The most important aspects of this approach are accounting for the temperature fields of mating parts (valves and valve seats), modeling heat transfer in the cylinder head cooling jacket, and topology optimization of the part's load-bearing structure. Simulation results are shown and analyzed.
Extending a Flight Management Computer for Simulation and Flight Experiments
NASA Technical Reports Server (NTRS)
Madden, Michael M.; Sugden, Paul C.
2005-01-01
In modern transport aircraft, the flight management computer (FMC) has evolved from a flight planning aid to an important hub for pilot information and origin-to-destination optimization of flight performance. Current trends indicate increasing roles of the FMC in aviation safety, aviation security, increasing airport capacity, and improving environmental impact from aircraft. Related research conducted at the Langley Research Center (LaRC) often requires functional extension of a modern, full-featured FMC. Ideally, transport simulations would include an FMC simulation that could be tailored and extended for experiments. However, due to the complexity of a modern FMC, a large investment (millions of dollars over several years) and scarce domain knowledge are needed to create such a simulation for transport aircraft. As an intermediate alternative, the Flight Research Services Directorate (FRSD) at LaRC created a set of reusable software products to extend flight management functionality upstream of a Boeing-757 FMC, transparently simulating or sharing its operator interfaces. The paper details the design of these products and highlights their use on NASA projects.
Smeal, Steven W; Schmitt, Margaret A; Pereira, Ronnie Rodrigues; Prasad, Ashok; Fisk, John D
2017-01-01
To expand the quantitative, systems level understanding and foster the expansion of the biotechnological applications of the filamentous bacteriophage M13, we have unified the accumulated quantitative information on M13 biology into a genetically-structured, experimentally-based computational simulation of the entire phage life cycle. The deterministic chemical kinetic simulation explicitly includes the molecular details of DNA replication, mRNA transcription, protein translation and particle assembly, as well as the competing protein-protein and protein-nucleic acid interactions that control the timing and extent of phage production. The simulation reproduces the holistic behavior of M13, closely matching experimentally reported values of the intracellular levels of phage species and the timing of events in the M13 life cycle. The computational model provides a quantitative description of phage biology, highlights gaps in the present understanding of M13, and offers a framework for exploring alternative mechanisms of regulation in the context of the complete M13 life cycle. Copyright © 2016 Elsevier Inc. All rights reserved.
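A deterministic chemical-kinetics model of this kind reduces to a system of ODEs for species concentrations. The hedged sketch below integrates a toy transcription-translation pair standing in for the full M13 network (rate constants are invented, not the paper's parameters):

```python
from scipy.integrate import solve_ivp

def rhs(t, y, k_tx=0.5, k_tl=2.0, d_m=0.1, d_p=0.05, dna=1.0):
    """Toy kinetics: DNA -> mRNA (rate k_tx), mRNA -> protein (k_tl),
    first-order mRNA decay (d_m) and protein consumption (d_p)."""
    m, p = y
    return [k_tx * dna - d_m * m, k_tl * m - d_p * p]

sol = solve_ivp(rhs, (0.0, 60.0), [0.0, 0.0])
print(sol.y[:, -1])   # mRNA and protein levels at t = 60
```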
Sultanov, Renat A; Guster, Dennis
2009-01-01
We report computational results for blood flow through a model of the human aortic arch and a vessel of actual diameter and length. A realistic pulsatile flow is used in all simulations. Calculations for bifurcation-type vessels are also carried out and presented. Different mathematical methods for the numerical solution of the fluid dynamics equations have been considered. The non-Newtonian behaviour of human blood is investigated together with turbulence effects. A detailed time-dependent mathematical convergence test has been carried out. The results of computer simulations of the blood flow in vessels of three different geometries are presented: for the pressure, strain rate, and velocity component distributions, we found significant disagreement between results obtained with a realistic non-Newtonian treatment of human blood and those from the simple Newtonian approximation widely used in the literature. A significant increase of the strain rate and, as a result, of the wall shear stress distribution is found in the region of the aortic arch. Turbulence effects are found to be important, particularly in the case of bifurcation vessels.
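The abstract does not name the non-Newtonian closure used; as one standard shear-thinning model for blood, a Carreau fit looks like the sketch below (the parameter values are commonly quoted literature fits, not those of this study):

```python
def carreau_viscosity(gdot, mu0=0.056, mu_inf=0.00345, lam=3.313, n=0.3568):
    """Carreau apparent viscosity (Pa*s) vs. shear rate gdot (1/s):
    mu_inf + (mu0 - mu_inf) * [1 + (lam*gdot)^2] ** ((n - 1)/2)."""
    return mu_inf + (mu0 - mu_inf) * (1.0 + (lam * gdot) ** 2) ** ((n - 1.0) / 2.0)
```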
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khakhaleva-Li, Zimu; Gnedin, Nickolay Y., E-mail: zimu@uchicago.edu, E-mail: gnedin@fnal.gov
We compare the properties of stellar populations of model galaxies from the Cosmic Reionization On Computers (CROC) project with the existing ultraviolet (UV) and IR data. Since CROC simulations do not follow cosmic dust directly, we adopt two variants of the dust-follows-metals ansatz to populate model galaxies with dust. Using the dust radiative transfer code Hyperion, we compute synthetic stellar spectra, UV continuum slopes, and IR fluxes for simulated galaxies. We find that the simulation results generally match observational measurements, but, perhaps, not in full detail. The differences seem to indicate that our adopted dust-follows-metals ansatzes are not fully sufficient. While the discrepancies with the existing data are marginal, the future James Webb Space Telescope (JWST) data will be of much higher precision, rendering highly significant any tentative difference between theory and observations. It is therefore likely that, in order to fully utilize the precision of JWST observations, fully dynamical modeling of dust formation, evolution, and destruction may be required.
High-Fidelity Dynamic Modeling of Spacecraft in the Continuum--Rarefied Transition Regime
NASA Astrophysics Data System (ADS)
Turansky, Craig P.
The state of the art of spacecraft rarefied aerodynamics seldom accounts for detailed rigid-body dynamics. In part because of computational constraints, simpler models based upon the ballistic and drag coefficients are employed. Of particular interest is the continuum-rarefied transition regime of Earth's thermosphere where gas dynamic simulation is difficult yet wherein many spacecraft operate. The feasibility of increasing the fidelity of modeling spacecraft dynamics is explored by coupling rarefied aerodynamics with rigid-body dynamics modeling similar to that traditionally used for aircraft in atmospheric flight. Presented is a framework of analysis and guiding principles which capitalize on the availability of increasing computational methods and resources. Aerodynamic force inputs for modeling spacecraft in two dimensions in a rarefied flow are provided by analytical equations in the free-molecular regime, and the direct simulation Monte Carlo method in the transition regime. The application of the direct simulation Monte Carlo method to this class of problems is examined in detail with a new code specifically designed for engineering-level rarefied aerodynamic analysis. Time-accurate simulations of two distinct geometries in low thermospheric flight and atmospheric entry are performed, demonstrating non-linear dynamics that cannot be predicted using simpler approaches. The results of this straightforward approach to the aero-orbital coupled-field problem highlight the possibilities for future improvements in drag prediction, control system design, and atmospheric science. Furthermore, a number of challenges for future work are identified in the hope of stimulating the development of a new subfield of spacecraft dynamics.
Parallelization of sequential Gaussian, indicator and direct simulation algorithms
NASA Astrophysics Data System (ADS)
Nunes, Ruben; Almeida, José A.
2010-08-01
Improving the performance and robustness of algorithms on new high-performance parallel computing architectures is a key issue in efficiently performing 2D and 3D studies with large amounts of data. In geostatistics, sequential simulation algorithms are good candidates for parallelization. When compared with other computational applications in the geosciences (such as fluid flow simulators), sequential simulation software is not extremely computationally intensive, but parallelization can make it more efficient and creates alternatives for its integration in inverse modelling approaches. This paper describes the implementation and benchmarking of a parallel version of the three classic sequential simulation algorithms: direct sequential simulation (DSS), sequential indicator simulation (SIS) and sequential Gaussian simulation (SGS). For this purpose, the source used was GSLIB, but the entire code was extensively modified to take into account the parallelization approach and was also rewritten in the C programming language. The paper also explains the parallelization strategy and the main modifications in detail. Regarding the integration of secondary information, the DSS algorithm is able to perform simple kriging with local means, kriging with an external drift and collocated cokriging with both local and global correlations. SIS includes a local correction of probabilities. Finally, a brief comparison is presented of simulation results using one, two and four processors. All performance tests were carried out on 2D soil data samples. The source code is completely open source and easy to read. It should be noted that the code is only fully compatible with Microsoft Visual C and should be adapted for other systems/compilers.
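For orientation, the serial loop that such parallel implementations accelerate looks roughly like this minimal 1-D sequential Gaussian simulation sketch (the exponential covariance and all parameters are illustrative; the real codes use far richer search and kriging options):

```python
import numpy as np

def sgs_1d(x, n_cond=8, c0=1.0, a=10.0, seed=0):
    """Visit nodes along a random path; at each node, simple-krige from
    nearby already-simulated values, then draw from the local Gaussian."""
    rng = np.random.default_rng(seed)
    cov = lambda h: c0 * np.exp(-np.abs(h) / a)   # exponential covariance
    known_x, known_z, z = [], [], np.empty(len(x))
    for i in rng.permutation(len(x)):
        if known_x:
            xs, zs = np.asarray(known_x), np.asarray(known_z)
            near = np.argsort(np.abs(xs - x[i]))[:n_cond]
            xs, zs = xs[near], zs[near]
            K, k = cov(xs[:, None] - xs[None, :]), cov(xs - x[i])
            w = np.linalg.solve(K, k)             # simple kriging weights
            mean, var = w @ zs, max(c0 - w @ k, 0.0)
        else:
            mean, var = 0.0, c0
        z[i] = rng.normal(mean, np.sqrt(var))
        known_x.append(x[i]); known_z.append(z[i])
    return z

field = sgs_1d(np.arange(200.0))
```

Parallelizing this loop is non-trivial precisely because each node conditions on previously simulated nodes, which is the dependency the paper's strategy has to manage.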
Impact of detector simulation in particle physics collider experiments
NASA Astrophysics Data System (ADS)
Daniel Elvira, V.
2017-06-01
Over the last three decades, accurate simulation of the interactions of particles with matter and modeling of detector geometries have proven to be of critical importance to the success of the international high-energy physics (HEP) experimental programs. For example, the detailed detector modeling and accurate physics of the Geant4-based simulation software of the CMS and ATLAS particle physics experiments at the European Organization for Nuclear Research (CERN) Large Hadron Collider (LHC) was a determining factor in these collaborations delivering physics results of outstanding quality faster than any hadron collider experiment ever before. This review article highlights the impact of detector simulation on particle physics collider experiments. It presents numerous examples of the use of simulation, from detector design and optimization, through software and computing development and testing, to cases where the use of simulation samples made a difference in the precision of the physics results and in publication turnaround, from data-taking to submission. It also presents estimates of the cost and economic impact of simulation in the CMS experiment. Future experiments will collect orders of magnitude more data with increasingly complex detectors, heavily taxing the performance of simulation and reconstruction software. Consequently, exploring solutions to speed up simulation and reconstruction software to satisfy the growing demand for computing resources in a time of flat budgets is a matter that deserves immediate attention. The article ends with a short discussion of the potential solutions being considered, based on leveraging core-count growth in multicore machines, using new-generation coprocessors, and re-engineering HEP code for concurrency and parallel computing.
Fast Multipole Methods for Three-Dimensional N-body Problems
NASA Technical Reports Server (NTRS)
Koumoutsakos, P.
1995-01-01
We are developing computational tools for the simulation of three-dimensional flows past bodies undergoing arbitrary motions. High-resolution viscous vortex methods have been developed that allow for extended simulations of two-dimensional configurations such as vortex generators. Our objective is to extend this methodology to three dimensions and develop a robust computational scheme for the simulation of such flows. A fundamental issue in the use of vortex methods is the ability to employ efficiently large numbers of computational elements to resolve the large range of scales that exist in complex flows. The traditional cost of the method scales as O(N^2), as the N computational elements/particles induce velocities on each other, making the method unacceptable for simulations involving more than a few tens of thousands of particles. In the last decade, fast methods have been developed that have operation counts of O(N log N) or O(N) (referred to as BH and GR, respectively), depending on the details of the algorithm. These methods are based on the observation that the effect of a cluster of particles at a certain distance may be approximated by a finite series expansion. In order to exploit this observation, we need to decompose the element population spatially into clusters of particles and build a hierarchy of clusters (a tree data structure): smaller neighboring clusters combine to form a cluster of the next size up in the hierarchy, and so on. This hierarchy of clusters allows one to determine efficiently when the approximation is valid. This algorithm is an N-body solver that appears in many fields of engineering and science. Some examples of its diverse use are in astrophysics, molecular dynamics, micro-magnetics, boundary element simulations of electromagnetic problems, and computer animation. More recently these N-body solvers have been implemented and applied in simulations involving vortex methods. Koumoutsakos and Leonard (1995) implemented the GR scheme in two dimensions for vector computer architectures, allowing for simulations of bluff body flows using millions of particles. Winckelmans presented three-dimensional, viscous simulations of interacting vortex rings, using vortons and an implementation of a BH scheme for parallel computer architectures. Bhatt presented a vortex filament method to perform inviscid vortex ring interactions, with an alternative implementation of a BH scheme for a Connection Machine parallel computer architecture.
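A minimal 2-D Barnes-Hut (BH) sketch of the tree-code idea described above, using a quadtree and a monopole (center-of-mass) far-field approximation; the names, the opening angle theta, and G = 1 are illustrative choices:

```python
import numpy as np

class Cell:
    """Square quadtree cell: total mass, center of mass, and either
    one stored body (leaf) or four children (internal node)."""
    def __init__(self, center, half):
        self.center, self.half = np.asarray(center, float), half
        self.mass, self.com = 0.0, np.zeros(2)
        self.body, self.children = None, None

def _child(cell, pos):
    return cell.children[2 * (pos[0] >= cell.center[0]) + (pos[1] >= cell.center[1])]

def insert(cell, pos, m):
    if cell.mass == 0.0:                      # empty leaf: store the body
        cell.body = (np.asarray(pos, float), m)
    else:
        if cell.children is None:             # occupied leaf: split it
            cell.children = [Cell(cell.center + 0.5 * cell.half * np.array(q),
                                  0.5 * cell.half)
                             for q in ((-1, -1), (-1, 1), (1, -1), (1, 1))]
            old_pos, old_m = cell.body
            cell.body = None
            insert(_child(cell, old_pos), old_pos, old_m)
        insert(_child(cell, pos), pos, m)
    cell.com = (cell.com * cell.mass + np.asarray(pos) * m) / (cell.mass + m)
    cell.mass += m

def accel(cell, pos, theta=0.5, eps=1e-4):
    """Acceleration at pos; a far cell is replaced by its center of mass
    when (cell size / distance) < theta."""
    if cell.mass == 0.0:
        return np.zeros(2)
    d = cell.com - pos
    r = np.hypot(*d) + eps
    if cell.children is None or (2.0 * cell.half) / r < theta:
        if cell.body is not None and np.allclose(cell.body[0], pos):
            return np.zeros(2)                # skip self-interaction
        return cell.mass * d / r**3           # G = 1 monopole term
    return sum(accel(c, pos, theta, eps) for c in cell.children)

rng = np.random.default_rng(1)
bodies = rng.uniform(-1.0, 1.0, size=(500, 2))
root = Cell((0.0, 0.0), 1.0)
for b in bodies:
    insert(root, b, 1.0 / len(bodies))
print(accel(root, bodies[0]))                 # O(N log N) per full sweep
```

The GR (fast multipole) variant replaces the single center-of-mass term with higher-order series expansions and cell-to-cell translations, which is what brings the operation count down to O(N).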
Loads calibrations of strain gage bridges on the DAST project Aeroelastic Research Wing (ARW-1)
NASA Technical Reports Server (NTRS)
Eckstrom, C. V.
1980-01-01
The details of and results from the procedure used to calibrate strain gage bridges for measurement of wing structural loads for the DAST project ARW-1 wing are presented. Results are in the form of loads equations and comparison of computed loads vs. actual loads for two simulated flight loading conditions.
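A hedged sketch of the loads-equation idea (not the report's actual procedure): with known applied calibration loads and the recorded bridge outputs, linear calibration coefficients follow from a least-squares fit:

```python
import numpy as np

def calibrate_loads(bridge_out, applied_loads):
    """Fit linear loads equations  loads ~= bridge_out @ C  from
    calibration cases: bridge_out is (n_cases, n_bridges) of measured
    outputs, applied_loads is (n_cases, n_components) of known loads."""
    C, *_ = np.linalg.lstsq(bridge_out, applied_loads, rcond=None)
    return C   # (n_bridges, n_components) calibration matrix

# In use: predicted = measured_bridge_out @ C, then compare predicted
# loads with actual loads for the simulated flight loading conditions.
```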
Center of Excellence for Hypersonics Research
2012-01-25
detailed simulations of actual combustor configurations, and ultimately for the optimization of hypersonic air-breathing propulsion system flow paths... vehicle development programs. The Center engaged leading experts in experimental and computational analysis of hypersonic flows to provide research... advanced hypersonic vehicles and space access systems will require significant advances in design methods and ground testing techniques to ensure...
Modeling the October 2005 lahars at Panabaj (Guatemala)
NASA Astrophysics Data System (ADS)
Charbonnier, S. J.; Connor, C. B.; Connor, L. J.; Sheridan, M. F.; Oliva Hernández, J. P.; Richardson, J. A.
2018-01-01
An extreme rainfall event in October of 2005 triggered two deadly lahars on the flanks of Tolimán volcano (Guatemala) that caused many fatalities in the village of Panabaj. We mapped the deposits of these lahars, then developed computer simulations of the lahars using the geologic data and compared the simulated inundation area to the mapped inundation area. Computer simulation of the two lahars was dramatically improved after calibration with geological data. Specifically, detailed field measurements of flow inundation area, flow thickness, flow direction, and velocity estimates, collected after lahar emplacement, were used to calibrate the rheological input parameters for the models, including deposit volume, yield strength, sediment and water concentrations, and Manning roughness coefficients. Simulations of the two lahars, with volumes of 240,200 ± 55,400 and 126,000 ± 29,000 m³, using the FLO-2D computer program produced models of lahar runout within 3% of measured runouts and produced reasonable estimates of flow thickness and velocity along the lengths of the simulated flows. We compare areas inundated using the Jaccard fit, model sensitivity, and model precision metrics, all related to Bayes' theorem. These metrics show that false negatives (areas inundated by the observed lahar but not simulated) and false positives (areas not inundated by the observed lahar where inundation was simulated) are reduced using a model calibrated by rheology. The metrics offer a procedure for tuning model performance that will enhance model accuracy and make numerical models a more robust tool for natural hazard reduction.
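The comparison metrics named above reduce to counts of true/false positives and negatives on the inundation maps. A minimal sketch, assuming boolean rasters on a common grid (names illustrative):

```python
import numpy as np

def inundation_metrics(simulated, observed):
    """Jaccard fit, model sensitivity, and model precision for boolean
    inundation rasters (True = inundated)."""
    sim, obs = np.asarray(simulated, bool), np.asarray(observed, bool)
    tp = np.sum(sim & obs)    # inundated in both model and observation
    fp = np.sum(sim & ~obs)   # false positives: simulated only
    fn = np.sum(~sim & obs)   # false negatives: observed only
    return {"jaccard": tp / (tp + fp + fn),
            "sensitivity": tp / (tp + fn),
            "precision": tp / (tp + fp)}
```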
Terascale Visualization: Multi-resolution Aspirin for Big-Data Headaches
NASA Astrophysics Data System (ADS)
Duchaineau, Mark
2001-06-01
Recent experience on the Accelerated Strategic Computing Initiative (ASCI) computers shows that computational physicists are successfully producing a prodigious collection of numbers on several thousand processors. But with this wealth of numbers comes an unprecedented difficulty in processing and moving them to provide useful insight and analysis. In this talk, a few simulations are highlighted where recent advancements in multiple-resolution mathematical representations and algorithms have provided some hope of seeing most of the physics of interest while keeping within the practical limits of the post-simulation storage and interactive data-exploration resources. A whole host of visualization research activities was spawned by the 1999 Gordon Bell Prize-winning computation of a shock-tube experiment showing Richtmyer-Meshkov turbulent instabilities. This includes efforts for the entire data pipeline from running simulation to interactive display: wavelet compression of field data, multi-resolution volume rendering and slice planes, out-of-core extraction and simplification of mixing-interface surfaces, shrink-wrapping to semi-regularize the surfaces, semi-structured surface wavelet compression, and view-dependent display-mesh optimization. More recently on the 12 TeraOps ASCI platform, initial results from a 5120-processor, billion-atom molecular dynamics simulation showed that 30-to-1 reductions in storage size can be achieved with no human-observable errors for the analysis required in simulations of supersonic crack propagation. This made it possible to store the 25 trillion bytes worth of simulation numbers in the available storage, which was under 1 trillion bytes. While multi-resolution methods and related systems are still in their infancy, for the largest-scale simulations there is often no other choice should the science require detailed exploration of the results.
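A hedged sketch of the coefficient-thresholding idea behind such wavelet compression, using PyWavelets; the keep fraction echoes the 30-to-1 figure above, and actual storage savings additionally require sparse or entropy coding of the zeroed coefficients:

```python
import numpy as np
import pywt

def compress_field(field, wavelet="db2", keep=1.0 / 30.0):
    """Wavelet-transform a field, zero all but the largest `keep`
    fraction of coefficients, and reconstruct the lossy field."""
    coeffs = pywt.wavedecn(field, wavelet)
    arr, slices = pywt.coeffs_to_array(coeffs)
    thresh = np.quantile(np.abs(arr), 1.0 - keep)
    arr[np.abs(arr) < thresh] = 0.0
    rec = pywt.waverecn(pywt.array_to_coeffs(arr, slices,
                                             output_format="wavedecn"), wavelet)
    return rec[tuple(slice(s) for s in np.shape(field))]  # trim padding
```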
Design for dependability: A simulation-based approach. Ph.D. Thesis, 1993
NASA Technical Reports Server (NTRS)
Goswami, Kumar K.
1994-01-01
This research addresses issues in simulation-based system-level dependability analysis of fault-tolerant computer systems. The issues and difficulties of providing a general simulation-based approach for system-level analysis are discussed, and a methodology that addresses these issues is presented. The proposed methodology is designed to permit the study of a wide variety of architectures under various fault conditions. It permits detailed functional modeling of architectural features such as sparing policies, repair schemes, and routing algorithms, as well as other fault-tolerant mechanisms, and it allows the execution of actual application software. One key benefit of this approach is that the behavior of a system under faults does not have to be pre-defined, as is normally done. Instead, a system can be simulated in detail and injected with faults to determine its failure modes. The thesis describes how object-oriented design is used to incorporate this methodology into a general-purpose design and fault injection package called DEPEND. A software model is presented that uses abstractions of application programs to study the behavior and effect of software on hardware faults in the early design stage when actual code is not available. Finally, an acceleration technique that combines hierarchical simulation, time acceleration algorithms, and hybrid simulation to reduce simulation time is introduced.
Neural dynamics in reconfigurable silicon.
Basu, A; Ramakrishnan, S; Petre, C; Koziol, S; Brink, S; Hasler, P E
2010-10-01
A neuromorphic analog chip is presented that is capable of implementing massively parallel neural computations while retaining the programmability of digital systems. We show measurements from neurons with Hopf bifurcations and integrate-and-fire neurons, excitatory and inhibitory synapses, passive dendrite cables, coupled spiking neurons, and central pattern generators implemented on the chip. The chip provides a platform not only for simulating detailed neuron dynamics but also for interfacing with actual cells in applications such as a dynamic clamp. There are 28 computational analog blocks (CABs), each consisting of ion channels with tunable parameters, synapses, winner-take-all elements, current sources, transconductance amplifiers, and capacitors. Four further CABs have programmable bias generators. The programmability is achieved using floating-gate transistors with on-chip programming control. The switch matrix for interconnecting the components in CABs also consists of floating-gate transistors. Emphasis is placed on replicating the detailed dynamics of computational neural models. Massive computational area efficiency is obtained by using the reconfigurable interconnect as synaptic weights, resulting in more than 50,000 possible 9-bit accurate synapses in 9 mm^2.
Space Station communications and tracking systems modeling and RF link simulation
NASA Technical Reports Server (NTRS)
Tsang, Chit-Sang; Chie, Chak M.; Lindsey, William C.
1986-01-01
In this final report, the effort spent on Space Station Communications and Tracking System Modeling and RF Link Simulation is described in detail. The effort is mainly divided into three parts: frequency division multiple access (FDMA) system simulation modeling and software implementation; a study on design and evaluation of a functional computerized RF link simulation/analysis system for Space Station; and a study on design and evaluation of simulation system architecture. This report documents the results of these studies. In addition, a separate User's Manual on Space Communications Simulation System (SCSS) (Version 1) documents the software developed for the Space Station FDMA communications system simulation. The final report, SCSS user's manual, and the software located in the NASA JSC system analysis division's VAX 750 computer together serve as the deliverables from LinCom for this project effort.
Manninen, Tiina; Aćimović, Jugoslava; Havela, Riikka; Teppola, Heidi; Linne, Marja-Leena
2018-01-01
The ability to replicate and reproduce published research results is one of the biggest challenges in all areas of science. In computational neuroscience, there are thousands of models available. However, it is rarely possible to reimplement a model based on the information in the original publication, let alone to rerun it, simply because the model implementation has not been made publicly available. We evaluate and discuss the comparability of a varied selection of simulation tools: tools for biochemical reactions and spiking neuronal networks, and relatively new tools for growth in cell cultures. The replicability and reproducibility issues are considered for computational models that are equally diverse, including models of intracellular signal transduction in neurons and glial cells, in addition to single glial cells, neuron-glia interactions, and selected examples of spiking neuronal networks. We also address the comparability of the simulation results with one another, to determine whether the studied models can be used to answer similar research questions. In addition to presenting the challenges in the reproducibility and replicability of published results in computational neuroscience, we highlight the need for developing recommendations and good practices for publishing simulation tools and computational models. Model validation and flexible model description must be an integral part of the tool used to simulate and develop computational models. Constant improvement in experimental techniques and recording protocols leads to increasing knowledge about the biophysical mechanisms in neural systems. This poses new challenges for computational neuroscience: extended or completely new computational methods and models may be required. Careful evaluation and categorization of the existing models and tools provide a foundation for these future needs, for constructing multiscale models or extending the models to incorporate additional or more detailed biophysical mechanisms. Improving the quality of publications in computational neuroscience, and enabling the progressive building of advanced computational models and tools, can be achieved only by adopting publishing standards that underline the replicability and reproducibility of research results.
NASA Technical Reports Server (NTRS)
Van Dalsem, W. R.; Steger, J. L.
1985-01-01
A simple and computationally efficient algorithm for solving the unsteady three-dimensional boundary-layer equations in the time-accurate or relaxation mode is presented. Results of the new algorithm are shown to be in quantitative agreement with detailed experimental data for flow over a swept infinite wing. The separated flow over a 6:1 ellipsoid at angle of attack, and the transonic flow over a finite-wing with shock-induced 'mushroom' separation are also computed and compared with available experimental data. It is concluded that complex, separated, three-dimensional viscous layers can be economically and routinely computed using a time-relaxation boundary-layer algorithm.
Elliptical orbit performance computer program
NASA Technical Reports Server (NTRS)
Myler, T. R.
1981-01-01
A FORTRAN-coded computer program that generates and plots the elliptical-orbit performance capability of space boosters for presentation purposes is described. Orbital performance capability of space boosters is typically presented as payload weight as a function of perigee and apogee altitudes. These parameters are derived from a parametric computer simulation of the booster flight, which yields the payload weight as a function of velocity and altitude at insertion. The process of converting from velocity and altitude to apogee and perigee altitudes and plotting the results as a function of payload weight is mechanized in the ELOPE program. Program theory, user instructions, input/output definitions, subroutine descriptions, and detailed FORTRAN coding information are included.
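The velocity/altitude-to-apogee/perigee conversion that ELOPE mechanizes follows from the vis-viva and angular-momentum relations. A minimal sketch with Earth constants, assuming the flight-path angle at insertion (horizontal by default) is available:

```python
import numpy as np

MU = 398600.4418   # km^3/s^2, Earth's gravitational parameter
RE = 6378.137      # km, Earth's equatorial radius

def apogee_perigee(alt_km, v_kms, gamma_rad=0.0):
    """Apogee/perigee altitudes (km) from insertion altitude, inertial
    speed, and flight-path angle gamma."""
    r = RE + alt_km
    a = 1.0 / (2.0 / r - v_kms**2 / MU)            # vis-viva semi-major axis
    h = r * v_kms * np.cos(gamma_rad)              # specific angular momentum
    e = np.sqrt(max(0.0, 1.0 - h**2 / (MU * a)))   # eccentricity
    return a * (1.0 + e) - RE, a * (1.0 - e) - RE  # apogee, perigee

print(apogee_perigee(200.0, 7.9))   # near-circular insertion at 200 km
```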
Computations of Combustion-Powered Actuation for Dynamic Stall Suppression
NASA Technical Reports Server (NTRS)
Jee, Solkeun; Bowles, Patrick O.; Matalanis, Claude G.; Min, Byung-Young; Wake, Brian E.; Crittenden, Tom; Glezer, Ari
2016-01-01
A computational framework for the simulation of dynamic stall suppression with combustion-powered actuation (COMPACT) is validated against wind tunnel experimental results on a VR-12 airfoil. COMPACT slots are located at 10% chord from the leading edge of the airfoil and directed tangentially along the suction-side surface. Helicopter rotor-relevant flow conditions are used in the study. A computationally efficient two-dimensional approach, based on unsteady Reynolds-averaged Navier-Stokes (RANS), is compared in detail against the baseline and the modified airfoil with COMPACT, using aerodynamic forces, pressure profiles, and flow-field data. The two-dimensional RANS approach predicts baseline static and dynamic stall very well. Most of the differences between the computational and experimental results are within two standard deviations of the experimental data. The current framework demonstrates an ability to predict COMPACT efficacy across the experimental dataset. Enhanced aerodynamic lift on the downstroke of the pitching cycle due to COMPACT is well predicted, and the computed cycle-averaged lift enhancement is within 3% of the test data. Differences with experimental data are discussed with a focus on three-dimensional features not included in the simulations and the limited computational model for COMPACT.
An Overview of Computational Aeroacoustic Modeling at NASA Langley
NASA Technical Reports Server (NTRS)
Lockard, David P.
2001-01-01
The use of computational techniques in the area of acoustics is known as computational aeroacoustics and has shown great promise in recent years. Although an ultimate goal is to use computational simulations as a virtual wind tunnel, the problem is so complex that blind applications of traditional algorithms are typically unable to produce acceptable results. The phenomena of interest are inherently unsteady and cover a wide range of frequencies and amplitudes. Nonetheless, with appropriate simplifications and special care to resolve specific phenomena, currently available methods can be used to solve important acoustic problems. These simulations can be used to complement experiments, and often give much more detailed information than can be obtained in a wind tunnel. The use of acoustic analogy methods to inexpensively determine far-field acoustics from near-field unsteadiness has greatly reduced the computational requirements. A few examples of current applications of computational aeroacoustics at NASA Langley are given. There remains a large class of problems that require more accurate and efficient methods. Research to develop more advanced methods able to handle the geometric complexity of realistic problems using block-structured and unstructured grids is highlighted.
NASA Astrophysics Data System (ADS)
Hunter, Kendall; Zhang, Yanhang; Lanning, Craig
2005-11-01
Insight into the progression of pulmonary hypertension may be obtained from a thorough study of vascular flow during reactivity testing, an invasive diagnostic procedure which can dramatically alter vascular hemodynamics. Diagnostic imaging methods, however, are limited in their ability to provide extensive data. Here we present detailed flow and wall deformation results from simulations of pulmonary arteries undergoing this procedure. Patient-specific 3-D geometric reconstructions of the first four branches of the pulmonary vasculature were obtained clinically and meshed for use with computational software. Transient simulations of four such models in normal and reactive states were completed with patient-specific velocity inlet conditions and flow impedance exit conditions. A microstructurally based orthotropic hyperelastic model, which simulates pulmonary artery mechanics under normotensive and hypoxic hypertensive conditions, treated wall constitutive changes due to pressure reactivity and arterial remodeling. Pressure gradients, velocity fields, arterial deformation, and the complete topography of shear stress were obtained. These models provide richer hemodynamic detail than can be obtained from current imaging techniques and should allow maximum characterization of vascular function in the clinical setting.
NASA Technical Reports Server (NTRS)
1975-01-01
Flow charts and display formats for the simulation of five experiments are given. The experiments are: (1) electromagnetic wave transmission; (2) passive observations of ambient plasma; (3) ionospheric measurements with subsatellite; (4) electron accelerator beam measurements; and (5) measurement of acoustical gravity waves in the sodium layer using lasers. A detailed explanation of the simulation procedure, definition of variables, and an explanation of how the experimenter makes display choices is also presented. A functional description is included on each flow chart and the assumptions and definitions of terms and scope of the flow charts and displays are presented.
Settgast, Randolph R.; Fu, Pengcheng; Walsh, Stuart D. C.; ...
2016-09-18
This study describes a fully coupled finite element/finite volume approach for simulating field-scale hydraulically driven fractures in three dimensions, using massively parallel computing platforms. The proposed method is capable of capturing realistic representations of local heterogeneities, layering and natural fracture networks in a reservoir. A detailed description of the numerical implementation is provided, along with numerical studies comparing the model with both analytical solutions and experimental results. The results demonstrate the effectiveness of the proposed method for modeling large-scale problems involving hydraulically driven fractures in three dimensions.
NASA Technical Reports Server (NTRS)
Bune, Andris V.; Kaukler, William F.; Whitaker, Ann F. (Technical Monitor)
2001-01-01
A modeling approach to simulate both the mesoscale and microscopic forces acting in a typical AFM experiment is presented. At the mesoscale level, the interaction between the cantilever tip and the sample surface is primarily described by the balance of attractive van der Waals and repulsive forces. The model of cantilever oscillations is applicable to both non-contact and "tapping" AFM. This model can be further enhanced to describe nanoparticle manipulation by the cantilever. At the microscopic level, tip contamination and details of the tip-surface interaction can be simulated using a molecular dynamics approach. Integration of the mesoscale model with the molecular dynamics model is discussed.
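A hedged sketch of the mesoscale cantilever model described above: a driven, damped oscillator with an attractive van der Waals tip-sample force and a crude repulsive wall (all parameter values are illustrative, not the paper's):

```python
import numpy as np
from scipy.integrate import solve_ivp

k, f0, Q = 40.0, 300e3, 400.0           # stiffness (N/m), resonance (Hz), Q
w0 = 2.0 * np.pi * f0
m_eff = k / w0**2                       # effective cantilever mass
H, R, a0 = 1e-19, 10e-9, 0.165e-9       # Hamaker (J), tip radius (m), cutoff (m)
gap, F_dr = 10e-9, 1e-9                 # mean tip-sample gap (m), drive force (N)

def tip_sample_force(d):
    """Attractive sphere-plane van der Waals force, with a crude linear
    repulsive wall below the cutoff distance a0."""
    if d > a0:
        return -H * R / (6.0 * d * d)
    return -H * R / (6.0 * a0 * a0) + 10.0 * (a0 - d)

def rhs(t, y):
    z, v = y                            # tip deflection and velocity
    F = tip_sample_force(gap + z) + F_dr * np.cos(w0 * t)
    return [v, (-k * z - (m_eff * w0 / Q) * v + F) / m_eff]

sol = solve_ivp(rhs, (0.0, 300.0 / f0), [0.0, 0.0], max_step=0.02 / f0)
print(sol.y[0].max())                   # oscillation amplitude (m)
```

With these values the resonant free amplitude (Q*F_dr/k = 10 nm) is comparable to the gap, so the tip intermittently samples the surface force, i.e. the tapping-mode regime.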
A modern space simulation facility to accommodate high production acceptance testing
NASA Technical Reports Server (NTRS)
Glover, J. D.
1986-01-01
A space simulation laboratory that supports acceptance testing of spacecraft and associated subsystems at throughput rates as high as nine per year is discussed. The laboratory includes a computer-operated 27 ft by 30 ft space simulation chamber, a 20 ft by 20 ft by 20 ft thermal cycle chamber, and an eight-station thermal cycle/thermal vacuum test system. The design philosophy and unique features of each system are discussed. The development of operating procedures, test team requirements, test team integration, and other peripheral activation details are described. A discussion of special accommodations for the efficient utilization of the systems in support of high-rate production is presented.
Ion Move Brownian Dynamics (IMBD)--simulations of ion transport.
Kurczynska, Monika; Kotulska, Malgorzata
2014-01-01
Comparison of computed characteristics with physiological measurements of ion transport through transmembrane proteins could be a useful method to assess the quality of protein structures. Simulations of ion transport should be detailed but also time-efficient. The most accurate method would be Molecular Dynamics (MD), which is very time-consuming and hence is not used for this purpose. Brownian Dynamics (BD) is a model that includes ion-ion interactions while reducing the simulation time by excluding explicit water, protein, and lipid molecules. In this paper a new computer program for BD simulation of ion transport is presented. We evaluate two methods for calculating the pore accessibility (round and irregular shape) and two representations of ion sizes (van der Waals diameter and one voxel). Ion Move Brownian Dynamics (IMBD) was tested with two nanopores: alpha-hemolysin and the potassium channel KcsA. In both cases an ion passed through the pore in less than 32 ns during the simulation. Although two types of ions were in solution (potassium and chloride), only ions consistent with the selectivity properties of the channels passed through the pores. IMBD is a new tool for ion transport modelling, which can be used in simulations of wide and narrow pores.
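The core of a BD ion-transport code is the overdamped (Ermak-type) position update: a deterministic drift from the systematic force plus a Gaussian random displacement. A minimal sketch in SI units (not IMBD's actual implementation):

```python
import numpy as np

def bd_step(pos, force, D, dt, kT=4.11e-21, rng=np.random.default_rng()):
    """One Brownian-dynamics step: drift D*F*dt/kT plus a random kick
    with variance 2*D*dt per Cartesian component (kT ~ room temperature)."""
    drift = (D / kT) * force * dt
    kick = np.sqrt(2.0 * D * dt) * rng.standard_normal(3)
    return pos + drift + kick

# e.g. a potassium ion (D ~ 1.96e-9 m^2/s) stepped at dt = 1e-12 s:
# pos = bd_step(pos, total_force(pos), 1.96e-9, 1e-12)
```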
Construction of multi-functional open modulized Matlab simulation toolbox for imaging ladar system
NASA Astrophysics Data System (ADS)
Wu, Long; Zhao, Yuan; Tang, Meng; He, Jiang; Zhang, Yong
2011-06-01
Ladar system simulation uses computer models of the ladar to predict the performance of the ladar system. This paper reviews domestic and overseas developments in imaging ladar simulation and studies of computer simulation of ladar systems for different application requirements. The LadarSim and FOI-LadarSIM simulation facilities of Utah State University and the Swedish Defence Research Agency are introduced in detail. Domestic research in imaging ladar system simulation has been limited in scale, non-unified in design, and mostly aimed at simple functional simulation based on ranging equations. A design for laser imaging radar simulation with an open and modularized structure is proposed, with unified modules for the ladar system, laser emitter, atmosphere models, target models, signal receiver, parameter settings, and system controller. A unified Matlab toolbox and standard control modules have been built with regulated function inputs and outputs and defined communication protocols between hardware modules. A simulation of an ICCD gain-modulated imaging ladar system observing a space shuttle was performed with the toolbox. The simulation results show that the models and parameter settings of the Matlab toolbox are able to simulate the actual detection process precisely. The unified control module and pre-defined parameter settings simplify the simulation of imaging ladar detection, the open structure enables the toolbox to be modified for specialized requirements, and the modularization gives simulations flexibility.
Simplified energy-balance model for pragmatic multi-dimensional device simulation
NASA Astrophysics Data System (ADS)
Chang, Duckhyun; Fossum, Jerry G.
1997-11-01
To pragmatically account for non-local carrier heating and hot-carrier effects such as velocity overshoot and impact ionization in multi-dimensional numerical device simulation, a new simplified energy-balance (SEB) model is developed and implemented in FLOODS [16] as an option. In the SEB model, the energy-relaxation length is estimated from a pre-process drift-diffusion simulation using the carrier-velocity distribution predicted throughout the device domain, and is used without change in a subsequent simpler hydrodynamic (SHD) simulation. The new SEB model was verified by comparing two-dimensional SHD and full HD DC simulations of a submicron MOSFET. The SHD simulations yield detailed distributions of carrier temperature, carrier velocity, and impact-ionization rate, which agree well with the full HD simulation results obtained with FLOODS. The most noteworthy feature of the new SEB/SHD model is its computational efficiency, which results from reduced Newton iteration counts due to the enhanced linearity. Relative to full HD, SHD simulation times can be shorter by as much as an order of magnitude, since larger voltage steps for DC sweeps and larger time steps for transient simulations can be used. The improved computational efficiency can also enable pragmatic three-dimensional SHD device simulation, for which the SEB implementation would be as straightforward as it is in FLOODS or any robust HD simulator.